patent_id | description | length |
---|---|---|
11863134 | DETAILED DESCRIPTION The technical content of the present invention is further described in detail below with reference to the accompanying drawings and specific embodiments. The present invention provides a balanced radio frequency power amplifier configured to support related indexes of mobile high-power user equipment (HPUE), to improve a maximum output linear power of the radio frequency power amplifier and reduce the sensitivity of the radio frequency power amplifier to a load. The structure and the working principle of the balanced radio frequency power amplifier are described in detail below by using different embodiments. Embodiment 1 As shown inFIG.1, a balanced radio frequency power amplifier provided in this embodiment includes a control unit100, a first driver stage unit110, a 90-degree power splitter unit160, a first power stage unit120, a second power stage unit121, and an adjustable 90-degree power combiner unit170, and a switch module unit130. The control unit100is separately connected to the first driver stage unit110, the first power stage unit120, the second power stage unit121, the adjustable 90-degree power combiner unit170, and the switch module unit130, and may be configured to control quiescent currents of the first driver stage unit110, the first power stage unit120, and the second power stage unit121. An input end of the first driver stage unit110is connected to a radio frequency signal input end, an output end of the first driver stage unit110is connected to an input end of the 90-degree power splitter unit160, an output end of the 90-degree power splitter unit160is separately connected to input ends of the first power stage unit120and the second power stage unit121, output ends of the first power stage unit120and the second power stage unit121are separately connected to an input end of the adjustable 90-degree power combiner unit170, an output end of the adjustable 90-degree power combiner unit170is connected to an input end of the switch module unit130, and an output end of the switch module unit130is separately connected to a radio frequency transmission path (A1to An) and a radio frequency receiving path (B1to Bm), where m and n are positive integers. 
When a radio frequency input signal enters the balanced radio frequency power amplifier, the radio frequency input signal is inputted to the first driver stage unit110through the radio frequency signal input end to amplify, and is inputted to the 90-degree power splitter unit160after being amplified by the first driver stage unit110, and the 90-degree power splitter unit160divides the radio frequency input signal into two equal-amplitude radio frequency input signals that have a phase difference of 90 degrees (or approximately 90 degrees, which is the same below), correspondingly inputs the two equal-amplitude radio frequency input signals to the first power stage unit120and the second power stage unit121to amplify, correspondingly inputs the two equal-amplitude radio frequency input signals that are amplified by the first power stage unit120and the second power stage unit121to the adjustable 90-degree power combiner unit170, and controls the adjustable 90-degree power combiner unit170by using the control unit100, so that when the two equal-amplitude radio frequency input signals at different frequencies have a minimum phase difference and a minimum amplitude difference (preferably, the phase difference is 0 degrees or approximately 0 degrees, and the amplitude difference is 0 dBc or approximately 0 dBc), the two radio frequency input signals are synthesized into one radio frequency input signal to be inputted to the switch module unit130. In this case, the control unit100control a switch state of the switch module unit130according to a frequency band requirement, and inputs the synthesized radio frequency input signal to a next-stage circuit through a specified radio frequency transmission path. To input the radio frequency input signal that is amplified by the first driver stage unit110to the 90-degree power splitter unit160to the maximum extent, further participate in matching the 90-degree power splitter unit160to divide the radio frequency input signal into the two equal-amplitude radio frequency input signals that have the phase difference of 90 degrees, input the two equal-amplitude radio frequency input signals to the first power stage unit120and the second power stage unit121to the maximum extent to amplify, and reduce the design complexity of the balanced radio frequency power amplifier as much as possible, a first matching network may be disposed in the 90-degree power splitter unit160, to implement impedance matching between the 90-degree power splitter unit160and the first driver stage unit110, the first power stage unit120, and the second power stage unit121respectively by using the first matching network. 
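The role of the 0-degree phase difference and 0 dBc amplitude difference targeted by the control unit 100 can be illustrated numerically. The following Python sketch is not part of the patent disclosure; it models an idealized in-phase (Wilkinson-type) combiner, and the function name and normalization are assumptions made only for this illustration. It shows that any residual phase or amplitude imbalance between the two branch signals diverts part of the available power away from the combined output.

```python
import numpy as np

def combining_efficiency(phase_error_deg, amplitude_error_db):
    """Idealized two-way power-combining efficiency.

    The two branch signals are modeled as phasors; `phase_error_deg` is the
    residual phase difference and `amplitude_error_db` the residual amplitude
    difference at the combiner input.  An ideal in-phase combiner delivers
    |v1 + v2|^2 (up to a constant) to the load; the rest of the available
    power is dissipated in the isolation resistor.
    """
    a = 10.0 ** (amplitude_error_db / 20.0)          # branch amplitude ratio
    phi = np.deg2rad(phase_error_deg)
    delivered = abs(1.0 + a * np.exp(1j * phi)) ** 2
    available = 2.0 * (1.0 + a ** 2)
    return delivered / available

print(combining_efficiency(0.0, 0.0))    # 1.0   : perfectly aligned branches
print(combining_efficiency(10.0, 0.0))   # ~0.99 : 10-degree residual phase error
print(combining_efficiency(0.0, 1.0))    # ~0.997: 1 dB residual amplitude error
```

Larger residual errors waste correspondingly more power, which is one reason the adjustable combiner is retuned per frequency band rather than left fixed.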
Similarly, to input the radio frequency input signal that is amplified by the first power stage unit120and the second power stage unit121to the adjustable 90-degree power combiner unit170to the maximum extent, further participate in matching the adjustable 90-degree power combiner unit170, so that the two equal-amplitude radio frequency input signals have a phase difference of 0 degrees or approximately 0 degrees, and an amplitude difference of 0 dBc or approximately 0 dBc at different frequencies, synthesize the two equal-amplitude radio frequency input signals into one radio frequency input signal to be inputted to the switch module unit130to the maximum extent, and reduce the design complexity of the balanced radio frequency power amplifier as much as possible, a second matching network may also be disposed in the adjustable 90-degree power combiner unit170, to implement impedance matching between the adjustable 90-degree power combiner unit170and the first power stage unit120, the second power stage unit121, and the switch module unit130respectively by using the second matching network. As shown inFIG.1, the switch module unit130includes n groups of transmit/receive switches (single-pole double-throw switches S1to Sn, where n is a positive integer). Common ends of each group of transmit/receive switches are separately connected to the output end of the adjustable 90-degree power combiner unit170, one output end of each group of transmit/receive switches is respectively connected to a corresponding radio frequency transmission path, and the other output end of each group of transmit/receive switches is respectively connected to a corresponding radio frequency receiving path. To reduce a package size of the balanced radio frequency power amplifier, optimize a match between integrated filter devices, and simplify a design of a communication terminal, any one of a Band1 duplexer, a Band38 filter, a Band40 filter, and a Band41 filter may be disposed between the switch module unit130and the radio frequency transmission path in this embodiment. For example, as shown inFIG.2, the Band1 duplexer may be disposed between a first group of transmit/receive switches S1and the radio frequency transmission path, and any one of the Band38 filter, the Band40 filter, and the Band41 filter is disposed between a final group of transmit/receive switches Sn and the radio frequency transmission path. To facilitate user-defined application of the communication terminal, as shown inFIG.3, the switch module unit130in this embodiment may be further removed, and the two equal-amplitude radio frequency input signals may be synthesized by the adjustable 90-degree power combiner unit170into one radio frequency input signal to be directly inputted to the specified radio frequency transmission path, and then inputted to the next-stage circuit. Embodiment 2 As shown inFIG.4, a difference between a balanced radio frequency power amplifier provided in this embodiment and the balanced radio frequency power amplifier provide in Embodiment 1 is that the 90-degree power splitter unit160is disposed in front of the first driver stage unit110, and the first driver stage unit110replaced with a second driver stage unit111and a third driver stage unit112. 
Therefore, the input end of the 90-degree power splitter unit160is connected to the radio frequency signal input end, the output end of the 90-degree power splitter unit160is correspondingly connected to input ends of the second driver stage unit111and the third driver stage unit112, and output ends of the second driver stage unit111and the third driver stage unit112are correspondingly connected to input ends of the first power stage unit120and the second power stage unit121. The same part between the balanced radio frequency power amplifier provided in this embodiment and the balanced radio frequency power amplifier provide in Embodiment 1 is not described again. As described in Embodiment 1, a first matching network may be disposed in the 90-degree power splitter unit, and a second matching network may be disposed in the adjustable 90-degree power combiner unit170. When a radio frequency input signal enters the balanced radio frequency power amplifier, the radio frequency input signal is inputted to the 90-degree power splitter unit160through the radio frequency signal input end, the 90-degree power splitter unit160divides the radio frequency input signal into two equal-amplitude radio frequency input signals that have a phase difference of 90 degrees, correspondingly inputs the two equal-amplitude radio frequency input signals to the second driver stage unit111and the third driver stage unit112to amplify, correspondingly inputs the two equal-amplitude radio frequency input signals that are amplified by the second driver stage unit111and the third driver stage unit112to the first power stage unit120and the second power stage unit121to further amplify, correspondingly inputs the two equal-amplitude radio frequency input signals to the adjustable 90-degree power combiner unit170, and controls the adjustable 90-degree power combiner unit170by using the control unit100, so that when a phase difference of the two equal-amplitude radio frequency input signals at different frequencies changes to 0 degrees or approximately 0 degrees, and an amplitude difference is 0 dBc or approximately 0 dBc, the two radio frequency input signals are synthesized into one radio frequency input signal to be inputted to the switch module unit130. In this case, the control unit100control a switch state of the switch module unit130according to a frequency band requirement, and inputs the synthesized radio frequency input signal to a next-stage circuit through a specified radio frequency transmission path. Similarly, to reduce a package size of the balanced radio frequency power amplifier, optimize a match between integrated filter devices, and simplify a design of a communication terminal, any one of a Band1 duplexer, a Band38 filter, a Band40 filter, and a Band41 filter may be disposed between the switch module unit130and the radio frequency transmission path in this embodiment. For example, as shown inFIG.5, the Band1 duplexer may be disposed between a first group of transmit/receive switches S1and the radio frequency transmission path, and any one of the Band38 filter, the Band40 filter, and the Band41 filter is disposed between a final group of transmit/receive switches Sn and the radio frequency transmission path. 
To facilitate user-defined application of the communication terminal, as shown inFIG.6, the switch module unit130in this embodiment may be further removed, and the two equal-amplitude radio frequency input signals may be synthesized by the adjustable 90-degree power combiner unit170into one radio frequency input signal to be directly inputted to the specified radio frequency transmission path, and then inputted to the next-stage circuit. In addition, the first driver stage unit110, the second driver stage unit111, and the third driver stage unit112in Embodiment 1 and Embodiment 2 may be single-stage driver stage units or two-stage driver stage units. The first power stage unit120and the second power stage unit121may be single-stage power stage units or two-stage power stage units. In addition, the first power stage unit120and the second power stage unit121may be heterojunction bipolar transistors (HBTs) or high electron mobility transistors (HEMTs) or pseudomorphic high electron mobility transistors (pHEMTs) made on a gallium arsenide (GaAs) substrate or a silicon germanium (SiGe) substrate, or bipolar junction transistors (BJTs), or complementary metal-oxide-semiconductor (CMOS) transistors made on a silicon substrate. As shown inFIG.7, in the balanced radio frequency power amplifier provided in Embodiment 1 and Embodiment 2, the adjustable 90-degree power combiner unit170includes a 90-degree phase shifter1and a Wilkinson power combiner2. The 90-degree phase shifter1includes a phase lag impedance transformation network3and a phase lead impedance transformation network4. An input end of the phase lag impedance transformation network3is connected to the output end of the first power stage unit120, an output end of the phase lag impedance transformation network3is connected to one input end of the Wilkinson power combiner2, an input end of the phase lead impedance transformation network4is connected to the output end of the second power stage unit121, and an output end of the phase lead impedance transformation network4is connected to the other input end of the Wilkinson power combiner2. The two equal-amplitude radio frequency input signals that have the phase difference of 90 degrees and that are amplified by the first power stage unit120and the second power stage unit121enter the phase lag impedance transformation network3and the phase lead impedance transformation network4respectively according to a phase relationship between the two equal-amplitude radio frequency input signals, so that after the phase difference between the two equal-amplitude radio frequency input signals changes to 0 degrees and the amplitude difference is close to (equal to or approximately equal to) 0 dBc at different frequencies, the two equal-amplitude radio frequency input signals are inputted to the Wilkinson power combiner2. The Wilkinson power combiner2synthesizes the two equal-amplitude radio frequency input signals that have the phase difference of 0 degrees and the amplitude difference close to (equal to or approximately equal to) 0 dBc into one radio frequency input signal to be inputted to the switch module unit130, or directly inputted to the specified radio frequency transmission path. As shown inFIG.7, the phase lag impedance transformation network3includes a first inductor301, a first variable capacitor302, and a second variable capacitor309. 
One end of the first inductor301is used as the input end of the phase lag impedance transformation network3, to be connected to the output end of the first power stage unit120, the other end of the first inductor301is separately connected to one end of the first variable capacitor302and one end of the second variable capacitor309, the other end of the first variable capacitor302is grounded, and the other end of the second variable capacitor309is used as the output end of the phase lag impedance transformation network3, to be connected to one input end of the Wilkinson power combiner2. The phase lead impedance transformation network4includes a third variable capacitor303and a second inductor304. One end of the third variable capacitor303is used as the input end of the phase lead impedance transformation network4, to be connected to the output end of the second power stage unit121, the other end of the third variable capacitor303is used as the output end of the phase lead impedance transformation network4, to be separately connected to one end of the second inductor304and the other input end of the Wilkinson power combiner2, and the other end of the second inductor304is grounded. The Wilkinson power combiner2includes a fourth variable capacitor310, a variable resistor308, a fifth variable capacitor311, a third inductor305, a fourth inductor306, and a sixth variable capacitor307. One end of the fourth variable capacitor310is used as one input end of the Wilkinson power combiner2, to be separately connected to the output end of the phase lag impedance transformation network3, the other end of the variable resistor308, and one end of the third inductor305; one end of the variable resistor308is used as the other input end of the Wilkinson power combiner2, to be separately connected to the output end of the phase lead impedance transformation network4, one end of the fifth variable capacitor311, and one end of the fourth inductor306; the other end of the third inductor305and the other end of the fourth inductor306are used as the output ends of the adjustable 90-degree power combiner unit170, to be respectively connected to one end of the sixth variable capacitor307and the input end of the switch module unit130; and the other ends of the fourth variable capacitor310, the fifth variable capacitor311, and the sixth variable capacitor307are separately grounded. Therefore, according to different frequency values, the control unit100changes values of a plurality of variable capacitors and variable resistors in the adjustable 90-degree power combiner unit170, so that the adjustable 90-degree power combiner can maintain the phase difference between the two equal-amplitude radio frequency input signals that have the phase difference of 90 degrees at different frequencies at 0 degrees, and the amplitude difference close to (equal to or approximately equal to) 0 dBc. According to actual design requirements, the variable capacitors in the adjustable 90-degree power combiner unit170may be replaced with fixed capacitors, and the variable resistors may be replaced with fixed resistors, that is, non-adjustable structures (as shown inFIG.8, the adjustable 90-degree power combiner unit170is replaced with a 90-degree power combiner unit171with a non-adjustable structure). 
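As a rough numerical aid, the insertion phase contributed by the phase lag network 3 (series inductor 301, shunt capacitor 302, series capacitor 309) and the phase lead network 4 (series capacitor 303, shunt inductor 304) can be estimated by cascading ABCD matrices and converting the result to S21 against a common reference impedance. This sketch is not taken from the patent: the 50-ohm reference, the 2.5 GHz evaluation frequency, and all element values are made up so that the example is concrete.

```python
import numpy as np

Z0 = 50.0            # common reference impedance for this illustration (ohms)
f = 2.5e9            # evaluation frequency, roughly mid-band (illustrative)
w = 2.0 * np.pi * f

def series(Z):
    """ABCD matrix of a series impedance Z."""
    return np.array([[1.0, Z], [0.0, 1.0]], dtype=complex)

def shunt(Y):
    """ABCD matrix of a shunt admittance Y."""
    return np.array([[1.0, 0.0], [Y, 1.0]], dtype=complex)

def insertion_phase_deg(abcd):
    """Phase of S21 for the two-port terminated in Z0 on both sides."""
    (A, B), (C, D) = abcd
    s21 = 2.0 / (A + B / Z0 + C * Z0 + D)
    return np.angle(s21, deg=True)

# Assumed (made-up) element values, chosen only to make the behavior visible.
L301, C302, C309 = 3.0e-9, 1.0e-12, 3.0e-12    # phase lag network 3
C303, L304 = 1.3e-12, 6.0e-9                   # phase lead network 4

lag  = series(1j * w * L301) @ shunt(1j * w * C302) @ series(1.0 / (1j * w * C309))
lead = series(1.0 / (1j * w * C303)) @ shunt(1.0 / (1j * w * L304))

print(insertion_phase_deg(lag), insertion_phase_deg(lead))   # ~ -45 and ~ +45 degrees
# With these values the lead-minus-lag difference is near 90 degrees, complementing
# the 90 degrees introduced by the splitter so the two branches reach the Wilkinson
# combiner in phase; the control unit retunes the variable elements to keep this
# true in each frequency band.
```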
For example, the 90-degree power combiner unit with a non-adjustable structure may be a matching network and a phase-shift network built by inductors, capacitors and resistance devices, or impedance and phase transformation networks built by metal coupled devices, or impedance and phase transformation networks built for a transmission line network. In addition, to reduce a quantity of devices in the adjustable 90-degree power combiner as many as possible, the variable capacitors in the adjustable 90-degree power combiner may be further combined with a variable capacitor, a fixed capacitor or an inductor nearby. As shown inFIG.9A, each variable capacitor in the adjustable 90-degree power combiner unit170may be formed by parallel connection of a capacitor C0and n groups of switched capacitor, or independent parallel connection of n groups of switched capacitors. In the n groups of switched capacitors, each group of switched capacitors is formed by series connection of one capacitor and one switch, and switches in the each group of switched capacitors are separately connected to the control unit100. For example, the group of switched capacitors may be formed by parallel connection of n switched capacitors with the same structures such as a capacitor C1connected to a switch K1in series, a capacitor C2connected to a switch K2in series . . . and a capacitor Cn connected to a switch Kn in series. The switch K1, the switch K2, . . . and the switch Kn are separately connected to the control unit100, and the control unit100switches a specified quantity of switches in the switch K1to the switch Kn on or off, to obtain capacitances of corresponding switched capacitors, so as to optimize the phase difference and the amplitude difference between the two radio frequency input signals in the adjustable 90-degree power combiner unit, so that the phase difference between the two radio frequency input signals at different frequencies maintains at 0 degrees, and the amplitude difference is close to (equal to or approximately equal to) 0 dBc. This not only improves the maximum output linear power of the balanced radio frequency power amplifier, but also minimizes the output power of the balanced radio frequency power amplifier along with the change of the load phase. As shown inFIG.9B, each variable capacitor in the adjustable 90-degree power combiner unit170may be further formed by series connection of a capacitor C0and n groups of switched capacitor, or independent series connection of n groups of switched capacitors. In the n groups of switched capacitors, each group of switched capacitors is formed by parallel connection of one capacitor and one switch, and switches in the each group of switched capacitors are separately connected to the control unit100. For example, the group of switched capacitors may be formed by series connection of n switched capacitors with the same structures such as a capacitor C1connected to a switch K1in parallel, a capacitor C2connected to a switch K2in parallel . . . and a capacitor Cn connected to a switch Kn in parallel. The switch K1, the switch K2, . . . 
and the switch Kn are separately connected to the control unit 100, and the control unit 100 switches a specified quantity of switches in the switch K1 to the switch Kn on or off, to obtain capacitances of corresponding switched capacitors, so as to optimize the phase difference and the amplitude difference between the two radio frequency input signals in the adjustable 90-degree power combiner unit, so that the phase difference between the two radio frequency input signals at different frequencies is maintained at 0 degrees or approximately 0 degrees, and the amplitude difference is close to (equal to or approximately equal to) 0 dBc. This not only improves the maximum output linear power of the balanced radio frequency power amplifier, but also minimizes the variation of the output power of the balanced radio frequency power amplifier with the change of the load phase. As shown in FIG. 10A, each variable resistor in the adjustable 90-degree power combiner unit 170 may be formed by parallel connection of a resistor R0 and n groups of switched resistors, or independent parallel connection of n groups of switched resistors. In the n groups of switched resistors, each group of switched resistors is formed by series connection of one resistor and one switch, and switches in each group of switched resistors are separately connected to the control unit 100. For example, the group of switched resistors may be formed by parallel connection of n switched resistors with the same structure, such as a resistor R1 connected to a switch K1 in series, a resistor R2 connected to a switch K2 in series . . . and a resistor Rn connected to a switch Kn in series. The switch K1, the switch K2, . . . and the switch Kn are separately connected to the control unit 100, and the control unit 100 switches a specified quantity of switches in the switch K1 to the switch Kn on or off, to obtain resistances of corresponding switched resistors, so as to optimize the phase difference and the amplitude difference between the two radio frequency input signals in the adjustable 90-degree power combiner unit, so that the phase difference between the two radio frequency input signals at different frequencies is maintained at 0 degrees or approximately 0 degrees, and the amplitude difference is close to (equal to or approximately equal to) 0 dBc. This not only improves the maximum output linear power of the balanced radio frequency power amplifier, but also minimizes the variation of the output power of the balanced radio frequency power amplifier with the change of the load phase. As shown in FIG. 10B, each variable resistor in the adjustable 90-degree power combiner unit 170 may be formed by series connection of a resistor R0 and n groups of switched resistors, or independent series connection of n groups of switched resistors. In the n groups of switched resistors, each group of switched resistors is formed by parallel connection of one resistor and one switch, and switches in each group of switched resistors are separately connected to the control unit 100. For example, the group of switched resistors may be formed by series connection of n switched resistors with the same structure, such as a resistor R1 connected to a switch K1 in parallel, a resistor R2 connected to a switch K2 in parallel . . . and a resistor Rn connected to a switch Kn in parallel. The switch K1, the switch K2, . . . 
and the switch Kn are separately connected to the control unit100, and the control unit100switches a specified quantity of switches in the switch K1to the switch Kn on or off, to obtain resistances of corresponding switched resistors, so as to optimize the phase difference and the amplitude difference between the two radio frequency input signals in the adjustable 90-degree power combiner unit, so that the phase difference between two radio frequency input signals at different frequencies maintains at 0 degrees or approximately 0 degrees, and the amplitude difference is close to (equal to or approximately equal to) 0 dBc. This not only improves the maximum output linear power of the balanced radio frequency power amplifier, but also minimizes the output power of the balanced radio frequency power amplifier along with the change of the load phase. Each switch inFIG.9AtoFIG.10Bmay be designed on a silicon on insulator (SOI for short) chip, or may be designed on a GaAs chip, or may be designed on a SiGe chip. Each resistor and capacitor inFIG.9AtoFIG.10Bmay be designed on an integrated circuit chip, or may be implemented by using a discrete device. Because frequency bands: Band7 (2.5 GHz to 2.57 GHz), Band38 (2.57 GHz to 2.62 GHz), Band40 (2.3 GHz to 2.4 GHz) and Band41 (2.496 GHz to 2.69 GHz) all fall within a frequency range of 2.3 GHz to 2.69 GHz, the communication terminal integrates all radio frequency power amplifiers in the frequency bands: Band7, Band38, Band40, and Band41 into one integrated circuit chip. FIG.11shows a curve that output power (Pout) changes with a load phase in a case that the balanced radio frequency power amplifier is in a load impedance voltage standing wave ratio (VSWR) of 3:1, and phase differences between two radio frequency input signals inputted to the adjustable 90-degree power combiner unit170are 80 degrees, 90 degrees, and 100 degrees. It can be seen from the figure that, a curve207shows that the output power changes with a load phase, and is approximately 1.1 dBc, in a case that the phase difference between the two radio frequency input signals inputted to the adjustable 90-degree power combiner unit170maintains at 90 degrees, and the balanced radio frequency power amplifier is in the load impedance VSWR of 3:1. A curve208shows that the output power of the balanced radio frequency power amplifier changes with the load phase, and is approximately 2 dBc, in a case that the phase difference between the two radio frequency input signals inputted to the adjustable 90-degree power combiner is reduced to 80 degrees, and the balanced radio frequency power amplifier is in the load impedance VSWR of 3:1. A curve209shows that the output power changes with the load phase, and is approximately 2.6 dBc, in a case that the phase difference between the two radio frequency input signals inputted to the adjustable 90-degree power combiner is increased to 100 degrees, and the balanced radio frequency power amplifier is in the load impedance VSWR of 3:1. Therefore, when the adjustable 90-degree power combiner is designed, the output power changes with the load phase minimally in a case that the phase difference between the two radio frequency input signals inputted to the adjustable 90-degree power combiner maintains at 90 degrees, and the balanced radio frequency power amplifier is in the load impedance VSWR of 3:1. 
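A minimal sketch of how the switch word set by the control unit 100 maps to an effective capacitance for the two bank topologies of FIG. 9A and FIG. 9B follows. It is illustrative only: the capacitor values, the binary weighting, and the function names are assumptions, not values from the disclosure.

```python
def parallel_bank_capacitance(c0, branch_caps, switch_states):
    """FIG. 9A style bank: C0 in parallel with n switched branches, each branch a
    capacitor in series with a switch; closing a switch adds that branch."""
    return c0 + sum(c for c, on in zip(branch_caps, switch_states) if on)

def series_bank_capacitance(c0, branch_caps, switch_states):
    """FIG. 9B style bank: C0 in series with n capacitors, each paralleled by a
    switch; a closed switch bypasses its capacitor and removes it from the chain."""
    inv = 1.0 / c0
    inv += sum(1.0 / c for c, on in zip(branch_caps, switch_states) if not on)
    return 1.0 / inv

# Binary-weighted branches (assumed values) give 2**n evenly spaced tuning codes.
branches = [0.2e-12, 0.4e-12, 0.8e-12]                                     # C1..C3
print(parallel_bank_capacitance(1.0e-12, branches, [True, False, True]))   # 2.0e-12 F
print(series_bank_capacitance(1.0e-12, branches, [True, True, True]))      # 1.0e-12 F (all bypassed)
```

The switched resistor banks of FIG. 10A and FIG. 10B follow the same pattern with resistances in place of capacitances.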
FIG. 12 shows curves of how the output power (Pout) changes with the load phase in a case that the balanced radio frequency power amplifier is in a load impedance VSWR of 3:1, and the amplitude differences between the two radio frequency input signals inputted to the adjustable 90-degree power combiner are −1 dBc, 0 dBc, and +1 dBc. It can be seen from the figure that curve 210 shows that the output power changes with the load phase by approximately 1.1 dBc in a case that the amplitude difference between the two radio frequency input signals inputted to the adjustable 90-degree power combiner is 0 dBc, and the balanced radio frequency power amplifier is in the load impedance VSWR of 3:1. Curve 211 shows that the output power changes with the load phase by approximately 1.4 dBc in a case that the amplitude difference between the two radio frequency input signals inputted to the adjustable 90-degree power combiner is −1 dBc, and the balanced radio frequency power amplifier is in the load impedance VSWR of 3:1. Curve 212 shows that the output power changes with the load phase by approximately 1.6 dBc in a case that the amplitude difference between the two radio frequency input signals inputted to the adjustable 90-degree power combiner is +1 dBc, and the balanced radio frequency power amplifier is in the load impedance VSWR of 3:1. Therefore, when the adjustable 90-degree power combiner is designed, the output power changes minimally with the load phase in a case that the amplitude difference between the two radio frequency input signals inputted to the adjustable 90-degree power combiner is maintained at 0 dBc, and the balanced radio frequency power amplifier is in the load impedance VSWR of 3:1. Based on the foregoing, when the phase difference between the two radio frequency input signals inputted to the adjustable 90-degree power combiner unit 170 deviates further from 90 degrees, or the amplitude difference between them is larger, the output power changes more greatly with the load phase in a case that the radio frequency power amplifier is in the load impedance VSWR of 3:1. Therefore, within the frequency range of 2.3 GHz to 2.69 GHz, the phase difference between the two radio frequency input signals inputted to the adjustable 90-degree power combiner of the balanced radio frequency power amplifier should be maintained near 90 degrees, and the amplitude difference should be kept close to 0 dBc. As shown in FIG. 13, within the frequency range whose center frequency band is 2.4 GHz to 2.6 GHz, the phase difference between the two radio frequency input signals inputted to the adjustable 90-degree power combiner is 86 degrees (curve 213) to 90 degrees (curve 214), and the amplitude difference is +0.5 dBc to −0.5 dBc. Therefore, the balanced radio frequency power amplifier in the frequency band of Band1 can maintain good performance. As shown in FIG. 14, by changing the values of the variable capacitors and the variable resistors in the adjustable 90-degree power combiner, the center frequency band is moved to the frequency range of Band40 (2.3 GHz to 2.4 GHz), the phase difference between the two radio frequency input signals inputted to the adjustable 90-degree power combiner is 88 degrees (curve 213) to 90 degrees (curve 214), and the amplitude difference is +0.18 dBc to +0.23 dBc. Therefore, the balanced radio frequency power amplifier in the frequency band of Band40 can maintain good performance. 
As shown inFIG.15, by changing the values of the variable capacitors and the variable resistors in the adjustable 90-degree power combiner, the center frequency band is moved to the frequency range of Band41 (2.496 Ghz to 2.69 Ghz), the phase difference between the two radio frequency input signals inputted to the adjustable 90-degree power combiner is 87 degrees (curve213) to 89 degrees (curve214), and the amplitude difference is −0.21 dBc to −0.05 dBc. Therefore, the balanced radio frequency power amplifier in the frequency band of Band38 and the frequency band of Band41 can maintain good performance. As shown inFIG.16, the existing single-end structure radio frequency power amplifier includes a control unit100, a driver stage unit110, a power stage unit120, an output matching network unit150, and a switch module unit130. The control unit100is separately connected to the driver stage unit110, the power stage unit120, and the switch module unit130. An input end of the driver stage unit110is connected to a radio frequency signal input end, an output end of the driver stage unit110is connected to an input end of the power stage unit120, an output end of the power stage unit120is connected to an input end of the output matching network unit150, and an output end of the output matching network unit150is connected to an input end of the switch module unit130. A received radio frequency input signal is amplified by using the driver stage unit110, and after being transmitted to the power stage unit120to amplify, the received radio frequency input signal is transmitted to the output matching network unit150. The output matching network unit150participates in impedance transformation and suppression of harmonic energy in the radio frequency input signal, and transmits the radio frequency input signal to the switch module unit130, and the control unit100controls a switch state of the switch module unit130according to a frequency band requirement, so that the radio frequency input signal is inputted to a next-stage circuit through a specified radio frequency transmission path. The advantage of the single-end structure radio frequency power amplifier is the simple structure, but it is difficult to meet a maximum linear power requirement proposed by the HPUE. In addition, when a load of the radio frequency antenna of the communication terminal changes greatly, the maximum linear power of the single-end structure radio frequency power amplifier also changes greatly. The performance of the balanced radio frequency power amplifier is compared with that of the existing single-end structure radio frequency power amplifier by usingFIG.17andFIG.18below. It is well known that, situations of the radio frequency antenna of the communication terminal are quite complex. For example, in different holding manners of a mobile phone, the load phase of the radio frequency antenna changes greatly. The communication protocol has clear requirements for the maximum linear power of the communication terminal. For example, in the frequency band of Band1, and in a standard of a power class of 3 (PC3), the maximum output linear power of the antenna of the communication terminal is not less than 23 dBm. Therefore, the communication terminal needs to output the maximum linear power that meets the requirement on condition that the load phase of the radio frequency antenna changes greatly. 
FIG. 17 shows a curve of a gain comparison between the existing single-end structure radio frequency power amplifier and the balanced radio frequency power amplifier in a case that the balanced radio frequency power amplifier is in a load of 50 Ohm. Because the existing single-end structure radio frequency power amplifier shown in curve 201 only has a single power stage amplification unit, the maximum output linear power of the single-end structure radio frequency power amplifier is 34 dBm (2.51 watts). Because the balanced radio frequency power amplifier shown in curve 202 has two power stage amplification units, the maximum output linear power of the balanced radio frequency power amplifier is 37 dBm (5.01 watts), which is close to two times that of the existing single-end structure radio frequency power amplifier. Therefore, relative to the existing single-end structure radio frequency power amplifier, the balanced radio frequency power amplifier can better support the linear power requirement of the mobile HPUE function. As shown in FIG. 18, curve 203 shows a relationship between a maximum output linear power and a load phase of the radio frequency antenna in a case that the existing single-end structure radio frequency power amplifier is in a load impedance VSWR of 3:1. When the load phase of the radio frequency antenna is 80 degrees, the maximum linear power outputted by the existing single-end structure radio frequency power amplifier is 27.2 dBm, and when the load phase of the radio frequency antenna is 160 degrees, the maximum linear power outputted by the existing single-end structure radio frequency power amplifier is 36.4 dBm. Therefore, a load phase change of the radio frequency antenna causes a maximum linear power change of 9.2 dBm. When the load phase of the radio frequency antenna is 60 degrees, the maximum linear power outputted by the existing single-end structure radio frequency power amplifier is 32.9 dBm, and when the load phase of the radio frequency antenna is 100 degrees, the maximum linear power outputted by the existing single-end structure radio frequency power amplifier is 33.8 dBm. Therefore, a load impedance change of the radio frequency antenna causes a maximum linear power change of 0.9 dBm. Curve 204 shows a relationship between a maximum output linear power and a load phase of the radio frequency antenna in a case that the balanced radio frequency power amplifier is in a load impedance VSWR of 3:1. When the load of the radio frequency antenna changes, a maximum output linear power change of the balanced radio frequency power amplifier is far less than a maximum output linear power change of the existing single-end structure radio frequency power amplifier. Across the load phase changes of different radio frequency antennas, the minimum saturation power of the balanced radio frequency power amplifier is far greater than that of the existing single-end structure radio frequency power amplifier. As shown in FIG. 19, when the load of the radio frequency antenna changes, loads of the first power stage unit 120 and the second power stage unit 121 of the balanced radio frequency power amplifier also change. 
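The dBm figures quoted above can be checked with a short conversion; the VSWR 3:1 condition used throughout corresponds to a reflection coefficient magnitude of 0.5. These are standard definitions added here for context, not values taken from the patent beyond the quoted powers.

```python
def dbm_to_watts(p_dbm):
    """Convert a power in dBm to watts."""
    return 10.0 ** (p_dbm / 10.0) / 1000.0

def vswr_to_gamma(vswr):
    """Magnitude of the reflection coefficient for a given VSWR."""
    return (vswr - 1.0) / (vswr + 1.0)

print(dbm_to_watts(34.0))   # ~2.51 W (single-end structure amplifier, curve 201)
print(dbm_to_watts(37.0))   # ~5.01 W (balanced amplifier, curve 202 - about twice)
print(vswr_to_gamma(3.0))   # 0.5     (the VSWR 3:1 condition used in FIG. 18)
```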
Because the 90-degree power splitter unit divides the radio frequency input signal into two equal-amplitude radio frequency input signals that have a phase difference of 90 degrees, load change trends of the first power stage unit120and the second power stage unit121are opposite, causing that change trends of maximum output linear powers of the first power stage unit120and the second power stage unit121are opposite. Curve205is a curve of a maximum linear power outputted by the first power stage unit120of the balanced radio frequency power amplifier at the load phases of different radio frequency antennas. Curve206is a curve of a maximum linear power outputted by the second power stage unit121of the balanced radio frequency power amplifier at the load phases of different radio frequency antennas. When the load phase is between 0 degrees and 60 degrees, curve205rises, and curve206declines. When the load phase is between 60 degrees and 100 degrees, curve205is in a high power section, and curve206is in a low power section. When the load phase is between 100 degrees and 180 degrees, curve205declines, and curve206rises. Because change trends of curve205and curve206at the load phase between 0 degrees and 180 degrees are opposite, an overlay result of the maximum output linear powers of the first power stage unit120and the second power stage unit121does not change greatly, thereby implementing the characteristics that the balanced radio frequency power amplifier is not sensitive to the load of the radio frequency antenna. The balanced radio frequency power amplifier provided in the present invention divides, by using a 90-degree power splitter unit, a radio frequency input signal into two equal-amplitude signals that have a phase difference of 90 degrees, the two equal-amplitude radio frequency input signals are inputted to an adjustable 90-degree power combiner unit after being amplified, and a control unit controls values of adjustable capacitors and adjustable resistors in the adjustable 90-degree power combiner, so that when a phase difference of the two radio frequency input signals at different frequencies changes to 0 degrees or approximately 0 degrees, and an amplitude difference is approximately 0 degrees, the two radio frequency input signals are synthesized into one radio frequency input signal to be inputted to a next-stage circuit through a specified radio frequency transmission path. Therefore, the balanced radio frequency power amplifier not only improves a maximum linear power of an output, but also reduces the sensitivity to a load change of a radio frequency antenna, thereby implementing support on a mobile HPUE function. The balanced radio frequency power amplifier provided in the present invention may be applied to a power amplifier circuit modules of a plurality of modulation signals. The modulation signals include, but are not limited to Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Code Division Multiple Access (CDMA)2000, Long Term Evolution (LTE), and WiFi. The balanced radio frequency power amplifier may alternatively be applied to frequency bands of different standards, which are Band1, Band38, Band40, and Band41 currently, or may be applied to 5G frequency bands, for example, Band42 and Band43. The balanced radio frequency power amplifier provided in the present invention may alternatively be applied to an integrated circuit chip. 
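The load-insensitivity argument around FIG. 19 can be illustrated with a toy model. The numbers below are invented for illustration and are not the patent's measured curves 205 and 206; the model only assumes that the two power stages' output powers swing in opposite directions with load phase and that the combiner adds them without loss.

```python
import numpy as np

phases = np.linspace(0.0, 180.0, 7)           # load phase of the antenna, degrees

# Toy curves with invented numbers: because of the splitter's 90-degree offset the
# two power stages see opposite load-pull trends, modeled here as mirror images
# around a 33 dBm mid value.
p1_dbm = 33.0 + 3.0 * np.sin(np.deg2rad(2.0 * phases))   # first power stage 120
p2_dbm = 33.0 - 3.0 * np.sin(np.deg2rad(2.0 * phases))   # second power stage 121

# Lossless combining assumed: add the branch powers in watts, then back to dBm.
total_w = 10.0 ** (p1_dbm / 10.0) / 1e3 + 10.0 ** (p2_dbm / 10.0) / 1e3
total_dbm = 10.0 * np.log10(total_w * 1e3)

print(p1_dbm.round(1))     # one branch rises while ...
print(p2_dbm.round(1))     # ... the other falls, by several dB each
print(total_dbm.round(2))  # the combined power varies by well under 1 dB
```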
For a specific structure of the balanced radio frequency power amplifier in the integrated circuit chip, details are not described one by one herein again. In addition, the balanced radio frequency power amplifier may alternatively be applied to a communication terminal, and is used as an important component of the radio frequency integrated circuit. The communication terminal described herein is a computer device that may be used in a mobile environment, and supports a plurality of communication standards such as Global system for mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Time Division Long Term Evolution (TDD-LTE), and Frequency Division Duplexing-Long Term Evolution (FDD-LTE). The computer device includes a mobile phone, a notebook computer, a tablet computer, a vehicle-mounted computer, or the like. In addition, the technical solution provided in the present invention is also applicable to application scenarios of other radio frequency integrated circuits, for example, a communication base station. The balanced radio frequency power amplifier, the chip, and the communication terminal provided in the present invention are described above in detail. Any obvious modification made by a person of ordinary skill in the art falls within the protection scope of the patent of the present invention without departing from the essence of the present invention. | 42,915 |
11863135 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS In order to clearly describe the technical content of the present disclosure, it is further described below with reference to specific embodiments. Before describing the embodiments of the present disclosure in detail, it should be noted that terms such as “first”, “second” and the like are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. The terms “includes,” “including,” or any other variation thereof are intended to encompass a non-exclusive inclusion, such that a process, method, article, or apparatus including a list of elements includes not only those elements, but also other elements not expressly listed or inherent to such process, method, article, or apparatus. Referring to FIG. 2, the present disclosure provides a Class D power amplification modulation system for self-adaptive adjustment of an audio signal. The Class D power amplification modulation system includes: an amplification circuit module, connected to the audio signal, for amplifying the audio signal; a pulse width modulation (PWM) circuit module, connected to the amplification circuit module, for performing PWM processing on an amplified audio signal generated by the amplification circuit module to generate a PWM signal; a drive circuit module, connected to the PWM circuit module, for performing system drive processing on the PWM signal to feed back to the audio signal; a frequency detection circuit module, connected to the audio signal, for performing frequency detection processing on the audio signal; a carrier generator module, connected to the frequency detection circuit module and the PWM circuit module, for loading a signal generated after the frequency detection processing onto a carrier modulation signal to transmit the carrier modulation signal and the signal generated after the frequency detection processing to the PWM circuit module; an amplitude detection circuit module, connected to the audio signal, for performing amplitude detection processing on the audio signal; and a direct current (DC) potential adjustment module, connected to the amplitude detection circuit module, for performing DC potential analysis processing on a signal generated after the amplitude detection processing to generate a DC potential of the audio signal. In an embodiment, the amplitude detection circuit module includes: an amplitude detection analog-to-digital converter (ADC) unit, for detecting an amplitude of the audio signal and generating a digital signal corresponding to the amplitude of the audio signal; and an amplitude detection digital-to-analog converter (DAC) unit, for performing analysis processing on the digital signal to generate an analog signal corresponding to the digital signal. In an embodiment, a sampling frequency of the amplitude detection ADC unit is 1 MHz. The present disclosure provides a method for realizing self-adaptive adjustment and modulation of an audio signal of a Class D power amplifier based on the Class D power amplification modulation system described above. 
The method includes: step (1), inputting the audio signal to the amplification circuit module, and performing amplification processing, by the amplification circuit module, on the audio signal; step (2), inputting the audio signal to the frequency detection circuit module, and performing frequency detection processing, by the frequency detection circuit module, on the audio signal; step (3), inputting the audio signal to the amplitude detection circuit module, and performing amplitude detection processing, by the amplitude detection circuit module, on the audio signal; step (4), transmitting, a signal generated after the frequency detection processing to the carrier generator module, loading, the signal generated after the frequency detection processing onto the carrier modulation signal, and transmitting the signal generated after the frequency detection processing and the carrier modulation signal to the PWM circuit module; step (5), performing, by the PWM circuit module, PWM processing on a signal generated after the amplification processing and a signal generated after the frequency detection processing to generate the PWM signal, and performing, by a drive circuit module, system drive processing on the PWM signal to feed back to the audio signal; and step (6), transmitting a signal generated after the amplitude detection processing to the DC potential adjustment module, performing, by the DC potential adjustment module, DC potential analysis on the signal generated after the amplitude detection processing to generate the DC potential of the audio signal. In an embodiment, the amplitude detection circuit module includes an amplitude detection ADC unit and an amplitude detection DAC unit. Step (3) includes: detecting, by the amplitude detection ADC unit, an amplitude of the audio signal and generating a digital signal corresponding to the amplitude of the audio signal; and receiving and analyzing, by the amplitude detection DAC unit, the digital signal to generate an analog signal corresponding to the digital signal. The present disclosure provides a method for realizing self-adaptive adjustment and modulation of an audio signal of a Class D power amplifier. The method includes: step (A), performing amplification processing, frequency detection processing and amplitude detection processing on the audio signal respectively; step (B), loading a signal generated after the frequency detection processing onto a carrier modulation signal, performing PWM processing on the signal generated after the frequency detection processing and the carrier modulation signal and a signal generated after the amplification processing, performing system drive processing on a signal generated after the PWM processing, and combining a signal generated after the system drive processing and the audio signal; and step (C), performing DC potential adjustment processing on a signal generated after the amplitude detection processing to generate a DC potential of the audio signal. In an embodiment, the amplitude detection processing includes: detecting an amplitude of the audio signal to generate a digital signal corresponding to the amplitude of the audio signal, and performing analysis processing on the digital signal to generate an analog signal corresponding to the digital signal. The present disclosure provides a device for self-adaptive adjustment and modulation of an audio signal of a Class D amplifier. 
The device includes: a processor, configured to execute computer-executable instructions; and a memory, storing one or more of the computer-executable instructions. When the computer-executable instructions are executed by the processor, the memory implements steps of the method for self-adaptive adjustment and modulation of the audio signal of the Class D power amplifier as mentioned above. The present disclosure provides a processor for realizing self-adaptive adjustment and modulation of an audio signal of a Class D power amplifier. The processor is configured to execute computer-executable instructions, when the computer-executable instructions are executed by the processor, steps of the method for realizing self-adaptive adjustment and modulation of the audio signal of the Class D power amplifier as mentioned above. The present disclosure provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the steps of the method for realizing self-adaptive adjustment and modulation of the audio signal of the Class D power amplifier as mentioned above are implemented. In an embodiment, the amplification circuit module performs amplification processing on the audio signal, and the frequency detection circuit module detects the frequency of the audio signal. The frequency of the carrier wave generated by the carrier generator module is adjusted according to the frequency of the audio signal. The higher the frequency of the audio signal is, the higher the frequency of the carrier wave generated by the carrier generator module is. The audio signal processed respectively by the amplification circuit module and the frequency detection circuit module is sent to the PWM circuit module for modulation of the duty cycle of the PWM signal, and the signal processed by the PWM circuit module is sent to the drive circuit module for driving, and the signal processed by the drive circuit module is combined with the original audio signal. Meanwhile, the amplitude detection circuit module detects the amplitude of the audio signal. The amplitude detection circuit module includes: the amplitude detection ADC unit, for detecting the amplitude of the audio signal and generating the digital signal corresponding to the amplitude of the audio signal; and the amplitude detection DAC unit, for performing analysis processing on the digital signal to generate the analog signal corresponding to the digital signal, and sending the analog signal to the DC potential adjustment module to generate the DC potential of the audio signal. The amplitude of the audio signal is proportional to a conduction time of the drive transistor in the drive circuit module. That is, the greater the amplitude of the audio signal is, the longer the conduction time of the drive transistor is. Meanwhile, the frequency of the carrier wave of the carrier generator module is the same as that of the drive transistor, that is, the higher the frequency of the carrier wave is, the faster the switching frequency of the driver transistor is. The method for realizing self-adaptive adjustment and modulation of the audio signal of the Class D power amplifier of the present disclosure improves the characteristics of the circuit in the signal time domain and the frequency, minimizes power consumption of the signals in different amplitudes and frequencies, and improves EMI performance, or balances the power consumption and EMI characteristics. 
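A minimal sketch of the amplitude-detection path follows. Only the 1 MHz sampling rate comes from the disclosure; the 8-bit resolution, the 1 V full scale, the 1 ms peak-detection window, and the function names are assumptions made for this illustration. The ADC unit quantizes the audio amplitude into a digital code, and the DAC unit turns that code back into the analog level VO that acts as the DC potential.

```python
import numpy as np

VREF = 1.0        # assumed full-scale voltage of the detection path
N_BITS = 8        # assumed converter resolution

def amplitude_adc(audio, fs=1_000_000, window_s=0.001):
    """Amplitude detection ADC unit (illustrative): sample at 1 MHz, take the
    peak of the most recent window, and quantize it to an N_BITS code."""
    n = max(1, int(fs * window_s))
    peak = float(np.max(np.abs(audio[-n:])))
    return int(round(min(peak, VREF) / VREF * (2 ** N_BITS - 1)))

def amplitude_dac(code):
    """Amplitude detection DAC unit (illustrative): turn the code back into the
    analog level VO that acts as the DC potential of the audio signal."""
    return code / (2 ** N_BITS - 1) * VREF

fs = 1_000_000                                     # 1 MHz, far above the 20 kHz audio band
t = np.arange(0, 0.001, 1.0 / fs)
audio = 0.25 * np.sin(2.0 * np.pi * 5_000.0 * t)   # small 5 kHz tone

code = amplitude_adc(audio, fs)
print(code, amplitude_dac(code))   # small amplitude -> small code -> small VO
```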
In an embodiment, amplification processing, frequency detection processing and amplitude detection processing are performed on the audio signal respectively. The frequency of the carrier wave is adjusted according to the frequency of the audio signal. The higher the frequency of the audio signal is, the higher the frequency of the carrier wave is. The modulation of the duty cycle of the PWM signal is performed on the signal generated after the amplification processing and the signal generated after the frequency detection processing. Meanwhile, the amplitude of the audio signal is detected, a digital signal corresponding to the amplitude of the audio signal is generated, the digital signal is modulated into the analogy signal corresponding to the digital signal, and the DC potential adjustment processing is performed on the analogy signal to generate the DC potential of the audio signal. By adjusting the DC potential of the audio signal in real time, the duty cycle of the PWM signal varies with the DC potential of the audio signal. Meanwhile, when the DC potential of the audio signal is changed, the frequency of the carrier wave will be detected and adjusted, so that the power consumption and EMI of the circuit can be improved at the same time. In an embodiment, referring toFIG.3, the principle of the method for realizing self-adaptive adjustment and modulation of the audio signal of the Class D power amplifier is as follows: the amplitude detection ADC unit of the amplitude detection circuit module detects the amplitude of the analogy audio signal. The sampling frequency of the amplitude detection ADC unit is 1 MHz, which is much greater than the highest frequency (e.g., 20 KHz) of the audio signal. The sampling frequency should ensure the integrity of the collected data. The digital signal (including least significant bit (LSB) and most significant bit (MSB)) is obtained through detecting the amplitude of the audio signal. The amplitude detection DAC unit performs analysis processing on the digital signal to generate the analogy signal VO, and the analog signal VO acts as the DC potential of the audio signal. The smaller the amplitude of the audio signal is, the smaller the DC potential VO is. The duty cycle of the PWM signal obtained after the audio signal is processed by the amplification circuit and the PWM circuit is changed. That is, the smaller the DC potential of the audio signals with the same amplitude and frequency is, the smaller the duty cycle of the PWM signal is, so that the conduction time of the drive transistor is short and the efficiency of the circuit is improved. The frequency detection circuit module detects the frequency of the audio signal, and the frequency of the carrier wave of the carrier generator module is adjusted according to the frequency of the audio signal. The higher the frequency of the audio signal is, the greater the frequency of the carrier wave of the carrier generator module is. By using ADC/DAC to detect the amplitude of the input signal, and the amplitude of the input signal (the audio signal) is analyzed into an analog signal to act as the DC potential of the input signal. The smaller the amplitude of the input signal is, the smaller the DC potential is. When the amplitude of the signal is detected, the frequency of the carrier wave signal is adjusted according to the frequency of the input signal. The greater the frequency of the input signal is, the greater the frequency of the carrier wave is. 
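The overall adaptation can be sketched in a few lines of Python. This is not the disclosed circuit: the carrier-frequency scaling rule, the mapping from the DC potential to the comparator reference, and all numeric values are assumptions chosen only to show the qualitative behavior, namely that a smaller detected amplitude narrows the PWM duty cycle and a higher detected audio frequency raises the carrier frequency.

```python
import numpy as np

def triangle(t, f_carrier):
    """Unit triangular carrier in the range 0..1 with frequency f_carrier."""
    x = t * f_carrier
    return 2.0 * np.abs(x - np.floor(x + 0.5))

def adaptive_pwm(audio, fs, f_audio_est, v_dc):
    """Illustrative adaptive modulator (not the disclosed circuit).

    f_audio_est : audio frequency reported by the frequency detection module
    v_dc        : DC potential produced by the amplitude detection path
    The carrier frequency grows with the audio frequency, and the comparator
    reference is confined to the range 0..v_dc, so a small detected amplitude
    yields a small duty cycle (short conduction of the drive transistor).
    """
    f_carrier = 300_000.0 + 20.0 * f_audio_est            # assumed scaling rule
    t = np.arange(len(audio)) / fs
    carrier = triangle(t, f_carrier)
    norm = max(np.max(np.abs(audio)), 1e-12)
    reference = v_dc / 2.0 + (audio / norm) * (v_dc / 2.0)  # 0 .. v_dc
    return (reference > carrier).astype(int), f_carrier

fs = 10_000_000
t = np.arange(0, 0.001, 1.0 / fs)
audio = 0.2 * np.sin(2.0 * np.pi * 3_000.0 * t)           # small, low-frequency tone

pwm, f_c = adaptive_pwm(audio, fs, f_audio_est=3_000.0, v_dc=0.3)
print(f_c)          # 360 kHz: a low audio frequency keeps the carrier slow
print(pwm.mean())   # ~0.15 : a small DC potential keeps the duty cycle small
```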
It will be understood that the same or similar parts of the above embodiments may refer to each other, and what is not described in detail in some embodiments can be seen as the same or similar in other embodiments. It should be noted that in the description of the present disclosure, the terms “first”, “second”, etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present disclosure, unless otherwise stated, “plurality” is meant to refer to at least two. It should be understood that the various parts of the present disclosure may be implemented with hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in memory and executed by a suitable instruction execution device. For example, if implemented in hardware, as in another embodiment, any one of the following techniques or a combination thereof known in the art may be used: discrete logic circuits with logic gate circuits for implementing logic functions on data signals, specialized integrated circuits with suitable combinations of logic gate circuits, Programmable Gate Arrays (PGAs), Field Programmable gate Arrays (FPGAs), etc. Those skilled in the art can understand that all or part of the steps carried out to implement the method of the above embodiments can be accomplished by instructing the associated hardware through a program, and the program can be stored in a computer-readable storage medium. When executed, the program includes one or a combination of the steps of a method embodiment. In addition, each functional unit in each embodiment of the present disclosure may be integrated in a processing module, or each unit may be physically present alone, or two or more units may be integrated in a single module. The above integrated module can be implemented either in the form of hardware or in the form of a software functional module. The integrated module can also be stored in a computer-readable storage medium if it is implemented as a software function module and sold or used as a stand-alone product. The storage medium mentioned above can be Read-Only Memory, disks or CD-ROMs, etc. In the description of this specification, reference to the terms “an embodiment,” “some embodiments,” “an example,” “specific examples,” or “some examples” means that specific features, structures, materials, or characteristics described in connection with the embodiments or examples are included in at least one embodiment or example of the present disclosure. In this specification, the schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the specific features, structures, materials, or characteristics described may be combined in any one or more embodiments or examples in a suitable manner. Although embodiments of the present disclosure have been shown and described above, it is understood that the above embodiments are exemplary and are not to be construed as limiting the present disclosure, and that variations, modifications, replacements and variants of the above embodiments may be made by those of ordinary skill in the art within the scope of the present invention. 
The present disclosure provides the Class D power amplification modulation system and the method for self-adaptive adjustment of the audio signal, the method, the device, and the processor for realizing self-adaptive adjustment of the audio signal of the Class D power amplifier, and the non-transitory computer-readable storage medium. The frequency detection circuit module detects the frequency of the audio signal, and the frequency of the carrier wave generated by the carrier generator module is adjusted according to the frequency of the audio signal. The higher the frequency of the audio signal, the higher the frequency of the carrier wave generated by the carrier generator module. By performing control according to both the amplitude and the frequency of the audio signal, the following beneficial effects are achieved:
1. When the amplitude of the audio signal is small and the frequency of the audio signal is low, the duty cycle is reduced and the frequency of the carrier wave is reduced, which can achieve higher efficiency and good EMI characteristics;
2. When the amplitude of the audio signal is large and the frequency of the audio signal is high, the duty cycle of the PWM signal is increased and the frequency of the carrier wave is raised, so that the high-frequency performance of the circuit is improved;
3. When the amplitude of the audio signal is small and the frequency of the audio signal is high, the frequency of the carrier wave is increased while the duty cycle is reduced, which compensates for the additional power consumption of the high-frequency carrier wave and improves the high-frequency performance for the small-amplitude audio signal;
4. When the amplitude of the audio signal is large and the frequency of the audio signal is low, the frequency of the carrier (PWM) wave is reduced, thereby increasing the efficiency and improving the EMI characteristics.
At the same time, compared with the prior art where a high threshold comparator and a low threshold comparator are set for comparing the amplified audio signal, the technical solution of the present disclosure adjusts the DC potential of the signal in real time, the duty cycle of the PWM signal varies with the DC potential of the audio signal, and the efficiency of the circuit is improved. In addition, when the amplitude of the audio signal is detected, the frequency of the carrier wave signal is adjusted according to the frequency of the signal. Compared with adjusting the frequency of the signal only, the power consumption and EMI of the circuit of the present disclosure can be improved at the same time. In this specification, the present disclosure has been described with reference to particular embodiments thereof. However, it is clear that various modifications and transformations can still be made without departing from the spirit and scope of the present disclosure. Accordingly, the specification and accompanying drawings should be considered as illustrative rather than limiting. | 19,934
11863136 | DETAILED DESCRIPTION The following disclosure provides many different embodiments or examples for implementing different features of the provided subject matter. Specific examples of components and arrangements are described below. Certainly, these descriptions are merely examples and are not intended to be limiting. In the present disclosure, in the following descriptions, the description of the first feature being formed on or above the second feature may include an embodiment formed by direct contact between the first feature and the second feature, and may further include an embodiment in which an additional feature may be formed between the first feature and the second feature to enable the first feature and the second feature to not be in direct contact. In addition, in the present disclosure, reference numerals and/or letters may be repeated in examples. This repetition is for the purpose of simplification and clarity, and does not indicate a relationship between the described various embodiments and/or configurations. The embodiments of the present disclosure are described in detail below. However, it should be understood that many applicable concepts provided by the present disclosure may be implemented in a plurality of specific environments. The described specific embodiments are only illustrative and do not limit the scope of the present disclosure. FIG.1is a block diagram of a wireless charging system100according to some embodiments of the present disclosure. The wireless charging system100includes a power transmitting unit110and a power receiving unit120. The power transmitting unit110may include a wireless charge pad. The power receiving unit120may be disposed in a mobile phone, a smart watch, an electric car, or other electronic device(s). The power transmitting unit110may include a power supply111, a signaling module112, a power amplifier113, a matching circuit114, and transmitting resonator115. The power supply111may be coupled with the power amplifier113to provide power. The power amplifier113may be coupled with the matching circuit114to provide power. The matching circuit114may be coupled with the transmitting resonator115to provide power. The signaling module112may be in communication with the power supply111and the power amplifier113to send and receive signals. The power supply111may provide AC power for charging. The power supply111may provide AC power to the AC (alternating current) power amplifier113. The power amplifier113may convert the AC power from the power supply into relatively high frequency power, e.g., HF (high frequency) power. HF power may indicate the power having frequencies ranged from approximately tens of kilohertz to approximately several megahertz. For example, HF power may indicate the power having frequency of approximately 6.8 MHz or approximately 13.6 MHz. The signaling module112may receive signals indicating the configurations of the power supply111and the power amplifier113. The signaling module112may send signals to modify the configurations of the power supply111and power amplifier113. The HF power may be provided to the matching circuit114. The matching circuit114may be used for impedance matching. After impedance matching, a minimum amount of HF power would be reflected backward and the power efficiency would be increased. The HF power may then be provided to a transmitting resonator115. The transmitting resonator115may be a coil. The HF power may be transmitted through electromagnetic induction. 
The power receiving unit120may include a load121, a signaling module122, a DC (direct current)/DC converter123, an HF/DC rectifier124, and a receiving resonator125. The receiving resonator125may receive power through electromagnetic induction. The HF/DC rectifier124may be coupled with the receiving resonator125to receive power. The DC/DC converter123may be coupled with the HF/DC rectifier124to receive power. The load121may be coupled with the DC/DC converter123to receive power. The signaling module122may be in communication with the load121and the DC/DC converter123to send and receive signals. The receiving resonator125may be a coil matching with a transmitting resonator (e.g., the transmitting resonator115of the power transmitting unit110). The receiving resonator125receives the power from the transmitting resonator115(e.g., through magnetic induction or magnetic resonance). The resonating coupling between the transmitting resonator115and the receiving resonator125may be at about 6.78 MHz or another frequency. The power received by the receiving resonator125may be provided to the HF/DC rectifier124. The HF/DC rectifier124may rectify the received AC power and provide the rectified DC power. The rectified DC power may be provided to a DC/DC converter123. The DC/DC converter123may convert the DC power into a suitable voltage for charging the load121. The signaling module122may receive signals indicating the configurations of the load121and the DC/DC converter123. The signaling module122may send signals to modify the configurations of the load121and the DC/DC converter123. Signaling modules112and122may be in communication with each other. The signaling modules112and122may be in communication through an unlicensed band communication (e.g., a 2.4 GHz communication technology, a 5 GHz communication technology, Bluetooth Low Energy technology, or an LTE-Unlicensed communication technology) or a licensed band communication (e.g., Narrowband Internet of Things technology, LTE-M technology, or 5G communication technology). The signaling module112may be coupled to and in communication with the power supply111and the power amplifier113. The signaling module112may configure the power amplifier113based on the configuration of the power supply111. The signaling module112may send information about the power supply111and the power amplifier113to the signaling module122. The signaling module112may receive information about the load121and the DC/DC converter123from the signaling module122. The signaling module112may configure the power supply111and the power amplifier113based on the information received from the signaling module122. The signaling module122may be coupled to and in communication with the load121and the DC/DC converter123. The signaling module122may configure the DC/DC converter123based on the configuration of the load121. The signaling module122may send information about the load121and the DC/DC converter123to the signaling module112. The signaling module122may receive information about the power supply111and the power amplifier113from the signaling module112. The signaling module122may configure the load121and the DC/DC converter123based on the information received from the signaling module112.
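As an illustration of the configuration exchange described above, the following sketch models the two signaling modules as simple objects that report and consume status messages. The message fields, class names and adjustment rule are illustrative assumptions and are not defined by the present disclosure.

    # Illustrative sketch only: the exchange of configuration information between
    # the transmit-side and receive-side signaling modules. Field names, classes
    # and the adjustment rule are assumptions made for this example.

    from dataclasses import dataclass

    @dataclass
    class TxStatus:
        supply_voltage_v: float
        amplifier_power_w: float

    @dataclass
    class RxStatus:
        load_voltage_v: float
        requested_power_w: float

    class TxSignalingModule:
        def __init__(self, amplifier_power_w):
            self.amplifier_power_w = amplifier_power_w

        def report(self, supply_voltage_v):
            # Information about the power supply and the power amplifier.
            return TxStatus(supply_voltage_v, self.amplifier_power_w)

        def on_rx_status(self, rx):
            # Configure the power amplifier based on what the receiver reports.
            self.amplifier_power_w = rx.requested_power_w

    class RxSignalingModule:
        def report(self, load_voltage_v, requested_power_w):
            # Information about the load and the DC/DC converter.
            return RxStatus(load_voltage_v, requested_power_w)

    tx, rx = TxSignalingModule(amplifier_power_w=5.0), RxSignalingModule()
    tx.on_rx_status(rx.report(load_voltage_v=4.2, requested_power_w=7.5))
    print(tx.report(supply_voltage_v=12.0))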
FIG.2is a schematic circuit diagram of an amplifier400according to some embodiments of the present disclosure. The amplifier400may be used as the power amplifier113shown inFIG.1. The amplifier400includes transistors Q1, Q2, Q3and Q4. Each of the transistors Q1, Q2, Q3, and Q4of the amplifier400may include a NMOS, a PMOS, or an HEMT (high electron mobility transistor). The transistors Q1-Q4may be formed of or include a direct bandgap material, such as an III-V compound, which includes, but is not limited to, for example, GaAs, InP, GaN, InGaAs and AlGaAs. The transistors Q1-Q4may be GaN-based transistors. The transistors Q1-Q4may include a high-electron-mobility transistor (HEMT). The transistors Q1-Q4may be power devices (e.g., power transistors) or a part of a power device. For example, the transistors Q1-Q4may be configured to conduct a relatively large amount of current (e.g., hundreds of milliamps or more) compared with general transistors. For example, the transistors Q1-Q4may have a relatively large breakdown voltage (e.g., hundreds of volts or more) compared with a general transistor. The drain of the transistor Q1may be coupled with the source of transistor Q3. The drain of the transistor Q1may be coupled with the capacitor C1. The drain of the transistor Q1may be coupled with the cathode of the capacitor C1. The source of the transistor Q1may be coupled with the drain of the transistor Q2. The drain of the transistor Q2may be coupled with the source of the transistor Q1. The source of the transistor Q2may be coupled with the anode of diode D2. The source of the transistor Q2may be coupled with the drain of transistor Q4. The source of the transistor Q2may be coupled with the capacitor C1. The drain of the transistor Q3may be coupled with the input voltage. The drain of the transistor Q3may be coupled with the capacitor C2. The source of the transistor Q3may be coupled with the drain of the transistor Q1. The source of the transistor Q3may be coupled with the capacitor C1. The source of the transistor Q3may be coupled with the cathode of the diode D1. The drain of the transistor Q4may be coupled with the source of the transistor Q2. The drain of the transistor Q4may be coupled with the capacitor C1. The drain of the transistor Q4may be coupled with the anode of the diode D2. The source of the transistor Q4may be coupled with the ground G. The source of the transistor Q4may be coupled with the capacitor C3. The transistors Q1and Q2may be coupled with each other in serial. The transistors Q3and Q1may be coupled in serial. The transistors Q2and Q4may be coupled in serial. The transistors Q3, Q1, Q2and Q4may be coupled in serial. The amplifier400includes capacitors C1, C2, and C3. The amplifier includes diodes D1and D2. The amplifier400includes input voltage Vin and ground G. The amplifier400includes nodes A and B. The node A is between the transistors Q1and Q2. The node A may be between the source of the transistor Q1and the drain of the transistor Q2. The node B is between the diodes D1and D2. The node B is also between the capacitors C2and C3. The cathode of the diode D1may be coupled with the drain of the transistor Q1. The cathode of the diode D1may be coupled with the capacitor C1. The cathode of the diode D1may be coupled with the source of the transistor Q3. The anode of the diode D1may be coupled with the cathode of the diode D2. The anode of the diode D1may be coupled with the capacitor C2. The anode of the diode D1may be coupled with the capacitor C3. 
The cathode of the diode D2may be coupled with the capacitor C2. The cathode of the diode D2may be coupled with the capacitor C3. The anode of the diode D2may be coupled with the capacitor C1. The anode of the diode D2may be coupled with the source of the transistor Q2. The anode of the diode D2may be coupled with the drain of the transistor Q4. The capacitor C2may be coupled between the drain of the transistor Q3and the anode of the diode D1. The capacitor C2may be coupled between the input voltage Vin and the anode of the diode D1. The capacitor C3may be coupled between the cathode of the diode D2and the source of the transistor Q4. The capacitor C3may be coupled between the cathode of the diode D2and the ground G. The capacitor C3may be coupled to the diode D2in parallel. InFIG.2, the capacitors C2and C3may be used as voltage-dividing capacitors. The input voltage Vin is divided by the capacitors C2and C3. The input voltage Vin is equally divided by the capacitors C2and C3, and the voltage across each of the capacitors C2and C3may be half of the input voltage (i.e., Vin/2). The voltage of each of the capacitors C2and C3may be smaller than half of the input voltage (i.e., Vin/2). The transistors Q1and Q2may be complementarily conducted. The transistors Q1and Q2may be alternately conducted. The transistors Q3and Q4may be complementarily conducted. The transistors Q3and Q4may be alternately conducted. The diodes D1and D2may be used as clamping diodes. The diodes D1and D2are clamping diodes and ensure that the voltage stress (i.e., the maximum voltage VDS) over each of the transistors Q1, Q2, Q3, and Q4is not greater than half of the input voltage (i.e., Vin/2), so as to prevent the transistors Q1, Q2, Q3, and Q4from burning out. The capacitor C1is a flying capacitor. When the amplifier400is operated at a steady state, the voltage of the capacitor C1may be half of the input voltage (i.e., Vin/2). When the transistors Q1, Q2, Q3, and Q4are switching, the switching processes of the transistors Q1and Q2may be decoupled from the switching processes of the transistors Q3and Q4by the capacitor C1. When the transistors Q1and Q2are switching, the junction capacitance of the transistors Q3and Q4is not involved. When the transistors Q3and Q4are switching, the junction capacitance of the transistors Q1and Q2is not involved. During operations of the amplifier400, the maximum voltage VDSmay be equal to half of the input voltage (i.e., Vin/2). During operations of the amplifier400, the maximum voltage VDSmay be smaller than half of the input voltage (i.e., Vin/2). During operations of the amplifier400, the voltage VDSmay be smaller than half of the input voltage (i.e., Vin/2). In a wireless charging system for electric cars, the input voltage Vin may be approximately 250 volts, and the maximum voltage VDSof the transistors Q1, Q2, Q3, and Q4may be approximately 125 volts. Therefore, while designing a wireless charging system for electric cars, the breakdown threshold of voltage VDSof the transistor used in the amplifier400may be relatively low. For example, the breakdown threshold of voltage VDSof the transistor used in the amplifier400may be approximately half of the input voltage. If a transistor has a high breakdown threshold of voltage VDS, the transistor may cost more or may have a larger volume. If an amplifier can use a transistor having a relatively lower breakdown threshold of voltage VDS, the cost or volume of the amplifier may be decreased.
On the other hand, since the breakdown threshold of voltage VDSis not critical, the range of input voltage of the amplifier is relatively broad. FIG.3illustrates a waveform of voltage of the amplifier400according to some embodiments of the present disclosure. The X-axis of the waveform inFIG.3represents time. The Y-axis of the waveform inFIG.3represents voltage.FIG.3illustrates the waveform of voltage fed to the transistor Q1. In particular,FIG.3illustrates the waveform of voltage fed to the gate of the transistor Q1. FIG.4illustrates a waveform of voltage of the amplifier400according to some embodiments of the present disclosure. The X-axis of the waveform inFIG.4represents time. The Y-axis of the waveform inFIG.4represents voltage.FIG.4illustrates the waveform of voltage fed to the transistor Q2. In particular,FIG.4illustrates the waveform of voltage fed to the gate of the transistor Q2. There may be a phase difference of approximately 180 degrees between the waveforms inFIGS.3and4. When the waveforms inFIGS.3and4are fed to the gates of transistors Q1and Q2, respectively, the transistors Q1and Q2are conducted alternately or complementarily. FIG.5illustrates a waveform of voltage of the amplifier400according to some embodiments of the present disclosure. The X-axis of the waveform inFIG.5represents time. The Y-axis of the waveform inFIG.5represents voltage.FIG.5illustrates the waveform of voltage fed to the transistor Q3. In particular,FIG.5illustrates the waveform of voltage fed to the gate of the transistor Q3. FIG.6illustrates a waveform of voltage of the amplifier400according to some embodiments of the present disclosure. The X-axis of the waveform inFIG.6represents time. The Y-axis of the waveform inFIG.6represents voltage.FIG.6illustrates the waveform of voltage fed to the transistor Q4. In particular,FIG.6illustrates the waveform of voltage fed to the gate of the transistor Q4. There may be a phase difference of approximately 180 degrees between the waveforms inFIGS.5and6. When the waveforms inFIGS.5and6are fed to the gates of transistors Q3and Q4, respectively, the transistors Q3and Q4are conducted alternately or complementarily. FIG.7illustrates a waveform of the amplifier400according to some embodiments of the present disclosure. The X-axis of the waveform inFIG.7represents time. The Y-axis of the waveform inFIG.7represents voltage.FIG.7illustrates the waveform of voltage (or the waveform of electric potential) across nodes A and B shown inFIG.2. As shown inFIG.7, voltage VABmay include positive voltage and negative voltage. As shown inFIG.7, voltage VABmay include alternating positive portions and negative portions. The Y-axes inFIGS.3-7may be aligned. The waveform of voltage VABmay include segments a, b, c, and d. Corresponding to segment a, transistors Q2and Q3are conducted. When the transistor Q2is conducted, it may contribute negative voltage of voltage VAB. When the transistor Q3is conducted, it may contribute positive voltage of voltage VAB. As a result of transistors Q2and Q3being conducted, the corresponding segment a of voltage VABmay be zero. Corresponding to segment b, transistors Q1and Q3are conducted. When the transistor Q1is conducted, it may contribute positive voltage of voltage VAB. When the transistor Q3is conducted, it may contribute positive voltage of voltage VAB. As a result of transistors Q1and Q3being conducted, the corresponding segment b of voltage VABis positive. Corresponding to segment c, transistors Q1and Q4are conducted.
When the transistor Q1is conducted, it may contribute positive voltage of voltage VAB. When the transistor Q4is conducted, it may contribute negative voltage of voltage VAB. As a result of transistors Q1and Q4being conducted, the corresponding segment c of voltage VABmay be zero. Corresponding to segment d, transistors Q2and Q4are conducted. When the transistor Q2is conducted, it may contribute negative voltage of voltage VAB. When the transistor Q4is conducted, it may contribute negative voltage of voltage VAB. As a result of transistors Q2and Q4being conducted, the corresponding segment d of voltage VABmay be negative. Due to the phase difference between the waveforms inFIGS.3and5, the transistor Q3may be conducted before the transistor Q1. The transistor Q1may be conducted after the transistor Q3. Due to the phase difference between waveforms inFIGS.4and6, the transistor Q4may be conducted before the transistor Q2. The transistor Q2may be conducted after the transistor Q4. The transistors Q3and Q4may be the leading transistors. The transistors Q1and Q2may be the lagging transistors. There may be an offset Φ between the waveforms fed to the transistors Q1and Q3(i.e., waveforms inFIGS.3and5). There may be an offset Φ between the conductions of the transistors Q1and Q3. There may be a phase difference Φ between the conduction angles of the transistors Q1and Q3. The phase offset angle Φ may be between the conductions of the transistors Q1and Q3. There may be an offset Φ between the waveforms fed to the transistors Q2and Q4(i.e., waveforms inFIGS.4and6). There may be an offset Φ between the conductions of the transistors Q2and Q4. There may be a phase difference Φ between the conduction angles of the transistors Q2and Q4. The phase offset angle Φ may be between the conductions of the transistors Q2and Q4. The phase offset angle Φ may be smaller than approximately 90 degrees. The waveform of voltage VABinFIG.7may be adjusted by adjusting the phase offset angle Φ, and the transmitting current IOUT(as shown inFIG.8) and the transmitting power of the amplifier400may be adjusted accordingly. If the input voltage is varied within a wide range, the pulse width of the waveform of voltage VABmay be adjusted through adjusting the phase offset angle Φ, and the transmitting current IOUT(as shown inFIG.8) and the transmitting power may be controlled accordingly. The waveform inFIG.7is more similar to a sine wave due to the phase offset angle Φ. Thus, the waveform inFIG.7has less second harmonic than the waveform inFIG.5. FIG.8is a schematic circuit diagram of the amplifier400coupled to impedances Z1, Z2, Z3and a coil Lt according to some embodiments of the present disclosure. The amplifier400may be used as the power amplifier113shown inFIG.1. The combination of the impedances Z1, Z2, and Z3may be used as the matching circuit114shown inFIG.1. The coil Lt may be used as the transmitting resonator115shown inFIG.1. Current IZmay indicate the current flowing through the impedance Z1. Current IOUTmay indicate the current flowing through the coil Lt. The current IOUTmay determine the power transmitted from the coil Lt. The coil Lt may include a corresponding impedance Zt. An impedance circuit may include the impedances Z1, Z2, Z3. The combination of the impedance circuit and the coil Lt may be coupled with the nodes A and B of the amplifier400. The combination of the impedance circuit and the coil Lt may be coupled between the anode of the diode D1and the source of the transistor Q1.
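Before continuing with the coupling ofFIG.8, the switching states of segments a to d and the phase offset angle Φ described with reference toFIG.7can be illustrated by the following behavioral sketch, which reconstructs the three-level waveform of voltage VABfrom the four gate drives; the voltage level and the timing granularity are illustrative assumptions, not values from the disclosure.

    # Behavioral sketch (levels and timing are illustrative): reconstruct the
    # three-level waveform V_AB from the four gate drives. Q3/Q4 are the leading
    # pair, Q1/Q2 lag them by the phase offset angle phi; Q1 and Q3 on -> positive,
    # Q2 and Q4 on -> negative, the mixed states (segments a and c) -> zero.

    def gate_on(theta, on_from=0.0, on_to=180.0):
        """True while the normalized angle theta (degrees) falls in the on-window."""
        return on_from <= (theta % 360.0) < on_to

    def v_ab(theta, phi=45.0, level=1.0):
        q3 = gate_on(theta)              # leading high-side transistor
        q4 = not q3                      # complementary to Q3
        q1 = gate_on(theta - phi)        # lagging high-side transistor
        q2 = not q1                      # complementary to Q1
        if q1 and q3:
            return +level                # segment b
        if q2 and q4:
            return -level                # segment d
        return 0.0                       # segments a and c (Q2&Q3 or Q1&Q4)

    # One period sampled every 15 degrees shows the a, b, c, d sequence.
    print([v_ab(t, phi=45.0) for t in range(0, 360, 15)])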
The combination of the impedance circuit and the coil Lt may be coupled between the cathode of the diode D2and the drain of the transistor Q2. FIG.9illustrates an equivalent circuit of the circuit as described and illustrated with reference toFIG.8according to some embodiments of the present disclosure. Voltage VABinFIG.9may correspond to voltage VABinFIG.8. The impedance Z1inFIG.9may correspond to the impedance Z1inFIG.8. The impedance Z2inFIG.9may correspond to the impedance Z2inFIG.8. ZINmay indicate the equivalent input impedance with respect to voltage VAB. The current IZinFIG.9may correspond to the current IZinFIG.8. The current IOUTinFIG.9may correspond to the current IOUTinFIG.8. The output voltage VOUTmay be a potential difference across the impedance ZOUT. The impedance ZOUTmay be the combination of the impedance Z3and the impedance of the coil Lt, which are coupled in series. For example, ZOUT=Z3+Zt, where the impedance Zt indicates a corresponding impedance of the coil Lt. From the circuit ofFIG.9, the output voltage VOUTmay be represented as:
VOUT = (Z2 × ZOUT × VAB) / [ZOUT × (Z1 + Z2) + Z1 × Z2]   (1)
When Z1 + Z2 = 0, the output voltage VOUTmay be represented as:
VOUT = (ZOUT × VAB) / Z1   (2)
According to Ohm's law, the output current IOUTmay be represented as:
IOUT = VOUT / ZOUT   (3)
Thus, when Z1 + Z2 = 0, based on equations (2) and (3), the output current IOUTmay be represented as:
IOUT = VAB / Z1   (4)
When Z1 + Z2 = 0, according to equation (4), the output current IOUTmay be a ratio of voltage VABto impedance Z1. The output current IOUTmay be controlled by controlling voltage VAB. The impedance Z3may be an impedance of a compensation network. A proper impedance Z3may be selected to ensure the input impedance ZIN= (Z1 × Z2) / (Z2 + ZOUT), in which ZOUT=Z3+Zt, and the input impedance ZINmay exhibit a characteristic of inductive reactance. Due to the inductive nature of ZIN, the phase of IZlags behind the phase of voltage VAB, which is helpful for implementing zero voltage switching of the transistors Q1to Q4. FIG.10illustrates a waveform of voltage of the amplifier400according to some embodiments of the present disclosure. In particular,FIG.10illustrates a waveform of voltage of the amplifier400coupled with the impedances Z1, Z2, Z3and the coil Lt inFIG.8according to some embodiments of the present disclosure. The X-axis of the waveform inFIG.10represents time. The Y-axis of the waveform inFIG.10represents voltage. The waveform inFIG.10may be identical to the waveform inFIG.7.FIG.10illustrates the waveform of voltage (or the waveform of electric potential) between nodes A and B shown inFIG.8. As shown inFIG.10, the waveform of voltage VABmay include positive and negative voltages. As shown inFIG.10, the waveform of voltage VABmay include alternating positive portions and negative portions. As shown inFIG.10, the waveform of voltage VABmay be similar to a sine wave. FIG.11illustrates a waveform of current of the amplifier400according to some embodiments of the present disclosure. In particular,FIG.11shows a waveform of the amplifier400coupled with the impedances Z1, Z2, Z3and the coil Lt inFIG.8. The X-axis of the waveform inFIG.11represents time. The Y-axis of the waveform inFIG.11represents the current.FIG.11illustrates the waveform of the current IZflowing through the impedance Z1shown inFIG.8. As shown inFIG.11, the current IZmay be similar to a sine wave. The Y-axes inFIGS.10and11may be aligned. The current IZmay be associated with the voltage VAB.
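The load-independence expressed by equation (4) can be checked numerically with the short sketch below: with Z1 + Z2 = 0, the output current IOUT equals VAB/Z1 for any value of ZOUT. The component values are arbitrary and serve only as an example.

    # Numeric check of equations (1)-(4): with Z1 + Z2 = 0 the output current
    # I_OUT = V_AB / Z1 regardless of Z_OUT (a load-independent current source).
    # Component values are arbitrary and chosen only for illustration.

    def v_out(v_ab, z1, z2, z_out):
        # Equation (1): VOUT = Z2*ZOUT*VAB / (ZOUT*(Z1 + Z2) + Z1*Z2)
        return z2 * z_out * v_ab / (z_out * (z1 + z2) + z1 * z2)

    v_ab = 100.0                      # volts across nodes A and B (illustrative)
    z1 = complex(0.0, -10.0)          # capacitive reactance of -j10 ohm
    z2 = -z1                          # inductive reactance of +j10 ohm, so Z1 + Z2 = 0

    for z_out in (complex(5, 2), complex(20, -7), complex(50, 15)):
        i_out = v_out(v_ab, z1, z2, z_out) / z_out   # equation (3)
        print(z_out, i_out, v_ab / z1)               # equation (4): both values match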
FIG.12is a schematic circuit diagram of an amplifier900coupled with the impedances Z1to Z3and a coil Lt according to some embodiments of the present disclosure. The amplifier900may be used as the power amplifier113shown inFIG.1. The combination of the impedances Z1, Z2, and Z3may be used as the matching circuit114shown inFIG.1. The coil Lt may be used as the transmitting resonator115shown inFIG.1. Based on the circuit diagram shown inFIG.8, the circuit diagram shown inFIG.12further includes the inductors L1and L2and the capacitors C4and C5. Based on the amplifier400shown inFIG.8, the amplifier900inFIG.12further includes the inductors L1and L2and the capacitors C4and C5. Based on the amplifier400shown inFIG.8, the amplifier900inFIG.12further includes two zero voltage switching (ZVS) circuits. The inductor L1and the capacitor C4may be coupled in series and then coupled with the transistor Q1in parallel. The inductor L1and the capacitor C4in series may be coupled between the drain of the transistor Q1and the source of the transistor Q1. The inductor L2and the capacitor C5may be coupled in series and then coupled with the transistor Q4in parallel. The inductor L2and the capacitor C5in series may be coupled between the drain of the transistor Q4and the source of the transistor Q4. With the ZVS circuits, the switching loss of the amplifier900may be decreased, and the performance of the amplifier900may be increased. Additionally, the EMI (electromagnetic interference) of the amplifier900may be improved, the switching frequency of the amplifier900may be increased, and the inductance and capacitance of the amplifier900may be decreased. One of the schematic circuit diagrams of the present disclosure may be entirely or partly implemented as a semiconductor device. For example, the amplifier900, the impedances Z1to Z3, and the coil Lt shown inFIG.12may be implemented as a semiconductor device. Or, the amplifier900may be implemented as a semiconductor device. The amplifier400, the impedances Z1to Z3, and the coil Lt shown inFIG.8may be implemented as a semiconductor device. The amplifier400may be implemented as a semiconductor device. FIG.13is a cross-sectional view of a semiconductor device1000according to some embodiments of the present disclosure. The amplifier900, the impedances Z1to Z3, and the coil Lt shown inFIG.12may be implemented as the semiconductor device1000. The power amplifier113, the matching circuit114, and the transmitting resonator115shown inFIG.1may be implemented as the semiconductor device1000. The amplifier400, the impedances Z1to Z3, and the coil Lt shown inFIG.8may be implemented as a semiconductor device similar to the semiconductor device1000. The amplifier400may be implemented as a semiconductor device similar to the semiconductor device1000. The semiconductor device1000ofFIG.13includes a substrate1002, semiconductor layers1004and1006, and passivation layers1008and1010. The gallium nitride (GaN) transverse high-electron-mobility transistors Q1to Q4may be formed on the substrate1002. The substrate1002may include, for example, but is not limited to, silicon (Si), doped Si, silicon carbide (SiC), silicon germanium (SiGe), gallium arsenide (GaAs), or other semiconductor materials. The substrate1002may also include, for example, but is not limited to, sapphire, silicon on insulator (SOI) or other suitable materials. The substrate1002may further include a doped region (not marked inFIG.13), such as p-well and n-well. The semiconductor layer1004may include a III-V material.
The semiconductor layer1004may include, but is not limited to, III nitrides, such as, but not limited to, GaN, AlN, InN and a compound InxAlyGa1-x-yN where x+y is less than or equal to 1, or a compound AlyGa(1-y)N where y is less than or equal to 1. The semiconductor layer1006may include a III-V material with a bandgap greater than that of the semiconductor layer1004. The semiconductor layer1006may include, but is not limited to, III nitrides, such as, but not limited to, GaN, AlN, InN and a compound InxAlyGa1-x-yN where x+y is less than or equal to 1, or a compound AlyGa(1-y)N where y is less than or equal to 1. The semiconductor layers1004and1006may form heterojunctions. The polarization of the heterojunctions of different nitrides may form two-dimensional electron gas (2DEG) (not marked inFIG.13) on interfaces of the semiconductor layers1004and1006. The passivation layers1008and1010may include dielectric materials. For example, the passivation layers1008and1010may include, but are not limited to, silicon nitride (SiNx), silicon dioxide (SiO2), Al2O3, or HfO2. The GaN transistors Q1to Q4inFIG.13may include a gate structure Ga, a drain structure D and a source structure S arranged on the semiconductor layer1006. The GaN transistors Q1to Q4inFIG.13may be turned on or turned off according to changes of an input signal of a gate thereof. The cross-sectional view ofFIG.13may only illustrate one of the transistors Q1to Q4. This is because the transistors Q1to Q4may be arranged into one column on the semiconductor device1000in the present disclosure. The semiconductor device1000ofFIG.13is also provided with the inductors L1, L2, and Lt arranged on the semiconductor layer1006, and the diodes D1and D2arranged on the semiconductor layer1006. Each of inductors L1, L2, and Lt may include terminals Lp and Ln. Each of diodes D1and D2may include an anode AN and a cathode C. The cross-sectional view ofFIG.13may only illustrate one of the inductors L1, L2, and Lt. This is because the inductors L1, L2, and Lt may be arranged into one column on the semiconductor device1000in the present disclosure. In addition, the inductors L1, L2, and Lt may be arranged near to the transistors Q1to Q4or near to the diodes D1and D2. The cross-sectional view ofFIG.13may only illustrate one of the diodes D1and D2. This is because the diodes D1and D2may be arranged into one column on the semiconductor device1000in the present disclosure. In addition, the diodes D1and D2may be arranged near to the transistors Q1to Q4or near to the inductors L1, L2, and Lt. The semiconductor device1000ofFIG.13may further include impedances Z1to Z3. Each of the impedances Z1to Z3may include two terminals Zp and Zn. The impedances Z1to Z3may be arranged on the semiconductor layer1006, and are close to the transistors Q1to Q4, or close to the inductors L1, L2, and Lt. The cross-sectional view ofFIG.13may only illustrate one of the impedances Z1to Z3. This is because the impedances Z1to Z3may be arranged into one column on the semiconductor device1000in the present disclosure. The semiconductor device1000ofFIG.13may further include capacitors C1to C5. Each of the capacitors C1to C5may include two terminals Ctop and Cbottom. The capacitors C1to C5may be arranged on the semiconductor layer1006, and are close to the impedances Z1to Z3, close to the diodes D1and D2, or close to the inductors L1, L2, and Lt. The cross-sectional view ofFIG.13may only illustrate one of the capacitors C1to C5.
This is because the capacitors C1to C5may be arranged into one column on the semiconductor device1000in the present disclosure. One of the schematic circuit diagrams of the present disclosure may be entirely or partly implemented as an integrated circuit. For example, the amplifier900, the impedances Z1to Z3, and the coil Lt shown inFIG.12may be implemented as an integrated circuit. The amplifier900may be implemented as an integrated circuit. The amplifier400, the impedances Z1to Z3, and the coil Lt shown inFIG.8may be implemented as an integrated circuit. The amplifier400may be implemented as an integrated circuit. One of the schematic circuit diagrams of the present disclosure may be entirely or partly implemented as a circuit board. For example, the amplifier900, the impedances Z1to Z3, and the coil Lt shown inFIG.12may be implemented as a circuit board. The amplifier900may be implemented as a circuit board. The amplifier400, the impedances Z1to Z3, and the coil Lt shown inFIG.8may be implemented as a circuit board. The amplifier400may be implemented as a circuit board. FIG.14is a schematic circuit diagram of an amplifier1100according to some embodiments of the present disclosure. The amplifier1100may be used as the power amplifier113shown inFIG.1. The amplifier1100may include a class-E amplifier. The amplifier1100may include an input voltage1101that provides the voltage VDD, the ground1104, a transistor1103, inductors1102and1106, capacitors1105and1107and a load1108. FIG.15illustrates a waveform of voltage of the amplifier1100according to some embodiments of the present disclosure, where the X-axis represents time and the Y-axis represents voltage.FIG.15illustrates the waveform of voltage VDS, where the voltage VDSindicates the voltage difference (or electric potential difference) between the drain and the source of the transistor1103. FIG.16illustrates a waveform of current of the amplifier1100according to some embodiments of the present disclosure, where the X-axis represents time and the Y-axis represents current.FIG.16illustrates the waveform of current ID, where the current IDindicates the current flowing through the load1108. The Y-axes inFIGS.15and16may be aligned. The waveforms inFIGS.15and16both show a duty cycle of the amplifier1100. At half of the duty cycle (e.g., the position marked with "50%"), both voltage VDSand current IDare zero. At the end of the duty cycle, the magnitude of current IDdrops to zero. FIG.15shows that the maximum voltage VDSis equal to approximately 3.56 times of voltage VDD. In a class-E amplifier, the voltage stress of the switching element (i.e., voltage VDSof a transistor) may be approximately 3 times to approximately 4 times of the input voltage (i.e., VDDinFIG.14). In a wireless charging system for electric cars, the input voltage (i.e., VDD)1101of the amplifier1100may be approximately 250 volts, and the maximum voltage VDSof the transistor1103of the amplifier1100may be approximately 890 volts. Therefore, when designing a wireless charging system for an electric car, the breakdown threshold of voltage VDSof the transistor used in the amplifier would be critical. If a transistor has a higher breakdown threshold of voltage VDS, the cost of the transistor would be higher or the volume of the transistor would be relatively greater. On the other hand, since the breakdown threshold of voltage VDSis critical, the range of input voltage of the amplifier is limited.
In a class-E amplifier, the range of a load for realizing zero voltage switching (ZVS) is narrow. A class-E amplifier with a ZVS circuit may only be operated at a fixed frequency and at a fixed duty cycle. The output of a class-E amplifier with a ZVS circuit may be adjusted only by adjusting the output of a pre-stage amplifier or a pre-stage converter (i.e., adjusting the input voltage of the class-E amplifier). Additionally, voltage VDSand current IDare only positive. When voltage VDS, which only includes a positive voltage, is provided to a resonator, the second harmonic becomes significant. Harmonics may decrease the performance of generation, transmission, or use of electric power. The second harmonic may decrease the performance of transmission between two resonators. On the other hand, the output voltages with both positive and negative values, such as the output voltages of amplifiers400and900inFIGS.2,8and12, may have less second harmonic generation. FIG.17is a schematic circuit diagram of an amplifier1200according to some embodiments of the present disclosure. The amplifier1200may be used as the power amplifier113shown inFIG.1. The amplifier1200may include a class-D amplifier. The amplifier1200may include an input voltage1201that provides the voltage VDD, the ground1204, transistors1202and1203, an inductor1205, capacitors1206and1207and a load1208. FIG.18illustrates a waveform of the amplifier1200according to some embodiments of the present disclosure, where the X-axis represents time and the Y-axis represents voltage.FIG.18illustrates the waveform of voltage VDS, where the voltage VDSindicates the voltage difference (or electric potential difference) between the drain and the source of the transistor1203. FIG.19illustrates a waveform of current of the amplifier1200according to some embodiments of the present disclosure, where the X-axis represents time and the Y-axis represents current.FIG.19illustrates the waveform of current ID, where the current IDindicates the current flowing through the load1208. FIG.20illustrates a waveform of current of the amplifier1200according to some embodiments of the present disclosure, where the X-axis represents time and the Y-axis represents current.FIG.20illustrates the waveform of current ILZVS, where the current ILZVSindicates the current flowing through the inductor1205. The Y-axes inFIGS.18-20may be aligned. The waveforms inFIGS.18-20show a duty cycle of the amplifier1200. At half of the duty cycle (e.g., the position marked with "50%"), voltage VDSis zero, and current IDis negative. At the end of the duty cycle, the magnitude of current IDdrops to zero. In a class-D amplifier, the output current (e.g., current IDinFIG.19) may be adjusted by adjusting the duty cycle of the transistors (e.g., transistors1202and1203inFIG.17). Adjusting the output current through adjusting the duty cycle of the transistors may cause a second harmonic. The second harmonic may decrease the performance of transmission between two resonators. The output voltage of a class-D amplifier thus may be adjusted through adjusting the input voltage of the class-D amplifier. FIG.18shows that the maximum voltage VDSis approximately equal to voltage VDD. In a wireless charging system for electric cars, the input voltage1201may be approximately 250 volts, and the maximum voltage VDSof the transistor1203may be approximately 250 volts. Therefore, when designing a wireless charging system for an electric car, the breakdown threshold of voltage VDSof the transistor used in the amplifier would be critical.
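For comparison, the following small calculation restates the transistor voltage stress of the three topologies at the 250-volt input mentioned in the text, using the factor of approximately 3.56 stated for the class-E example, approximately 1 for the class-D example, and approximately 0.5 for the amplifier400.

    # Transistor voltage stress at a 250 V input, using the factors given in the
    # text: ~3.56*VDD for the class-E stage, ~VDD for the class-D stage, and
    # ~VDD/2 for the three-level amplifier 400/900.

    VIN = 250.0
    stress = {
        "class-E (amplifier 1100)": 3.56 * VIN,     # about 890 V
        "class-D (amplifier 1200)": 1.00 * VIN,     # about 250 V
        "three-level (amplifier 400)": 0.50 * VIN,  # about 125 V
    }
    for topology, v_ds_max in stress.items():
        print(f"{topology}: about {v_ds_max:.0f} V maximum VDS")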
The combination of the inductor1205and the capacitor1206may act as a zero-voltage-switching (ZVS) tank. The current ILZVSshown inFIG.20indicates the current flowing through the inductor1205. The ZVS tank makes the amplifier1200switch when voltage VDSis zero. In particular, the ZVS tank makes the current IDflow when voltage VDSis zero. With the ZVS tank, the switching loss may be decreased, and the performance may be increased. Additionally, the EMI (electromagnetic interference) may be improved, the switching frequency may be increased, and the inductance and capacitance of the amplifier may be decreased. FIG.21illustrates a waveform of voltage of the amplifier1200according to some embodiments of the present disclosure. In particular,FIG.21illustrates a waveform of voltage VDSin several duty cycles. Voltage VDSis only positive. When voltage VDS, which only includes a positive voltage, is provided to a resonator, a second harmonic may be generated. Harmonics may decrease the performance of generation, transmission, or use of electric power. The second harmonic may decrease the performance of transmission between two resonators. On the other hand, the output voltages with both positive and negative values, such as the output voltages of amplifiers400and900inFIGS.2,8and12, may have less second harmonic generation. As used herein, for ease of description, space-related terms such as "under," "below," "lower portion," "above," "upper portion," "left side," "right side," and the like may be used to describe a relationship between one component or feature and another component or feature as shown in the figures. In addition to orientation shown in the figures, space-related terms are intended to encompass different orientations of the device in use or operation. An apparatus may be oriented in other ways (rotated approximately 90 degrees or at other orientations), and the space-related descriptors used herein may also be used for explanation accordingly. It should be understood that when a component is "connected" or "coupled" to/with another component, the component may be directly connected to or coupled to another component, or an intermediate component may exist. As used in the present disclosure, the terms "approximately," "basically," "substantially," and "about" are used for describing and explaining a small variation. When used in combination with an event or circumstance, the terms may refer to a case in which the event or circumstance occurs precisely, or a case in which the event or circumstance occurs approximately. As used herein with respect to a given value or range, the term "about" generally means in the range of ±10%, ±5%, ±1%, or ±0.5% of the given value or range. The range may be indicated herein as from one endpoint to another endpoint or between two endpoints. Unless otherwise specified, all the ranges disclosed in the present disclosure include endpoints. The term "substantially coplanar" may refer to two surfaces within a few micrometers (μm) positioned along the same plane, for example, within 10 μm, within 5 μm, within 1 μm, or within 0.5 μm located along the same plane. When reference is made to "substantially" the same numerical value or characteristic, the term may refer to a value within ±10%, ±5%, ±1%, or ±0.5% of the average of the values. Several embodiments of the present disclosure, features and details thereof are briefly described above.
The embodiments described in the present disclosure may be easily used as a basis for designing or modifying other processes and structures for realizing the same or similar objectives and/or obtaining the same or similar advantages introduced in the embodiments of the present disclosure. Such equivalent construction does not depart from the spirit and scope of the present disclosure, and various variations, replacements, and modifications can be made without departing from the spirit and scope of the present disclosure. | 42,554 |
11863137 | DETAILED DESCRIPTION Various example implementations are explained in detail below. These example implementations serve merely for illustration and should not be interpreted as restrictive. While specific implementation details are described in some example implementations, other implementations having other features (for example components, method sequences, elements and the like) can also be used in other example implementations. Features of different example implementations can be combined with one another, unless indicated otherwise. Variations and variant forms that are described for one of the example implementations can also be applied to other example implementations and are therefore not explained repeatedly. In addition to the features explicitly depicted and described, other features, for example features used in conventional systems having chopped apparatuses, can be provided. Connections and couplings that are described here are electrical connections or couplings unless explicitly indicated otherwise. Such connections or couplings can be modified so long as the fundamental way in which the connection or coupling works remains substantially unchanged. FIG.1shows a system according to an example implementation. The system ofFIG.1comprises a signal source10. The signal source10can have for example a sensor for capturing a physical quantity and possibly further components such as filters and amplifiers for processing a signal that is output by the sensor. Other types of signal sources, for example audio signal sources, can also be used. A signal from the signal source10is supplied to a chopped apparatus11. The chopped apparatus11has a chopper modulator13at its input and a chopper demodulator14at its output. The chopped apparatus11can be any type of apparatus in which chopping is conventionally used, for example an analog-to-digital converter for converting an analog signal that is output by the signal source10into a digital signal, or an amplifier in order to amplify the signal delivered by the signal source10. An output signal of the chopped apparatus11is supplied to a signal sink12following demodulation by the chopper demodulator14. In the case of a signal source10that comprises a sensor, the sensor signals can then be processed further in the signal sink12, for example, and other apparatuses can be controlled based on the sensor signals, for example. Other types of signal sinks that process signals from a signal source can likewise be used. The system ofFIG.1additionally has a device15for generating a chopper signal c having a variable chopper frequency fchop. The signal c can alternately have values of +1 and −1, for example, by which the signal from the signal source10is multiplied in the chopper modulator13and the signal from the chopped apparatus11is multiplied in the chopper demodulator14. Other signal sequences used in conventional choppers can also be used. The chopper signal c has a variable chopper frequency, that is to say that for example the change from +1 to −1 (or other signal values of the chopper signal c) does not take place at a fixed frequency, but rather varies about a certain frequency. This variation can occur according to a predefined scheme (that is to say a predefined succession of frequencies), randomly or pseudorandomly. Possible implementations of such pseudorandom generation of a variable chopper frequency will be explained later with reference toFIGS.4to6. 
The use of a variable chopper frequency of this kind allows interference signals at high frequencies to be decreased, for example distributed over a larger frequency range, resulting in lower intensity. Without further measures, however, a ripple or an offset in the output signal that is supplied to the signal sink12would remain, or would increase as a result of the variable chopper frequency. To suppress or decrease such effects, a feedback path16from the output of the chopper demodulator14to the chopped apparatus11is furthermore provided in the system ofFIG.1. The feedback path16is used to adjust an offset of the chopped apparatus11. The possibility of adjusting an offset is already afforded by many chopped apparatuses such as analog-to-digital converters or amplifiers. In the case of analog-to-digital converters, it can be done by adding an adjustable digital value to, or subtracting it from, the output; in the case of amplifiers, it can be done by setting a bias voltage of the amplifier, for example. In other example implementations, an input signal of the chopped apparatus11can also be modified using a signal from the feedback path. The feedback path can be analog, digital or a mixture of the two and, by processing the signal that is output by the chopper demodulator14, can generate a compensation signal that can be used to eliminate or at least decrease such ripples and remaining offsets as a result of the variable chopper frequency. To this end, the feedback path can have an integrator. The feedback path can have a filter function that rejects all or some of the useful signal of the system (e.g. the useful signal coming from the signal source10and processed by the chopped apparatus11) and passes the ripples and/or offsets generated by the variable chopper frequency, which means that the chopped apparatus11can be controlled in the opposite sense based on these ripples and/or offsets so as then to reject them at the output of the system. If the chopped apparatus11is an amplifier having an open-loop gain of G, for example, a feedback function H around the amplifier leads to a closed-loop response of G/(1+GH) at the amplifier output. If the gain G is high enough, this leads to a response that is approximately proportional to 1/H. If the feedback signal now represents ripples and/or offsets, these can be rejected. Since the useful signal is rejected in the feedback path, on the other hand, the useful signal at the output of the amplifier is influenced little by the feedback. This combination of variable chopper frequency and feedback path allows the advantages of the variable chopper frequency, in particular decreased interference at multiples of a fixed chopper frequency, to be used and at the same time allows ripples and the offset to be greatly reduced to or close to zero. Additionally, the modulation effects in downstream systems such as the signal sink12can be decreased, and filtering-out of useful signals close to the chopper frequency can be reduced. Without the feedback path16, on the other hand, the use of the variable chopper frequency would entail the disadvantages described at the outset. There may be no need for a lowpass filter or notch filter at the output of the chopper demodulator14, and the associated disadvantages do not arise. Additionally, a sensitivity toward radio-frequency interference signals can be decreased. The feedback path16also allows an increase in noise in the useful signal range that is associated with a variable chopper frequency in conventional approaches to be decreased or avoided.
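A simplified discrete-time sketch of the arrangement of FIG.1 is given below: the input is chopped at a pseudorandomly varying chopper frequency, amplified by an amplifier with a deliberate offset, demodulated, and a slow integrating feedback path trims the offset of the amplifier. The gain, the integrator constant, the chopper periods and the test signal are assumptions chosen only for illustration.

    # Simplified discrete-time sketch of the FIG. 1 arrangement: chop the input
    # with a pseudorandomly varying chopper signal, amplify with a (deliberate)
    # amplifier offset, demodulate, and let a slow integrating feedback path trim
    # the offset. Gains, the integrator constant and the test signal are assumed.

    import math
    import random

    random.seed(1)
    GAIN, OFFSET = 10.0, 0.05     # amplifier gain and its (unwanted) input offset
    KI = 5e-4                     # slow integrator of the feedback path
    trim = 0.0                    # offset-adjustment value fed back to the amplifier
    n = 0

    for _ in range(8000):
        half_period = random.choice((3, 4, 5, 6))   # variable chopper frequency
        for c in (+1.0, -1.0):                      # chopper signal c = +1 / -1
            for _ in range(half_period):
                x = 0.05 * math.sin(2 * math.pi * n / 400.0)  # useful input signal
                y = GAIN * (c * x + OFFSET - trim)            # chopped amplifier
                out = c * y                                   # chopper demodulator
                trim += KI * c * out   # demodulate again and integrate (feedback)
                n += 1

    print(round(trim, 3), OFFSET)   # trim settles close to the amplifier offset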
As explained above, the feedback path16leads to effective filtering of the output signal. In some example implementations, a filter frequency of this filtering (for example a cutoff frequency of a lowpass filtering for filtering out the useful signal as described above) by the feedback path is lower than a repetition rate of the variable chopper frequency by at least a factor of 2. This repetition rate indicates how quickly the succession of frequencies repeats in the case of a pseudorandom sequence or a predefined order. This can increase a stability of the system in some implementations, in particular can lead to a signal that is output by the feedback path16being controlled to a stable value better. In particular, this leads to the feedback path not being able to follow the changing chopper frequencies. This can have the additional effect that a useful signal close to the chopper frequency is rejected less. Additionally or alternatively, the filter frequency can be lower than a minimum chopper frequency of the variable chopper frequency by at least a factor of 2. In some example implementations, the frequency range in which the variable chopper frequency of the chopper signal c varies is above a useful signal range, that is to say a frequency range of the signal that is output by the signal source10. This can prevent the energy of ripples in the useful signal range from increasing. Above the useful signal range, a comparatively wide frequency range is used in some example implementations in order to distribute energy at multiples of the chopper frequency as widely as possible. By way of example, a useful frequency range can extend up to 200 kHz, and a mid-range chopper frequency can be 300 kHz, the variable chopper frequency varying between 200 kHz and 400 kHz. In such a case, the feedback path can then have a filter frequency lower than 100 kHz. These numerical examples serve only for illustration, however, and other values are also possible, depending on the implementation. FIG.2shows a flowchart to illustrate methods according to some example implementations. The method ofFIG.2can be implemented in the system ofFIG.1, for example, and is described with reference thereto in order to avoid repetition. The method ofFIG.2can also be implemented in other systems, however, for example the systems described below, and is therefore not limited to a specific apparatus. While the method ofFIG.2is described using two parts at20and21, the processes described can essentially be performed simultaneously, as is also the case inFIG.1. In step20, a chopper arrangement, for example the chopper modulator13and the chopper demodulator14, is provided with a chopper signal having variable chopper frequency, such as for example the chopper signal c ofFIG.1. At21feedback from an output to the chopped apparatus is provided, for example in order to set an offset of the chopped apparatus, as was described for the feedback path16ofFIG.1. FIGS.3A to3Dshow a system according to a further example implementation, in which the techniques discussed with reference toFIGS.1and2are applied to a specific case of processing a signal from a Hall sensor operated using a spinning current technique. 
The spinning current technique involves connections of a Hall sensor, which are supplied with a bias current, and connections at which a Hall voltage is tapped off being cyclically interchanged, which can be used to compensate for offsets of the Hall sensor.FIGS.3A to3Dshow the application of a four-phase spinning current technique, each ofFIGS.3A to3Dshowing one phase, denoted by PH1to PH4in the figures. The apparatus is first described with reference toFIG.3A. The same apparatus is shown inFIGS.3B to3D, just for other phases. The apparatus ofFIG.3has a Hall sensor30, which is symbolized by a circuit comprising four resistors31to34. Nodes between the resistors are denoted by hnw, hne, hse and hsw. When a bias current is applied between two opposite nodes, a Hall voltage can be tapped off between the other two nodes. In the figures, an arrow314indicates the respective applied bias current, in the case ofFIG.3Afrom the node hnw to the node hse. A Hall voltage is then accordingly tapped off between the nodes hne and hsw in the case ofFIG.3A, and supplied to a chopper modulator35. An output of the first chopper35is connected to a differential amplifier37via a DC voltage coupling36, which transmits DC components of the output signal of the chopper modulator35. The differential amplifier37is an example of a chopped apparatus. An output signal of the differential amplifier37is supplied to a chopper demodulator38. An output signal of the chopper demodulator38is buffered in an operational amplifier39and output. Secondly, the output signal is supplied to a feedback path comprising an analog demodulator310, an analog-to-digital converter311, a feedback controller312and a digital-to-analog converter313. An output signal of the digital-to-analog converter313changes an offset of the differential amplifier37. During operation, a chopper signal having a nonconstant chopper frequency fchop (symbolized by “fchop≠const”) is applied to the chopper modulator35and the chopper demodulator38. In sync therewith, the spinning current method is also operated at a nonconstant frequency fspin, which can be an integer multiple of fchop (for example 2*fchop). In phase1ofFIG.3Athe bias current is applied between the connections hnw and hse, as already explained, and the Hall voltage is tapped off between the connections hne and hsw. This results in a voltage +Vs+Voh−Vnl1, where Vs is the voltage that is actually to be measured, which is caused by a magnetic field, Voh is an offset of the Hall sensor30and Vnl1 is a voltage that is produced as a result of the resistors31to34not exhibiting the same response and is caused here by the resistor32. This voltage is amplified by the differential amplifier37, with an offset Voa of the amplifier37additionally being added. A voltage +Vs+Voh+Voa−Vnl1 is then present at the output of the chopper38. The output signal is once again demodulated at the chopper frequency in the demodulator310, and the feedback controller312is then used to set the offset of the differential amplifier37based on this modulated signal. This feedback path can essentially operate as described for the feedback path16above, which means that the feedback controller312is used to reject the useful signal and the differential amplifier37is then controlled based on the ripples and/or offsets. Lines in the chopper modulator35show examples of the connections between the Hall sensor30and the differential amplifier37in each phase PH1to PH4. 
In essence, this is implemented in the depicted example implementation such that Voh (half-bridge offset) and Voa (amplifier offset) change sign in sync with one another, whereas the arithmetic sign of the signal Vs follows a different pattern, so that, as described below, in the end the two can be separated and the offsets can be rejected. Lines in the chopper demodulator38show the connections between the differential amplifier37and the output of the system (or a positive output, if a differential output is implemented). In phase2ofFIG.3Bthe bias current is applied between the connections hsw and hne, that is to say in the opposite direction to that in phase1ofFIG.3A, and the output signal is accordingly −Vs−Voh−Vnl1. The output signal obtained from the differential amplifier37is +Vs+Voh+Voa+Vnl1, and the output signal obtained from the chopper demodulator38is +Vs+Voh+Voa+Vnl1. In the phases ofFIGS.3C and3Dthe bias current314is then applied between the connections hne and hsw, with different polarity, and the Hall voltage is tapped off between the connections hnw and hse. This results in an output voltage +Vs−Voh−Voa+Vnl2 in the case ofFIG.3Cand in an output signal +Vs−Voh−Voa−Vnl2 in the case ofFIG.3D. If the four output signals are added, the components of Voh and Voa (and likewise Vnl1 and Vnl2) compensate for one another, and a value of 4 Vs is left. The above description initially applies to constant (DC) signals averaged over time. Even then, an AC component remains at the up-modulating chopper frequency (e.g. the component of Voa disappears when averaged over time, but it appears at the output as an AC voltage signal alternating between +Voa and −Voa); the alternation between +Voa and −Voa occurs at the chopper frequency. The feedback path310to313now operates as negative feedback for this AC voltage signal in the differential amplifier37, as a result of which the AC component at the output of the sensor disappears following stabilization of the feedback path (in accordance with its filter frequency). The variable chopper frequency in conjunction with the feedback can additionally reduce susceptibility to interference signals at higher frequencies while simultaneously rejecting an offset or a ripple caused by the variable chopper frequency. FIGS.4A and4Bshow examples of variable chopper frequencies over time, the chopper frequency being plotted over time in each case. InFIG.4A, eleven periods1to11of the chopper signal are plotted over time, at frequencies between 300 and 180 kHz. Such frequencies can be generated using a pseudorandom number generator, for example. Another example of variable frequencies is depicted inFIG.4B. Here the pattern of frequencies repeats after a certain time, two repetitions being shown inFIG.4B. This repetition thus has a repetition rate, as explained above. Ways of generating variable chopper frequencies will now be explained. FIG.5Ashows an apparatus having an oscillator section50, which is supplied with a supply voltage VDDA, and a section51, which is supplied with a supply voltage VDDD and has frequency dividers52to55and pseudorandom generators56and57. One way of generating a chopper signal having variable chopper frequency is to supply a random sequence from the pseudorandom generator57to a dither register of the oscillator section, as a result of which the frequency thereof is varied. This can be done for example by selecting binary-weighted current sources in accordance with the dither register for an integrator current of a relaxation oscillator.
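As a cross-check of the four-phase summation described above for FIGS. 3A to 3D, the short sketch below adds the four demodulated output expressions and confirms that the offset terms Voh and Voa (and the mismatch terms Vnl1, Vnl2) cancel while 4*Vs remains. The coefficient table is a plain transcription of the expressions given above into Python; it is bookkeeping only, not an additional circuit detail.

```python
# Coefficients of (Vs, Voh, Voa, Vnl1, Vnl2) in the demodulated output of each
# spinning-current phase, transcribed from the expressions given above.
phase_outputs = {
    "PH1": {"Vs": +1, "Voh": +1, "Voa": +1, "Vnl1": -1, "Vnl2": 0},
    "PH2": {"Vs": +1, "Voh": +1, "Voa": +1, "Vnl1": +1, "Vnl2": 0},
    "PH3": {"Vs": +1, "Voh": -1, "Voa": -1, "Vnl1": 0, "Vnl2": +1},
    "PH4": {"Vs": +1, "Voh": -1, "Voa": -1, "Vnl1": 0, "Vnl2": -1},
}

# Sum the four phases term by term: only the useful signal survives.
total = {term: sum(coeffs[term] for coeffs in phase_outputs.values())
         for term in ("Vs", "Voh", "Voa", "Vnl1", "Vnl2")}
print(total)  # {'Vs': 4, 'Voh': 0, 'Voa': 0, 'Vnl1': 0, 'Vnl2': 0}
```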
In the case of a digitally controlled oscillator (DCO), the dither register can be part of the control word and can therefore likewise vary the frequency. Otherwise, a frequency is set by a signal Trim_OSC. The oscillator section50can output for example a signal having a frequency of around 6 MHz, possibly varied by the control of the dither register. In addition or as an alternative to the variation using the dither register, an output signal of the pseudorandom generator56can control a variable frequency divider52that variably divides the frequency of its input signal by a factor of between 5 and 8 (5, 6, 7, 8 in the example depicted) and therefore generates variable frequencies. In the example implementation ofFIG.5Athe variable frequency divider also has fixed frequency dividers53,54connected downstream of it for dividing the frequency by two, in order to generate a chopper signal Clk_chop that, in the numerical example, can have frequencies of 250 kHz, 187.5 kHz, 214.28 kHz or 300 kHz. Between the dividers53and54, a signal Clk_spin is additionally also branched off, which can be used as clock signal for a spinning current method and has twice the frequency of the signal Clk_chop. Additionally, the oscillator signal is supplied directly to a frequency divider55that divides the signal by two in order to generate a signal Clk_adc, which can be used for example as clock signal for an analog-to-digital converter. All numbers in the example ofFIG.5Aserve merely for illustration, and it is also possible for other values, other numbers of frequency dividers or other division ratios to be used. Additionally, the dither register of the oscillator section50and the variable frequency divider52can be used jointly, but also independently of one another, in order to generate a variable chopper frequency. FIG.5Bshows an example of signals that can be generated using the apparatus ofFIG.5Awithout the dither register, but with the variable frequency divider52. A curve58inFIG.5Bshows an example of the chopper signal Clk_chop, the division ratio of the variable frequency divider52being shown at the top. Additionally, the phases of the spinning current technique are indicated according to the signal Clk_spin operating at twice the frequency. This too serves merely as an illustrative example. FIG.6shows an example of a pseudorandom generator that is implemented with a chain of shift registers and controls a programmable frequency divider60, which can correspond to the variable frequency divider52ofFIG.5A, so as to generate a signal having a variable chopper frequency fchop from an oscillator signal fosc. Other implementations of pseudorandom number generators can likewise be used. FIG.7shows an implementation example of a feedback path as can be used in the example implementation ofFIG.3. The feedback path ofFIG.7receives an output signal rm from a chopped apparatus downstream of a chopper demodulator (and possibly downstream of an amplification as by the operational amplifier39ofFIG.3) at a demodulator70, the operation of which corresponds to that of the analog demodulator310ofFIG.3C. An output signal of the demodulator70is supplied to an analog-to-digital converter71, which in the depicted example is a 1-bit analog-to-digital converter, that is to say an analog-to-digital converter that outputs a 1-bit signal. The output signal is denoted by comprr_o inFIG.7. An example of this signal for multiple phases of the spinning current method is depicted (PH1to PH4twice). 
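The clock-generation scheme of FIG. 5A described above can be modeled in a few lines. The following Python sketch assumes the stated numerical example (an oscillator of around 6 MHz, a pseudorandomly selected division factor of 5 to 8, and two further divide-by-two stages); the helper names are invented for the sketch, the dither-register path is omitted, and Python's random module merely stands in for the pseudorandom generator 56.

```python
import random

F_OSC = 6e6  # oscillator frequency from the numerical example (around 6 MHz)

def chopper_frequencies(num_periods, seed=0):
    """Model of the variable divider 52 followed by two divide-by-two stages:
    Clk_chop = F_OSC / (k * 2 * 2), with k selected pseudorandomly from 5..8."""
    rng = random.Random(seed)  # stand-in for the pseudorandom generator 56
    freqs = []
    for _ in range(num_periods):
        k = rng.choice([5, 6, 7, 8])           # variable division factor
        f_chop = F_OSC / (k * 4)               # after the two divide-by-two stages
        freqs.append((k, f_chop, 2 * f_chop))  # Clk_spin runs at twice Clk_chop
    return freqs

for k, f_chop, f_spin in chopper_frequencies(6):
    print(f"div={k}: Clk_chop={f_chop/1e3:7.2f} kHz, Clk_spin={f_spin/1e3:7.2f} kHz")
# Clk_chop takes the values 300, 250, about 214.3 and 187.5 kHz, as in the text;
# Clk_adc would simply be F_OSC / 2 = 3 MHz.
```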
The signal comprr_o is supplied to an integrator72, which can be implemented as a simple up/down counter and counts either upward or downward depending on the value of the signal comprr_o. In order to illustrate the operation of the integrator, amplitudes78of the example signal show e.g. the offset signal of the amplifier Voa in the phases PH1to PH4. Amplitudes79show the useful signal in the phases PH1to PH4. Since the useful signal appears sometimes positive, sometimes negative in equal proportions, the useful signal is averaged and hence rejected in the integrator72, while the Voa components (offset) according to the amplitudes78keep on being integrated upward. During operation of the respective chopped apparatus (an amplifier77in the case ofFIG.7) using the feedback path, the upward integration comes to an end when the undesirable component in the output signal disappears as a result of the negative feedback caused by the feedback path. The system adapts to this. The signal that is output by the integrator72can be a 12-bit signal, for example. An output signal of the integrator72is supplied to a digital-to-analog converter, in the depicted example a 13-bit offset digital-to-analog converter (ODAC). The signal thus generated is then overlaid via a resistor76on an input of the amplifier77, which is an example of a chopped apparatus. Here, the feedback signal is thus fed in at the input of the amplifier77, which effectively likewise changes the offset of the amplifier77. A signal "Ripple meas" is then in turn tapped off from the amplifier as input signal for the feedback path. This can also be done at an output of the amplifier77or downstream of a chopper demodulator as explained above. FIG.7shows only one possible implementation of a feedback path, and other implementations, for example implementations as are similarly also used in systems having constant chopper frequency, can be used. The effect of the apparatuses discussed here will now also be illustrated with reference toFIGS.8to10. The exact shape of the depicted curves is dependent on the respective implementation, which means that the depicted curves should be understood merely as examples.FIG.8shows a signal characteristic of an output signal of a system such as the system ofFIG.3for a fixed chopper frequency in a curve81and for a variable chopper frequency in a curve80. As can be seen, the rejection of the signal in the case of the fixed chopper frequency fchop is very great (visible as a distinct downward peak). With variable chopper frequency, which varies about fchop, the rejection of the signal close to fchop is less (visible by virtue of smaller and distributed downward peaks). This lesser rejection stems from the fact that each chopper frequency appears only for a short time. As can additionally be seen inFIG.9, susceptibilities to interference at specific high frequencies are much more pronounced in a curve90having constant chopper frequency than in a curve91having random chopper frequency. Moreover, a curve1002inFIG.10shows a noise density and a curve1001shows an accumulated noise density, for which, in particular at high frequencies, much smaller spikes occur than in the case of conventional methods, and no individual frequencies are highly prominent. In curve1001, it can be seen that remaining ripples produce no abrupt increase close to the chopper frequency, but rather only a rise of 10 dB/dec in accordance with the bandwidth, as is theoretically predictable.
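The ripple-rejecting behavior of the feedback path of FIG. 7 described above can be illustrated with a small behavioral model: a 1-bit quantizer drives an up/down counter whose output, scaled by an assumed DAC step, is fed back to cancel the amplifier offset, while the zero-mean useful component averages out in the counter. This Python sketch is a loose illustration only; the offset value, the statistics of the useful component and the step size are assumptions, and the word widths do not correspond to the 12-bit and 13-bit signals of the depicted example.

```python
import random

def simulate_feedback(voa=3.0e-3, useful_amp=5.0e-3, lsb=50e-6, steps=4000, seed=1):
    """Toy behavioral model of the FIG. 7 feedback path (assumed values throughout):
    a 1-bit quantizer drives an up/down counter; the counter output, scaled by the
    DAC step 'lsb', is fed back to cancel the amplifier offset 'voa'. The useful
    signal appears at the quantizer input with zero mean, so it averages out in the
    counter, while the residual offset biases the counting direction."""
    rng = random.Random(seed)
    count, history = 0, []
    for _ in range(steps):
        corr = count * lsb                              # offset-DAC output
        residual = voa - corr                           # offset left after feedback
        useful = rng.uniform(-useful_amp, useful_amp)   # zero-mean useful component
        count += 1 if residual + useful > 0 else -1     # 1-bit ADC + up/down counter
        history.append(corr)
    return sum(history[steps // 2:]) / (steps // 2)     # average of the settled part

print(f"settled correction is about {simulate_feedback()*1e3:.2f} mV (true offset 3.00 mV)")
```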
If just a variable chopper frequency without feedback is used, ripples would undesirably manifest themselves at high frequencies and amplify the total accumulated noise. Thus, remaining ripples that arise as a result of chopping using variable chopper frequency are also significantly reduced by the feedback. Some example implementations are defined by the examples that follow: Example 1. System, comprising: a chopped apparatus having a chopper modulator at an input and a chopper demodulator at an output, a device for providing a chopper signal having a variable chopper frequency to the chopper modulator and the chopper demodulator, and a feedback path from an output of the chopper demodulator to the chopped apparatus, configured to reduce ripples or offsets caused by the variable chopper frequency. Example 2. System according to example 1, wherein an output signal of the feedback path is configured to set an offset of the chopped apparatus. Example 3. System according to example 1 or 2, wherein the variable chopper frequency has a repetition rate at which chopper frequencies repeat, and wherein the feedback path has a filter frequency that is lower than the repetition rate by at least a factor of 2. Example 4. System according to one of examples 1 to 3, wherein the variable chopper frequency is in a frequency range above a useful frequency range of the chopped apparatus. Example 5. System according to one of examples 1 to 4, wherein the feedback path has a filter frequency that is lower than a minimum chopper frequency of the variable chopper frequency by at least a factor of 2. Example 6. System according to one of examples 1 to 5, wherein the feedback path comprises a demodulator, which is configured to operate based on the chopper signal, and an integrator. Example 7. System according to one of examples 1 to 6, moreover comprising a spinning current Hall sensor, wherein an output of the spinning current Hall sensor is coupled to an input of the chopper modulator, wherein the spinning current Hall sensor is configured to be operated at a variable spinning frequency that is an integer multiple of the variable chopper frequency. Example 8. Method, comprising: providing a chopper signal having a variable chopper frequency to a chopper arrangement, and providing feedback from an output of the chopper arrangement to a chopped apparatus in order to compensate for ripples or offsets caused by the variable chopper frequency. Example 9. Method according to example 8, wherein the feedback is configured to set an offset of the chopped apparatus. Example 10. Method according to example 8 or 9, wherein the variable chopper frequency has a repetition rate at which chopper frequencies repeat, and wherein the feedback has a filter frequency that is lower than the repetition rate by at least a factor of 2. Example 11. Method according to one of examples 8 to 10, wherein the variable chopper frequency is in a frequency range above a useful frequency range of a chopped apparatus, which useful frequency range is associated with the chopper arrangement. Example 12. Method according to one of examples 8 to 11, wherein the feedback has a filter frequency that is lower than a minimum chopper frequency of the variable chopper frequency by at least a factor of 2. Example 13. Method according to one of examples 8 to 12, wherein the providing of the feedback comprises demodulating an output signal of the chopper arrangement based on the chopper signal, and integrating the demodulated output signal. Example 14. 
Method according to one of examples 8 to 13, moreover comprising operating a Hall sensor using a spinning current technique, wherein an output of the Hall sensor is coupled to an input of the chopper arrangement, wherein the spinning current technique is operated at a variable spinning frequency that is an integer multiple of the variable chopper frequency. Although this description has illustrated and described specific example implementations, persons with standard knowledge in the art will recognize that a multiplicity of alternative and/or equivalent implementations can be selected as substitution for the specific example implementations that are shown and described in this description, without departing from the scope of the implementation shown. The intention is for this application to cover all adaptations or variations of the specific example implementations that are discussed here. It is therefore intended that this implementation be limited only by the claims and the equivalents of the claims. | 28,895 |
11863138 | DESCRIPTION OF EXAMPLE EMBODIMENTS OF THE DISCLOSURE Overview The systems, methods and devices of this disclosure each have several innovative aspects, no single one of which is solely responsible for all of the desirable attributes disclosed herein. Details of one or more implementations of the subject matter described in this specification are set forth in the description below and the accompanying drawings. Embodiments of the present disclosure provide transconductance circuits with degeneration transistors. In some implementations, a transconductance circuit with degeneration transistors may be used as a transconductance input stage of an amplification stage, where the transconductance input stage is configured to convert an input voltage signal to an output current signal. In some implementations, the amplification stage may further include a load stage, configured to convert the input stage output current back to an output voltage. One or more of such amplification stages may be included in a device such as an amplifier or a comparator. The exact design of transconductance circuits with degeneration transistors, described herein, may be realized in many different ways, all of which being within the scope of the present disclosure. In one example of design variations according to various embodiments of the present disclosure, a choice can be made to employ N-type or P-type field-effect transistors (FETs), e.g., to employ N-type metal-oxide-semiconductor (NMOS) or P-type metal-oxide-semiconductor (PMOS) transistors, as transistors of a given transconductance circuit with degeneration transistors. In another example, in various embodiments, a choice can be made as to what type of transistor architecture to employ. For example, any of the transistors of the transconductance circuits with degeneration transistors as described herein may be planar transistors or may be non-planar transistors (some examples of the latter including FinFETs, nanowire transistors or nanoribbon transistors). One aspect of the present disclosure provides a transconductance circuit that includes a first portion and a second portion. The first portion includes a first degeneration transistor (e.g., transistor MDP, shown inFIG.3), coupled to a first input and configured to receive a first input voltage (e.g., voltage VIP, shown inFIG.3) at the first input. The first portion further includes a first input transistor (e.g., transistor MP, shown inFIG.3), coupled to the first degeneration transistor and to a first output, and configured to provide a first output current (e.g., current IOP, shown inFIG.3) at the first output. The second portion includes a second degeneration transistor (e.g., transistor MDM, shown inFIG.3), coupled to the first degeneration transistor and to a second input, and configured to receive a second input voltage (e.g., voltage VIM, shown inFIG.3) at the second input. The second portion further includes a second input transistor (e.g., transistor MM, shown inFIG.3), coupled to the second degeneration transistor and to a second output, and configured to provide a second output current (e.g., current IOM, shown inFIG.3) at the second output. 
Such a transconductance circuit may be used as an input stage capable of reliably operating within drain-source (DS) breakdown voltage (BV) (BVDS) of the transistors employed therein even in absence of any other protection devices, and may be significantly faster, consume lower power, and occupy smaller die area compared to conventional transconductance circuits. The reason why transistors MDP and MDM are referred to as “degeneration transistors” may be explained as follows. These transistors function as nonlinear resistors, meaning that each of transistors MDP and MDM operates as a resistor between their drain and source terminals, where the value of the drain-source resistance is based on the voltage difference between the first and second input voltages. In some embodiments, the variation of the drain-source resistance may be a nonlinear function of the voltage difference between the first and second voltage inputs. Since transistors MDP and MDM operate like resistors, they are basically degenerating the input differential pair formed by the first and second input transistors, MP and MM. Consequently, transistors MDP and MDM are referred to as “degeneration transistors.” Other aspects of the present disclosure provide devices (e.g., amplifiers, comparators, etc.) and systems (e.g., electronic testing systems, etc.) that may include one or more transconductance circuits with degeneration transistors as described herein. While some embodiments of the present disclosure refer to amplifiers and comparators as example devices, and further refer to electronic testing systems as example systems in which transconductance circuits with degeneration transistors as described herein may be implemented, in other embodiments, transconductance circuits with degeneration transistors as described herein may be implemented in devices other than amplifiers or comparators, and/or in systems other than electronic testing systems, all of which embodiments being within the scope of the present disclosure. As will be appreciated by one skilled in the art, aspects of the present disclosure, in particular aspects of transconductance circuits with degeneration transistors as proposed herein, may be embodied in various manners—e.g. as a method, a device, or a system. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, at least partially software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” The following detailed description presents various descriptions of specific certain embodiments. However, the innovations described herein can be embodied in a multitude of different ways, for example, as defined and covered by the select examples. In the following description, reference is made to the drawings, where like reference numerals can indicate identical or functionally similar elements. It will be understood that elements illustrated in the drawings are not necessarily drawn to scale. Moreover, it will be understood that certain embodiments can include more elements than illustrated in a drawing and/or a subset of the elements illustrated in a drawing. Further, some embodiments can incorporate any suitable combination of features from two or more drawings. The description may use the phrases “in an embodiment” or “in embodiments,” which may each refer to one or more of the same or different embodiments. 
Unless otherwise specified, the use of the ordinal adjectives “first,” “second,” and “third,” etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking or in any other manner. Furthermore, for the purposes of the present disclosure, the phrase “A and/or B” or notation “A/B” means (A), (B), or (A and B), while the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). As used herein, the notation “A/B/C” means (A, B, and/or C). The term “between,” when used with reference to measurement ranges, is inclusive of the ends of the measurement ranges. Various aspects of the illustrative embodiments are described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. For example, the term “connected” means a direct electrical connection between the things that are connected, without any intermediary devices/components, while the term “coupled” means either a direct electrical connection between the things that are connected, or an indirect connection through one or more passive or active intermediary devices/components. In another example, the term “circuit” means one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function. Sometimes, in the present descriptions, the term “circuit” may be omitted (e.g., a current mirror circuit may be referred to simply as a “current mirror,” etc.). If used, the terms “substantially,” “approximately,” “about,” etc., may be used to generally refer to being within +/−20% of a target value, e.g., within +/−10% of a target value, based on the context of a particular value as described herein or as known in the art. Example Applications of and Requirements for Transconductance Circuits For purposes of illustrating transconductance circuits with degeneration transistors, proposed herein, it might be useful to first understand phenomena that may come into play when transconductance circuits are involved. The following foundational information may be viewed as a basis from which the present disclosure may be properly explained. Such information is offered for purposes of explanation only and, accordingly, should not be construed in any way to limit the broad scope of the present disclosure and its potential applications. High-voltage electronic components such as amplifiers or comparators can be found in many modern electronic systems utilized within wide range of markets, such as industrial, military, automotive, and automatic testing (e.g., pin electronics). These components are required to operate within a wide input/output voltage range, generally well beyond the breakdown voltage levels of the transistors used therein, with ever-increasing operating frequencies for variety of reasons, such as increased test throughput or testing high-speed, high-voltage flash-memories. The high-voltage amplifier/comparator performance requirements for the state-of-the-art electronic testing systems are particularly stringent since the testing systems are expected to perform significantly better than the devices that they are testing. 
Therefore, these components are expected to operate seamlessly within wide input/output voltage range, at high speed (or equivalently with minimum delay variation for comparators) to support higher and higher data rates. In addition, these components are expected to operate with low power consumption per unit to allow massive parallelism as high pin count/high volume device testing demands. Hence, high voltage range, high operating frequency and low power consumption are three key design parameters differentiating the offered product to the respective market. Generally speaking, high-voltage amplifiers/comparators include one or more cascaded preamplifier stages having rather low small signal gain followed by series of cascaded amplification stages to get the desired output signal format. The first preamplifier stage typically includes a transconductance circuit, configured to convert a differential input signal ΔVI (where “V” stands for “voltage” and “I” stands for “input”) to an output current ΔIO (where “I” stands for “current” and “O” stands for “output”), and a load circuit, configured to convert the input stage output current ΔIO back to an output voltage ΔVO (where “V” stands for “voltage” and “O” stands for “output”). The first transconductance stage is a critical sub-component because it sets the performance boundaries of amplifiers/comparators in terms of reliability, input signal range and maximum input data rate. It would be desirable to have a transconductance circuit that 1) has a very large transconductance (gm) at small ΔVI levels so that the amplifier/comparator in which such a transconductance circuit is included can resolve small ΔVI levels at full speed, 2) has a very small gm for large ΔVI levels to minimize delay variation due to the cascading stage over driving, 3) has a wide input signal range, 4) can handle high-voltage levels reliably, and 5) has low power consumption. For the remainder of the disclosure, the proposed transconductance circuits implementing a self-protected input stage will be analyzed as a part of a high-voltage comparator. That said, the descriptions provided herein, including the advantages of the proposed design, are also applicable for high-voltage amplifiers since both high-voltage comparators and high-voltage amplifiers might have high-voltage levels at their inputs. Furthermore, the transconductance circuits presented here can also be utilized for low-voltage comparator/amplifier designs since they may improve the input stage speed in a very simple and elegant way. Various transconductance circuits illustrated in the drawings of the present disclosure are shown with N-type input transistors because such transistors may offer operation speed advantage over P-type transistors. Nevertheless, if desired, converting N-type input transistor-based schematics to P-type transistor-based schematics is straight forward for a person skilled in the art and, therefore, such modified embodiments are within the scope of the present disclosure. 
Furthermore, transistors of various transconductance circuits illustrated in the drawings of the present disclosure are shown with electric circuit diagram notations of high-voltage MOSFET devices, to indicate that any of these transistors can be a high-voltage transistor such as an extended drain metal-oxide-semiconductor (EDMOS) transistor, a double-diffused metal-oxide-semiconductor (DMOS) transistor, a laterally-diffused metal-oxide-semiconductor (LDMOS) transistor, or a V-groove metal-oxide-semiconductor (VMOS) transistor. The common property of these high-voltage MOSFET devices is that all of their breakdown voltages may match the process CMOS core breakdown voltages except the drain terminal related breakdown voltages. Using special gate dielectric oxides and drain diffusion engineering, drain terminal breakdown voltages (or blocking voltages) can be increased well beyond the CMOS core breakdown voltage levels. However, in other embodiments, transistors that are not high-voltage transistors may be used as well and, therefore, are within the scope of the present disclosure. Conventional Transconductance Circuits FIGS.1and2illustrate some examples of devices with conventional transconductance circuits. FIG.1is an electric circuit diagram of a transconductance circuit112with a simple differential pair included in an amplification stage110of a device100. As shown inFIG.1, the amplification stage110may further include a load circuit114. The device100may be a comparator or an amplifier, where the amplification stage110may be a first preamplifier stage and may be followed up by one or more gain stages120. The implementation shown inFIG.1utilizes a classical differential pair of input transistors MP and MM, each of which may be a high-voltage transistor (e.g., an LDMOS transistor). The reliable common-mode voltage range of the classical input differential pair is equal to the LDMOS BVDS, e.g., 24 volts (V). The reliable differential input voltage range, on the other hand, may be approximately equal to the gate-to-source breakdown voltage plus a gate-to-source voltage (VGS), e.g., ≈7-7.5 V. As shown inFIG.1, in the transconductance circuit112, one terminal of the first input transistor MP (e.g., the gate terminal of the transistor MP) may be coupled to a first input132-1, where the transistor MP may receive a first input voltage VIP, while one terminal of the second input transistor MM (e.g., the gate terminal of the transistor MM) may be coupled to a second input132-2, where the transistor MM may receive a second input voltage VIM. As also shown inFIG.1, another terminal of the first input transistor MP (e.g., the drain terminal of the transistor MP) may be coupled to a first output134-1, where the transistor MP may provide a first output current IOP, while another terminal of the second input transistor MM (e.g., the drain terminal of the transistor MM) may be coupled to a second output134-2, where the transistor MM may provide a second output current IOM. A third terminal of the first input transistor MP (e.g., the source terminal of the transistor MP) may be coupled to a third terminal of the second input transistor MM (e.g., the source terminal of the transistor MM), e.g., via a node NCM. As shown inFIG.1, for each of transistors MP and MM, the source terminal of the transistor is coupled to the bulk terminal of the transistor.
As also shown inFIG.1, the transconductance circuit112may further include a first current source142-1, coupled to the source terminal of the first input transistor MP, and a second current source142-2, coupled to the source terminal of the second input transistor MM. Together, the first and second current sources142-1/2may provide a total tail current IT for the differential pair of the input transistors MP and MM of the transconductance circuit112, e.g., each current source142providing a tail current IT/2. In some embodiments, the transconductance circuit112may be included in a comparator (e.g., the device100may be a comparator). In this case, VIP and VIM may be the inputs to the comparator. These inputs are provided (driven) by the outside world. The transconductance circuit112is then configured to evaluate the difference between the voltage levels of the inputs VIP and VIM and generate an output current indicative of whether the difference is positive or negative (e.g., to generate a logic 1 or logic 0 output indicative of whether the difference between the voltage levels of the inputs VIP and VIM is positive or negative). For example, in some implementations, IOP−IOM may be positive to indicate that VIP−VIM is positive, and IOP−IOM may be negative to indicate that VIP−VIM is negative. The actual value of the difference between IOP and IOM may be a function of the difference between VIP and VIM. In other embodiments, the transconductance circuit112may be included in an amplifier (e.g., the device100may be an amplifier). Such an amplifier may be utilized within a feedback loop and be configured to make the inputs VIP and VIM substantially equal by changing the outputs IOP and IOM, which outputs may then be coupled to the amplifier input through the feedback. The problem with the transconductance circuit112may arise due to the fact that the NCM node voltage is set by the one of the input transistors MP, MM that has the largest input voltage at its gate. Once the differential pair of the input transistors MP, MM completely switches the tail current IT to an output, the turned-off input transistor (i.e., the one of the input transistors MP, MM that has the lowest input signal at its gate terminal) will have a gate oxide breakdown at its source terminal boundary. For example, when the transconductance circuit112is used in a comparator, especially a high-voltage comparator, the difference between the minimum and maximum levels of VIP and VIM may be well above the gate oxide reliability voltage rating of the transistors MP and MM. On the other hand, when the transconductance circuit112is used in an amplifier, if the amplifier input signal range is larger than the breakdown ratings of the transistors MP and MM, then it may have the same reliability problem as a high-voltage comparator. During steady state, i.e., when the amplifier settles to its final output level or waveform, the two inputs VIP and VIM would be substantially equal and would not have a reliability problem, but the inputs VIP and VIM may be significantly different at the beginning of the settling and this would create a reliability problem for the amplifier. Hence, a classical differential pair transconductance circuit112cannot operate reliably for high-voltage applications, in the sense that reliable operation is ensured only when ΔVI is properly bounded. FIG.2is an electric circuit diagram of a transconductance circuit212with isolation diodes included in an amplification stage210of a device200.
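To make the dependence of IOP−IOM on VIP−VIM concrete for the classical differential pair of FIG. 1, the sketch below evaluates the standard textbook square-law expression for a MOS differential pair. This model is not taken from the present disclosure; the device parameter k and the tail current are arbitrary assumed values, and the formula is only meant to illustrate how the tail current is steered as the input difference grows.

```python
import math

def diff_pair_output(delta_vi, i_tail, k):
    """Textbook square-law model of IOP - IOM for a MOS differential pair.
    delta_vi : VIP - VIM in volts
    i_tail   : total tail current IT in amperes
    k        : transconductance parameter mu_n*Cox*(W/L) in A/V^2
    Valid while both devices conduct; beyond that point the tail current is
    fully steered to one side, so |IOP - IOM| = IT."""
    v_switch = math.sqrt(2 * i_tail / k)  # |delta_vi| at which one device turns off
    if abs(delta_vi) >= v_switch:
        return math.copysign(i_tail, delta_vi)
    return 0.5 * k * delta_vi * math.sqrt(4 * i_tail / k - delta_vi ** 2)

# Assumed example values: IT = 1 mA, k = 2 mA/V^2 (so full steering at 1 V).
for dv in (-0.3, -0.1, 0.0, 0.1, 0.3, 1.5):
    print(f"dVI = {dv:+.2f} V -> IOP - IOM = {diff_pair_output(dv, 1e-3, 2e-3)*1e3:+.3f} mA")
```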
Similar toFIG.1, the amplification stage210may further include a load circuit214, and the device200may be a comparator or an amplifier, where the amplification stage210may be a first preamplifier stage and may be followed up by one or more gain stages220. Similar toFIG.1, the transconductance circuit212includes a differential pair of input transistors MP and MM, each of which may be a high-voltage transistor (e.g., an LDMOS transistor). Also similar toFIG.1, in the transconductance circuit212, one terminal of the first input transistor MP (e.g., the gate terminal of the transistor MP) may be coupled to a first input232-1, where the transistor MP may receive a first input voltage VIP, while one terminal of the second input transistor MM (e.g., the gate terminal of the transistor MM) may be coupled to a second input232-2, where the transistor MM may receive a second input voltage VIM. Further similar toFIG.1, in the transconductance circuit212, another terminal of the first input transistor MP (e.g., the drain terminal of the transistor MP) may be coupled to a first output234-1, where the transistor MP may provide a first output current IOP, while another terminal of the second input transistor MM (e.g., the drain terminal of the transistor MM) may be coupled to a second output234-2, where the transistor MM may provide a second output current IOM. Another similarity toFIG.1is that, as shown inFIG.2, for each of transistors MP and MM of the transconductance circuit212, the source terminal of the transistor may be coupled to the bulk terminal of the transistor. In contrast toFIG.1, the transconductance circuit212further includes an isolation diode DP and an isolation diode DM as shown inFIG.2, and may, therefore, be referred to as an “isolating input transconductance circuit.” As shown inFIG.2, a third terminal of the first input transistor MP (e.g., the source terminal of the transistor MP) may be coupled to a first terminal of the isolation diode DP (e.g., to the anode terminal of the isolation diode DP), while a third terminal of the second input transistor MM (e.g., the source terminal of the transistor MM) may be coupled to a first terminal of the isolation diode DM (e.g., to the anode terminal of the isolation diode DM). The second terminal of the isolation diode DP (e.g., the cathode terminal of the isolation diode DP) may be coupled to the second terminal of the isolation diode DM (e.g., the cathode terminal of the isolation diode DM), e.g., by virtue of each of these second terminals being coupled to the node NCM. The node NCM and, therefore, the second terminals of each of the isolation diodes DP and DM may be coupled to a current source244, configured to provide the tail current IT. The transconductance circuit212may further include a first bias current source242-1, coupled to the source terminal of the first input transistor MP, and a second bias current source242-2, coupled to the source terminal of the second input transistor MM. Each of the first and second bias current sources242-1/2may provide a bias current IB for the differential pair of the input transistors MP and MM of the transconductance circuit212. It is possible to view the input transistors MP and MM of the transconductance circuit212operating like a source follower whose bias current is changing between IB and IB+IT. 
The isolation diodes DP and DM may operate like a classical differential pair, or a half-diode bridge, to switch the tail current IT between the input transistors MP and MM with respect to the applied input signal difference ΔVI. Since the bias current of the input transistors MP and MM of the transconductance circuit212never reaches zero, they are always operating in the saturation region and, consequently, their gate oxide at the source terminal side is protected. For large ΔVI, the isolation diodes DP and DM effectively separate, or isolate, the positive and negative halves of the input stage to protect the input transistors MP and MM. In this manner, the architecture of the transconductance circuit212effectively migrates the high-voltage reliability problem from the input transistors MP, MM to the isolation diodes DP, DM. For reliable operation, the reverse breakdown voltage of the isolation diodes DP, DM should match the BVDS of the input transistors MP, MM. The transconductance circuit212has several advantages. One is that the architecture of the transconductance circuit212is symmetrical and can accept differential, as well as single-ended, input signals. Another advantage is that the input transistors MP, MM are always on. Hence, the stage delay does not suffer from the on/off transitions of the input transistors. Yet another advantage is that the transconductance circuit212requires relatively small ΔVI to switch the tail current IT completely. Still another advantage is that the transconductance circuit212provides its largest transconductance at the input cross point (i.e., when ΔVI is substantially equal to zero) and the transconductance is reduced as ΔVI increases. Hence, the transconductance circuit212balances the large gain requirement for small input signal amplification to a given logic level versus minimization of the delay variation due to the inter-stage node voltage overdrive at a large ΔVI. The transconductance circuit212also has some disadvantages in certain settings. One disadvantage is that the transconductance circuit212requires isolation diodes with a fast turn on and turn off, and with low parasitic capacitance (such as Schottky diodes) with a large reverse breakdown voltage. Another disadvantage is that the transconductance circuit212requires extra voltage headroom for the isolation diodes DP, DM, thereby reducing the usable input signal range. Yet another disadvantage of the transconductance circuit212is that the maximum transconductance that can be achieved for a given power level may be reduced due to the source degeneration effect of the isolation diodes DP, DM. Although the transconductance circuit212may be well suited for high-speed and low-voltage designs, the first disadvantage listed above (or, rather, the lack of such isolation diodes) may be one of the reasons why the transconductance circuit212may fail to deliver the required performance level for high-speed, high-voltage applications. While it is possible to relax the high breakdown voltage requirement by cascoding multiple isolation diodes, this imposes a heavy penalty on the headroom requirement (hence on the usable input signal range) and on delay variation. In contrast, the transconductance circuit with degeneration transistors, as described herein, implements the input isolation scheme (and, hence, protects the input transistors) without the need for extra voltage headroom or an isolation diode with properties required for the transconductance circuit212. 
Although not specifically shown in the present drawings, other examples of conventional transconductance circuits rely on clamping of the input signal before driving a classical differential pair. Such circuits, however, have disadvantages as well. For example, they are not symmetrical with respect to the input signals and, hence, are only suitable for applications with a single-ended input compared to a DC threshold voltage. A Transconductance Circuit with Degeneration Transistors FIG.3is an electric circuit diagram of a transconductance circuit312with degeneration transistors included in an amplification stage310of a device300, according to some embodiments of the present disclosure. Similar toFIG.1, the amplification stage310may further include a load circuit314, and the device300may be a comparator or an amplifier, where the amplification stage310may be a first preamplifier stage and may be followed up by one or more gain stages320. Similar toFIG.1, the transconductance circuit312includes a differential pair of input transistors MP and MM, each of which may be a high-voltage transistor (e.g., an LDMOS transistor). Also similar toFIG.1, in the transconductance circuit312, one terminal of the first input transistor MP (e.g., the gate terminal of the transistor MP) may be coupled to a first input332-1, where the transistor MP may receive a first input voltage VIP, while one terminal of the second input transistor MM (e.g., the gate terminal of the transistor MM) may be coupled to a second input332-2, where the transistor MM may receive a second input voltage VIM. Further similar toFIG.1, in the transconductance circuit312, another terminal of the first input transistor MP (e.g., the drain terminal of the transistor MP) may be coupled to a first output334-1, where the transistor MP may provide a first output current IOP, while another terminal of the second input transistor MM (e.g., the drain terminal of the transistor MM) may be coupled to a second output334-2, where the transistor MM may provide a second output current IOM. Another similarity toFIG.1is that, as shown inFIG.3, for each of transistors MP and MM of the transconductance circuit312, the source terminal of the transistor may be coupled to the bulk terminal of the transistor. Also similar toFIG.1, the transconductance circuit312may further include a first current source342-1, coupled to the source terminal of the first input transistor MP, and a second current source342-2, coupled to the source terminal of the second input transistor MM. Together, the first and second current sources342-1/2may provide a total tail current IT for the differential pair of the input transistors MP and MM of the transconductance circuit312, e.g., each current source342may provide a tail current IT/2 in some embodiments, although in other embodiments this distribution may be different, as described in greater detail below. In contrast toFIG.1, the transconductance circuit312further includes a degeneration transistor MDP and a degeneration transistor MDM as shown inFIG.3, and may, therefore, be referred to as a "transconductance circuit with degeneration transistors." As shown inFIG.3, a third terminal of the first input transistor MP (e.g., the source terminal of the transistor MP) may be coupled to a third terminal of the degeneration transistor MDP (e.g., the source terminal of the degeneration transistor MDP, which source terminal may be coupled to the bulk terminal of the degeneration transistor MDP).
As also shown inFIG.3, a third terminal of the second input transistor MM (e.g., the source terminal of the transistor MM) may be coupled to a third terminal of the degeneration transistor MDM (e.g., the source terminal of the degeneration transistor MDM, which source terminal may be coupled to the bulk terminal of the degeneration transistor MDM). The second terminal of the degeneration transistor MDP (e.g., the drain terminal of the degeneration transistor MDP) may be coupled to the second terminal of the degeneration transistor MDM (e.g., the drain terminal of the degeneration transistor MDM), e.g., by virtue of the second terminal of each degeneration transistor being coupled to the node NCM, as shown inFIG.3. The first terminal of the first degeneration transistor MDP (e.g., the gate terminal of the degeneration transistor MDP) may be coupled to the first input332-1, where the degeneration transistor MDP may receive the first input voltage VIP, while the first terminal of the second degeneration transistor MDM (e.g., the gate terminal of the degeneration transistor MDM) may be coupled to the second input332-2, where the degeneration transistor MDM may receive the second input voltage VIM. Turning to the aspect ratios of various transistors included in the transconductance circuit312, an aspect ratio (Ax) of a FET refers to a ratio of a channel width (wx) to a channel length (lx) of the FET: Ax=wx/lx (1) In some embodiments of the transconductance circuit312, a ratio of an aspect ratio of the degeneration transistor MDP (AMDP) to an aspect ratio of the input transistor MP (AMP) may be substantially equal to a ratio of an aspect ratio of the degeneration transistor MDM (AMDM) to an aspect ratio of the input transistor MM (AMM): AMDP/AMP=AMDM/AMM (2) In some embodiments of the transconductance circuit312, the aspect ratio of the first input transistor MP may be substantially equal to the aspect ratio of the second input transistor MM, or, equivalently, the aspect ratio of the first degeneration transistor MDP may be substantially equal to the aspect ratio of the second degeneration transistor MDM. For example, the aspect ratio of each of the first and second input transistors MP, MM may be about 1, while the aspect ratio of each of the first and second degeneration transistors MDP, MDM may be about N, where N is any positive real number. However, in other embodiments, these aspect ratios may be different, as long as the ratio of the aspect ratios of the first degeneration and input transistors MDP, MP is substantially equal to the ratio of the aspect ratios of the second degeneration and input transistors MDM, MM. In embodiments where the aspect ratio of the first input transistor MP is not equal to the aspect ratio of the second input transistor MM, the respective tail current sources342-1,342-2coupled to these transistors may be ratioed accordingly. In general, if the aspect ratio of the first input transistor MP is designated as "AMP" and the aspect ratio of the second input transistor MM is designated as "AMM", the first tail current source342-1coupled to the first input transistor MP may be configured to provide a current IT1of IT1=AMP/(AMP+AMM)*IT, (3) while the second tail current source342-2coupled to the second input transistor MM may be configured to provide a current IT2of IT2=AMM/(AMP+AMM)*IT, (4) so that the sum of these two currents is substantially equal to IT.
For example, if the aspect ratio of the first input transistor MP is substantially equal to the aspect ratio of the second input transistor MM and the aspect ratio of the first degeneration transistor MDP is substantially equal to the aspect ratio of the second degeneration transistor MDM, then each of the first tail current source342-1and the second tail current source342-2may provide the current IT/2. The aspect ratios of the first and second input transistors MP, MM, and the first and second degeneration transistors MDP, MDM may affect the equivalent transconductance (GM) of the transconductance circuit312. For example, if a ratio of the aspect ratio of the first degeneration transistor MDP to the aspect ratio of the first input transistor MP is N (or, equivalently, if a ratio of the aspect ratio of the second degeneration transistor MDM to the aspect ratio of the second input transistor MM is N), then the transconductance GM of the transconductance circuit312when the first input voltage VIP is equal to the second input voltage VIM may be proportional to N/(1+N). For example, in some embodiments, the transconductance GM may be calculated as GM=N/(N+1)*gmMP, (5) where gmMP is the transconductance of the first input transistor MP, and where gmMP may be calculated as gmMP=√(βMP*IT), (6) where βMP is the gain factor of the transistor MP and may be calculated as βMP=μn*Cox*(wMP/lMP)=μn*Cox*AMP. (7) In some embodiments, during operation of the transconductance circuit312, each of the first input transistor MP and the second input transistor MM may be configured to operate in a saturation region. On the other hand, each of the first degeneration transistor MDP and the second degeneration transistor MDM may be configured to operate either in a linear region or in a saturation region. In particular, in some embodiments, when one of the degeneration transistors MDP, MDM enters the saturation region, another one may continue to operate in the linear region. The transistors MDP, MDM may function as nonlinear degeneration resistors and, hence, are referred to as "degeneration transistors." Because the degeneration transistors MDP, MDM never turn off during normal operation, they do not suffer from the MOS channel forming/removing time constants. It can be shown that, during operation of the transconductance circuit312, an equivalent degeneration resistance between the source terminal of the first degeneration transistor MDP (e.g., the node NCMP shown inFIG.3) and the source terminal of the second degeneration transistor MDM (e.g., the node NCMM shown inFIG.3) may be symmetric with respect to the input signal difference ΔVI, meaning that the resistance between the nodes NCMP and NCMM may change with the applied input signal difference VIP−VIM and that the actual value of this resistance may change the value of the transconductance GM, where the change is symmetric in that GM(VIP−VIM)=GM(VIM−VIP). Hence, the transconductance circuit312would produce the same output current IOP, IOM if a voltage difference of 100 mV or a voltage difference of −100 mV is applied at the inputs VIP, VIM. It can also be shown that the equivalent degeneration resistance between the nodes NCMP and NCMM may be smallest when the first input voltage VIP is substantially equal to the second input voltage VIM.
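Equations (3) to (7) above can be collected into a short numerical sketch. The helper below simply evaluates those expressions; the mobility-oxide product, aspect ratios, degeneration ratio N and tail current are assumed example values rather than parameters of the disclosed circuit.

```python
import math

def gm_input_stage(mu_n_cox, a_mp, a_mm, n_ratio, i_tail):
    """Evaluate equations (3)-(7): the tail-current split between the two halves,
    the input-transistor transconductance gm_MP, and the equivalent GM of the
    degenerated stage at VIP = VIM, i.e. GM = N/(N+1) * gm_MP."""
    it1 = a_mp / (a_mp + a_mm) * i_tail        # eq. (3), current source 342-1
    it2 = a_mm / (a_mp + a_mm) * i_tail        # eq. (4), current source 342-2
    beta_mp = mu_n_cox * a_mp                  # eq. (7), gain factor of MP
    gm_mp = math.sqrt(beta_mp * i_tail)        # eq. (6)
    gm_eq = n_ratio / (n_ratio + 1) * gm_mp    # eq. (5)
    return it1, it2, gm_mp, gm_eq

# Assumed example: mu_n*Cox = 200 uA/V^2, A_MP = A_MM = 20, N = 4, IT = 1 mA.
it1, it2, gm_mp, gm_eq = gm_input_stage(200e-6, 20, 20, 4, 1e-3)
print(f"IT1 = {it1*1e3:.2f} mA, IT2 = {it2*1e3:.2f} mA")
print(f"gm_MP = {gm_mp*1e3:.2f} mS, GM = {gm_eq*1e3:.2f} mS (GM/gm_MP = {gm_eq/gm_mp:.2f})")
# For N = 4 the ratio GM/gm_MP evaluates to 0.80, matching the 80% figure below.
```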
Furthermore, the equivalent resistance between the source terminal of the first degeneration transistor MDP and the source terminal of the second degeneration transistor MDM may increase as an absolute value (or magnitude) of a difference between the input voltages VIP and VIM increases. Once the degeneration transistor whose gate terminal is coupled to the lowest input voltage (which could be either transistor MDP or transistor MDM) enters into the saturation region, the positive and negative half of the transconductance circuit312become effectively isolated from each other. Hence, the proposed input stage does not need any extra protection devices for high-voltage protection and is self-protected. The degeneration transistors MDP and MDM reduce the equivalent transconductance GM of the differential pair of the input transistors MP, MM. It can be shown that the equivalent transconductance GM at ΔVI=0 may be reduced to N/(1+N) of its undegenerated value. Hence, the equivalent transconductance GM may drop to 80% of its value compared to the zero-degeneration case at the same power level when N=4. Once one of the degeneration transistors MDP, MDM enters the saturation region, the drain current of the respective input transistor reaches its minimum level and is substantially equal to IT/(2*(N+1)). The remainder of the respective input-side tail current may then be conveyed to the complementary half input side through the degeneration transistor operating in the saturation region. Under this condition, the ratio of the output currents may be substantially equal to 2N+1. If desired, these values can be arbitrarily set by properly choosing the ratio of the input and degeneration transistor aspect ratios, i.e., by choosing N. This feature is particularly attractive because it prevents the input transistors MP, MM from entering the cut-off region, which would slow the block down during normal operation. In addition, it allows trading off small-signal gain and the minimum input signal difference needed to switch the total tail current against large-signal overdrive and the delay variation related to the NCMP/NCMM node capacitance. In some implementations, high small-signal gain and small input signal difference for total tail current switching goals may favor a relatively large N value, whereas reduced large signal overdrive and reduced NCMP/NCMM node capacitance related delay variation goals may favor a relatively small N value. The exact value of N specific for the transistors used in the transconductance circuit312may be determined using simulation. It can be shown that, in some implementations, once one of the degeneration transistors MDP, MDM enters into the saturation region, the complementary degeneration transistor stays in the linear operation region if N is chosen larger than or equal to about 1.5. The large voltage drop between the NCMP and NCMM nodes may appear mainly across the drain-source terminals of the degeneration transistor operating in the saturation region. As the foregoing illustrates, the transconductance circuit312is symmetric with respect to the input terminals and, therefore, may advantageously process both single-ended as well as differential input signals. By including the degeneration transistors MDP, MDM as described above, the transconductance circuit312may operate up to the BVDS of the transistors included therein without any reliability problems in the absence of additional protection mechanisms.
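The large-signal limits described above, namely the minimum input-transistor current IT/(2*(N+1)) once one degeneration transistor saturates and the resulting output current ratio of 2N+1, can be tabulated for a few values of N. The sketch below only restates that arithmetic; the tail current and the chosen values of N are illustrative assumptions.

```python
def steering_limits(i_tail, n_ratio):
    """Minimum and maximum output currents of the degenerated input stage once one
    degeneration transistor saturates, and their ratio (which equals 2N+1)."""
    i_min = i_tail / (2 * (n_ratio + 1))            # current left in the 'off' side
    i_max = i_tail - i_min                          # remainder conveyed to the other side
    return i_min, i_max, i_max / i_min

for n in (1.5, 2, 4, 8):
    i_min, i_max, ratio = steering_limits(1e-3, n)  # IT = 1 mA assumed
    print(f"N = {n:>3}: Imin = {i_min*1e6:6.1f} uA, Imax = {i_max*1e6:6.1f} uA, "
          f"ratio = {ratio:.1f} (2N+1 = {2*n+1:.1f})")
```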
The transconductance circuit312may have lower parasitic load capacitance, which may translate to lower power consumption at a given data rate, compared to conventional implementations. Example Systems Various embodiments of transconductance circuits with degeneration transistors as described above may be implemented in any kind of system where conversion of voltage to current may be used. One example of such a system was shown inFIG.3, where the transconductance circuit312is shown as a part of the amplification stage310that further includes the load circuit314and is followed up by the one or more gain stages320. However, in other embodiments, the transconductance circuit312may be a part of the amplification stage310that further includes the load circuit314but is not followed up by the one or more gain stages320illustrated inFIG.3, while, in still other embodiments, the transconductance circuit312is not necessarily used with the load circuit314as shown inFIG.3. In some embodiments, the transconductance circuit312may be included in a comparator or an amplifier, e.g., as was described above with reference to the transconductance circuit112. In other embodiments, the transconductance circuit312may be included in various range finding systems. For example, aspects of this disclosure can be implemented in any suitable light detection and ranging (LIDAR) system such as, for example, automotive LIDAR, industrial LIDAR, space LIDAR, military LIDAR, etc. LIDAR systems can include a receiver or a transmitter and a receiver. LIDAR systems can be integrated with a vehicle, such as an automobile, a drone such as an unmanned flying machine, an autonomous robot, or a space vehicle. LIDAR systems can transmit and/or receive laser light. LIDAR systems can be used for three-dimensional sensing applications. LIDAR systems can be used with augmented reality technology. In further embodiments, the transconductance circuit312may be included in a radio system, e.g., in an RF transmitter of a cellular wireless communication system. In still other embodiments, the transconductance circuit312may be used in variable-gain amplifiers, continuous-time filters, delta-sigma modulators, or data converters. Moreover, various embodiments of transconductance circuits with degeneration transistors can be implemented in various electronic devices. Examples of the electronic devices can include, but are not limited to, electronic products, parts of electronic products such as integrated circuits, vehicular electronics such as automotive electronics, etc. Further, the electronic devices can include unfinished products. Select Examples The following paragraphs provide examples of various ones of the embodiments disclosed herein. Example 1 provides a transconductance circuit that includes a first portion and a second portion. The first portion includes a first degeneration transistor (e.g., transistor MDP), configured to receive a first input voltage (e.g., voltage VIP) at a first input, and a first input transistor (e.g., transistor MP), coupled to the first degeneration transistor, and configured to provide a first output current (e.g., current IOP) at a first output.
The second portion includes a second degeneration transistor (e.g., transistor MDM), coupled to the first degeneration transistor and configured to receive a second input voltage (e.g., voltage VIM) at a second input, and a second input transistor (e.g., transistor MM), coupled to the second degeneration transistor, and configured to provide a second output current (e.g., current IOM) at a second output. The first degeneration transistor is further coupled to the second degeneration transistor. Example 2 provides the transconductance circuit according to example 1, where each of the first degeneration transistor, the second degeneration transistor, the first input transistor, and the second input transistor has a gate terminal, a source terminal, and a drain terminal. Example 3 provides the transconductance circuit according to example 2, where the first degeneration transistor is configured to receive the first input voltage at the first input by having the gate terminal of the first degeneration transistor being coupled to the first input, and the second degeneration transistor is configured to receive the second input voltage at the second input by having the gate terminal of the second degeneration transistor being coupled to the second input. Example 4 provides the transconductance circuit according to examples 2 or 3, where the first input transistor is coupled to the first degeneration transistor by having the source terminal of the first input transistor being coupled to the source terminal of the first degeneration transistor, and the second input transistor is coupled to the second degeneration transistor by having the source terminal of the second input transistor being coupled to the source terminal of the second degeneration transistor. Example 5 provides the transconductance circuit according to any one of examples 2-4, where the first input transistor is configured to provide the first output current at the first output by having the drain terminal of the first input transistor being coupled to the first output, and the second input transistor is configured to provide the second output current at the second output by having the drain terminal of the second input transistor being coupled to the second output. Example 6 provides the transconductance circuit according to any one of examples 2-5, where the first degeneration transistor is coupled to the second degeneration transistor by having the drain terminal of the first degeneration transistor being coupled to the drain terminal of the second degeneration transistor. Example 7 provides the transconductance circuit according to any one of examples 2-6, where, for each transistor of the first degeneration transistor, the first input transistor, the second degeneration transistor, and the second input transistor, the source terminal of the each transistor is coupled to a bulk terminal of the each transistor. Example 8 provides the transconductance circuit according to any one of examples 2-7, where a ratio of an aspect ratio of the first degeneration transistor to an aspect ratio of the first input transistor is substantially equal to a ratio of an aspect ratio of the second degeneration transistor to an aspect ratio of the second input transistor. 
Example 9 provides the transconductance circuit according to any one of examples 2-8, where a ratio of an aspect ratio of the first degeneration transistor to an aspect ratio of the first input transistor is N, and a transconductance (GM) of the transconductance circuit when the first input voltage is equal to the second input voltage is proportional to N/(1+N). Example 10 provides the transconductance circuit according to any one of examples 2-9, further including a first current source, coupled to the source terminal of the first input transistor, and a second current source, coupled to the source terminal of the second input transistor, where, when an aspect ratio of the first input transistor is AMPand an aspect ratio of the second input transistor is AMM, the first current source is configured to generate a current substantially equal to AMP/(AMP+AMM)*IT, and the second current source is configured to generate a current substantially equal to AMM/(AMP+AMM)*IT, where IT is a tail current of the differential pair of first and second input transistors. Example 11 provides the transconductance circuit according to any one of examples 2-10, where, during operation of the transconductance circuit, an equivalent resistance between the source terminal of the first degeneration transistor (e.g., the node NCMP shown inFIG.3) and the source terminal of the second degeneration transistor (e.g., the node NCMM shown inFIG.3) is smallest when the first input voltage is substantially equal to the second input voltage. Example 12 provides the transconductance circuit according to example 11, where, during operation of the transconductance circuit, equivalent resistance between the source terminal of the first degeneration transistor and the source terminal of the second degeneration transistor increases as an absolute value (or magnitude) of a difference between the first input voltage and the second input voltage increases. Example 13 provides the transconductance circuit according to any one of the preceding examples, where, during operation of the transconductance circuit, each of the first input transistor and the second input transistor is configured to operate in a saturation region. Example 14 provides the transconductance circuit according to any one of the preceding examples, where, during operation of the transconductance circuit, each of the first degeneration transistor and the second degeneration transistor is configured to operate either in a linear region or in a saturation region. Example 15 provides the transconductance circuit according to example 14, where, during operation of the transconductance circuit, when one of the first degeneration transistor and the second degeneration transistor enters the saturation region, another one of the first degeneration transistor and the second degeneration transistor continues to operate in the linear region. Example 16 provides the transconductance circuit according to any one of the preceding examples, where each of the first degeneration transistor, the second degeneration transistor, the first input transistor, and the second input transistor is one of an EDMOS transistor, a DMOS transistor, an LDMOS transistor, or a VMOS transistor. 
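Example 10 fixes the two tail current sources as aspect-ratio-weighted fractions of the total tail current IT. The short sketch below simply transcribes those two expressions; the aspect ratios and IT are chosen arbitrarily for illustration.

```python
# Transcription of the current-source values recited in Example 10: each tail
# current source is weighted by the aspect ratio of the input transistor it
# feeds. The aspect ratios and tail current below are illustrative values.

def tail_currents(a_mp: float, a_mm: float, i_t: float) -> tuple[float, float]:
    """Return the (first, second) current source values per Example 10."""
    total = a_mp + a_mm
    return a_mp / total * i_t, a_mm / total * i_t

# With matched input transistors the tail current splits evenly:
print(tail_currents(a_mp=10.0, a_mm=10.0, i_t=1.0e-3))   # (0.0005, 0.0005)
# With deliberately skewed aspect ratios the split follows the ratio:
print(tail_currents(a_mp=15.0, a_mm=5.0, i_t=1.0e-3))    # (0.00075, 0.00025)
```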
Example 17 provides a transconductance circuit that includes a first transistor (e.g., transistor MDP), configured to receive a first input voltage (e.g., voltage VIP) at a first input; a second transistor (e.g., transistor MP), coupled to the first transistor, and configured to provide a first output current (e.g., current IOP) at a first output; a third transistor (e.g., transistor MDM), coupled to the first transistor and configured to receive a second input voltage (e.g., voltage VIM) at a second input; and a fourth transistor (e.g., transistor MM), coupled to the third transistor, and configured to provide a second output current (e.g., current IOM) at a second output, where the first transistor is further coupled to the third transistor. Example 18 provides the transconductance circuit according to example 17, where each of the first transistor, the second transistor, the third transistor, and the fourth transistor has a gate terminal, a source terminal, and a drain terminal, the first transistor is configured to receive the first input voltage at the first input by having the gate terminal of the first transistor being coupled to the first input, and the third transistor is configured to receive the second input voltage at the second input by having the gate terminal of the third transistor being coupled to the second input. In other examples, the transconductance circuit according to examples 17 or 18 may be the transconductance circuit according to examples 2-16, where the "first degeneration transistor" of examples 2-16 is the "first transistor" of the transconductance circuit according to examples 17 or 18, the "first input transistor" of examples 2-16 is the "second transistor" of the transconductance circuit according to examples 17 or 18, the "second degeneration transistor" of examples 2-16 is the "third transistor" of the transconductance circuit according to examples 17 or 18, and the "second input transistor" of examples 2-16 is the "fourth transistor" of the transconductance circuit according to examples 17 or 18. Example 19 provides the transconductance circuit according to examples 17 or 18, where a ratio of an aspect ratio of the first degeneration transistor to an aspect ratio of the first input transistor is substantially equal to a ratio of an aspect ratio of the second degeneration transistor to an aspect ratio of the second input transistor. Example 20 provides a transconductance circuit that includes a plurality of transistors, including a first transistor, a second transistor, a third transistor, and a fourth transistor, each of which has a gate terminal, a source terminal, and a drain terminal. The transconductance circuit further includes a first input, coupled to the gate terminal of the first transistor and to the gate terminal of the third transistor; a second input, coupled to the gate terminal of the second transistor and to the gate terminal of the fourth transistor; a first output, coupled to the drain terminal of the second transistor; and a second output, coupled to the drain terminal of the fourth transistor. In such a transconductance circuit, the drain terminal of the first transistor is coupled to the drain terminal of the third transistor, the source terminal of the first transistor is coupled to the source terminal of the second transistor, and the source terminal of the third transistor is coupled to the source terminal of the fourth transistor. 
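For readers tracing the claimed topology, the connectivity recited in Example 20 can be written out as a small netlist-style table. The labels M1 through M4 below simply stand for the first through fourth transistors of Example 20 and are otherwise illustrative.

```python
# Netlist-style transcription of the connectivity recited in Example 20.
# M1..M4 stand for the first through fourth transistors of the example;
# "g", "s", and "d" denote their gate, source, and drain terminals.

example_20_connectivity = {
    "first input":   ["M1.g", "M3.g"],
    "second input":  ["M2.g", "M4.g"],
    "first output":  ["M2.d"],
    "second output": ["M4.d"],
    "drain tie":     ["M1.d", "M3.d"],
    "source tie A":  ["M1.s", "M2.s"],
    "source tie B":  ["M3.s", "M4.s"],
}

for net, terminals in example_20_connectivity.items():
    print(f"{net}: " + " -- ".join(terminals))
```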
In other examples, the transconductance circuit according to example 20 may be the transconductance circuit according to examples 1-16, where the “first degeneration transistor” of examples 1-16 is the “first transistor” of the transconductance circuit according to example 20, the “first input transistor” of examples 1-16 is the “second transistor” of the transconductance circuit according to example 20, the “second degeneration transistor” of examples 1-16 is the “third transistor” of the transconductance circuit according to example 20, and the “second input transistor” of examples 1-16 is the “fourth transistor” of the transconductance circuit according to example 20. Example 21 provides an electronic component, including one or more transconductance circuits according to any one of the preceding examples. Example 22 provides the electronic component according to example 21, where the electronic component is an amplifier or a comparator. Example 23 provides a method, including steps performed by a transconductance circuit or an electronic component according to any one of the preceding examples. Example 24 provides a method, including steps that cause a transconductance circuit or an electronic component to operate according to any one of the preceding examples. Other Implementation Notes, Variations, and Applications The illustration ofFIG.3provides just one non-limiting example where transconductance circuits with degeneration transistors as described herein may be used. Various teachings related to transconductance circuits with degeneration transistors as described herein are applicable to a large variety of other systems. In some scenarios, various embodiments of transconductance circuits with degeneration transistors as described herein can be used in automotive systems, safety-critical industrial applications, medical systems, scientific instrumentation, wireless and wired communications, radar, industrial process control, audio and video equipment, current sensing, instrumentation (which can be highly precise), and various digital-processing-based systems. In other scenarios, various embodiments of transconductance circuits with degeneration transistors as described herein can be used in the industrial markets that include process control systems that help drive productivity, energy efficiency, and reliability. In yet further scenarios, various embodiments of transconductance circuits with degeneration transistors may be used in consumer applications. While certain embodiments have been described, these embodiments have been presented by way of example, and are not intended to limit the scope of the disclosure. Indeed, the novel methods, apparatus, and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the form of the methods, apparatus, and systems described herein may be made without departing from the spirit of the disclosure. For example, circuit blocks and/or circuit elements described herein may be deleted, moved, added, subdivided, combined, and/or modified. Each of these circuit blocks and/or circuit elements may be implemented in a variety of different ways. The accompanying claims and their equivalents are intended to cover any such forms or modifications as would fall within the scope and spirit of the disclosure. 
Any of the principles and advantages discussed herein can be applied to other systems, devices, integrated circuits, electronic apparatus, methods, not just to the embodiments described above. The elements and operations of the various embodiments described above can be combined to provide further embodiments. The principles and advantages of the embodiments can be used in connection with any other systems, devices, integrated circuits, apparatus, or methods that could benefit from any of the teachings herein. It is to be understood that not necessarily all objects or advantages may be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that certain embodiments may be configured to operate in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein. In one example embodiment, any number of transconductance circuits with degeneration transistors as described above may be implemented on a board of an associated electronic device. The board can be a general circuit board that can hold various components of the internal electronic system of the electronic device and, further, provide connectors for other peripherals. More specifically, the board can provide the electrical connections by which the other components of the system can communicate electrically. Any suitable processors (inclusive of digital signal processors, microprocessors, supporting chipsets, etc.), computer-readable non-transitory memory elements, etc. can be suitably coupled to the board based on particular configuration needs, processing demands, computer designs, etc. Other components such as external storage, controllers for configuring any of the components, and peripheral devices may be attached to the board as plug-in cards, via cables, or integrated into the board itself. In various embodiments, the functionalities described herein may be implemented in emulation form as software or firmware running within one or more configurable (e.g., programmable) elements arranged in a structure that supports these functions. The software or firmware providing the emulation may be provided on non-transitory computer-readable storage medium comprising instructions to allow a processor to carry out those functionalities. In another example embodiment, the transconductance circuits with degeneration transistors as described above may be implemented as stand-alone modules (e.g., a device with associated components and circuitry configured to perform a specific application or function) or implemented as plug-in modules into application specific hardware of electronic devices. Note that particular embodiments of the present disclosure may be readily included in a system on chip (SOC) package, either in part, or in whole. An SOC represents an IC that integrates components of a computer or other electronic system into a single chip. It may contain digital, analog, mixed-signal, and often radio frequency functions: all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of separate ICs located within a single electronic package and configured to interact closely with each other through the electronic package. 
In various other embodiments, the transconductance circuits with degeneration transistors as described above may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips. It is also imperative to note that all of the specifications, dimensions, and relationships outlined herein (e.g., the number of degeneration or input transistors, etc.) have only been offered for purposes of example and teaching only. Such information may be varied considerably without departing from the spirit of the present disclosure, or the scope of the appended claims. The present descriptions may apply only to some non-limiting examples and, accordingly, they should be construed as such. In the foregoing description, example embodiments have been described with reference to particular arrangements of components. Various modifications and changes may be made to such embodiments without departing from the scope of the appended claims. The description and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense. Note that with the numerous examples provided herein, interaction may be described in terms of two, three, four, or more electrical components. However, this has been done for purposes of clarity and example only. It should be appreciated that the system can be consolidated in any suitable manner. Along similar design alternatives, any of the illustrated components, modules, and elements of the present drawings may be combined in various possible configurations, all of which are clearly within the broad scope of this Specification. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of electrical elements. It should be appreciated that the electrical circuits of the present drawings and its teachings are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the electrical circuits as potentially applied to a myriad of other architectures. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended select examples. Note that all optional features of the apparatus described above may also be implemented with respect to the method or process described herein and specifics in the examples may be used anywhere in one or more embodiments. | 63,444 |
11863139 | DETAILED DESCRIPTION FIG.1is a diagram illustrating a configuration of an amplifier100in accordance with an embodiment. The amplifier100may receive input signals and generate an output signal by amplifying the input signals. The amplifier100may receive a first input signal IN and a second input signal INB. The amplifier100may generate an output signal OUT by differentially amplifying the first and second input signals IN and INB. In an embodiment, the first and second input signals IN and INB may be differential signals, and the second input signal INB may have a complementary voltage level to the first input signal IN. In an embodiment, the first input signal IN may be a single-ended signal. When the first input signal IN is a single-ended signal, the second input signal INB may serve as a reference voltage. The reference voltage may have a voltage level corresponding to the middle level of the range in which the first input signal IN swings. The amplifier100may include one or more gain adjusting circuits. The one or more gain adjusting circuits may adjust gains of the amplifier100. The gains of the amplifier100may include a DC gain and/or an AC gain. The DC gain, which is a gain of the amplifier when an input signal having a relatively low frequency is received, may indicate a gain of the amplifier100when the first input signal IN retains a steady-state voltage level. The AC gain, which is a gain of the amplifier when an input signal having a relatively high frequency is received, may indicate a gain of the amplifier100when the voltage level of the first input signal IN transitions. The amplifier100may include one or more gain adjusting circuits to adjust the DC gain and the AC gain in various manners. InFIG.1, the amplifier100may include an amplification stage110, an equalization stage120, and an output stage130. The amplification stage110may receive the first and second input signals IN and INB, and generate a first amplified signal AOUT and a second amplified signal AOUTB by differentially amplifying the first and second input signals IN and INB. The amplification stage110may generate the first and second amplified signals AOUT and AOUTB by changing the voltage levels of first and second amplification nodes AN1and AN2based on the first and second input signals IN and INB. The equalization stage120may be coupled to the first and second amplification nodes AN1and AN2, and change the voltage levels of the first and second amplified signals AOUT and AOUTB by equalizing the voltage levels of the first and second amplification nodes AN1and AN2. The equalization stage120may change the voltage level of the second amplification node AN2based on the voltage level of the first amplification node AN1, and change the voltage level of the first amplification node AN1based on the voltage level of the second amplification node AN2. The output stage130may be coupled to the first and second amplification nodes AN1and AN2, and receive the first and second amplified signals AOUT and AOUTB. The output stage130may generate the output signal OUT based on the first and second amplified signals AOUT and AOUTB. Each of the amplification stage110, the equalization stage120, and the output stage130may include one or more gain adjusting circuits to adjust the gain of the amplifier. The amplification stage110may be coupled between a first supply voltage VH terminal and a second supply voltage VL terminal, and perform an amplification operation on the first and second input signals IN and INB. 
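As a purely illustrative aid for the DC gain and AC gain terminology (the transfer function and corner frequencies below are assumptions, not values from the figures), a simple peaking response makes the distinction concrete: the DC gain is read at a low frequency where the input is effectively steady, and the AC gain is read near the frequencies at which the input transitions.

```python
# Illustrative-only peaking response used to make the DC gain / AC gain
# terminology concrete; the gains and corner frequencies are assumptions.

import numpy as np

def example_gain(freq_hz, dc_gain=2.0, ac_gain=6.0, f_zero=5e8, f_pole=4e9):
    """One-zero/one-pole response: ~dc_gain at low frequency, ~ac_gain around
    the frequencies where the input signal transitions."""
    s = 2j * np.pi * freq_hz
    wz, wp = 2 * np.pi * f_zero, 2 * np.pi * f_pole
    return (dc_gain + ac_gain * s / wz) / ((1 + s / wz) * (1 + s / wp))

for f in (1e6, 2e9):
    print(f"{f:.0e} Hz: gain = {20 * np.log10(abs(example_gain(f))):.1f} dB")
# ~6 dB at 1 MHz (the DC gain of 2) versus ~14 dB near 2 GHz (the boosted AC gain).
```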
The amplification stage110may include an amplification circuit111and a first gain adjusting circuit112. The amplification circuit111may receive the first and second input signals IN and INB, and change the voltage levels of the first and second amplification nodes AN1and AN2based on the first and second input signals IN and INB. The amplification circuit111may change the voltage levels of the first and second amplification nodes AN1and AN2by differentially amplifying the first and second input signals IN and INB. The amplification circuit111may change the voltage level of the second amplification node AN2based on the first input signal IN, and change the voltage level of the first amplification node AN1based on the second input signal INB. The first gain adjusting circuit112may be coupled to the first and second amplification nodes AN1and AN2. The first gain adjusting circuit112may receive a first gain control signal VC1. The first gain adjusting circuit112may change voltage levels applied to the first and second amplification nodes AN1and AN2based on the voltage levels of the first and second amplification nodes AN1and AN2and the first gain control signal VC1. The first gain adjusting circuit112may increase the AC gain of the amplifier100by forming inductive peaks of the first and second amplified signals AOUT and AOUTB based on the first gain control signal VC1. The first gain adjusting circuit112may have a structure of an active inductor to adjust the AC gain of the amplifier100. The amplification circuit111may include a first input transistor IT1and a second input transistor IT2. The first and second input transistors IT1and IT2may be N-channel MOS transistors. The first input transistor IT1may have a gate configured to receive the first input signal IN, a drain coupled to the second amplification node AN2, and a source coupled to a common node CN. The second input transistor IT2may have a gate configured to receive the second input signal INB, a drain coupled to the first amplification node AN1, and a source coupled to the common node CN. The common node CN may be coupled to the second supply voltage VL terminal. The common node CN may be coupled to the second supply voltage VL terminal through a current source. When the first input signal IN is at a logic high level, the first input transistor IT1may lower the voltage level of the second amplification node AN2to a lower level than the voltage level of the first amplification node AN1. Therefore, the second amplified signal AOUTB having a logic low level may be outputted through the second amplification node AN2, and the first amplified signal AOUT having a logic high level may be outputted through the first amplification node AN1. On the other hand, when the first input signal IN is at a logic low level, the first input transistor IT1may raise the voltage level of the second amplification node AN2to a higher level than the voltage level of the first amplification node AN1. Therefore, the second amplified signal AOUTB having a logic high level may be outputted through the second amplification node AN2, and the first amplified signal AOUT having a logic low level may be outputted through the first amplification node AN1. The first gain adjusting circuit112may include a first active inductor112-1and a second active inductor112-2. 
The first active inductor112-1may be coupled between the first supply voltage VH terminal and the second amplification node AN2, and apply the first supply voltage VH to the second amplification node AN2based on the first gain control signal VC1. The first active inductor112-1may change a voltage level applied to the second amplification node AN2based on the first gain control signal VC1. The second active inductor112-2may be coupled between the first supply voltage VH terminal and the first amplification node AN1, and apply the first supply voltage VH to the first amplification node AN1based on the first gain control signal VC1. The second active inductor112-2may change a voltage level applied to the first amplification node AN1based on the first gain control signal VC1. The first active inductor112-1may include a first transistor T1and a first resistor circuit RC1. The first transistor T1may be a P-channel MOS transistor. The first transistor T1may have a source coupled to the first supply voltage VH terminal and a drain coupled to the second amplification node AN2. The first resistor circuit RC1may be coupled between a gate of the first transistor T1and the second amplification node AN2. The first resistor circuit RC1may have a resistance value that is varied based on the first gain control signal VC1. The first resistor circuit RC1may include a second transistor T2. The second transistor T2may be an N-channel MOS transistor. The second transistor T2may have a gate configured to receive the first gain control signal VC1, and a drain and source of which one is coupled to the gate of the first transistor T1and the other is coupled to the second amplification node AN2. The first transistor T1may adjust the level of a voltage applied to the second amplification node AN2from the first supply voltage VH terminal based on the voltage level of the second amplification node AN2. The second transistor T2may have a resistance value that is varied based on the first gain control signal VC1. Therefore, the second transistor T2may adjust the level of a voltage which the first transistor T1applies to the second amplification node AN2, according to the first gain control signal VC1. The second active inductor112-2may include a third transistor T3and a second resistor circuit RC2. The third transistor T3may be a P-channel MOS transistor. The third transistor T3may have a source coupled to the first supply voltage VH terminal and a drain coupled to the first amplification node AN1. The second resistor circuit RC2may be coupled between a gate of the third transistor T3and the first amplification node AN1. The second resistor circuit RC2may have a resistance value that is varied based on the first gain control signal VC1. The second resistor circuit RC2may include a fourth transistor T4. The fourth transistor T4may be an N-channel MOS transistor. The fourth transistor T4may have a gate configured to receive the first gain control signal VC1and a drain and source of which one is coupled to the gate of the third transistor T3and the other is coupled to the first amplification node AN1. The third transistor T3may adjust the level of a voltage applied to the first amplification node AN1from the first supply voltage VH terminal based on the voltage level of the first amplification node AN1. The fourth transistor T4may have a resistance value that is varied based on the first gain control signal VC1. 
Therefore, the fourth transistor T4may adjust the level of a voltage which the third transistor T3applies to the first amplification node AN1, according to the first gain control signal VC1. The equalization stage120may include an equalization circuit121, a second gain adjusting circuit122, and a third gain adjusting circuit123. The second and third gain adjusting circuits122and123may be included as components of the equalization circuit121. The equalization circuit121may be coupled between the first and second amplification nodes AN1and AN2and the second supply voltage VL terminal, and perform an equalization operation on the first and second amplified signals AOUT and AOUTB. The equalization circuit121may include a first equalization transistor QT1and a second equalization transistor QT2. The first and second equalization transistors QT1and QT2may be N-channel MOS transistors. The first equalization transistor QT1may have a gate coupled to the first amplification node AN1, a drain coupled to the second amplification node AN2, and a source coupled to a first equalization node QN1. The first equalization transistor QT1may couple the second amplification node AN2to the first equalization node QN1, based on the voltage level of the first amplification node AN1. The second equalization transistor QT2may have a gate coupled to the second amplification node AN2, a drain coupled to the first amplification node AN1, and a source coupled to a second equalization node QN2. The second equalization transistor QT2may couple the first amplification node AN1to the second equalization node QN2, based on the voltage level of the second amplification node AN2. The second gain adjusting circuit122may receive a second gain control signal VC2, and adjust the gain of the amplifier100based on the second gain control signal VC2. The second gain adjusting circuit122may couple the first equalization node QN1and the second equalization node QN2, based on the second gain control signal VC2. The second gain adjusting circuit122may include a first resistor R1, a second resistor R2, and a source transistor ST. The first resistor R1may have one end coupled to the first equalization node QN1. The second resistor R2may have one end coupled to the second equalization node QN2. The source transistor ST may be coupled between the other ends of the first and second resistors R1and R2. The source transistor ST may couple the other ends of the first and second resistors R1and R2based on the second gain control signal VC2. The source transistor ST may have a resistance value that is set based on the second gain control signal VC2. The source transistor ST may be an N-channel MOS transistor. The source transistor ST may have a gate configured to receive the second gain control signal VC2and a drain and source of which one is coupled to the other end of the first resistor R1and the other is coupled to the other end of the second resistor R2. The second gain adjusting circuit122may adjust the DC gain and/or the entire gain of the amplifier100. The third gain adjusting circuit123may adjust the amount of current flowing through the first and second equalization nodes QN1and QN2, based on a third gain control signal VC3. The third gain adjusting circuit123may include a first current source CS1and a second current source CS2. The first and second current sources CS1and CS2may be variable current sources whose current amounts are adjusted by the third gain control signal VC3. 
The first current source CS1may be coupled between the first equalization node QN1and the second supply voltage VL terminal. The first current source CS1may adjust the amount of current flowing from the first equalization node QN1to the second supply voltage VL terminal, based on the third gain control signal VC3. The second current source CS2may be coupled between the second equalization node QN2and the second supply voltage VL terminal. The second current source CS2may adjust the amount of current flowing from the second equalization node QN2to the second supply voltage VL terminal, based on the third gain control signal VC3. The equalization stage120may further include a first capacitor C1and a second capacitor C2. The first capacitor C1may have one end coupled to the first equalization node QN1and the other end coupled to the second supply voltage VL terminal. The second capacitor C2may have one end coupled to the second equalization node QN2and the other end coupled to the second supply voltage VL terminal. The first and second capacitors C1and C2may change the AC gain of the amplifier100. The first and second capacitors C1and C2may have the same capacitance or different capacitances. In an embodiment, the first and second capacitors C1and C2may have a variable capacitance to adjust the AC gain of the amplifier. The output stage130may include an output circuit131and a fourth gain adjusting circuit132. The output circuit131may be coupled to the first and second amplification nodes AN1and AN2, and receive the first and second amplified signals AOUT and AOUTB. The output circuit131may generate the output signal OUT based on the first and second amplified signals AOUT and AOUTB. The fourth gain adjusting circuit132may receive a fourth gain control signal VC4, and adjust the gain of the amplifier100based on the fourth gain control signal VC4. The fourth gain adjusting circuit132may change the voltage level of the output signal OUT based on the fourth gain control signal VC4. The fourth gain adjusting circuit132may change the AC gain of the amplifier100by changing the voltage level of the output signal OUT. The output circuit131may include a current supply circuit131-1and a current discharge circuit131-2. The current supply circuit131-1may be coupled between the first supply voltage VH terminal and first and second output nodes ON1and ON2. The current supply circuit131-1may supply a current to the first and second output nodes ON1and ON2based on the first and second amplified signals AOUT and AOUTB. The current supply circuit131-1may supply a current to the second output node ON2based on the first amplified signal AOUT, and supply a current to the first output node ON1based on the second amplified signal AOUTB. The current supply circuit131-1may change the voltage level of the second output node ON2based on the voltage level of the first amplification node AN1, and change the voltage level of the first output node ON1based on the voltage level of the second amplification node AN2. The current discharge circuit131-2may be coupled between the first and second output nodes ON1and ON2and the second supply voltage VL terminal. The current discharge circuit131-2may change the voltage level of the first output node ON1based on the voltage level of the second output node ON2. The current discharge circuit131-2may adjust the amounts of current flowing from the first and second output nodes ON1and ON2to the second supply voltage VL terminal, based on the voltage level of the second output node ON2. 
The current supply circuit131-1may include a first current transistor CT1and a second current transistor CT2. The first and second current transistors CT1and CT2may be P-channel MOS transistors. The first current transistor CT1may have a gate coupled to the second amplification node AN2to receive the second amplified signal AOUTB. The first current transistor CT1may have a source coupled to the first supply voltage VH terminal and a drain coupled to the first output node ON1. The second current transistor CT2may have a gate coupled to the first amplification node AN1to receive the first amplified signal AOUT. The second current transistor CT2may have a source coupled to the first supply voltage VH terminal and a drain coupled to the second output node ON2. The current discharge circuit131-2may include a third current transistor CT3and a fourth current transistor CT4. The third and fourth current transistors CT3and CT4may be N-channel MOS transistors. The third current transistor CT3may have a gate coupled to the second output node ON2, a drain coupled to the first output node ON1, and a source coupled to the second supply voltage VL terminal. The fourth current transistor CT4may have a gate coupled to the second output node ON2, a drain coupled to the second output node ON2, and a source coupled to the second supply voltage VL terminal. The fourth gain adjusting circuit132may include a gain transistor GT. The gain transistor GT may be a P-channel MOS transistor. The gain transistor GT may have a gate configured to receive the fourth gain control signal VC4, a source coupled to the second output node ON2, and a drain coupled to the gate of the fourth current transistor CT4. The gain transistor GT may change the amount of current supplied to the gate of the fourth current transistor CT4from the second output node ON2, based on the fourth gain control signal VC4. The output stage130may further include an output capacitor133. The output capacitor133may have one end coupled to the first output node ON1and the other end coupled to the second supply voltage VL terminal. The output capacitor133may stabilize the voltage level of the first output node ON1, thereby stably retaining the voltage level of the output signal OUT. The amplifier100may further include a control signal generation circuit140. The control signal generation circuit140may generate the first gain control signal VC1, the second gain control signal VC2, the third gain control signal VC3, and the fourth gain control signal VC4. The control signal generation circuit140may generate the first to fourth gain control signals VC1to VC4based on gain adjustment information EQ. The gain adjustment information EQ may indicate a signal which may be randomly generated depending on the characteristics and operation environment of a semiconductor apparatus including the amplifier100. The control signal generation circuit140may generate the first to fourth gain control signals VC1to VC4having a plurality of bits or voltage levels suitable for controlling the first to fourth gain adjusting circuits112,122,123, and132. The control signal generation circuit140may generate the first to fourth gain control signals VC1to VC4as bias voltages having different voltage levels, based on the gain adjustment information EQ. 
FIGS.2A to2C and3A to3Bare graphs illustrating gains of the amplifier100in accordance with some embodiments.FIG.2Aillustrates a gain change of the amplifier100according to an operation of the first gain adjusting circuit112,FIG.2Billustrates a gain change of the amplifier100according to an operation of the fourth gain adjusting circuit132, andFIG.2Cillustrates a gain change of the amplifier100according to the operations of the first and fourth gain adjusting circuits112and132. In the graphs ofFIGS.2A to2C, the x-axis may correspond to the frequency of the input signal IN/INB, and the y-axis may correspond to the gain of the amplifier100. The frequency of the input signal IN/INB may be expressed in units of hertz (Hz), and the gain of the amplifier100may be expressed in units of decibels (dB). Referring toFIG.1as well asFIGS.2A to2C, when the voltage levels of the first and second amplification nodes AN1and AN2are changed, the amounts of current supplied to the first and second amplification nodes AN1and AN2from the first supply voltage VH terminal may be changed by the first and third transistors T1and T3. When the voltage level of the first gain control signal VC1is decreased to increase the resistance values of the first and second resistor circuits RC1and RC2, a peak of the first amplified signal AOUT and a peak of the output signal OUT may occur in the case that the voltage level of the first input signal IN transitions. The first gain adjusting circuit112may cause the peak of the output signal OUT, thereby increasing the AC gain of the amplifier100as illustrated inFIG.2A. For example, as the resistance values of the first and second resistor circuits RC1and RC2are increased, the AC gain of the amplifier100may be increased. On the other hand, as the resistance values of the first and second resistor circuits RC1and RC2are decreased, the AC gain of the amplifier100may be decreased. Therefore, when the voltage level of the first gain control signal VC1inputted to the first and second resistor circuits RC1and RC2is decreased, the resistance values of the second and fourth transistors T2and T4may be increased, and the AC gain of the amplifier100may be increased. On the other hand, when the voltage level of the first gain control signal VC1inputted to the first and second resistor circuits RC1and RC2is increased, the resistance values of the second and fourth transistors T2and T4may be decreased, and the AC gain of the amplifier100may be decreased. Referring toFIG.2B, the fourth gain adjusting circuit132may adjust the AC gain of the amplifier100. For example, as the resistance value of the fourth gain adjusting circuit132is increased, the AC gain of the amplifier100may be increased. On the other hand, as the resistance value of the fourth gain adjusting circuit132is decreased, the AC gain of the amplifier100may be decreased. Therefore, when the voltage level of the fourth gain control signal VC4inputted to the fourth gain adjusting circuit132is increased, the resistance value of the gain transistor GT may be increased, and the AC gain of the amplifier100may be increased. On the other hand, when the voltage level of the fourth gain control signal VC4inputted to the fourth gain adjusting circuit132is decreased, the resistance value of the gain transistor GT may be decreased, and the AC gain of the amplifier100may be decreased. 
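A rough numerical sketch may help connect the trends of FIG. 2A to the device behavior described above: in the triode region the on-resistance of the N-channel transistors T2/T4 falls as their gate voltage VC1 rises, and a larger series gate resistance makes the active inductor load look more inductive, which produces the peaking that raises the AC gain. The device parameters and the simplified impedance expression below are textbook first-order approximations assumed only for illustration (gate-drain capacitance, body effect, and channel-length modulation are ignored); they are not values from the patent.

```python
# Illustrative sketch (assumed parameters, first-order MOS formulas) of why a
# lower VC1 raises the AC gain: a lower gate voltage raises the triode
# on-resistance of T2/T4, and a larger gate series resistance R makes the
# active inductor (T1/T3 with their gates tied to the output node through R)
# look more inductive, producing the peaking described for FIG. 2A.

import numpy as np

VTH_N = 0.4      # V, NMOS threshold of T2/T4 (assumed)
K_N = 2e-3       # A/V^2, lumped un*Cox*(W/L) factor of T2/T4 (assumed)
V_NODE = 0.5     # V, approximate source potential of T2/T4 (assumed)
GM_P = 5e-3      # S, transconductance of T1/T3 (assumed)
CGS_P = 50e-15   # F, gate capacitance of T1/T3 (assumed)

def on_resistance(vc1: float) -> float:
    """First-order triode on-resistance of T2/T4 for gate voltage VC1."""
    v_ov = vc1 - V_NODE - VTH_N
    return float("inf") if v_ov <= 0 else 1.0 / (K_N * v_ov)

def active_inductor_z(freq_hz: float, r_gate: float) -> complex:
    """Textbook active-inductor impedance: Z(s) = (1 + s*R*Cgs)/(gm + s*Cgs),
    i.e., about 1/gm at DC, rising toward R at high frequency when R > 1/gm."""
    s = 2j * np.pi * freq_hz
    return (1 + s * r_gate * CGS_P) / (GM_P + s * CGS_P)

for vc1 in (1.4, 1.2, 1.0):
    r = on_resistance(vc1)
    z_lo = abs(active_inductor_z(1e6, r))
    z_hi = abs(active_inductor_z(5e9, r))
    print(f"VC1={vc1:.1f} V: R(T2/T4)~{r:,.0f} Ohm, "
          f"|Z| {z_lo:.0f} Ohm at 1 MHz -> {z_hi:.0f} Ohm at 5 GHz")
# Lower VC1 -> larger R -> larger high-frequency load impedance -> more
# peaking and a higher AC gain, consistent with the description above.
```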
Referring toFIG.2C, when the resistance value of the gain transistor GT of the fourth gain adjusting circuit132is increased while the resistance values of the first and second resistor circuits RC1and RC2of the first gain adjusting circuit112are increased, the AC gain and bandwidth of the amplifier100may be further increased. The bandwidth may indicate a range of frequencies at which a predetermined level of gain can be obtained. When the voltage level of the first gain control signal VC1is decreased and the voltage level of the fourth gain control signal VC4is increased, the AC gain of the amplifier100may be increased in a specific frequency region, and the specific frequency region in which the AC gain is increased may be expanded. Therefore, by adjusting the voltage levels of the first and fourth gain control signals VC1and VC4according to the environment of a signal bus and/or channel to which the input signal IN is transmitted, it is possible to control the gain and bandwidth of the amplifier100such that the amplifier100can have the optimal AC gain and bandwidth. Therefore, it is possible to increase the amplitude and valid duration of the output signal OUT generated by amplifying the first and second input signals IN and INB. FIGS.3A and3Bare graphs illustrating gain changes of the amplifier100according to operations of the second and third gain adjusting circuits122and123. Referring toFIG.3A, when the second gain adjusting circuit122is turned on based on the second gain control signal VC2and couples the first and second equalization nodes QN1and QN2and the third gain adjusting circuit123is turned on based on the third gain control signal VC3and passes a current through the first and second equalization nodes QN1and QN2, the entire gain of the amplifier100, i.e. the AC gain and the DC gain, may be increased. For example, as the amounts of current flowing through the first and second equalization nodes QN1and QN2are increased by the third gain adjusting circuit123, the entire gain of the amplifier100may be increased. Referring toFIG.3B, when the third gain adjusting circuit123adjusts the amounts of current flowing through the first and second equalization nodes QN1and QN2while the second gain adjusting circuit122does not couple the first and second equalization nodes QN1and QN2, the AC gain of the amplifier100may be changed. For example, as the amounts of current flowing through the first and second equalization nodes QN1and QN2are increased by the third gain adjusting circuit123, the AC gain of the amplifier100may be increased. At this time, according to the resistance value of the second gain adjusting circuit122, the DC gain of the amplifier100may be changed. As the resistance value of the second gain adjusting circuit122is increased, the DC gain of the amplifier100may be decreased. On the other hand, as the resistance value of the second gain adjusting circuit122is decreased, the DC gain of the amplifier100may be increased. When the resistance value of the second gain adjusting circuit122is increased and the amounts of current flowing through the first and second equalization nodes QN1and QN2are increased by the third gain adjusting circuit123, the DC gain of the amplifier100may be decreased, and the AC gain of the amplifier100may be increased. When the DC gain of the amplifier100is decreased, the AC gain of the amplifier100may be further increased. FIG.4illustrates a configuration of an amplifier400in accordance with an embodiment. 
InFIG.4, the amplifier400may include an amplification stage410, an equalization stage420, and an output stage430. The amplification stage410may include an amplification circuit411and a first gain adjusting circuit412. The amplification circuit411may generate first and second amplified signals AOUT and AOUTB through first and second amplification nodes AN1and AN2based on input signals IN and INB. The first gain adjusting circuit412may include a first active inductor412-1and a second active inductor412-2. The first active inductor412-1may include a first transistor T1and a first resistor circuit RC41. The second active inductor412-2may include a third transistor T3and a second resistor circuit RC42. The equalization stage420may include an equalization circuit421, a second gain adjusting circuit422, and a third gain adjusting circuit423. The equalization circuit421may be coupled between the first and second amplification nodes AN1and AN2and the first and second equalization nodes QN1and QN2. The output stage430may include an output circuit431and a fourth gain adjusting circuit432. The output circuit431may generate an output signal OUT by changing the voltage levels of the first and second output nodes ON1and ON2based on the voltage levels of the first and second amplification nodes AN1and AN2. The output circuit431may include a first current transistor CT1, a second current transistor CT2, a third current transistor CT3, and a fourth current transistor CT4. The amplifier400may further include a control signal generation circuit440. The control signal generation circuit440may generate the first to fourth gain control signals C1<0:n> to C4<0:n> as digital code signals having different code values based on gain adjustment information EQ, where n is an integer equal to or greater than two. The amplifier400may have the same configuration as the amplifier100illustrated inFIG.1, except the first resistor circuit RC41, the second resistor circuit RC42, the second gain adjusting circuit422, the third gain adjusting circuit423, and the fourth gain adjusting circuit432. Components that perform the same functions may be represented by the same or similar reference numerals, and duplicated descriptions of same or similar components are omitted herein. Each of the first and second resistor circuits RC41and RC42may include a plurality of transistors, and have a resistance value that is adjusted based on the first gain control signal C1<0:n> having a plurality of bits. The second gain adjusting circuit422may include a plurality of transistors, and have a resistance value that is adjusted based on the second gain control signal C2<0:n> having a plurality of bits. The third gain adjusting circuit423may include a plurality of transistors, and change the amounts of current flowing through the first and second equalization nodes QN1and QN2, based on the third gain control signal C3<0:n> having a plurality of bits. The fourth gain adjusting circuit432may include a plurality of transistors, and have a resistance value that is adjusted based on the fourth gain control signal C4<0:n> having a plurality of bits. FIG.5illustrates the configuration of the first resistor circuit RC41illustrated inFIG.4. InFIG.5, the first resistor circuit RC41may include first to (n+1)th transistors T51to T5n+1. The first to (n+1)th transistors T51to T5n+1 may be N-channel MOS transistors. 
The number of the transistors included in the first resistor circuit RC41may correspond to the number of bits included in the first gain control signal C1<0:n>. Drains of the first to (n+1)th transistors T51to T5n+1 may be coupled to a gate of the first transistor T1in common. Sources of the first to (n+1)th transistors T51to T5n+1 may be coupled to the second amplification node AN2in common. The first to (n+1)th transistors T51to T5n+1 may receive the first to (n+1)th bits of the first gain control signal C1<0:n>, respectively. Each of the first to (n+1)th transistors T51to T5n+1 may be turned on based on an allocated bit of the first gain control signal C1<0:n>. The first to (n+1)th transistors T51to T5n+1 may have different sizes. For example, the size may indicate the ratio of width to length of the gate of the transistor. For example, the first transistor T51may have the smallest size, and the (n+1)th transistor T5n+1 may have the largest size. For example, the size of the second transistor T52may be twice as large as the size of the first transistor T51, and the size of the third transistor T53may be four times as large as the size of the first transistor T51. The size of the (n+1)th transistor T5n+1 may be 2^n times larger than the size of the first transistor T51. When the first to (n+1)th transistors T51to T5n+1 have different sizes, the first to (n+1)th transistors T51to T5n+1 may have different turn-on resistance values. When the number and types of transistors turned on by the first gain control signal C1<0:n> are changed, the first resistor circuit RC41may be set to various resistance values. The second resistor circuit RC42may have the same configuration as the first resistor circuit RC41except that the drains of the first to (n+1)th transistors are coupled to the gate of the third transistor T3in common and the sources of the first to (n+1)th transistors are coupled to the first amplification node AN1in common. FIG.6illustrates a configuration of the second gain adjusting circuit422illustrated inFIG.4. The second gain adjusting circuit422may include first to (n+1)th left resistors LR61to LR6n+1, first to (n+1)th right resistors RR61to RR6n+1, and first to (n+1)th source transistors ST61to ST6n+1. The first to (n+1)th left resistors LR61to LR6n+1 may each have one end coupled to the first equalization node QN1in common. The first to (n+1)th right resistors RR61to RR6n+1 may each have one end coupled to the second equalization node QN2in common. The first source transistor ST61may be coupled to the other ends of the first left resistor LR61and the first right resistor RR61, and couple the first left resistor LR61and the first right resistor RR61based on a first bit C2<0> of the second gain control signal. The second source transistor ST62may be coupled to the other ends of the second left resistor LR62and the second right resistor RR62, and couple the second left resistor LR62and the second right resistor RR62based on a second bit C2<1> of the second gain control signal. The (n+1)th source transistor ST6n+1 may be coupled to the other ends of the (n+1)th left resistor LR6n+1 and the (n+1)th right resistor RR6n+1, and couple the (n+1)th left resistor LR6n+1 and the (n+1)th right resistor RR6n+1 based on an (n+1)th bit C2<n> of the second gain control signal. The first left resistor LR61and the first right resistor RR61may have the same resistance value, and the first left resistor LR61and the first right resistor RR61may have the largest resistance value. 
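Because the transistors T51 to T5n+1 are binary weighted and their drain-source paths sit in parallel, the code value of C1<0:n> effectively selects a parallel conductance. A minimal sketch of that mapping is shown below; the unit on-resistance is an assumed value, not one from the patent.

```python
# Sketch of how a binary-weighted switch bank such as RC41 maps a digital
# code onto a resistance: transistor k has 2**k times the unit width, so its
# on-conductance is 2**k times the unit conductance, and enabled branches add
# in parallel. The unit resistance below is an illustrative assumption.

R_UNIT = 8000.0   # Ohm, on-resistance of the smallest transistor T51 (assumed)

def bank_resistance(code_bits: list[int]) -> float:
    """Equivalent resistance of the bank for code bits C1<0:n> (LSB first)."""
    conductance = sum(bit * (2 ** k) / R_UNIT for k, bit in enumerate(code_bits))
    return float("inf") if conductance == 0 else 1.0 / conductance

for code in ([1, 0, 0, 0], [0, 0, 0, 1], [1, 1, 1, 1]):
    print(code, "->", f"{bank_resistance(code):.0f} Ohm")
# [1,0,0,0] -> 8000 Ohm (only the unit device), [0,0,0,1] -> 1000 Ohm,
# [1,1,1,1] -> ~533 Ohm (all devices in parallel).
```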
The (n+1)th left resistor LR6n+1 and the (n+1)th right resistor RR6n+1 may have the same resistance value, and the (n+1)th left resistor LR6n+1 and the (n+1)th right resistor RR6n+1 may have the smallest resistance value. For example, the resistance values of the second left resistor LR62and the second right resistor RR62may be 2^(n-1) times larger than the resistance values of the (n+1)th left resistor LR6n+1 and the (n+1)th right resistor RR6n+1. The resistance values of the first left resistor LR61and the first right resistor RR61may be 2^n times larger than the resistance values of the (n+1)th left resistor LR6n+1 and the (n+1)th right resistor RR6n+1. When the number of source transistors turned on by the second gain control signal C2<0:n> is changed, the second gain adjusting circuit may be set to have various resistance values. FIG.7is a diagram illustrating a configuration of the third gain adjusting circuit423illustrated inFIG.4. InFIG.7, the third gain adjusting circuit423may include a first variable current source710and a second variable current source720. The first variable current source710may include first to (n+1)th transistors T711to T71n+1, and the second variable current source720may include first to (n+1)th transistors T721to T72n+1. The first to (n+1)th transistors T711to T71n+1 and T721to T72n+1 of the first and second variable current sources710and720may be N-channel MOS transistors. Drains of the first to (n+1)th transistors T711to T71n+1 may be coupled to the first equalization node QN1in common, and sources of the first to (n+1)th transistors T711to T71n+1 may be coupled to the second supply voltage VL terminal in common. Drains of the first to (n+1)th transistors T721to T72n+1 may be coupled to the second equalization node QN2in common, and sources of the first to (n+1)th transistors T721to T72n+1 may be coupled to the second supply voltage VL terminal in common. The first transistors T711and T721may receive a first bit C3<0> of the third gain control signal, and the second transistors T712and T722may receive a second bit C3<1> of the third gain control signal. The third transistors T713and T723may receive a third bit C3<2> of the third gain control signal. The (n+1)th transistors T71n+1 and T72n+1 may receive an (n+1)th bit C3<n> of the third gain control signal. The first transistors T711and T721may have the smallest size, and the (n+1)th transistors T71n+1 and T72n+1 may have the largest size. The size of the second transistors T712and T722may be twice as large as the size of the first transistors T711and T721. The size of the third transistors T713and T723may be four times as large as the size of the first transistors T711and T721. The size of the (n+1)th transistors T71n+1 and T72n+1 may be 2^n times as large as the size of the first transistors T711and T721. Because the first to (n+1)th transistors T711to T71n+1 and T721to T72n+1 have different sizes, the first to (n+1)th transistors T711to T71n+1 and T721to T72n+1 may have different current drivabilities. The third gain control signal C3<0:n> may change the number of transistors to be turned on, thereby changing the amounts of current applied from the first and second equalization nodes QN1and QN2to the second supply voltage VL terminal by the first and second variable current sources710and720in various manners. FIG.8is a diagram illustrating a configuration of the fourth gain adjusting circuit432illustrated inFIG.4. 
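The binary-weighted current branches of FIG. 7 compose differently from the switch bank above: enabled branches sum their currents rather than their conductances. A small companion sketch, again with an assumed unit current, is shown below.

```python
# Companion sketch for the third gain adjusting circuit: binary-weighted
# current branches sum, so the current drawn from QN1/QN2 is the unit current
# times the value of the enabled code. The unit current is an assumption.

I_UNIT = 50e-6   # A, current of the smallest transistors T711/T721 (assumed)

def sink_current(code_bits: list[int]) -> float:
    """Current each variable current source sinks for code C3<0:n> (LSB first)."""
    return I_UNIT * sum(bit * (2 ** k) for k, bit in enumerate(code_bits))

for code in ([1, 0, 0, 0], [1, 0, 1, 0], [1, 1, 1, 1]):
    print(code, "->", f"{sink_current(code) * 1e6:.0f} uA")
# More enabled weight -> more current through QN1/QN2 -> higher gain, per the
# description of FIGS. 3A and 3B above.
```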
InFIG.8, the fourth gain adjusting circuit432may include first to (n+1)th gain transistors GT81to GT8n+1. The first to (n+1)th gain transistors GT81to GT8n+1 may be P-channel MOS transistors. Sources of the first to (n+1)th gain transistors GT81to GT8n+1 may be coupled to the second output node ON2in common. Drains of the first to (n+1)th gain transistors GT81to GT8n+1 may be coupled to the gate of the fourth current transistor CT4in common. The first to (n+1)th gain transistors GT81to GT8n+1 may receive the first to (n+1)th bits C4<0:n> of the fourth gain control signal, respectively. Each of the first to (n+1)th gain transistors GT81to GT8n+1 may be turned on based on an allocated bit of the fourth gain control signal. The first to (n+1)th gain transistors GT81to GT8n+1 may have different sizes. For example, the first gain transistor GT81may have the smallest size, and the (n+1)th gain transistor GT8n+1 may have the largest size. For example, the size of the second gain transistor GT82may be twice as large as the size of the first gain transistor GT81, and the size of the third gain transistor GT83may be four times as large as the size of the first gain transistor GT81. The size of the (n+1)th gain transistor GT8n+1 may be 2^n times as large as the size of the first gain transistor GT81. When the first to (n+1)th gain transistors GT81to GT8n+1 have different sizes, the first to (n+1)th gain transistors GT81to GT8n+1 may have different turn-on resistance values. The fourth gain control signal C4<0:n> may change the number and types of transistors to be turned on, and thus control the fourth gain adjusting circuit432to have various resistance values. FIG.9illustrates a configuration of a semiconductor system900in accordance with an embodiment. InFIG.9, the semiconductor system900may include a first semiconductor apparatus910and a second semiconductor apparatus920. The first semiconductor apparatus910may provide various control signals required for operating the second semiconductor apparatus920. The first semiconductor apparatus910may include various types of host devices. For example, the first semiconductor apparatus910may be a host device such as a central processing unit (CPU), graphic processing unit (GPU), multi-media processor (MMP), digital signal processor, application processor (AP), or memory controller. The second semiconductor apparatus920may be a memory device, for example, and the memory device may include a volatile memory and a nonvolatile memory. The volatile memory may include an SRAM (Static RAM), DRAM (Dynamic RAM), and SDRAM (Synchronous DRAM), and the nonvolatile memory may include a ROM (Read Only Memory), PROM (Programmable ROM), EEPROM (Electrically Erasable and Programmable ROM), EPROM (Electrically Programmable ROM), flash memory, PRAM (Phase change RAM), MRAM (Magnetic RAM), RRAM (Resistive RAM), FRAM (Ferroelectric RAM), and the like. The second semiconductor apparatus920may be coupled to the first semiconductor apparatus910through first and second buses901and902. The first and second buses901and902may be signal transmission paths, links or channels for transmitting signals. The first bus901may be a unidirectional bus. The first semiconductor apparatus910may transmit a first signal TS1to the second semiconductor apparatus920through the first bus901, and the second semiconductor apparatus920may be coupled to the first bus901to receive the first signal TS1transmitted from the first semiconductor apparatus910.
The first signal TS1may include control signals such as a command signal, a clock signal, and address signal, for example. The second bus902may include a bidirectional bus. The first semiconductor apparatus910may transmit a second signal TS2to the second semiconductor apparatus920through the second bus902, or receive the second signal TS2transmitted from the second semiconductor apparatus920through the second bus902. The second semiconductor apparatus920may transmit the second signal TS2to the first semiconductor apparatus910through the second bus902, or receive the second signal TS2transmitted from the first semiconductor apparatus910through the second bus902. The second signal TS2may include data, for example. In an embodiment, the first and second signals TS1and TS2may be transmitted as a differential signal pair with complementary signals TS1B and TS2B through the first and second buses901and902, respectively. In an embodiment, the first and second signals TS1and TS2may be transmitted as single-ended signals through the first and second buses901and902, respectively. The first semiconductor apparatus910may include a first transmitting (TX) circuit911, a second transmitting circuit913, and a receiving (RX) circuit914. The first transmitting circuit911may be coupled to the first bus901, and drive the first bus901to transmit the first signal TS1to the second semiconductor apparatus920, based on an internal signal of the first semiconductor apparatus910. The second transmitting circuit913may be coupled to the second bus902, and drive the second bus902to transmit the second signal TS2to the second semiconductor apparatus920, based on the internal signal of the first semiconductor apparatus910. The receiving circuit914may be coupled to the second bus902, and receive the second signal TS2transmitted from the second semiconductor apparatus920through the second bus902. The receiving circuit914may generate the internal signal used in the first semiconductor apparatus910by differentially amplifying the second signal TS2transmitted through the second bus902. When a differential signal pair is transmitted through the second bus902, the receiving circuit914may generate the internal signal by differentially amplifying the second signal TS2and a complementary signal TS2B of the second signal. When a single-ended signal is transmitted through the second bus902, the receiving circuit914may generate the internal signal by differentially amplifying the second signal TS2and a first reference voltage VREF1. The first reference voltage VREF1may have a voltage level corresponding to the middle of the range in which the second signal TS2swings. The receiving circuit914may include any one of the amplifiers100and400illustrated inFIGS.1and4. The second semiconductor apparatus920may include a first receiving (RX) circuit922, a transmitting (TX) circuit923, and a second receiving circuit924. The first receiving circuit922may be coupled to the first bus901, and receive the first signal TS1transmitted from the first semiconductor apparatus910through the first bus901. The first receiving circuit922may generate an internal signal used in the second semiconductor apparatus920by differentially amplifying the first signal TS1transmitted through the first bus901. When a differential signal pair is transmitted through the first bus901, the first receiving circuit922may generate the internal signal by differentially amplifying the first signal TS1and a complementary signal TS1B of the first signal. 
When a single-ended signal is transmitted through the first bus901, the first receiving circuit922may generate the internal signal by differentially amplifying the first signal TS1and a second reference voltage VREF2. The second reference voltage VREF2may have a voltage level corresponding to the middle of the range in which the first signal TS1swings. The transmitting circuit923may be coupled to the second bus902, and drive the second bus902to transmit the second signal TS2to the first semiconductor apparatus910, based on the internal signal of the second semiconductor apparatus920. The second receiving circuit924may be coupled to the second bus902, and receive the second signal TS2transmitted from the first semiconductor apparatus910through the second bus902. The second receiving circuit924may generate the internal signal used in the second semiconductor apparatus920by differentially amplifying the second signal TS2transmitted through the second bus902. When a differential signal pair is transmitted through the second bus902, the second receiving circuit924may generate the internal signal by differentially amplifying the second signal TS2and the complementary signal TS2B of the second signal. When a single-ended signal is transmitted through the second bus902, the second receiving circuit924may generate the internal signal by differentially amplifying the second signal TS2and the first reference voltage VREF1. The first and second receiving circuits922and924may include any one of the amplifiers100and400illustrated inFIGS.1and4. FIG.10illustrates a configuration of a receiving circuit1000in accordance with an embodiment. The receiving circuit1000may be coupled to an external bus1001or a channel, and receive a transmit (Tx) signal TS transmitted through the external bus1001. The receiving circuit1000may generate an internal signal IS from the Tx signal TS. Inter-symbol interference (ISI) may occur in the Tx signal TS due to a high frequency loss, reflection, or cross-talk of the external bus1001or the channel. Thus, a previously transmitted signal may cause precursor interference with a signal to be subsequently transmitted. The receiving circuit1000may include an amplifier1010and an equalization circuit1020in order to reduce or minimize the precursor interference. The amplifier1010may be coupled to the external bus1001, and receive a Tx signal TS transmitted through the external bus1001. The amplifier1010may generate a pair of receive (Rx) signals RS and RSB by differentially amplifying the Tx signal TS. The Rx signal pair may include the Rx signal RS and a complementary signal RSB of the Rx signal. The amplifier1010may accurately amplify a level transition of the Tx signal TS by increasing an AC gain instead of decreasing a DC gain, in order to generate the Rx signal RS. The Tx signal TS may be transmitted as a differential signal pair with the complementary signal TSB, and transmitted as a single-ended signal. The amplifier1010may generate the Rx signal RS by differentially amplifying the Tx signal TS and the complementary signal TSB, and generate the Rx signal RS by differentially amplifying the Tx signal TS transmitted as the single-ended signal and the reference voltage VREF. The amplifier1010may be a CTLE (Continuous Time Linear Equalizer), and the amplifiers100and400illustrated inFIGS.1and4may be applied as the amplifier1010. The equalization circuit1020may receive the Rx signal pair RS and RSB, and generate the internal signal IS. 
The equalization circuit1020may generate the internal signal IS by removing a precursor which may occur in the Rx signal pair RS and RSB. The equalization circuit1020may be implemented in various manners depending on the characteristics of a semiconductor apparatus to which the receiving circuit1000is applied. The equalization circuit1020may include one or more of a decision feedback equalization circuit and a feed forward equalization circuit. While various embodiments have been described above, it will be understood by those skilled in the art that the described embodiments represent only a limited number of possible embodiments. Accordingly, the amplifier of the present teaching should not be limited based on the described embodiments. | 49,264 |
11863140 | DETAILED DESCRIPTION The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts. FIG.1Aillustrates a schematic/block diagram of an example receiver100in accordance with an aspect of the disclosure. In this example, the receiver100is configured to process a set of received channels, such as channel1with corresponding components identified with a “−1” suffix, and channel2with corresponding components identified with a “−2” suffix. With regard to channel1, the receiver100includes an antenna110-1, a low noise amplifier (LNA)120-1which may include antenna impedance matching elements as represented by a series circuit of an inductor and a capacitor coupled to ground, an analog processing circuit130-1, an I-mixer140-1I, a Q-mixer140-1Q, a local oscillator (LO)150-1, an I-baseband filter (I-BBF)160-1I, and a Q-BBF160-1Q. The antenna110-1receives a wireless signal transmitted, e.g., by a remote wireless communication device, and outputs a received signal. The LNA120-1amplifies the received signal. The analog processing circuit130-1may perform one or more analog processes on the received signal including, but not limited to, filtering, spatial processing, converting the signal into a differential signal, and/or others. As mentioned, the output of the analog processing circuit130-1may be differential, and is coupled to differential inputs of the I- and Q-mixers140-1I and140-1Q, respectively. The LO150-1provides the LO signal and the 90° phase shifted LO signal to the I- and Q-mixers140-1I and140-1Q, respectively. Accordingly, the I- and Q-mixers140-1I and140-1Q downconvert the I- and Q-quadrature components of the received signal to generate I- and Q-input differential signals ViI1+/ViI1−and ViQ1+/ViQ1−for the I-BBF160-1I and Q-BBF160-1Q, respectively. The I-BBF160-1I and Q-BBF160-1Q filter the input differential signals ViI1+/ViI1−and ViQ1+/ViQ1−to remove the high-frequency conversion components and other unwanted signals, such as jammers, from the signals to generate output differential signals VoI1+/VoI1−and VoQ1+/VoQ1−, respectively. Although not shown, the output differential signals VoI1+/VoI1−and VoQ1+/VoQ1−are transmitted downstream for further processing, such as analog-to-digital conversion (ADC), demodulation, error correction decoding, etc. Similarly, with regard to channel2, the receiver100includes an antenna110-2, a low noise amplifier (LNA)120-2which may include antenna impedance matching elements as represented by a series circuit of an inductor and a capacitor coupled to ground, an analog processing circuit130-2, an I-mixer140-2I, a Q-mixer140-2Q, a local oscillator (LO)150-2, an I-baseband filter (I-BBF)160-2I, and a Q-BBF160-2Q. The antenna110-2receives a wireless signal transmitted, e.g., by a remote wireless communication device, and outputs a received signal. The LNA120-2amplifies the received signal. 
The analog processing circuit130-2may perform one or more analog processes on the received signal including, but not limited to, filtering, spatial processing, converting the signal into a differential signal, and/or others. As mentioned, the output of the analog processing circuit130-2may be differential, and is coupled to differential inputs of the I- and Q-mixers140-2I and140-2Q, respectively. The LO150-2provides the LO signal and the 90° phase shifted LO signal to the I- and Q-mixers140-2I and140-2Q, respectively. Accordingly, the I- and Q-mixers140-2I and140-2Q downconvert the I- and Q-quadrature components of the received signal to generate I- and Q-input differential signals ViI2+/ViI2−and ViQ2+/ViQ2−for the I-BBF160-2I and Q-BBF160-2Q, respectively. The I-BBF160-2I and Q-BBF160-2Q filter the input differential signals ViI2+/ViI2−and ViQ2+/ViQ2−to remove the high-frequency conversion components and other unwanted signals, such as jammers, from the signals to generate output differential signals VoI2+/VoI2−and VoQ2+/VoQ2−, respectively. Although not shown, the output differential signals VoI2+/VoI2−and VoQ2+/VoQ2−are transmitted downstream for further processing, such as analog-to-digital conversion (ADC), demodulation, error correction decoding, etc. The first and second channels may be independent for receiving two independent signals. Alternatively, the first and second channels may be used for spatial processing, such as multiple-input-multiple-output (MIMO) processing. In the latter case, channel1may serve as the primary channel, and channel2may serve as the MIMO channel, or vice-versa. Although two channels are shown for description purposes, it shall be understood that the receiver100may include hardware for processing more than two channels, such as four (e.g., for 4×4 MIMO), five (e.g., for 100 MHz carrier aggregation), eight (e.g., for dual subscriber information module (SIM) card operation) or ten channels (e.g., for 200 MHz carrier aggregation). Further, while the elements for processing the two channels are illustrated inFIG.1as being coupled to respective antennas110-1and110-2, in some embodiments both the LNA120-1and the LNA120-2are coupled to the same antenna. In some such embodiments, a diplexer or filter elements may be coupled between one or more of the LNAs120and the antenna. As there is a trend for higher data throughputs in such a receiver100, such as in the case of fifth generation (5G) new radio (NR) developed by the 3rd Generation Partnership Project (3GPP) for mobile networks (referred to herein as "5G NR"), there may be multiple channels used for carrier aggregation (CA) to achieve higher data throughput. For example, 10 channels, each with a bandwidth of 20 mega Hertz (MHz), may provide a combined 200 MHz bandwidth for high data throughput applications. To provide such multiple channels for high data throughput applications, while at the same time provide a relatively small integrated circuit (IC) footprint (for example, to implement such a receiver for cost effective purposes), new technology nodes, such as 14 nanometer (nm) FIN field effect transistors (FINFETs), may be used in active components, such as the I- and Q-BBFs160-1I/160-1Q and160-2I/160-2Q of channels1and2of receiver100, respectively.
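The quadrature downconversion performed by the mixers140-1I/140-1Q and140-2I/140-2Q can be illustrated numerically: the received signal is multiplied by the LO and by its 90° shifted copy, and low-pass filtering (the role of the BBFs) keeps only the near-DC products. The sketch below is a generic, idealized illustration of that operation; the sample rate, tone frequency, and crude averaging filter are arbitrary placeholders and do not model the specific receiver100.

```python
import math

FS = 1e9       # illustration sample rate, Hz (assumed)
F_LO = 100e6   # LO frequency, tuned to the carrier for zero-IF (assumed)
N = 4096
PHASE = 0.7    # arbitrary phase of the received tone, radians

t = [k / FS for k in range(N)]
rx = [math.cos(2 * math.pi * F_LO * ti + PHASE) for ti in t]  # received tone at the carrier

# Ideal I/Q mixing with the LO and its 90-degree shifted copy.
i_mix = [r * math.cos(2 * math.pi * F_LO * ti) for r, ti in zip(rx, t)]
q_mix = [-r * math.sin(2 * math.pi * F_LO * ti) for r, ti in zip(rx, t)]

# Simple averaging stands in for the baseband filters: it removes the 2*F_LO products.
i_bb = sum(i_mix) / N
q_bb = sum(q_mix) / N
print(round(math.atan2(q_bb, i_bb), 3))  # ~0.7, the phase of the received tone
```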
Also, for versatility, the receiver100may be selectively reconfigured to process channels specified by other standards, such as the 4th Generation broadband cellular network developed by 3GPP (also known as Long Term Evolution (LTE)), and the Global System for Mobile (GSM) cellular network. The use of new technology nodes, e.g., in active components of receiver100, serves well in filters for the higher bandwidths of 5G NR (e.g., 5 MHz to 160 MHz bandwidth). However, as LTE and GSM utilize narrower bandwidths (e.g., as low as 600 KHz to 1 MHz), the new technology nodes may introduce flicker noise, where the filter stopband (out-of-band) rejection is inferior for certain levels of the flicker noise. This may be the case at low frequencies since the level of flicker noise varies inversely with frequency (1/f). To reduce flicker noise, the new technology nodes may be made larger. However, making the devices larger introduces additional parasitic capacitance, which may degrade performance for applications utilizing wider bandwidths, such as the case for 5G NR where the communication bands and/or channels may be wider and/or significant carrier aggregation is used. Moreover, the larger devices also occupy more IC footprint which may lead to higher product costs. As discussed, for a versatile receiver whose bandwidth can be selectively reconfigured for use in 5G NR, 4G, and/or GSM cellular networks, the I- and Q-BBFs160-1I/160-1Q and160-2I/160-2Q may be selectively reconfigured for the different bandwidths. For example, with regard to 5G NR, the I- and Q-BBFs160-1I/160-1Q and160-2I/160-2Q may be configured with poles in the 100 MHz range. With regard to 4G, the I- and Q-BBFs160-1I/160-1Q and160-2I/160-2Q may be configured with poles in the 20 MHz range. And, with regard to GSM, the I- and Q-BBFs160-1I/160-1Q and160-2I/160-2Q may be configured with poles in the 660 KHz range. This is explained in more detail with reference to the following graphs. FIG.1Billustrates a graph of a spectrum (shaded region) and frequency response H(f) (dashed line) of an example received 5G NR channel (CHN) and baseband filter (BBF) in accordance with another aspect of the disclosure. The spectrum and frequency response H(f) of the channel 5G NR CHN and BBF may pertain to the case where the receiver100is configured to process signals in accordance with 5G NR. The graph includes an x- or horizontal-axis that represents frequency (f), and the y- or vertical axis represents power level with regard to the received channel 5G NR CHN signal and frequency response H(f) with regard to the corresponding BBF. In this example, the spectrum of the received channel-of-interest 5G NR CHN has a bandwidth of around 100 MHz. Thus, to filter the signal of the received channel 5G NR CHN to substantially eliminate unwanted signals (stopband rejection) to provide acceptable signal-to-noise ratio (SNR), the frequency response H(f) of the BBF should have poles fp approximately at −50 MHz and +50 MHz, respectively. The passband of the filter frequency response H(f) may be the substantially flat region between the poles fp, and the roll-offs of the filter frequency response H(f) may be the inclined portions below and above the poles fp, respectively. FIG.1Cillustrates a graph of a spectrum (shaded region) and frequency response H(f) (dashed line) of an example received 4G channel (CHN) and baseband filter (BBF) in accordance with another aspect of the disclosure.
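Before turning to FIGS.1Cand1D, a sense of scale may help: a first-order RC section places its pole at f_p = 1/(2πRC), so pole frequencies of roughly 50 MHz, 10 MHz, and a few hundred KHz for the 5G NR, 4G, and GSM cases imply RC products spanning more than two orders of magnitude. The sketch below only illustrates that relation; the fixed feedback resistance is an assumed placeholder, not a value from this description.

```python
import math

def capacitance_for_pole(f_pole_hz, r_ohms):
    """Capacitance that places a first-order pole at f_p = 1 / (2*pi*R*C)."""
    return 1.0 / (2 * math.pi * r_ohms * f_pole_hz)

R_FB = 10e3  # assumed (placeholder) feedback resistance, ohms
for label, f_pole in (("5G NR", 50e6), ("4G", 10e6), ("GSM", 330e3)):
    c = capacitance_for_pole(f_pole, R_FB)
    print(f"{label}: pole at {f_pole/1e6:g} MHz needs C of about {c*1e12:.1f} pF")
```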
The spectrum and frequency response H(f) of the channel 4G CHN and BBF may pertain to the case where the receiver100is configured to process signals in accordance with 4G. In this example, the spectrum of the received channel-of-interest 4G CHN has a bandwidth of around 20 MHz. Thus, in order to filter the signal of the received channel to substantially eliminate unwanted signals (stopband rejection) to provide acceptable SNR, the frequency response H(f) of the BBF should have poles fp approximately at −10 MHz and +10 MHz, respectively. Similarly, the passband of the filter frequency response H(f) may be the substantially flat region between the poles fp, and the roll-offs of the filter frequency response H(f) may be the inclined portions below and above the poles fp, respectively. FIG.1Dillustrates a graph of a spectrum (shaded region) and frequency response H(f) (dashed line) of an example received GSM channel (CHN) and baseband filter (BBF) in accordance with another aspect of the disclosure. The spectrum and frequency response H(f) of the channel GSM CHN and BBF may pertain to the case where the receiver100is configured to process signals in accordance with GSM. In this example, the spectrum of the received channel-of-interest GSM CHN has a bandwidth of around 600 kilo Hertz (KHz). Thus, to filter the signal of the received channel to substantially eliminate unwanted signals to provide acceptable SNR, the frequency response H(f) of the BBF should have poles fp approximately at −300 KHz and +300 KHz, respectively. Similarly, the passband of the filter frequency response H(f) may be the substantially flat region between the poles fp, and the roll-offs of the filter frequency response H(f) may be the inclined portions below and above the poles fp, respectively. Filters may include resistor and capacitor banks to frequency shift the poles when dealing with different applications, such as 5G NR, 4G, and GSM applications. However, the resistor and capacitor banks are often large and take up significant IC footprint; and thus, may not be very cost effective. Additionally, the switching in-and-out of the resistors and capacitors of the banks may introduce increased parasitic capacitance that may have adverse effects on the performance of the filter, e.g., the frequency selectivity and noise suppression of the filters. FIG.2Aillustrates a schematic diagram of an example programmable baseband filter200in accordance with another aspect of the disclosure. In summary, the baseband filter200includes a pair of baseband filters (BBFs)210and250that may be selectively coupled together in different manners to achieve an improved performance for the filter200in different bandwidths of specific applications, such as, e.g., the specific bandwidths used in 5G NR, 4G, and GSM cellular networks. In some bandwidth specific applications, the pair of BBFs210and250may be selectively coupled to each other in order to (jointly) process a single receive signal (e.g., a single channel). Yet, in other applications, the pair of BBFs210and250may also be completely decoupled in order to (independently) process separate receive signals (e.g., two different channels). Different manners of selective couplings of the pair of BBFs210and250are possible.
For example, in comparably narrow bandwidth and low noise applications, the first BBF210may borrow active and passive components of the second BBF250, where borrowing active components reduces flicker noise as the effective device size is increased (e.g., doubled), and borrowing passive components (e.g., resistors and capacitors) results in narrower (or tightened) poles and/or an increased stopband rejection, for example, at lower frequencies. In such case, the second BBF250is not used to filter a signal separate from the signal being filtered by the first BBF210with the borrowed components of the second BBF250. For extra narrow bandwidth applications, the first BBF210may also selectively couple to a capacitor bank to narrow the poles frequency even further for improved stopband rejection at such low frequencies, as in the case of GSM. In comparably narrow bandwidth and low power applications, the first BBF210may borrow the passive components (not the active components) of the second BBF250, where borrowing the passive components (resistors and capacitors) results in narrower (tighter) poles and/or increased stopband rejection, for example, at lower frequencies, while disabling the active components of the second BBF250in order to conserve power. Again, in such case, the second BBF250is not used to filter a signal separate from the signal being filtered by the first BBF210with the borrowed components of the second BBF250. In comparably wide bandwidth and low noise applications, the first BBF210may borrow the active components (not the passive components) of the second BBF250, where borrowing of the active components reduces flicker noise as the effective device size is increased (e.g., doubled), while the passive components are not needed for the narrower (tighter) poles. Again, in such case, the second BBF250is not used to filter a signal separate from the signal being filtered by the first BBF210with the borrowed components of the second BBF250. As an example, the first BBF210may be selectively coupled to the second BBF250to borrow an amplifier from the second BBF250independently of whether a (resistor-capacitor (RC)) feedback network associated with the amplifier is borrowed. Similarly, the first BBF210may be selectively coupled to the second BBF250to borrow a feedback network from the BBF250independently of whether an amplifier with which the feedback network is associated is borrowed. In some embodiments, the amplifier and its associated feedback network from the BBF250are independently selectively couplable to the first BBF210. In such embodiments, the first BBF210, more specifically, a set of one or more switching devices may be configured to borrow the amplifier, but not its associated feedback network, from the second BBF250. Alternatively, the first BBF210, more specifically, a set of one or more switching devices may be configured to borrow the feedback network, but not its associated amplifier, from the second BBF250. Further alternatively, the first BBF210, more specifically, a set of one or more switching devices may be configured to borrow the amplifier and its associated resistor-capacitor (RC) feedback network from the second BBF250. With reference to receiver100, the first BBF210may be any one of the I-BBF and Q-BBF160-1I and160-1Q of channel1. The second BBF250may be any one of the I-BBF and Q-BBF160-2I and160-2Q of channel2. For example, the I-BBFs of channels1and2may be selectively coupled together, and/or the Q-BBFs of channels1and2may be selectively coupled together. 
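The flicker-noise benefit of borrowing the active components can be related to the commonly used MOS flicker-noise model, in which the input-referred noise power spectral density scales inversely with gate area, so effectively doubling the device area lowers the flicker noise power by about a factor of two (about 3 dB). The sketch below only restates that proportionality; the constant and the device area are placeholder assumptions, not values from this description.

```python
import math

def flicker_psd(freq_hz, gate_area_m2, k_f=1e-24):
    """Input-referred flicker-noise PSD model: S_v(f) ~ k_f / (gate_area * f).
    k_f lumps the process constant and oxide capacitance and is a placeholder.
    """
    return k_f / (gate_area_m2 * freq_hz)

AREA = 1e-12  # assumed gate area of one amplifier's input devices, m^2
single = flicker_psd(100e3, AREA)
borrowed = flicker_psd(100e3, 2 * AREA)  # two amplifiers in parallel ~ doubled area
print(round(10 * math.log10(single / borrowed), 2))  # ~3.01 dB less noise power
```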
If the BBFs210and250are used to process separate signals, such as in the case of two different channels or primary and MIMO channels, the BBFs210and250are completely decoupled from each other. Thus, there may be no IC area penalty as the second BBF250may be needed to perform separate channel processing. However, if the second BBF250is not being used for separate channel processing, the first BBF210may be selectively coupled to the second BBF250to borrow the active or passive components or both, so as to modify its filtering performance, for example based on different bandwidth applications. More specifically, the first BBF210includes a differential input configured to receive differential signal Vi1+/Vi1−, which may be generated by one of the mixers140-1I and140-1Q of receiver100. The first BBF210further includes a capacitor C11coupled across the differential input, another capacitor C12+coupled between the positive side of the differential input and ground, and another capacitor C12−coupled between the negative side of the differential input and ground. The first BBF210further includes a first resistor bank212+ coupled between the positive side of the differential input and a positive input of a first stage amplifier220. Additionally, the first BBF210includes a second resistor bank212− coupled between the negative side of the differential input and a negative input of the amplifier220. The amplifier220can be configured as a transimpedance amplifier (TIA), as indicated inFIGS.2A-2G, and may include two or more internal amplification stages. For example, a first internal amplification stage of the amplifier220may have inputs coupled (directly or through one or more components, such as another internal amplification stage) to the resistor banks212and outputs coupled to inputs of a second internal amplification stage of the amplifier220; the second internal amplification stage of the amplifier220may have outputs coupled (directly or through one or more components, such as another internal amplification stage) to resistors R14(described below). The first BBF210further includes a first resistor-capacitor (RC) feedback network including capacitor C13+(which may be variable) coupled in parallel with resistor R13+(which may be variable) between a negative output and the positive input of the TIA220. Similarly, the first BBF210further includes a second RC feedback network including capacitor C13−(which may be variable) coupled in parallel with resistor R13−(which may be variable) between a positive output and the negative input of the TIA220. The TIA220including the RC feedback networks C13+/R13+and C13−/R13−, the first and second resistor banks212+ and212−, and capacitors C11, C12+, and C12−form a first filter stage of the first BBF210. The resistance of the resistor banks212+ and212−, the capacitance of the feedback capacitors C13+/C13−and the resistance of the feedback resistors R13+/R13−may be made variable to set the pole of the first filter stage. The first BBF210further includes resistors R14+and R15+(one or both of which may be variable) coupled in series between the negative output of the TIA220and a positive input of a second stage amplifier230. The first BBF210further includes resistors R14−and R15−(one or both of which may be variable) coupled in series between the positive output of the TIA220and a negative input of the amplifier230. 
The amplifier230may be configured as a programmable gain amplifier (PGA), as indicated inFIGS.2A-2G, and may include two or more internal amplification stages. For example, a first internal amplification stage of the amplifier230may have inputs coupled (directly or through one or more components, such as another internal amplification stage) to the resistors R15and outputs coupled to inputs of a second internal amplification stage of the amplifier230; the second internal amplification stage of the amplifier230may have outputs coupled (directly or through one or more components, such as another internal amplification stage) to outputs of the filter (described below). Although not shown inFIGS.2A-2G, a capacitor may be coupled between a first node between resistors R14+and R15+and a second node between resistors R14−and R15−for providing an additional pole for a second filter stage, as discussed further herein with reference to another filter implementation. Alternatively, a single resistor (which may be variable) may be used in place of R14+and R15+and/or a single resistor (which may be variable) may be used in place of R14−and R15−. The first BBF210further includes a third RC feedback network including capacitor C16+(which may be variable) coupled in parallel with resistor R16+(which may be variable) between a negative output and the positive input of the PGA230. Similarly, the first BBF210further includes a fourth RC feedback network including capacitor C16−(which may be variable) coupled in parallel with resistor R16−(which may be variable) between a positive output and the negative input of the PGA230. The PGA230includes a differential output to generate a filtered differential output signal Vo1+/Vo1−, such as the output differential signal VoI1+/VoI1−or VoQ1+/VoQ1−of I-BBF160-1I or Q-BBF160-1Q of receiver100, respectively. The PGA230including the RC feedback networks C16+/R16+and C16−/R16−and the resistors R14+/R15+and R14−/R15−form the second filter stage of the first BBF210. The resistance of the resistors R14+/R15+and R14−/R15−, the capacitance of the feedback capacitors C16+/C16−and the resistance of the feedback resistors R16+/R16−may be made variable to set the pole(s) of the second filter stage. The second BBF250may be configured the same as or similar to the first BBF210. In particular, the second BBF250includes a differential input configured to receive differential signal Vi2+/Vi2−, which may be generated by one of the mixers140-2I and140-2Q of receiver100. The second BBF250further includes a capacitor C21coupled across the differential input, another capacitor C22+coupled between the positive side of the differential input and ground, and another capacitor C22−coupled between the negative side of the differential input and ground. The second BBF250further includes a first resistor bank252+ coupled between the positive side of the differential input and a positive input of a first stage amplifier260. Additionally, the second BBF250includes a second resistor bank252− coupled between the negative side of the differential input and a negative input of the amplifier260. The amplifier260can be configured as a transimpedance amplifier (TIA), as indicated inFIGS.2A-2G, and may include two or more internal amplification stages.
For example, a first internal amplification stage of the amplifier260may have inputs coupled (directly or through one or more components, such as another internal amplification stage) to the resistor banks252and outputs coupled to inputs of a second internal amplification stage of the amplifier260; the second internal amplification stage of the amplifier260may have outputs coupled (directly or through one or more components, such as another internal amplification stage) to resistors R24(described below). The TIA260may also include an enable input to receive a first enable signal (en1) for selectively enabling and disabling the TIA260(e.g., by turning on/off at least one head switch coupled to a direct current (DC) supply (Vdd) rail). Although not shown, the TIA220may also include a similar enable input. The second BBF250further includes a first RC feedback network including capacitor C23+(which may be variable) coupled in parallel with resistor R23+(which may be variable) between a negative output and the positive input of the TIA260. Similarly, the second BBF250further includes a second RC feedback network including capacitor C23−(which may be variable) coupled in parallel with resistor R23−(which may be variable) between a positive output and the negative input of the TIA260. The TIA260including the RC feedback networks C23+/R23+and C23−/R23−, the first and second resistor banks252+ and252−, and capacitors C21, C22+, and C22−form the first filter stage of the second BBF250. The resistance of the resistor banks252+ and252−, the capacitance of the feedback capacitors C23+/C23−and the resistance of the feedback resistors R23+/R23−may be made variable to set the pole of the first filter stage. The second BBF250further includes resistors R24+and R25+(one or both of which may be variable) coupled in series between the negative output of the TIA260and a positive input of a second stage amplifier270. The second BBF250further includes resistors R24−and R25−(one or both of which may be variable) coupled in series between the positive output of the TIA260and a negative input of the amplifier270. The amplifier270may be configured as a programmable gain amplifier (PGA), as indicated inFIGS.2A-2G, and may include two or more internal amplification stages. For example, a first internal amplification stage of the amplifier270may have inputs coupled (directly or through one or more components, such as another internal amplification stage) to the resistors R25and outputs coupled to inputs of a second internal amplification stage of the amplifier270; the second internal amplification stage of the amplifier270may have outputs coupled (directly or through one or more components, such as another internal amplification stage) to outputs of the filter (described below). Although not shown inFIG.2, a capacitor may be coupled between a first node between resistors R24+and R25+and a second node between resistors R24−and R25−for providing an additional pole for the second filter stage, as discussed with reference to another filter implementation. Alternatively, a single resistor (which may be variable) may be used in place of R24+and R25+and/or a single resistor (which may be variable) may be used in place of R24−and R25−. The PGA270may also include an enable input to receive a second enable signal (en2) for selectively enabling and disabling the PGA270(e.g., by turning on/off at least one head switch coupled to a DC supply (Vdd) rail). Although not shown, PGA230may also include a similar enable input. 
The second BBF250further includes a third RC feedback network including capacitor C26+(which may be variable) coupled in parallel with resistor R26+(which may be variable) between a negative output and the positive input of the PGA270. Similarly, the second BBF250further includes a fourth RC feedback network including capacitor C26−(which may be variable) coupled in parallel with resistor R26−(which may be variable) between a positive output and the negative input of the PGA270. The PGA270includes a differential output to generate a filtered differential output signal Vo2+/Vo2−, such as the output differential signal VoI2+/VoI2−or VoQ2+/VoQ2−of I-BBF160-2I or Q-BBF160-2Q of receiver100if the BBFs210and250are not coupled together via a set of switching devices as discussed further herein. The PGA270including the RC feedback networks C26+/R26+and C26−/R26−and the resistors R24+/R25+and R24−/R25−form the second filter stage of the second BBF250. The resistance of the resistors R24+/R25+and R24−/R25−, the capacitance of the feedback capacitors C26+/C26−, and the resistance of the feedback resistors R26+/R26−may be made variable to set the pole(s) of the second filter stage. The programmable BBF200further includes a set of switching devices for selectively coupling various nodes of the BBFs210and250together to configure one of the filters, such as the first BBF210, with certain characteristics while disabling the other filter, such as the second BBF250, from filtering an independent signal. For example, the BBF200includes switching devices SW1+and SW1−for selectively coupling the positive and negative sides of the differential inputs of the first and second BBFs210and250together, respectively. The BBF200further includes switching devices SW3+and SW3−for selectively coupling the positive and negative inputs of the TIAs220and260of the first and second BBFs210and250together, respectively. The BBF200also includes switching devices SW7+, and SW7−for selectively coupling the negative and positive outputs of the TIAs220and260of the first and second BBFs210and250together, respectively. Additionally, the BBF200includes switching devices SW8+and SW8−for selectively coupling the positive and negative inputs of the PGAs230and270of the first and second BBFs210and250together, respectively. And, the BBF200includes switching devices SW12+and SW12−for selectively coupling the negative and positive outputs of the PGAs230and270of the first and second BBFs210and250together, respectively. The BBF200also includes switching devices SW4+, SW4−, SW5+, and SW5−for selectively coupling the RC feedback networks of the TIAs220and260of the first and second BBFs210and250to the respective inputs/outputs of the TIAs220and260. Further, the switches SW4+, SW4−, SW5+, and SW5−may be configured to selectively couple the RC feedback networks of the TIAs220and260of the first and second BBFs210and250together, for example when the switches SW3+, SW3−, SW7+, and SW7−are operated appropriately. The BBF200further includes switching devices SW9+, SW9−, SW10+, and SW10−for selectively coupling the RC feedback networks of the PGAs230and270of the first and second BBFs210and250to the respective inputs/outputs of the PGAs230and270. Further, the switches SW9+, SW9−, SW10+, and SW10−may be configured to selectively couple the RC feedback networks of the PGAs230and270of the first and second BBFs210and250together, for example when the switches SW8+, SW8−, SW12+, and SW12−are operated appropriately. 
The BBF200also includes switching devices SW6+and SW6−to selectively couple a differential output of the first internal stage of the TIA220to a differential output of the first internal stage of the TIA260. Similarly, the BBF200also includes switching devices SW11+and SW11−to selectively couple a differential output of the first internal stage of the PGA230to a differential output of the first internal stage of the PGA270. The BBF200further includes switching devices SW2+and SW2−to selectively couple the resistor banks252+ and252− of the second BBF250to the positive and negative inputs of the TIA260, respectively. Although not explicitly shown, the BBF200may include switching devices to selectively couple/decouple the variable resistors R24+, R24−, R25+, and R25− of the second BBF250to/from the first BBF210. FIG.2Billustrates a schematic diagram of the programmable baseband filter200in a first configuration in accordance with another aspect of the disclosure. In the first configuration, the first and second BBFs210and250operate independently of each other, and filter separate input signals Vi1+/Vi1−and Vi2+/Vi2−(e.g., simultaneously) to generate separate output signals Vo1+/Vo1−and Vo2+/Vo2−, respectively. Accordingly, in the first configuration, the switching devices SW1+/SW1−, SW3+/SW3−, SW6+/SW6−, SW7+/SW7−, SW8+/SW8−, SW11+/SW11−, and SW12+/SW12−are configured in open states. These switching devices being in open states decouple the first BBF210from the second BBF250. Also, in the first configuration, the switching devices SW2+/SW2−, SW4+/SW4−, SW5+/SW5−, SW9+/SW9−, and SW10+/SW10−are configured in closed states. The switching devices SW2+/SW2−being in the closed states couple the differential input of the second BBF250to the differential input of the TIA260. The switching devices SW4+/SW4−and SW5+/SW5−being in closed states couple the RC feedback networks C23+/R23+and C23−/R23−to the inputs and outputs of the TIA260, respectively. The switching devices SW9+/SW9−and SW10+/SW10−being in closed states couple the RC feedback networks C26+/R26+and C26−/R26−to the inputs and outputs of the PGA270, respectively. The first and second enable signals en1and en2are asserted to enable the TIA260and PGA270, respectively. FIG.2Cillustrates a schematic diagram of the example programmable baseband filter200in a second configuration in accordance with another aspect of the disclosure. In the second configuration, the first BBF210is selectively coupled to the second BBF250to borrow certain passive (resistors/capacitors) and the active (amplifiers) components of the second BBF250. This may result in narrower (tighter) poles and higher stopband rejection, as well as lower flicker noise. In the second configuration, the first BBF210filters an input differential signal Vi1+/Vi1−to generate an output differential signal for Vo1+/Vo1−, while the second BBF250does not filter a separate signal, as it is used merely to provide additional components to the first BBF210for filtering operation. Accordingly, in the second configuration, the switching devices SW1+/SW1−and SW2+/SW2−are configured in open states to decouple the differential input, capacitors C21, C22+/C22−, and resistor banks252+/252− of the second BBF250from the first BBF210. Although not explicitly shown, in the second configuration, there may be switching devices to also decouple the variable resistors R24+/R24−and R25+/R25−of the second BBF250from the first BBF210.
Further, in the second configuration, the switching devices SW3+/SW3−, SW4+/SW4−, SW5+/SW5−, SW6+/SW6−, SW7+/SW7−, SW8+/SW8−, SW9+/SW9−, SW10+/SW10−, SW11+/SW11−and SW12+/SW12−are configured in closed states. The switching devices SW3+/SW3−being in the closed states couple the differential input of the TIA220of the first BBF210to the differential input of the TIA260of the second BBF250. The switching devices SW4+/SW4−and SW5+/SW5−being in the closed states couple the RC feedback networks C23+/R23+and C23−/R23−to the inputs and outputs of the TIA260, respectively. The switching devices SW6+/SW6−being in the closed states couple the differential output of the first internal stage of the TIA220to the differential output of the first internal stage of the TIA260. The switching devices SW7+/SW7−being in the closed states couple the differential output of the TIA220of the first BBF210to the differential output of the TIA260of the second BBF250. The first enable signal en1is asserted to enable the TIA260. Further, in the second configuration, the switching devices SW8+/SW8−being in the closed states couple the differential input of the PGA230of the first BBF210to the differential input of the PGA270of the second BBF250. The switching devices SW9+/SW9−and SW10+/SW10−being in the closed states couple the RC feedback networks C26+/R26+and C26−/R26−to the inputs and outputs of the PGA270, respectively. The switching devices SW11+/SW11−being in the closed states couple the differential output of the first internal stage of PGA230to the differential output of the first internal stage of PGA270. And the switching devices SW12+/SW12−being in the closed states couple the differential output of the PGA230of the first BBF210to the differential output of the PGA270of the second BBF250. The second enable signal en2is asserted to enable the PGA270. FIG.2Dillustrates a schematic diagram of the example programmable baseband filter200in a third configuration in accordance with another aspect of the disclosure. In the third configuration, the first BBF210is selectively coupled to the second BBF250to borrow certain passive (resistors/capacitors) components of the second BBF250, but not the active (amplifiers) components of the second BBF250. The first BBF210borrowing the passive components may result in 1 narrower (tighter) poles and higher stopband rejection, and not borrowing the active components may improve power conservation as the TIA260and PGA270of the second BBF250may be disabled. In the third configuration, the first BBF210filters an input differential signal Vi1+/Vi1−to generate an output differential signal for Vo1+/Vo1−, while the second BBF250does not filter a separate signal, as it is used merely to provide additional components to the first BBF210for filtering operation. Accordingly, in the third configuration, the switching devices SW1+/SW1−and SW2+/SW2−are configured in open states to decouple the differential input, capacitors C21, C22+/C22−, and resistor banks252+/252− of the second BBF250from the first BBF210. The switching devices SW6+/SW6−and SW11+/SW11−are configured also in open states to decouple the differential outputs of the first internal stages of TIA220and PGA230from the differential outputs of the first internal stages of TIA260and PGA270, respectively. The first and second enable signals en1and en2are not asserted and thus disable the TIA260and PGA270, respectively. 
Although not explicitly shown, in the third configuration, there may be switching devices to also decouple the variable resistors R24+/R24−and R25+/R25−of the second BBF250from the first BBF210. Also, in the third configuration, the switching devices SW3+/SW3−, SW4+/SW4−, SW5+/SW5−, SW7+/SW7−, SW8+/SW8−, SW9+/SW9−, SW10+/SW10−, and SW12+/SW12−are configured in closed states. The switching devices SW3+/SW3−, SW4+/SW4−, SW5+/SW5−, and SW7+/SW7−being in the closed states couple the RC feedback networks C23+/R23+and C23−/R23−of the second BBF250in parallel with the RC feedback networks C13+/R13+and C13−/R13−of the first BBF210, respectively. Also, the switching devices SW8+/SW8−, SW9+/SW9−, SW10+/SW10−, and SW12+/SW12−being in the closed states couple the RC feedback networks C26+/R26+and C26−/R26−of the second BBF250in parallel with the RC feedback networks C16+/R16+and C16−/R16−of the first BBF210, respectively. FIG.2Eillustrates a schematic diagram of the example programmable baseband filter200in a fourth configuration in accordance with another aspect of the disclosure. In the fourth configuration, the first BBF210is selectively coupled to the second BBF250to borrow the active (amplifiers) components of the second BBF250, but not the passive (resistors/capacitors) components of the second BBF250. This may result in lower flicker noise as the effective device size is increased (e.g., doubled). Further, it may be unnecessary to borrow the passive components because the filter poles need not be that narrow (tight) in frequency and a single-pole configuration may be sufficient to achieve the requisite stopband rejection. In the fourth configuration, the first BBF210filters an input differential signal Vi1+/Vi1−to generate an output differential signal for Vo1+/Vo1−, while the second BBF250does not filter a separate signal, as it is used merely to provide additional components to the first BBF210for filtering operation. Accordingly, in the fourth configuration, the switching devices SW1+/SW1−and SW2+/SW2−are configured in open states to decouple the differential input, capacitors C21, C22+/C22−, and resistor banks252+/252− of the second BBF250from the first BBF210. The switching devices SW4+/SW4−and SW5+/SW5are configured in open states to decouple the RC feedback networks C23+/R23+and C23−/R23−of the second BBF250from the first BBF210. Similarly, the switching devices SW9+/SW9−and SW10+/SW10−are configured in the open states to decouple the RC feedback networks C26+/R26+and C26−/R26−of the second BBF250from the first BBF210. Although not explicitly shown, in the fourth configuration, there may be switching devices to also decouple the variable resistors R24+/R24−and R25+/R25−of the second BBF250from the first BBF210. Also, in the fourth configuration, the switching devices SW3+/SW3−, SW6+/SW6−, SW7+/SW7−, SW8+/SW8−, SW11+/SW11−and SW12+/SW12−are configured in the closed states. The switching devices SW3+/SW3−being in the closed states couple the differential input of the TIA220of the first BBF210to the differential input of the TIA260of the second BBF250. The switching devices SW6+/SW6−being in the closed states couple the differential output of the first internal stage of the TIA220to the differential output of the first internal stage of the TIA260. The switching devices SW7+/SW7−being in the closed states couple the differential output of the TIA220of the first BBF210to the differential output of the TIA260of the second BBF250. 
The switching devices SW8+/SW8−being in the closed states couple the differential input of the PGA230of the first BBF210to the differential input of the PGA270of the second BBF250. The switching devices SW11+/SW11−being in the closed states couple the differential output of the first internal stage of the PGA230to the differential output of the first internal stage of the PGA270. And the switching devices SW12+/SW12−being in the closed states couple the differential output of the PGA230of the first BBF210to the differential output of the PGA270of the second BBF250. The first and second enable signals en1and en2are asserted to enable the TIA260and PGA270, respectively. FIG.2Fillustrates a schematic diagram of the example programmable baseband filter200in a fifth configuration in accordance with another aspect of the disclosure. The fifth configuration is similar to the second configuration previously discussed in detail, where the first BBF210is selectively coupled to the second BBF250to borrow certain passive (resistors/capacitors) and the active (amplifiers) components of the second BBF250, for example, to configure its performance for narrower (tighter) poles and higher stopband rejection, and lower flicker noise. To achieve even narrower (tighter) poles and higher stopband rejection performance, for example, as in the case of GSM signals being processed, the baseband filter200is selectively coupled to a capacitor bank290. In particular, the RC feedback networks C13+/R13+and C13−/R13−of the first BBF210and the RC feedback networks C23+/R23+and C23−/R23−of the second BBF250are coupled in parallel with capacitors C+and C−of the capacitor bank290via switching devices SW13+/SW17+and SW13−/SW17−when these switching devices are configured in the closed states. The capacitors C+/C−lower the frequency of the poles of the baseband filter200. The capacitors C+/C−may be representative of single capacitors or of a plurality of capacitors. In some embodiments, the capacitors C+/C−are variable and/or include multiple switchable components. FIG.2Gillustrates a schematic diagram of the example programmable baseband filter200in a sixth configuration in accordance with another aspect of the disclosure. The sixth configuration is also similar to the second configuration previously discussed in detail, where the first BBF210is selectively coupled to the second BBF250to borrow certain passive (resistors/capacitors) and the active (amplifiers) components of the second BBF250, for example, to configure its performance for narrower (tighter) poles and higher stopband rejection, and lower flicker noise. To achieve higher stopband rejection performance at lower frequencies, the first BBF210may also be selectively coupled to the second BBF250to borrow the input passive components of the second BBF250. In this regard, the switching devices SW1+/SW1−and SW2+/SW2−are configured in closed states to couple the capacitors C21, C22+/C22−, and resistor banks252+/252− of the second BBF250to the first BBF210. Although, in this example, the variable resistors R24+/R24−and R25+/R25−of the second BBF250are shown decoupled from the first BBF210, it shall be understood that these resistors may be coupled to the first BBF210via corresponding switching devices. It shall be understood that not all of the configurations of the BBF200are described and illustrated. 
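For reference, the switch groupings described above for the configurations of FIGS.2B through2E can be collected as follows; each number stands for a polarity pair (for example, 1 means SW1+/SW1−), and the listing simply restates the description above rather than defining an exhaustive control map (the capacitor-bank switches of FIG.2Fand other variations, as noted, are also possible and are omitted here).

```python
# Summary of the switch settings described for FIGS. 2B-2E.
# Each integer k denotes the switching-device pair SWk+/SWk-.
CONFIGURATIONS = {
    "first (independent filtering, FIG. 2B)": {
        "open": (1, 3, 6, 7, 8, 11, 12),
        "closed": (2, 4, 5, 9, 10),
        "en1_en2": "asserted",
    },
    "second (borrow passives and amplifiers, FIG. 2C)": {
        "open": (1, 2),
        "closed": (3, 4, 5, 6, 7, 8, 9, 10, 11, 12),
        "en1_en2": "asserted",
    },
    "third (borrow passives only, FIG. 2D)": {
        "open": (1, 2, 6, 11),
        "closed": (3, 4, 5, 7, 8, 9, 10, 12),
        "en1_en2": "deasserted",
    },
    "fourth (borrow amplifiers only, FIG. 2E)": {
        "open": (1, 2, 4, 5, 9, 10),
        "closed": (3, 6, 7, 8, 11, 12),
        "en1_en2": "asserted",
    },
}

for name, cfg in CONFIGURATIONS.items():
    print(name, "open:", cfg["open"], "closed:", cfg["closed"], "en1/en2:", cfg["en1_en2"])
```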
For example, the switching devices may be configured so that the BBF210may be selectively coupled to the second BBF250to borrow the resistor banks252+/252− without borrowing the TIA260and/or the feedback network of the TIA260. As another example, the switching devices may be configured so that the BBF210is selectively coupled to the second BBF250to borrow the resistors R24+/R24−and R25+/R25−with or without other components of the BBF250. It shall be further understood that not all configurations described must be provided by an implementation of the baseband filter200, and thus one or more of the connections and/or switching devices illustrated as coupling the BBF210to BBF250may be omitted. For example, the switching devices selectively coupling the intermediate stages of TIA220to TIA260(e.g., SW6+/SW6−) may be omitted in some embodiments; in other embodiments the connection between these stages is omitted altogether and thus the intermediate stages of TIA220will not be selectively coupled to the intermediate stages of TIA260. As another example, the switching devices selectively coupling the inputs of PGA230to the inputs of PGA270(e.g., SW8+/SW8−) need not be provided in some embodiments, or the switching devices coupling the feedback network of the TIA220to the feedback network of TIA260need not be provided in some embodiments. Accordingly, any one or more switching devices described in BBF200may be omitted in certain embodiments so that the configurations they effectuate need not be available in all embodiments. Such an omission of switching devices may be caused by a permanent coupling or permanent decoupling of the individual active/passive component of the second BBF250from the first BBF210. FIG.3Aillustrates a graph of a spectrum (shaded region) and frequency response H(f) (dashed line) of an example received channel CHN1and single-pole baseband filter in a zero-intermediate frequency (ZIF) reception mode in accordance with another aspect of the disclosure. In ZIF reception mode, the frequency of the LO is substantially the same as the frequency of the carrier of the channel-of-interest, such as CHN1. Accordingly, when the associated mixer mixes the channel-of-interest CHN1with the LO, the resulting lower frequency component of the mixing operation is centered around zero Hertz (0 Hz) or DC, as illustrated by the shaded area representing CHN1. The passband of the filter frequency response H(f) may be the substantially flat region between the poles fp, and the roll-offs of the filter frequency response H(f) may be the inclined portions below and above the poles fp, respectively. The transmit channel associated with received CHN1may include a transmit signal, which may leak into the received CHN1via antenna-to-antenna coupling or transmitter-to-receiver coupling, and may be treated as a jammer signal with respect to received CHN1. As illustrated, the CHN1transmit (Tx) jammer is separated from the spectrum of CHN1by a certain frequency offset. A baseband filter (BBF) configured to filter CHN1to remove the upper frequency components and other unwanted signals, such as the CHN1Tx jammer, may be configured as a single-pole fpfrequency response H(f) (represented as a dashed line around the spectrum of CHN1) because its stopband rejection at the frequency of CHN1Tx jammer may be sufficient to reduce its power level such that the jammer does not significantly affect the SNR of CHN1. 
FIG.3Billustrates a graph of a spectrum (shaded regions) and frequency response H(f) (dashed line) of another example received channel and single-pole baseband filter in an offset zero-intermediate frequency (OZIF) reception mode in accordance with another aspect of the disclosure. In OZIF reception mode, two channels (e.g., CHN1-2) are down converted and filtered by the same mixers and baseband filters. As two channels are processed by the same hardware, the OZIF mode of receiving channels saves significant IC area and power. In the OZIF mode, the frequency of the LO is set between (e.g., at the middle of) the spectra of the first and second channels CHN1-2. Accordingly, when the associated mixer mixes the channels-of-interest with the LO, the resulting lower frequency components of the mixing operation are centered around 0 Hz or DC, as illustrated by the shaded areas representing CHN1and2. The transmit channel associated with received CHN1generates a signal, which, with respect to the received CHN1, is treated as a jammer signal. As illustrated, the CHN1transmit (Tx) jammer is separated from the spectrum of CHN1by a certain frequency offset. A baseband filter (BBF), previously configured as a single-pole to filter CHN1so as to remove the upper frequency components and other unwanted signals, such as CHN1Tx jammer, now has to filter CHN2as well, to remove such unwanted signals from the frequency band of CHN2. Note that CHN1Tx jammer may be close in frequency to the frequency band of CHN2, and the single-pole fpBBF, whose frequency response H(f) is sufficient to reject the CHN1Tx jammer in the ZIF mode, may not be sufficient to reject the jammer with respect to CHN2. FIG.3Cillustrates a graph of a spectrum (shaded regions) and frequency response H(f) (dashed line) of another example received channel and complex-poles fpbaseband filter in an offset zero-intermediate frequency (OZIF) reception mode in accordance with another aspect of the disclosure. As illustrated in this figure, a solution to prevent the CHN1Tx jammer from degrading the SNR of CHN2may be to use a higher pole filter. A higher pole filter may have steeper roll-offs beyond the pole frequencies fpthan a single-pole filter. This is illustrated inFIG.3Bwhere the single-pole filter frequency response H(f) has a roll-off on the negative-frequency side that reduces the intensity of the CHN1Tx jammer by half, for example; whereas, as illustrated inFIG.3C, the complex-poles filter frequency response H(f) has a roll-off on the negative-frequency side that reduces the intensity of the CHN1Tx jammer by significantly more than half. One approach is to use a biquad filter that includes two poles. However, there are disadvantages to using a biquad filter. This filter typically requires two additional operational amplifiers and RC poles to achieve the requisite stopband rejection. This may result in a significant increase in the IC footprint to implement, which is not considered a cost-effective approach. Moreover, the biquad filter implemented for each OZIF reception may require tedious calibration to reduce the residual sideband (RSB) resulting from the down conversion of the images of the desired channels, as well as mismatches in the amplitude and phase of the LO applied to the I- and Q-mixers. FIG.4Aillustrates a schematic diagram of a programmable baseband filter (BBF)400in accordance with another aspect of the disclosure.
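Before describing the programmable BBF400in detail, the OZIF frequency plan discussed above can be made concrete with a short calculation. All carrier frequencies, the channel bandwidth, and the Tx jammer offset below are assumed placeholder values rather than values from the disclosure.

```python
# Hypothetical OZIF frequency plan (all numbers are placeholders).
f_chn1 = 1_950e6     # CHN1 Rx carrier (Hz)
f_chn2 = 1_930e6     # CHN2 Rx carrier (Hz), adjacent 20 MHz channel
tx_offset = -45e6    # CHN1 Tx jammer leaks in 45 MHz below the CHN1 Rx carrier

f_lo_zif = f_chn1                    # ZIF: LO on the CHN1 carrier
f_lo_ozif = (f_chn1 + f_chn2) / 2.0  # OZIF: LO midway between the two channels

def at_baseband_mhz(f_rf: float, f_lo: float) -> float:
    """Low-side frequency (MHz) after mixing the RF tone with the LO."""
    return (f_rf - f_lo) / 1e6

print(f"ZIF : CHN1 at {at_baseband_mhz(f_chn1, f_lo_zif):+.0f} MHz, "
      f"jammer at {at_baseband_mhz(f_chn1 + tx_offset, f_lo_zif):+.0f} MHz")
print(f"OZIF: CHN1 at {at_baseband_mhz(f_chn1, f_lo_ozif):+.0f} MHz, "
      f"CHN2 at {at_baseband_mhz(f_chn2, f_lo_ozif):+.0f} MHz, "
      f"jammer at {at_baseband_mhz(f_chn1 + tx_offset, f_lo_ozif):+.0f} MHz")
# ZIF : CHN1 sits at DC and the jammer at -45 MHz, well outside a +/-10 MHz
#       passband, so a single pole can be enough.
# OZIF: CHN1 lands at +10 MHz, CHN2 at -10 MHz, and the jammer at -35 MHz; the
#       passband must now extend to +/-20 MHz, leaving only ~15 MHz between the
#       CHN2 band edge and the jammer, which motivates the steeper roll-off of
#       a complex-pole response.
```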
In summary, the programmable BBF400includes switching devices to configure the filter to have a single-pole or dual- or complex-poles. The single-pole configuration may be used when the programmable (or configured) BBF400is filtering signals in the ZIF mode, as a single-pole filter may be sufficient to provide the requisite jammer rejection. The complex-pole configuration may be programmed (or configured) when the programmable BBF400is filtering signals in the OZIF mode, as complex-poles may be needed to provide the requisite jammer rejection. In particular, the programmable BBF400includes a differential input configured to receive an input differential signal Vi1+/Vi1−from, for example, a corresponding mixer of receiver100. The BBF400is configured to filter the input differential signal Vi1+/Vi1−to generate an output differential signal Vo1+/Vo1−at a differential output. The programmable BBF400includes a capacitor C11coupled across the positive and negative sides of the differential input. The BBF400further includes a capacitor C12+coupled between the positive side of the differential input and ground, and another capacitor C12−coupled between the negative side of the differential input and ground. Additionally, the programmable BBF400includes a first resistor bank412+ coupled between the positive side of the differential input and a positive input of a first stage amplifier420, which may be configured as a transimpedance amplifier (TIA). Additionally, the programmable BBF400includes a second resistor bank412− coupled between the negative side of the differential input and a negative input of the TIA420. The programmable BBF400includes a first RC feedback network including capacitor C13+(which may be variable) coupled in parallel with resistor R13+(which may be variable) between a negative output and the positive input of the TIA420. Similarly, the BBF400further includes a second RC feedback network including capacitor C13−(which may be variable) coupled in parallel with resistor R13−(which may be variable) between a positive output and the negative input of the TIA420. The TIA420including the RC feedback networks C13+/R13+and C13−/R13−, the first and second resistor banks412+ and412−, and capacitors C11, C12+, and C12−form a first filter stage of the programmable BBF400. The resistance of the resistor banks412+ and412−, the capacitance of the feedback capacitors C13+/C13−and the resistance of the feedback resistors R13+/R13−may be made variable to set the pole of the first filter stage. The programmable BBF400further includes variable resistors R14+and R15+coupled in series between the negative output of the TIA420and a positive input of a second amplification stage430, which may be configured as a programmable gain amplifier (PGA). The programmable BBF400further includes variable resistors R14−and R15−coupled in series between the positive output of the TIA420and a negative input of the PGA430. The programmable BBF400also includes a capacitor C14which includes a first terminal coupled to a first node between resistors R14+and R15+and a second terminal coupled to a second node between resistors R14−and R15−for configuring the BBF400with complex poles as described further herein. In the case of a single-ended filter, the second terminal of capacitor C14may be coupled to ground. The programmable BBF400further includes a third RC feedback network including capacitor C16+(which may be variable) coupled in parallel with resistor R16+(which may be variable) to a negative output of the PGA430. 
Similarly, the programmable BBF400further includes a fourth RC feedback network including capacitor C16−(which may be variable) coupled in parallel with resistor R16−(which may be variable) to a positive output of the PGA430. The capacitors C16+and C16−are further connected to the positive input and the negative input of the PGA430, respectively. The PGA430includes a differential output to generate the filtered differential output signal Vo1+/Vo1−, such as the output differential signal VoI1+/VoI1−or VoQ1+/VoQ1−of I-BBF160-1I or Q-BBF160-1Q of receiver100, respectively. The PGA430including the RC feedback networks C16+/R16+and C16−/R16−, the resistors R14+/R15+and R14−/R15−, and the capacitor C14form a second filter stage of the BBF400. The resistance of the resistors R14+/R15+and R14−/R15−, the capacitance of the feedback capacitors C16+/C16−and the resistance of the feedback resistors R16+/R16−may be made variable to set the pole(s) of the second filter stage. To program the programmable BBF400between single-pole or complex-poles, the BBF400includes switching devices SW14+/SW14−, SW15+/SW15−, and optionally, SW16+/SW16−. The switching device SW14+is connected between the resistor R16+and the positive input of the PGA430. The switching device SW15+is connected between the resistor R16+and the first node between the variable resistors R14+and R15+. Similarly, the switching device SW14−is connected between the resistor R16−and the negative input of the PGA430. The switching device SW15−is connected between the resistor R16−and the second node between the variable resistors R14−and R15−. While the switching devices SW14+, SW14−, SW15+, SW15−are illustrated as separate devices inFIGS.4A-4G, the switching devices SW14+and SW15+may be configured as a switch having multiple throws and/or the switching devices SW14−and SW15−may be configured as a switch having multiple throws. The programmable BBF400, as depicted inFIG.4A, is in the single-pole configuration, for example, to process signals in accordance with ZIF receive mode of operation. In the single-pole configuration, the switching devices SW14+/SW14−are configured in closed states, the switching devices SW15+/SW15−are in open states, and the switching devices SW16+/SW16−, if present, may be configured in open states. Accordingly, in this configuration, the PGA430includes RC feedback network R16+/C16+, connected between the negative output and the positive input of the PGA430, and RC feedback network R16−/C16−connected between the positive output and the negative input of the PGA430. This is the single-pole configuration as the RC feedback networks provide the single-pole. If the switching devices SW16+/SW16−are not present, the capacitor C14is in play as it is coupled across the first node between resistors R14+and R15+and the second node between R14−and R15−. However, the pole formed by capacitor C14in this configuration may be far removed in frequency from the dominant pole formed by the RC feedback networks so as not to have much impact on the frequency response and the roll-off of the BBF400. FIG.4Billustrates a schematic diagram of the programmable BBF400in the complex-poles configuration in accordance with another aspect of the disclosure. In the complex-poles (plurality of poles) configuration, the switching devices SW14+/SW14−are configured in open states, the switching devices SW15+/SW15−are configured in closed states, and the switching devices SW16+/SW16−, if present, are configured in closed states.
Accordingly, in this configuration, the PGA430includes the capacitor C16+of the RC feedback network connected between the negative output and the positive input of the PGA430, and the resistor R16+of the RC feedback network connected between the negative output and the first node between resistors R14+and R15+. Similarly, in this configuration, the PGA430includes the capacitor C16−of the RC feedback network connected between the positive output and the negative input of the PGA430, and the resistor R16−of the RC feedback network connected between the positive output and the second node between resistors R14−and R15−. In this configuration, whether the switching devices SW16+/SW16−are present or not, the capacitor C14is connected between the first node (between resistors R14+and R15+) and the second node (between R14−and R15−). In this configuration, the second filter stage of the programmable BBF400is configured as a Rauch filter that includes complex-poles to provide improved stopband rejection, for example, to reject the transmit (Tx) jammer associated with a first channel in proximity to a second channel in a OZIF receive mode of operation. The additional switching devices do not add considerable IC footprint to implement; thereby, providing single- or complex-pole filtering without significant cost increase. Although in the previous examples, the filters have been described as being differential filters, it shall be understood that the techniques of selectively coupling filters together or selectively reconfiguring filters between single and multiple-pole configurations may be applicable to single-ended filters. Additionally, although in the previous examples, the filters have been described as having two stages, it shall be understood that the techniques of selectively coupling filters together or selectively reconfiguring filters between single and multiple-pole configurations may be applicable to single-stage or more-than-two stage filters. Further, although not explicitly shown, a controller may be provided to configure the states of the switching devices, as well as the resistance and capacitance of the variable resistors and capacitors, to set the filter(s) in any of the configurations previously described. FIG.5illustrates a flow diagram of an example method500of filtering signals in accordance with another aspect of the disclosure. The method500includes operating a first filter to filter a first input signal to generate a first output signal (block510). Example means of operating a first filter to filter a first input signal to generate a first output signal includes the switching devices, resistors/capacitors, and/or amplifiers of the first BBF210. The method500further includes operating a second filter to filter a second input signal to generate a second output signal (block520). Example means for operating a second filter to filter a second input signal to generate a second output signal include the switching devices, resistors/capacitors, and/or amplifiers of the second BBF250. This may be the case where the first and second BBFs210and250are independently filtering separate signals, such as separate channels or primary and MIMO channels. The method500additionally includes selectively coupling (e.g., merging) at least a portion of the second filter with the first filter to filter a third input signal to generate a third output signal (block530). 
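Before turning to the example means for the operations of method500, the complex-poles (Rauch) configuration of FIG.4Bdescribed above can be illustrated with the standard multiple-feedback low-pass expressions applied to a single-ended half-circuit approximation of the second filter stage. The mapping of R14/R15/R16and C14/C16onto the generic Rauch components, and all numeric values, are assumptions made only for illustration and are not design values from the disclosure.

```python
import math

# Single-ended half-circuit approximation of the second stage in the
# complex-poles (Rauch / multiple-feedback) configuration.  The mapping below
# is an interpretation for illustration: R_in ~ R14, R_mid ~ R15, R_fb ~ R16
# (re-routed to the R14/R15 node by SW15), C_node ~ C14, C_fb ~ C16.
R_in, R_mid, R_fb = 2e3, 2e3, 4e3   # placeholder ohms
C_node, C_fb = 8e-12, 1e-12         # placeholder farads

# Standard multiple-feedback low-pass results for this topology:
w0 = 1.0 / math.sqrt(R_fb * R_mid * C_node * C_fb)
q = math.sqrt(R_fb * R_mid * C_node / C_fb) / (R_fb + R_mid + R_fb * R_mid / R_in)
f0 = w0 / (2.0 * math.pi)
print(f"f0 = {f0 / 1e6:.1f} MHz, Q = {q:.2f} "
      f"({'complex' if q > 0.5 else 'real'} pole pair)")

# Rough stopband comparison at an assumed jammer offset (placeholder):
f_j = 80e6
x = f_j / f0
att_one_pole = 10 * math.log10(1 + x ** 2)                     # single real pole
att_two_pole = 20 * math.log10(math.hypot(1 - x ** 2, x / q))  # second-order pair
print(f"rejection at {f_j / 1e6:.0f} MHz: {att_one_pole:.0f} dB (single pole) "
      f"vs {att_two_pole:.0f} dB (complex poles)")
# ~12 dB vs ~24 dB with these placeholder values -- the steeper roll-off that
# the Rauch configuration provides for the OZIF case discussed above.
```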
Example means for selectively coupling at least a portion of the second filter with the first filter to filter a third input signal to generate a third output signal include any of the switching devices that selectively couple the first and second BBFs210and250together, where the resulting, selectively coupled filter filters the third input signal to generate the third output signal. FIG.6illustrates a flow diagram of another example method600of filtering signals in accordance with another aspect of the disclosure. The method600includes operating a set of one or more switching devices to configure a filter with a first set of one or more poles (block610). Example means for operating a set of one or more switching devices to configure a filter with a first set of one or more poles include a controller configuring the states of switching devices SW14+/SW14−and SW15+/SW15−of BBF400. The method600further includes filtering a first input signal to generate a first output signal with the filter configured with the first set of one or more poles (block620). Example means for filtering a first input signal to generate a first output signal with the filter configured with the first set of one or more poles includes BBF400(or portions thereof) with the switching devices SW14+/SW14−and SW15+/SW15−of BBF400configured in closed and open states, respectively. The method600also includes operating the set of one or more switching devices to configure the filter with a second set of one or more poles (block630). Example means for operating the set of one or more switching devices to configure the filter with a second set of one or more poles include a controller configuring the states of switching devices SW14+/SW14−and SW15+/SW15−of BBF400. Additionally, the method600includes filtering a second input signal to generate a second output signal with the filter configured with the second set of one or more poles (block640). Example means for filtering a second input signal to generate a second output signal with the filter configured with the second set of one or more poles includes BBF400(or portions thereof) with the switching devices SW14+/SW14−and SW15+/SW15−of BBF400configured in open and closed states, respectively. FIG.7illustrates a block diagram of an example wireless communication device700in accordance with another aspect of the disclosure. The wireless communication device700includes one or more antennas710, and a receiver (or transceiver)720, wherein at least a portion of it is configured in accordance with receiver100with any of the BBFs described herein. The wireless communication device700further includes a baseband processing circuit730configured to process signals from the receiver720. FIG.8illustrates a schematic/block diagram of another example receiver800in accordance with another aspect of the disclosure. The receiver800is an example of a receiver that includes a first programmable baseband filter, such as programmable filter200previously discussed, and a second programmable baseband filter, such as programmable filter400previously discussed. Depending on the baseband filter's requirements on passband ripple and stopband rejection requirements, the first or second programmable filter may be selected, and the selected filter may be programmed in accordance with the passband ripple and stopband rejection requirements. 
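Referring back to the method600of FIG.6, a schematic sketch of its flow is given below. The SwitchStates abstraction and the configure_switches/filter helper names are hypothetical stand-ins for whatever control interface the switching devices SW14+/SW14−and SW15+/SW15−actually expose; they are not interfaces defined by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class SwitchStates:
    """Hypothetical abstraction of the pole-programming switches of BBF400."""
    sw14_closed: bool  # SW14+/SW14-: feedback resistor to the amplifier input
    sw15_closed: bool  # SW15+/SW15-: feedback resistor to the R14/R15 node

SINGLE_POLE = SwitchStates(sw14_closed=True, sw15_closed=False)    # as in FIG.4A
COMPLEX_POLES = SwitchStates(sw14_closed=False, sw15_closed=True)  # as in FIG.4B

def run_method_600(filter_ctrl, first_samples, second_samples):
    """Sketch of method600: reconfigure one filter between two pole sets."""
    filter_ctrl.configure_switches(SINGLE_POLE)     # block 610
    out1 = filter_ctrl.filter(first_samples)        # block 620 (e.g., a ZIF signal)
    filter_ctrl.configure_switches(COMPLEX_POLES)   # block 630
    out2 = filter_ctrl.filter(second_samples)       # block 640 (e.g., an OZIF signal)
    return out1, out2

class _StubFilter:
    """Minimal stand-in so the sketch runs; a real controller drives hardware."""
    def configure_switches(self, states: SwitchStates) -> None:
        self.states = states
    def filter(self, samples):
        return list(samples)  # pass-through; the hardware does the real filtering

print(run_method_600(_StubFilter(), [0.1, 0.2], [0.3, 0.4]))
```

With that single-/complex-pole programmability in mind, the receiver800of FIG.8combines it with the filter-coupling scheme of programmable filter200as follows.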
More specifically, the receiver800includes at least one antenna805, a low noise amplifier (LNA)810, a mixer815, a local oscillator (LO)820, and a baseband filter, which includes a programmable filter830same or similar to programmable filter200previously discussed in detail, and a programmable single-pole/complex poles filter840similar to programmable baseband filter400previously discussed in detail. The receiver800further includes a controller850for selection of which filter830or840will filter a signal outputted by the mixer815, and for programming the selected filter, as discussed further herein. The at least one antenna805is coupled to an input of the LNA810. The LNA810includes an output coupled to a first input of the mixer815. The LO820includes an output coupled to a second input of the mixer815. The mixer815includes an output coupled to a first input of the programmable filter830via a switching device SW1, and to an input of the programmable single-pole/complex poles filter840via another switching device SWM. The programmable filter830includes a second input, which may be coupled to another receiver (Rx) chain, such as another quadrature (I- or Q) receiver chain, spatial receiver chain, or another channel receiver chain. The programmable filter830includes a first single-pole baseband filter832, which may be configured similar to BBF210previously discussed, and a second single-pole filter834, which may be configured same or similar to BBF250previously discussed. The first single-pole filter832includes an input coupled to the another receiver (Rx) chain via a switching device SW2. The second single-pole filter834includes an input serving as the first input of the programmable filter830. For filter coupling purposes, the programmable filter830includes a set of switching devices SW3to SWK. For example, the switching device SW3selectively couples together the inputs of the single-pole filters832and834, the switching device SWK selectively couples together outputs of the single-pole filters832and834, and the switching devices SW4to SWK−1 (not explicitly referenced) selectively couple together internal nodes of the single-pole filters832and834. The outputs of the single-pole filters832and834are coupled to downstream processing via switching devices SWK+1 and SWK+2, respectively. As previously discussed, downstream processing may include analog-to-digital conversion (ADC), demodulation, error correction decoding, etc. As discussed in detail with respect to baseband filter200, the single-pole filters832and834may be operated independently of each other, as in the case where the single-pole filter832filters a signal outputted by the another receiver (Rx) chain, and the single-pole filter834filters a signal outputted by the mixer815. In such case, the set of switching devices SW3to SWK are in open states. Also, independently, one or more of the single-pole filters832and834may be made inoperable or disabled. For example, single-pole filter832may be made inoperable or disabled by configuring switching devices SW2to SWK+1 in open states, and single-pole filter834may be made inoperable or disabled by configuring switching devices SW1, SW3to SWK, and SWK+2 in open states. Also, as discussed in detail with respect to baseband filter200, the single-pole filters832and834may be selectively coupled. As discussed, one filter (832or834) may be selectively coupled to the other filter (834or832) to borrow active, passive, or both active and passive components of the other filter (834or832). 
For example, if a signal outputted by the another receiver (Rx) chain is to be filtered by selectively coupling the single-pole filters832and834, the switching device SW2, one or more of the switching devices SW3to SWK, and switching device SWK+1 are configured in closed states, and switching devices SW1and SWK+2 are configured in open states. In this example, the signal upstream and downstream sides of switching devices SW2and SWK+1 serve as the input and output of the programmable filter830, respectively. Similarly, if a signal outputted by the mixer815is to be filtered by the selectively coupled single-pole filters832and834, the switching device SW1, one or more of the switching devices SW3to SWK, and switching device SWK+2 are configured in closed states, and switching devices SW2and SWK+1 are configured in open states. In this example, the signal upstream and downstream sides of switching devices SW1and SWK+2 serve as the input and output of the programmable filter830, respectively. As discussed, the programmable single-pole/complex poles filter840may be configured same or similar to BBF400. Accordingly, the programmable single-pole/complex poles filter840includes a set of internal switching devices SWM+1 to SWN−1 (not explicitly shown) to configure the filter as a single pole or complex-poles. The programmable single-pole/complex-poles filter840includes an output coupled to the downstream processing via a switching device SWN. The controller850generates control signals for the set of switching devices SW1to SWN to provide the desired filter response at the output of the mixer815and/or the output of the another receiver (Rx) chain based on passband ripple and stopband rejection requirements. For example, if the programmable filter830is selected to filter the signal outputted by the mixer815, the controller850configures switching device SW1in the closed state, and at least the switching device SWM in the open state. Further, the controller850configures the set of switching devices SW2to SWK+2 to program the filter830as previously discussed (e.g., to operate the single-pole filters832and834independently or in a selectively coupled configuration). If the programmable single-pole/complex poles filter840is selected to filter the signal outputted by the mixer815, the controller850configures at least the switching device SW1in the open state, and the switching devices SWM and SWN in the closed state. Further, the controller850configures the set of switching devices SWM+1 to SWN−1 to program (or configure) the filter840as previously discussed (e.g., to operate it as a single-pole filter or a complex-poles filter). FIG.9illustrates a schematic diagram of another example programmable baseband (BBF) filter900in accordance with another aspect of the disclosure. The programmable BBF900includes a first BBF910and a second BBF950. Each of the BBFs910and950may be configured same or similar to BBF400. That is, each of the BBFs910and950may be configured as a single-pole filter or a complex-poles filter as discussed in detail with respect to BBF400. Further, similar to the filter component borrowing or filter coupling scheme of programmable BBF200, the programmable BBF900includes a set of switching devices such that one of the BBFs910or950may borrow one or more active and/or passive components from the other BBF950or910. Thus, the BBF910may be operated independent of BBF950to perform single-pole or complex-poles filtering of input signal Vi1+/Vi1−to generate a filtered output Vo1+/Vo1−.
In this configuration, the switching devices for selectively coupling the BBF910to BBF950may be configured open. Similarly, the BBF950may be operated independent of BBF910to perform single-pole or complex-poles filtering of input signal Vi2+/Vi2−to generate a filtered output Vo2+/Vo2−. In this configuration, the switching devices for selectively coupling the BBF950to BBF910may be opened. In some configurations, operation of one or both of the BBF910and950to filter a signal separate from the components or operation of the other BBF is described as a first mode of operation. As discussed in detail with respect to BBF200, one of the BBF910or950may be selectively coupled to the other BBF950or910to borrow one or more active and/or passive components of the other BBF950or910. For example, the BBF910may be operated to filter input signal Vi1+/Vi1−to generate a filtered output Vo1+/Vo1−by selectively coupling BBF910to one or more active and/or passive components of BBF950. In this configuration, one or more of the switching devices for selectively coupling the BBF910to BBF950may be configured in a closed state, and the remaining, if any, may be configured in an opened state. In some configurations, this operation may be described as a second mode of operation. Similarly, the BBF950may be operated to filter input signal Vi2+/Vi2−to generate a filtered output Vo2+/Vo2−by selectively coupling BBF950to one or more active and/or passive components of BBF910. In this configuration, one or more of the switching devices for selectively coupling the BBF950to BBF910may be configured in a closed state, and the remaining, if any, may be configured in an opened state. In some configurations, this operation may be described as a third mode of operation. In some embodiments, neither the BBF910nor the BBF950is controlled to perform complex-poles filtering (e.g., the switching devices SW14+/SW14−are configured in a closed state and the switching devices SW15+/SW15−are configured in an open state) when the filter900is operating in either the second or third modes. The following provides an overview of aspects of the present disclosure: Aspect 1: A filter, including: a first amplifier; first and second resistors coupled in series between a first input of the filter and a first input of the first amplifier; a first feedback capacitor coupled between a first output and the first input of the first amplifier; a capacitor coupled to a first node between the first and second resistors; a first feedback resistor coupled to the first output of the first amplifier; and a first set of one or more switching devices configured to selectively couple the first feedback resistor to the capacitor and to selectively connect the first feedback resistor to the first input of the first amplifier. The capacitor may be selectively or persistently coupled to the first node. Aspect 2: The filter of aspect 1, wherein the capacitor is coupled to ground. Aspect 3: The filter of aspect 1, wherein the first set of one or more switching devices comprises a first switch configured to selectively connect the first feedback resistor to the first input of the first amplifier and a second switch configured to selectively connect the first feedback resistor to the first node. Aspect 4: The filter of aspect 3, wherein the first set of one or more switching devices further comprises a third switch configured to selectively decouple the capacitor from the first node.
Aspect 5: The filter of aspect 3, wherein the capacitor is directly and persistently connected to the first node. Aspect 6: The filter of any one of aspects 1-5, further including: third and fourth resistors coupled in series between a second input of the filter and a second input of the first amplifier; a second feedback capacitor coupled between a second output and the second input of the first amplifier; a second feedback resistor coupled to the second output of the first amplifier; and a second set of one or more switching devices configured to selectively couple the second feedback resistor to the capacitor and to selectively connect the second feedback resistor to the second input of the first amplifier. The capacitor may be coupled persistently or selectively to a second node between the third and fourth resistors. Aspect 7: The filter of aspect 6, wherein the first set of switching devices comprises a first switching device configured to selectively decouple the capacitor from the first node, and wherein the second set of switching devices comprises a second switching device configured to selectively decouple the capacitor from the second node. Aspect 8: The filter of aspect 6 or 7, wherein: the first input of the first amplifier includes a positive input of the first amplifier; the first output includes a negative output of the first amplifier; the second input of the first amplifier includes a negative input of the first amplifier; and the second output includes a positive output of the first amplifier. Aspect 9: The filter of any one of aspects 1-8, wherein the first amplifier includes a programmable gain amplifier (PGA). Aspect 10: The filter of any one of aspects 1-9, further including a second amplifier coupled between the first input of the filter and the first and second resistors. Aspect 11: The filter of any one of aspects 1-10, wherein the filter comprises a baseband filter. Aspect 12: A method, including: operating a set of one or more switching devices to configure a filter with a first set of one or more poles; filtering a first input signal to generate a first output signal with the filter configured with the first set of one or more poles; operating the set of one or more switching devices to configure the filter with a second set of one or more poles; and filtering a second input signal to generate a second output signal with the filter configured with the second set of one or more poles. Aspect 13: The method of aspect 12, wherein the first set of one or more poles includes a single-pole. Aspect 14: The method of aspect 12 or 13, wherein the second set of one or more poles includes a plurality of poles. Aspect 15: The method of any one of aspects 12-14, wherein the second set of one or more poles includes complex poles. Aspect 16: The method of any one of aspects 12-15, wherein the filter configured with the second set of one or more poles is configured as a Rauch filter. Aspect 17: A filter, including: a first amplifier having differential inputs and differential outputs; first and second feedback resistors respectively coupled between the differential inputs and the differential outputs; and a plurality of switching devices configured to selectively couple the first feedback resistor to the second feedback resistor. Aspect 18: The filter of aspect 17, wherein the first feedback resistor is coupled to the second feedback resistor through a capacitor. Aspect 19: The filter of aspect 17 or 18, wherein the filter comprises a baseband filter. 
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. | 80,047 |
11863141 | DESCRIPTION OF EMBODIMENT(S) In the following, an exemplary embodiment of the invention is explained on the basis of the attached drawings. FIG.1shows a music data playing control system1that is an exemplary embodiment of audio equipment of the invention. The music data playing control system1includes a computer system2, which includes a processor and a memory. Music data such as MP3 is played by executing music player application software on the computer system2, and the played music data can be outputted, as a sound, from an amplification device and a speaker (not shown). The computer system2of the music data playing control system1can execute, through a user's operations, a variety of special operation processes and a sound effecting process on the music data being played. For allowing a user to perform the above processes, the music data playing control system1includes a DJ controller10, which includes a jog dial11that is a rotary operation unit. The jog dial11is placed on an upper surface of a casing3, which is an equipment body. The jog dial11is rotatably supported around a perpendicular axis. When a user touches the jog dial11to perform a rotary operation, the jog dial11gives a command for fast-forwarding or reversing music data to be played on the computer system2. The jog dial11includes a rotation detecting unit12that detects a user's rotary operation. The rotation detecting unit12may use, for example, a ring-shaped scale provided in the jog dial11and a sensor provided in the casing3so that the sensor detects a moving direction and circumferential speed in a circumferential direction of the scale. The jog dial11includes a touch detecting unit20that detects a user's touch operation. The touch detecting unit20includes a sheet switch21and a switch circuit board22provided in the jog dial11, and a power-supplying circuit board23provided in the casing3. The sheet switch21includes a plurality of switching elements also referred to as a membrane switch, which can be switched on and off by elastic deformation of sheet-shaped electrodes, being arranged circularly. The sheet switch21is placed on a surface of the jog dial11. An operation for switching on and off the sheet switch21is performed by a user pressing the sheet switch21downward. The sheet switch21is not switched on and off by a user touching the jog dial11lightly. The sheet switch21is switched on and off when a user applies predetermined pressing force to the jog dial11. Since the sheet switch21is formed in a ring shape, a user can obtain the same operational feeling regardless of a rotary angle position of the jog dial11. The switch circuit board22is connected to a power receiving coil221, and the power-supplying circuit board23is connected to a power supplying coil231. The power receiving coil221and the power supplying coil231are printed coils with patterns formed on ring-shaped printed boards. The power receiving coil221is attached to the jog dial11and the power supplying coil231is secured to the casing3. The power receiving coil221faces the power supplying coil231at a predetermined interval. When alternating current is supplied to the power supplying coil231, alternating voltage is generated in the power receiving coil221by electromagnetic induction. FIG.2shows a circuit configuration of the touch detecting unit20including the sheet switch21, the switch circuit board22, and the power-supplying circuit board23. 
The switch circuit board22includes a rectifying diode222, a load resistor223, and a circuit opening/closing FET element224(field-effect transistor). The power receiving coil221, the rectifying diode222, the load resistor223, and the FET element224are connected in series to form a loop-shaped receiving circuit220. The sheet switch21is connected to a gate of the FET element224, thus making it possible to open and close the receiving circuit220formed of the power receiving coil221, the diode222, the load resistor223and the FET element224. When alternating voltage is generated in the power receiving coil221after the sheet switch21switches on the FET element224, a current is generated in the receiving circuit220formed of the diode222, the load resistor223, and the FET element224. When alternating voltage is generated in the power receiving coil221after the sheet switch21switches off the FET element224, the receiving circuit220is shut off and no current flows therethrough. Switching the sheet switch21on and off causes a current to intermittently flow through the receiving circuit220, making it possible to switch load power to be supplied from the power receiving coil221to the switch circuit board22. An operation detecting section201of the invention is configured by the receiving circuit220including the sheet switch21and the power receiving coil221. The power-supplying circuit board23includes a direct-current (DC) power source232, a FET element for drive233, and an oscillation circuit234. The power supplying coil231, the DC power source232, the FET element for drive233, and the oscillation circuit234are connected in series to form a loop-shaped power-supplying circuit230. The oscillation circuit234is connected to a gate of the FET element for drive233. The FET element233increases or decreases, in response to an alternating-current signal (AC signal) from the oscillation circuit234, a current flowing through the power supplying circuit230formed of the DC power source232, the power supplying coil231, and the FET element233. An alternating-current magnetic field is generated in the power supplying coil231depending on the increase or decrease in the current, generating alternating voltage in the power receiving coil221disposed in the vicinity of the power supplying coil231. A power supplying section202of the invention is configured by the power supplying coil231, the DC power source232, the FET element for drive233, and the oscillation circuit234. The DC power source232and the oscillation circuit234may be replaced with the computer system2. The power-supplying circuit board23includes a load detecting resistor235connected to the power supplying circuit230and a monitoring circuit236that detects a current flowing through the load detecting resistor235. A current, which corresponds to the current flowing through the power supplying circuit230of the power supplying section202, flows through the load detecting resistor235. The monitoring circuit236can thus detect a current of the power supplying circuit230of the power supplying section202from a current flowing through the load detecting resistor235. The current in the power supplying section202depends on the increase or decrease in load power of the operation detecting section201, and the load power of the operation detecting section201changes in response to switching on and off in the sheet switch21. 
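As a minimal sketch of the monitoring-side decision just described, the monitoring circuit236only needs to compare the supply-side current (sensed across the load detecting resistor235) against a threshold separating the loaded and unloaded states of the receiving circuit220. The current and threshold values below are placeholders chosen for illustration, not measured values from the disclosure.

```python
# Placeholder currents (illustrative only): the supply current rises when the
# sheet switch closes the FET element224 and the receiving circuit220 starts
# drawing power through the power receiving coil221.
IDLE_CURRENT_A = 0.010    # receiving circuit open (no touch)
LOADED_CURRENT_A = 0.035  # receiving circuit closed (touch with enough pressure)
THRESHOLD_A = 0.020       # decision threshold between the two states

def touch_detected(supply_current_a: float) -> bool:
    """Infer the sheet-switch state from the monitored supply-side current."""
    return supply_current_a > THRESHOLD_A

for current in (IDLE_CURRENT_A, LOADED_CURRENT_A):
    print(f"{current * 1000:.0f} mA -> touch detected: {touch_detected(current)}")
```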
It is thus possible to detect the switching on and off in the sheet switch21(user's touch operation) by detecting the change in a current of the power supplying section202using the monitoring circuit236. That is, the detection of the touch operation by the operation detecting section201can be transmitted as a signal by using a non-contact power supply route from the power supplying section202to the operation detecting section201. A monitoring section203of the invention is configured by the load detecting resistor235and the monitoring circuit236. The monitoring circuit236may be replaced with the computer system2. In the music data playing control system1according to the exemplary embodiment, processes based on a user's touch operation are executed in accordance with the following procedure. Initial setting is executed on respective parts or components including the computer system2and the DJ controller10(process S1), when the music data playing control system1is turned on. When music data is played, the DJ controller10receives a user's operation (rotary operation or touch operation performed on the jog dial11) (process S2). When the user's operation is the touch operation performed on the jog dial11, the user's operation is processed by the touch detecting unit20(processes S3to S9). When the user's touch operation is performed with pressing force of equal to or greater than predetermined pressing force, the sheet switch21of the jog dial11is switched on (S3: YES). With the sheet switch21switched on, a current flows through the receiving circuit220of the operation detecting section201(process S4). This increases a load on the power supplying section202(process S5), and the monitoring section203detects the increase as the touch operation (process S6). When the touch operation is detected, the DJ controller10gives a command to the computer system2together with the rotary operation performed on the jog dial11, changing a play process running on the computer system2. When a user touches the jog dial11with pressing force of less than the predetermined pressing force, or when a user does not touch the jog dial11, the sheet switch21of the jog dial11is switched off (S3: NO). With the sheet switch21switched off, no current flows through the receiving circuit220of the operation detecting section201(process S7). This decreases a load on the power supplying section202(process S8), and the monitoring section203detects the decrease as the absence of the touch operation (process S9). When the absence of the touch operation is detected, the DJ controller10returns the play process running on the computer system2to a normal state. The exemplary embodiment offers the following advantages. In the music data playing control system1according to the exemplary embodiment, the touch detecting unit20of the DJ controller10includes the operation detecting section201in which the sheet switch21is provided in the jog dial11. Thus, in the exemplary embodiment, a DJ, which is a user, can perform the conventional touch operation suitable for the DJ performance, in which the DJ puts a fingertip(s) or the like on the surface of the jog dial11and presses the surface hard with the fingertip at a desired timing, similarly to the existing sheet switch system. In the touch detecting unit20of the exemplary embodiment, the jog dial11is provided with the operation detecting section201including the sheet switch21. 
A user's touch operation can thus be received directly by the sheet switch21(or with a minimum interposed object, such as a cover plate). This eliminates the need for the structure in which the sheet switch is installed close to a main body side of the DJ equipment and many rollers are provided between the sheet switch and the jog dial to roll over the sheet switch, like the conventional sheet switch type. The arrangement according to the exemplary embodiment thus has a simple structure and eliminates mechanical operating noise. Further, the arrangement according to the exemplary embodiment eliminates the rolling movement of the rollers, minimizing rotation resistance of the jog dial11. A user can thus perform a rotary operation smoothly. In the touch detecting unit20of the exemplary embodiment, the operation detecting section201including the sheet switch21is provided in the jog dial11. The operation detecting section201thus rotates with respect to the casing3. However, the power supplying section202installed in the casing3supplies power to the operation detecting section201in a non-contact manner via the power supplying coil231and the power receiving coil221. Further, it is possible to detect switching on and off in the sheet switch21(user's touch operation) through detection of the change in a current of the power supplying section202using the monitoring section203. That is, the detection of the touch operation by the operation detecting section201can be transmitted as a signal by using the non-contact power supply route from the power supplying section202to the operation detecting section201. The invention is thus capable of providing the music data playing control system1that enables the power supply and signal connection to the jog dial11in a non-contact manner. The invention is not limited to the above exemplary embodiment but includes modifications as long as such modifications are compatible with the invention. In the exemplary embodiment, the sheet switch21is exemplified, as the switching element, in the operation detecting section201of the invention. However, any other switching element that can be switched by a user's operation with pressing force of equal to or greater than a predetermined pressing force may be used. For example, it is possible to use a switching element formed by circularly arranging mechanical switches or a combination of a pressure sensitive sensor and a switch circuit. Further, the switching element is not limited to a switching element that is switched by pressing force of equal to or greater than a predetermined pressing force, like the sheet switch21. An electrostatic capacitance sensor that is switched by contact or any other contact sensor may be used. Such a sensor can detect the switching on and off with higher speed than the detection based on a user's pressing operation. However, in order to provide a user with the conventional touch operation suitable for the DJ performance in which a DJ puts a fingertip(s) on the surface of the jog dial11and presses the surface hard with the fingertip at a desired timing, the contact sensor preferably detects a user's pressing operation. In the operation detecting section201according to the exemplary embodiment, the receiving circuit220receiving power via the power receiving coil221is formed in the switch circuit board22. An electric load on the receiving circuit220is switched by the sheet switch21. The configuration of the receiving circuit220may be changed as appropriate. 
It is only required that the receiving circuit220is provided for the jog dial11(rotary operation unit) and the electric load changes in response to a user's operation. In the power supplying section202according to the exemplary embodiment, the power supplying circuit230supplying power from the power supplying coil231to the power receiving coil221in a non-contact manner is formed in the power-supplying circuit board23. The configuration of the power supplying circuit230may be changed as appropriate. The DC power source232and the oscillation circuit234may be replaced with the computer system2. It is only required that the power supplying circuit230can supply power to the operation detecting section201in a non-contact manner. The monitoring section203according to the exemplary embodiment includes the load detecting resistor235connected to the power supplying section202and the monitoring circuit236for detecting a current thereof. The monitoring circuit236may be replaced with the computer system2. It is only required that the monitoring circuit236is configured to monitor a load of power supplied by the power supplying section202and to detect, from a change in the load, a user's pressing operation (switching on and off in the sheet switch21). In the exemplary embodiment, the load detecting resistor235and the monitoring circuit236provided in the monitoring section203detect a current flowing through the power supplying section202. The monitoring section203, however, may detect a voltage, power, or any other electrical characteristics. It is only required that the monitoring section203is configured to be capable of monitoring a load of power supplied by the power supplying section202. In the exemplary embodiment, a rotary operation unit of the invention is the jog dial11that designates a play speed and a play direction of music data by a user's rotary operation. However, a rotary operation unit of the invention may be an operation unit for any other function of the audio equipment. For example, a rotary operation unit of the invention may be an operation unit that sets a setting value in response to a user's rotary operation and registers the setting value in response to a user's touch operation. Specifically, a rotary operation unit of the invention may be a jog dial for setting output volume, or a jog dial for setting a parameter of a music synthesizer. In the exemplary embodiment, the music data playing control system1is exemplified as the audio equipment according to the invention. The audio equipment according to the exemplary embodiment of the invention, however, is not limited to a device for controlling an external sound player, but may be a sound player itself that plays music data as sound. | 16,569 |
11863142 | The figures are not to scale. Wherever possible, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. Descriptors “first,” “second,” “third,” etc. are used herein when identifying multiple elements or components which may be referred to separately. Unless otherwise specified or understood based on their context of use, such descriptors are not intended to impute any meaning of priority, physical order or arrangement in a list, or ordering in time but are merely used as labels for referring to multiple elements or components separately for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for ease of referencing multiple elements or components. DETAILED DESCRIPTION When a panelist signs up to have their exposure to media monitored by an audience measurement entity, the audience measurement entity sends a technician to the home of the panelist to install a meter (e.g., a media monitor) capable of gathering media exposure data from a media output device(s) (e.g., a television, a radio, a computer, etc.). Generally, the meter includes or is otherwise connected to a microphone and/or a magnetic-coupling device to gather ambient audio. In this manner, when the media output device is “on,” the microphone may receive an acoustic signal transmitted by the media output device. As further described below, the meter may extract audio watermarks from the acoustic signal to identify the media. Additionally or alternatively, the meter may generate signatures and/or fingerprints based on the media. The meter transmits data related to the watermarks and/or signatures to the audience measurement entity to monitor media exposure. Examples disclosed herein relate to efficiently selecting a desirable gain to amplify a received signal at a meter prior to processing the audio. Audio watermarking is a technique used to identify media such as television broadcasts, radio broadcasts, advertisements (television and/or radio), downloaded media, streaming media, prepackaged media, etc. Existing audio watermarking techniques identify media by embedding one or more audio codes (e.g., one or more watermarks), such as media identifying information (e.g., herein information and/or data) and/or an identifier that may be mapped to media identifying information, into an audio and/or video component. In some examples, the audio or video component is selected to have a signal characteristic sufficient to mask the watermark. As used herein, the terms “code” or “watermark” are used interchangeably and are defined to mean any identification information (e.g., an identifier) that may be inserted or embedded in the audio or video of media (e.g., a program or advertisement) for the purpose of identifying the media or for another purpose such as tuning (e.g., a packet identifying header). As used herein “media” refers to audio and/or visual (still or moving) content and/or advertisements. To identify watermarked media, the watermark(s) are extracted and used to access a table of reference watermarks that are mapped to media identifying information.
Unlike media monitoring techniques based on codes and/or watermarks included with and/or embedded in the monitored media, signature or fingerprint-based media monitoring techniques generally use one or more inherent characteristics of the monitored media during a monitoring time interval to generate a substantially unique proxy for the media. Such a proxy is referred to as a signature or fingerprint, and can take any form (e.g., a series of digital values, a waveform, etc.) representative of any aspect(s) of the media signal(s) (e.g., the audio and/or video signals forming the media presentation being monitored). A signature may be a series of signatures collected in series over a time interval. A good signature is repeatable when processing the same media presentation, but is unique relative to other (e.g., different) presentations of other (e.g., different) media. Accordingly, the term “signature” and “fingerprint” are used interchangeably herein and are defined herein to mean a proxy for identifying media that is generated from one or more inherent characteristics of the media. Signature-based media monitoring generally involves determining (e.g., generating and/or collecting) signature(s) representative of a media signal (e.g., an audio signal and/or a video signal) output by a monitored media device and comparing the monitored signature(s) to one or more references signatures corresponding to known (e.g., reference) media sources. Various comparison criteria, such as a cross-correlation value, a Hamming distance, etc., can be evaluated to determine whether a monitored signature matches a particular reference signature. When a match between the monitored signature and one of the reference signatures is found, the monitored media can be identified as corresponding to the particular reference media represented by the reference signature that matched the monitored signature. Because attributes, such as an identifier of the media, a presentation time, a broadcast channel, etc., are collected for the reference signature, these attributes may then be associated with the monitored media whose monitored signature matched the reference signature. Example systems for identifying media based on codes and/or signatures are long known and were first disclosed in Thomas, U.S. Pat. No. 5,481,294, which is hereby incorporated by reference in its entirety. When a meter senses audio via a sensor (e.g., microphone), the meter uses an amplifier to amplify the sensed audio signal prior to processing the audio signal to generate signatures and/or extract watermarks. The amount of gain (e.g., amplification) used by the amplifier corresponds to the accuracy of signature generation and/or watermark extraction. For example, when the volume of audio output by an media presentation device is low, the gain should be a high gain to successfully generate a signature and/or extract a watermark. However, when the volume of the audio output by the media presentation device is high, applying a high gain will result in undesired clipping of the audio, leading to inaccurate signatures and/or watermark extraction failures. Automated gain control (AGC) protocols can be implemented to automatically determine a satisfactory (e.g., optimal) gain level to utilize in the amplifier of a meter that allows the meter to successfully generate signatures and/or extract watermarks without clipping the audio signal. 
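As a minimal sketch of what "clipping" means for the digitized, amplified signal, a block of samples may be flagged as clipped when more than a small fraction of them are pinned at (or very near) the converter's full-scale value. The full-scale value, margin, and tolerated fraction below are placeholder choices, not parameters from the disclosure.

```python
FULL_SCALE = 32767            # 16-bit converter full scale (placeholder choice)
MARGIN = 16                   # samples this close to full scale count as pinned
MAX_CLIPPED_FRACTION = 0.001  # tolerated fraction of pinned samples per block

def is_clipping(samples) -> bool:
    """Flag a block of amplified samples whose peaks are pinned at full scale."""
    clipped = sum(1 for s in samples if abs(s) >= FULL_SCALE - MARGIN)
    return clipped > MAX_CLIPPED_FRACTION * len(samples)

quiet_block = [120, -240, 310, -95] * 256
loud_block = [32767, -32768, 32760, -32755] * 256
print(is_clipping(quiet_block), is_clipping(loud_block))  # False True
```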
An AGC protocol adjusts the gain within a range of gain levels to attempt to select the highest gain that does not clip the signal. For example, an AGC protocol may ramp down from the highest gain of the range to a gain at which clipping ceases. Once the meter determines that clipping has ceased, the meter utilizes the gain (e.g., locks the gain) at which clipping ceases for processing audio until the AGC protocol is rerun (e.g., after a duration of time, after a triggering event, etc.). Traditional AGC protocols test the same range of gains each time the protocol is triggered. For example, if the AGC protocol starts at the highest gain (100 decibels (dB)) of an amplifier and ramps down to the lowest gain (e.g., 5 dB) of an amplifier, every time the AGC protocol is run the amplifier will start at the highest gain or the lowest gain and ramp down or up until the highest (e.g., or a sufficiently high) gain level is obtained that does not cause clipping. For example, an AGC protocol may start at 100 dB (e.g., the maximum gain, where clipping typically occurs) and quickly ramp the gain down until clipping ceases, and once clipping ceases, the gain level is slowly ramped up to identify the highest gain that does not result in clipping. However, audio that is played loudly will require less gain. Accordingly, a traditional AGC protocol wastes time and resources by starting at the maximum gain of an amplifier when the audio is loud or starting at a minimum gain of an amplifier when the audio is low. Examples disclosed herein perform an AGC parameter protocol (e.g., prior to a AGC protocol) to reduce the range of gain levels to be used in the AGC protocol. The example AGC parameter protocols disclosed herein utilize multiple tuners in parallel to identify a smaller range of gain levels within which the border between clipping and no clipping occurs. For example, if the full range of gains for a conventional AGC protocol is 100 dB to 0 dB, one example disclosed AGC parameter protocol may control four tuners to split the full range of gains into 4 subranges (e.g., 100-76 dB, 75-51 dB, 50-26 dB, 25-0 dB) and determine the highest range where clipping occurs. In such an example, if the highest range where clipping occurs is 75-51 dB, then somewhere within that range exists the border between clipping and no clipping (e.g., the optimal gain level). Accordingly, the AGC protocol can be performed within the 75-51 dB range, reducing the time and amount of resources that would be required to perform a conventional AGC protocol (e.g., from 100-0 dB). FIG.1illustrates an example environment100for selecting AGC parameters for an AGC protocol based on historic data in accordance with teachings of this disclosure. The example environment100ofFIG.1includes an example media output device102, example speakers104a,104b, an example audio signal106, an example microphone110, and an example meter112. The example meter112includes an example amplifier114, an example audio processor116, and an example AGC parameter determiner118. The example media output device102ofFIG.1is a device that outputs media. Although the example media output device102ofFIG.1is illustrated as a television, the example media output device may be a radio, an MP3 player, a video game counsel, a stereo system, a mobile device, a computing device, a tablet, a laptop, a projector, a DVD player, a set-top-box, an over-the-top device, and/or any device capable of outputting media. 
The example media output device may include speakers104aand/or may be coupled, or otherwise connected, to portable speakers104bvia a wired or wireless connection. The example speakers104a,104boutput the audio portion of the media output by the example media output device. The example microphone110ofFIG.1is an audio sensor that receives the example audio signal106(e.g., as part of a sensing of ambient sound). The microphone110converts the example audio signal106into an electrical signal representative of the audio signal. The example microphone110transmits the electrical signal to the example amplifier114of the example meter112. The example amplifier114amplifies the electrical signal so that the meter112can generate signatures and/or extract watermarks based on the amplified electrical signal, as further described below.

The example meter112ofFIG.1is a device installed in a location of a panelist that monitors exposure to media from the example media output device102. Panelists are users included in panels maintained by a ratings entity (e.g., an audience measurement company) that owns and/or operates the ratings entity subsystem. The example meter112may extract watermarks and/or generate signatures from media output by the example media output device102to identify the media. The example meter112is coupled or otherwise connected to the example microphone110. As described above, the example microphone110is a device that receives ambient audio. In some examples, the microphone110may be a magnetic-coupling device (e.g., an induction coupling device, a loop coupling receiver, a telecoil receiver, etc.), and/or any device capable of receiving an audio signal. In such examples, the magnetic-coupling device may receive an audio signal (e.g., the example audio signal106) wirelessly rather than acoustically. The example microphone110and the example meter112may be connected via a wired or wireless connection. In some examples, the example microphone110and the example meter112may be one device. For example, the example microphone110may be embedded in the example meter112. The example meter112includes the example amplifier114, the example AGC parameter determiner118and the example audio processor116.

The example amplifier114ofFIG.1obtains the electrical signal representative of the example audio signal106from the example microphone110. The example amplifier114amplifies the power and/or amplitude of the electrical signal using a gain level. The example audio processor116can adjust the gain level to any value between a maximum gain of the amplifier114and a minimum gain of the amplifier114. The maximum gain and minimum gain of the amplifier114depend on the hardware characteristics of the amplifier114.

The example audio processor116ofFIG.1processes audio based on the amplified electrical signal. As described above, if the electrical signal is amplified too much, clipping can occur, which reduces the effectiveness of signature generation and/or watermark extraction. However, if the electrical signal is amplified too little, the electrical signal is not powerful enough for the audio processor116to generate a signature and/or extract a watermark. Accordingly, the example audio processor116performs an AGC protocol to adjust the gain of the amplifier114to attempt to select the highest, or a sufficiently high, gain for the amplifier114that does not result in clipping. The AGC protocol may include starting the amplifier114at a starting gain level where clipping is occurring and decreasing the gain until the clipping ceases.
In some examples, the audio processor116may perform other types of AGC protocols. For example, the audio processor116may start at a gain level where clipping does not occur, and increase the gain until clipping occurs to determine the highest, or a high, gain level before clipping begins to occur. The example audio processor116adjusts the gains during the AGC protocol based on the AGC parameters generated by the example AGC parameter determiner118. The AGC parameters may correspond to the starting gain level to use during the AGC protocol and/or the range of gain levels to use during the ramping of the AGC protocol.

The example AGC parameter determiner118ofFIG.1runs an AGC parameter protocol to determine AGC parameters for an AGC protocol. The AGC parameters may be, for example, an initial gain level and/or a range of gain levels to use during the AGC protocol. The example AGC parameter determiner118has two or more tuners that can be tuned to two or more different ranges of gain levels. When an AGC protocol is to be performed, the example AGC parameter determiner118runs the AGC parameter protocol by processing audio using the two or more tuners to determine if clipping has occurred at the two or more different ranges of gain levels. In some examples, the example AGC parameter determiner118selects the lowest range of gain levels where clipping occurs and sends the range of gain levels as AGC parameters for the audio processor116to use in the AGC protocol. In this manner, the audio processor116will perform the AGC protocol on a smaller range of gain levels than the full range of gains that the amplifier114is capable of operating at, thereby increasing the speed of the AGC protocol and conserving resources needed to perform the AGC protocol. In some examples, the AGC parameter determiner118determines the highest gain level (e.g., or lowest, depending on the AGC protocol) from the selected range of gain levels and sends the highest (or lowest) gain level as an AGC parameter for the audio processor116to use in the AGC protocol as an initial gain level. Additionally or alternatively, the example AGC parameter determiner118may determine parameters for different types of protocols, such as sound pressure algorithms.

In some examples, the AGC parameter determiner118ofFIG.1performs an iterative process where the selected range of gain levels is used to tune the tuners of the AGC parameter determiner118for a second iteration of the AGC protocol to further narrow the range of gain levels. For example, the AGC parameter determiner118may perform a first iteration of the AGC parameter protocol based on convergence of a geometric series by having a first tuner amplify a sample of the audio signal106using gains from 100 decibels (dB) to 50 dB and a second tuner amplify the sample of the audio signal106using gains from 49 dB to 0 dB. If the example AGC parameter determiner118determines that clipping occurs in the 100-50 dB range but does not occur in the 49-0 dB range, the example AGC parameter determiner118may obtain a subsequent sample of the audio signal106and tune the first tuner to amplify the subsequent sample using gains from 100-75 dB and tune the second tuner to amplify the subsequent sample using gains from 74-50 dB and determine if clipping occurs within either range. The example AGC parameter determiner118may continue this iterative process based on a threshold number of iterations defined by a user and/or manufacturer.
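The iterative process just described, based on convergence of a geometric series, amounts to a binary search over the gain range. A minimal sketch follows; next_audio_sample and detect_clipping are hypothetical placeholders for the sampled audio signal106and the clip detection performed by the meter, and the decision is reduced to the single test (the top gain of the lower half) that drives the narrowing.

```python
def narrow_gain_range(next_audio_sample, detect_clipping,
                      high_db=100.0, low_db=0.0, iterations=3):
    """Halve the surviving gain range once per iteration and keep whichever
    half still contains the border between clipping and no clipping."""
    for _ in range(iterations):
        mid_db = (high_db + low_db) / 2.0
        sample = next_audio_sample()             # subsequent sample each iteration
        if detect_clipping(sample, mid_db):      # lower half clips at its top gain
            high_db = mid_db                     # border is below: discard upper half
        else:
            low_db = mid_db                      # border is above: discard lower half
    return high_db, low_db                       # range handed to the AGC protocol

# For a 100-0 dB amplifier and a border near 70 dB, the surviving range shrinks
# from 100-0 dB to 100-50 dB, then 75-50 dB, then 75-62.5 dB after three passes.
```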
A higher number of iterations corresponds to a smaller range of gain levels for the AGC protocol. However, a higher number of iterations requires additional time and resources. The example AGC parameter determiner118is further described below in conjunction withFIG.2.

FIG.2is a block diagram of an example implementation of the example AGC parameter determiner118ofFIG.1, disclosed herein, to determine AGC parameter(s) for an AGC protocol. While the example AGC parameter determiner118is described in conjunction with the example meter112and media output device102ofFIG.1, the example AGC parameter determiner118may be utilized in conjunction with any type of meter and/or media output device. The example AGC parameter determiner118ofFIG.2includes an example sensor interface200, an example component interface202, an example AGC master controller204, an example audio controller206, example tuners208, and one or more example clip detection circuit(s)210.

The example sensor interface200ofFIG.2receives the electrical signal representative of the example audio signal106from the example microphone110. The example sensor interface200sends the electrical signals to the example tuners208. The example component interface202interfaces with the example audio processor116and/or any other component of the example meter112. For example, the component interface202receives instructions from the example audio processor116to generate AGC parameter(s) for an upcoming AGC protocol. After the AGC parameter(s) are selected (e.g., determined), the example component interface202transmits the selected AGC parameter(s) to the example audio processor116.

The example AGC master controller204ofFIG.2obtains instructions and/or a trigger to select AGC parameters for an upcoming AGC protocol. For example, the AGC master controller204may obtain (e.g., from the audio processor116via the example component interface202) instructions to identify an initial gain level or a range of gain levels to perform the AGC protocol. Additionally or alternatively, the AGC master controller204may determine that AGC parameters are needed based on a duration of time (e.g., the AGC protocol is performed after a threshold amount of time). The example AGC master controller204instructs the example audio controller206to adjust the example tuners208to tune to a particular gain level representative of a gain range. For example, if there are two tuners, the example AGC master controller204may split the full range of gains (e.g., 100-0 dB) into two halves. In such an example, the example AGC master controller204may instruct the example audio controller206to tune the first tuner to 100 dB (e.g., the maximum gain of the 100-50 dB half of the full range) and the second tuner to 49 dB (e.g., the maximum gain of the 49-0 dB half). In this manner, if the example AGC master controller204determines that clipping occurred at 100 dB but did not occur at 49 dB, the AGC master controller204determines that the border between clipping and no clipping occurs between 100-50 dB. The example AGC master controller204receives the results from an iteration of the AGC parameter protocol from the example clip detection circuit(s)210. In some examples, the AGC master controller204selects the AGC parameter(s) (e.g., an initial gain level or a range of gain levels to utilize in an AGC protocol) based on the results of the clip detection circuit(s)210.
In some examples, the AGC master controller204ofFIG.2may instruct the audio controller206to control the example tuners208to different gain levels representative of gain ranges for a subsequent iteration based on the results of the iteration. For example, if there are two tuners208, the example AGC master controller204may instruct the audio controller206to tune the two example tuners208to different gain levels representative of halves of the total gain levels (e.g., 100 dB for the 100-50 dB range and 49 dB for the 49-0 dB range) that may be utilized by the example amplifier114ofFIG.1(e.g., when the AGC parameter protocol is based on convergence of a geometric series). In such an example, in response to determining that clipping occurred at the 100 dB gain level representative of the first half (e.g., 100-50 dB) and did not occur at the 49 dB level representative of the second half (e.g., 49-0 dB), the example AGC master controller204may instruct the audio controller206to tune the two example tuners208to different gains representative of halves of the 100-50 dB range from the first iteration and perform a second iteration based on the new smaller halves. In other examples (e.g., when the AGC parameter protocol is not based on convergence of a geometric series), if there are two tuners208, the example AGC master controller204may instruct the audio controller206to tune the two example tuners208to different gain levels representative of ranges of the total gain levels (e.g., 100 dB for a 100-80 dB range and 79 dB for a 79-60 dB range) that may be utilized by the example amplifier114ofFIG.1. In such an example, if the example AGC master controller204determines that both gain levels representative of the respective ranges result in clipping, the example AGC master controller204may instruct the audio controller206to tune the two tuners208to two gain levels representative of different ranges (e.g., 59 dB for a 59-40 dB range and 39 dB for a 39-20 dB range) of the total gain levels that are different than used in the first iteration. The example AGC master controller204continues to perform subsequent iterations until the lowest gain range that includes clipping is determined (e.g., because the highest gain level without clipping occurs in the lowest gain range that includes clipping).

The example audio controller206ofFIG.2receives instructions from the example AGC master controller204to tune the example tuners208to particular gain ranges. The example audio controller206tunes the example tuners208based on the instructions. In some examples, the audio controller206may be combined with the example AGC master controller204. The example tuners208tune to the gain levels based on instructions from the audio controller206and amplify an obtained electrical signal corresponding to the audio signal106based on the gain levels. In some examples, the example audio controller206is instructed to control one of the tuners to amplify the electrical signal at the maximum gain level corresponding to the tuner's gain range (e.g., 100 dB for a 100-75 dB range). In other examples, if the example audio controller206is instructed to control one of the example tuners208to amplify an electrical signal from 100-50 dB, the audio controller206may start at the highest gain level of 100 dB and ramp the gain level down for a duration of time until the gain level is at the 50 dB gain level. The example tuners208may be any number of tuners (e.g., two or more).
An example clip detection circuit210ofFIG.2is a circuit that processes electrical signals amplified by the example tuners208at different gain ranges in a duration of time and determines if clipping occurred within the duration of time (e.g., while the tuners208are tuned to their respective gain levels). When clipping occurs, an electrical signal will be flat for more than a threshold duration of time at some maximum level. Accordingly, the example clip detection circuit(s)210processes the amplified electrical signals to identify clipping based on a flat signal at a maximum level. If clipping occurs, the example clip detection circuit(s)210identifies the clipping, in conjunction with the corresponding range of gain levels, to the example AGC master controller204. If clipping does not occur, the example clip detection circuit(s)210identifies that the signal was not clipped in conjunction with the range of gain levels. The example clip detection circuit(s)210may be multiple clip detection circuits (e.g., a clip detection circuit for each tuner) or may be a single circuit that determines if clipping occurs for the example tuners208.

While an example manner of implementing the example meter112is illustrated inFIG.1and an example manner of implementing the example AGC parameter determiner118ofFIG.1is illustrated inFIG.2, one or more of the elements, processes and/or devices illustrated inFIG.2may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example amplifier114, the example audio processor116, the example AGC parameter determiner118, and/or, more generally, the example meter112ofFIG.1and the example sensor interface200, the example component interface202, the example AGC master controller204, the example audio controller206, the example tuners208, the example clip detection circuit(s)210, and/or, more generally, the example AGC parameter determiner118ofFIG.2may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the example amplifier114, the example audio processor116, the example AGC parameter determiner118, and/or, more generally, the example meter112ofFIG.1and the example sensor interface200, the example component interface202, the example AGC master controller204, the example audio controller206, the example tuners208, the example clip detection circuit(s)210, and/or, more generally, the example AGC parameter determiner118ofFIG.2could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)).
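Returning to the clip test performed by the example clip detection circuit(s)210, the "flat at a maximum level for more than a threshold duration" criterion can be expressed compactly. The sample-count threshold, tolerance and 16-bit full-scale value below are illustrative assumptions, not values taken from the disclosure.

```python
def is_clipped(samples, full_scale=32767, flat_threshold_samples=32, tolerance=0):
    """Return True if the amplified signal sits at (or beyond) the maximum
    level for more than flat_threshold_samples consecutive samples."""
    consecutive = 0
    for sample in samples:
        if abs(sample) >= full_scale - tolerance:
            consecutive += 1
            if consecutive > flat_threshold_samples:
                return True          # flat at the maximum level for too long
        else:
            consecutive = 0          # signal moved away from full scale
    return False
```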
When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example amplifier114, the example audio processor, the example AGC parameter determiner118, and/or, more generally, the example meter112ofFIG.1and the example sensor interface200, the example component interface202, the example AGC master controller204, the example audio controller206, the example tuners208, the example clip detection circuit(s)210, and/or, more generally, the example AGC parameter determiner118ofFIG.2is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. including the software and/or firmware. Further still, the example meter112ofFIG.1and/or the example AGC parameter determiner118ofFIG.2may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated inFIGS.1and/or2, and/or may include more than one of any or all of the illustrated elements, processes and devices. As used herein, the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events. Flowcharts representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the example AGC parameter determiner118ofFIG.2are shown inFIGS.3-4. The machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by a computer processor such as the processor512shown in the example processor platform500discussed below in connection withFIG.5. The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor512, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor512and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to the flowcharts illustrated inFIGS.3-4, many other methods of implementing the example AGC parameter determiner118may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, an FPGA, an ASIC, a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware. The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a packaged format, etc. Machine readable instructions as described herein may be stored as data (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. 
For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, etc. in order to make them directly readable and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts when decrypted, decompressed, and combined from a set of executable instructions that implement a program such as that described herein. In another example, the machine readable instructions may be stored in a state in which they may be read by a computer, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, the disclosed machine readable instructions and/or corresponding program(s) are intended to encompass such machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit. As mentioned above, the example processes ofFIGS.3-4may be implemented using executable instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. “Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. 
As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. FIG.3is an example flowchart300representative of example machine readable instructions that may be executed by the example AGC parameter determiner118ofFIGS.1and2to select AGC parameters for an AGC protocol based on convergence of a geometric series. Although the instructions ofFIG.3are described in conjunction with the example meter112, microphone110, media output device102, and AGC parameter determiner118ofFIGS.1and2, the example instructions may be utilized by any type of meter, microphone, media output device, and/or AGC parameter determiner. Although the example flowchart300ofFIG.3is described in conjunction with AGC gain levels, the flowchart300may be described in conjunction with other attributes, such as sound pressure algorithms. The example flowchart300is described in conjunction with the example AGC parameter determiner118having two tuners208. However, the example flowchart300may be described in conjunction with AGC parameter determiners with any number of tuners. At block302, the example sensor interface200obtains an audio signal (e.g., an electrical signal representative of the example audio signal106ofFIG.1) from the example microphone110. At block304, the example audio controller206tunes a first one of the example tuners208to a first gain representative an upper range of gain levels (e.g., 100 dB from the upper range of 100-50 dB). At block306(e.g., at the same time), the example audio controller206tunes a second one of the example tuners208to a second gain representative of a lower range of gain levels different than the first range of gain levels (e.g., 49 dB for the lower range of 49-0 dB). Alternatively, the audio controller206may tune the first tuner to the highest gain of the first gain range and gradually decrease the gain level to the lowest gain of the first gain range to allow the example clip detection circuit(s)210to determine whether clipping occurs within the gain range. At blocks308,310, the example clip detection circuit(s)210perform(s) clip detection on the output of the respective tuners208(e.g., tuned to the different half of gain levels) to determine if clipping occurs within the respective gain ranges. 
The example clip detection circuit(s)210determine(s) if the respective signals have been clipped based on whether the signal is flat at a maximum level for more than a threshold duration of time, as described above in conjunction withFIG.2. At block312, the example AGC master controller204determines if clipping has occurred in the lower gain range (e.g., based on the results of the example clip detection circuit(s)210for the maximum gain of the lower gain range). If the example AGC master controller204determines that clipping has occurred on the lower gain range (block312: YES), the example AGC master controller204discards the upper gain range (block314) and control returns to block302to perform a subsequent iteration by splitting the lower gain range into an upper gain range and a lower gain range. If the example AGC master controller204determines that clipping has not occurred on the lower gain range (block312: NO), the example AGC master controller204determines if a threshold number of iterations has been performed (block316). If the example AGC master controller204determines that the threshold number of iterations has not been performed (block316: NO), the example AGC master controller204discards the lower gain range (block318) and control returns to block302to perform a subsequent iteration by splitting the upper gain range into an upper gain range and a lower gain range. If the example AGC master controller204determines that the threshold number of iterations has been performed (block316: YES), the example AGC master controller204selects the gain range corresponding to the lower half of the gain range for the AGC parameters (block320). At block322, the example component interface202transmits the selected range to the example audio processor116. In this manner, the example audio processor116can perform the AGC protocol based on the selected gain range. In some examples, the AGC master controller204determines the maximum gain or the minimum gain (e.g., depending on how the AGC protocol is performed—from a low gain to a high gain or from a high gain to a low gain) of the selected gain range and send the maximum gain or minimum gain as the AGC parameter to the example audio processor116. In this manner, the audio processor116can perform the AGC protocol using the maximum gain or minimum gain as an initial gain level for the AGC protocol. FIG.4is an example flowchart400representative of example machine readable instructions that may be executed by the example AGC parameter determiner118ofFIGS.1and2to select AGC parameters for an AGC protocol not based on convergence of a geometric series. Although the instructions ofFIG.4are described in conjunction with the example meter112, microphone110, media output device102, and AGC parameter determiner118ofFIGS.1and2, the example instructions may be utilized by any type of meter, microphone, media output device, and/or AGC parameter determiner. Although the example flowchart400ofFIG.4is described in conjunction with AGC gain levels, the flowchart400may be described in conjunction with other attributes, such as sound pressure algorithms. The example flowchart400is described in conjunction with the example AGC parameter determiner118having two tuners208. However, the example flowchart400may be described in conjunction with AGC parameter determiners with any number of tuners. 
At block402, the example sensor interface200obtains an audio signal (e.g., an electrical signal representative of the example audio signal106ofFIG.1) from the example microphone110. At block404, the example audio controller206tunes a first one of the example tuners208to a first gain level representative of a first gain range (e.g., 100 dB for the 100-80 dB range) of the total gain levels (e.g., 100-0 dB). At block406(e.g., at the same time), the example audio controller206tunes a second one of the example tuners208to a second gain level representative of a second range of gain levels (e.g., 79 dB for the 79-60 dB range) different than the first range of gain levels. Alternatively, the example audio controller206may tune the first tuner to the highest gain of the first gain range and gradually decrease the gain level to the lowest gain of the first gain range to allow the example clip detection circuit(s)210to determine whether clipping occurs within the gain range. At blocks408,410, the example clip detection circuit(s)210perform(s) clip detection on the output of the respective tuners208(e.g., tuned to the different ranges of gain levels) to determine if clipping occurs within the respective gain ranges. The example clip detection circuit(s)210determine(s) if the respective signals have been clipped based on whether the signal is flat at a maximum level for more than a threshold duration of time, as described above in conjunction withFIG.2.

At block412, the example AGC master controller204determines if clipping has occurred in the higher gain range (e.g., 100-80 dB) (e.g., based on the results of the example clip detection circuit(s)210for the maximum gain of the higher gain range). If the example AGC master controller204determines that clipping has not occurred on the higher gain range (block412: NO), control continues to block418, as further described below. If the example AGC master controller204determines that clipping has occurred on the higher gain range (block412: YES), the example AGC master controller204determines if clipping has occurred on the lower gain range (e.g., 79-60 dB) (e.g., based on the results of the example clip detection circuit(s)210) (block414). If the example AGC master controller204determines that clipping has not occurred on the lower gain range (block414: NO), control continues to block418, as further described below. If the example AGC master controller204determines that clipping has occurred on the lower gain range (block414: YES), the example AGC master controller204changes the gain ranges of the first and second gain ranges (e.g., to a third gain range of 59-40 dB and a fourth gain range of 39-20 dB) (block416) and control returns to block402to perform a second iteration based on the adjusted gain ranges.

At block418, the example AGC master controller204selects the gain range corresponding to the lowest gain where clipping occurred for the AGC parameter(s). For example, if, during one or more iterations, the example AGC master controller204determines that the lowest gain range where clipping occurred was from 79-60 dB, the example AGC master controller204selects the 79-60 dB gain range as the AGC parameters to be used in the AGC protocol. At block420, the example component interface202transmits the selected range to the example audio processor116. In this manner, the example audio processor116can perform the AGC protocol based on the selected gain range.
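The selection performed by flowchart400can also be summarized in a short sketch. Here the fixed-width gain ranges are walked from highest to lowest and each range is tested at its maximum gain; in the meter two adjacent ranges are tested per pass (one per tuner), but that pairing is flattened into a single loop for brevity. next_audio_sample and detect_clipping are hypothetical helpers standing in for the tuners208and clip detection circuit(s)210.

```python
def scan_gain_ranges(next_audio_sample, detect_clipping, ranges):
    """ranges: (upper_db, lower_db) tuples ordered highest first, e.g.
    [(100, 80), (79, 60), (59, 40), (39, 20), (19, 0)].  Returns the range
    with the lowest tested gain that still clipped (None if nothing clips)."""
    lowest_clipping_range = None
    for upper_db, lower_db in ranges:
        sample = next_audio_sample()
        if detect_clipping(sample, upper_db):
            lowest_clipping_range = (upper_db, lower_db)
        else:
            break                    # lower gains cannot clip either; stop scanning
    return lowest_clipping_range

# Example: if 100 dB and 79 dB clip but 59 dB does not, the returned range is
# (79, 60), matching the selection made at block 418.
```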
In some examples, the AGC master controller204determines the maximum gain or the minimum gain (e.g., depending on how the AGC protocol is performed—from a low gain to a high gain or from a high gain to a low gain) of the selected gain range and send the maximum gain or minimum gain as the AGC parameter to the example audio processor116. In this manner, the audio processor116can perform the AGC protocol using the maximum gain or minimum gain as an initial gain level for the AGC protocol. FIG.5is a block diagram of an example processor platform500structured to execute the instructions ofFIGS.3-4to implement the meter112ofFIG.1and/or the example AGC parameter determiner118ofFIG.2. The processor platform500can be, for example, a server, a personal computer, a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a gaming console, a personal video recorder, a set top box, an audio meter, a personal people meter, a headset or other wearable device, or any other type of computing device. The processor platform500of the illustrated example includes a processor512. The processor512of the illustrated example is hardware. For example, the processor512can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor implements the example amplifier114, the example audio processor116, the example sensor interface200, the example component interface202, the example AGC master controller204, the example audio controller206, the example tuners208, and/or the example clip detection circuit(s)210ofFIGS.1and/or2. The processor512of the illustrated example includes a local memory513(e.g., a cache). The processor512of the illustrated example is in communication with a main memory including a volatile memory514and a non-volatile memory516via a bus518. The volatile memory514may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory516may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory514,516is controlled by a memory controller. The processor platform500of the illustrated example also includes an interface circuit520. The interface circuit520may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface. In the illustrated example, one or more input devices522are connected to the interface circuit520. The input device(s)522permit(s) a user to enter data and/or commands into the processor512. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system. One or more output devices524are also connected to the interface circuit520of the illustrated example. 
The output devices524can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker. The interface circuit520of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor. The interface circuit520of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network526. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.

The processor platform500of the illustrated example also includes one or more mass storage devices528for storing software and/or data. Examples of such mass storage devices528include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives. The machine executable instructions532ofFIGS.3-4may be stored in the mass storage device528, in the volatile memory514, in the non-volatile memory516, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD.

From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been disclosed that determine automated gain control parameters for an automated gain control protocol. The disclosed methods, apparatus and articles of manufacture improve the efficiency of using a computing device by decreasing the total number of gain levels needed during an AGC protocol to determine the optimal gain level. For example, instead of performing an AGC protocol that starts at the highest gain of an amplifier and/or is performed across the full range of the amplifier, the AGC parameter protocol can set the starting gain level and/or range of gain values for the AGC protocol to result in a reduction of time and resources for the AGC protocol. The disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer. Although certain example methods, apparatus and articles of manufacture have been described herein, other implementations are possible. The scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.
11863143

DETAILED DESCRIPTION

The following detailed description of certain embodiments presents various descriptions of specific embodiments of the invention. However, the invention can be embodied in a multitude of different ways as defined and covered by the claims. In this description, reference is made to the drawings, where like reference numerals may indicate identical or functionally similar elements. Unless defined otherwise, all terms used herein have the same meaning as are commonly understood by one of skill in the art to which this invention belongs. All patents, patent applications and publications referred to throughout the disclosure herein are incorporated by reference in their entirety. In the event that there is a plurality of definitions for a term herein, those in this section prevail. When the terms "one", "a" or "an" are used in the disclosure, they mean "at least one" or "one or more", unless otherwise indicated.

Overview

Video conferencing over a computer network has existed for some time and has increasingly played a significant role in the modern workplace. With the advent of remote working and shelter-in-place mandates by various government agencies during the COVID-19 pandemic, the role of robust video conferencing systems has only become more critical. There are various components (local and remote) that work in unison to implement a video conferencing system. Typical video conferencing applications include a client-side application that can run on a desktop, laptop, smart phone or similar stationary or mobile computing device and can capture video and audio and transmit those to a recipient computer. It may be desirable to capture audio at a targeted level. To that end, gain control modules can be used to manipulate the received level of audio. Such a module applies one or more gain parameters to the received audio signal in order to boost it or depress it to a target level. The gain control module may have to estimate a signal level, which can track the real signal level for present or future signal values. The estimated signal level can be used to generate one or more corresponding gain values to convert the received signal to a target level. The described embodiments include a two-stage automatic gain control (AGC) module, which can amplify an input signal to a target level by applying an appropriate amount of digital gain. In some embodiments, the gain includes a gain based on a long-term signal level estimate and a gain based on a short-term signal level estimate. The gains are applied differently, depending on whether the AGC module has reached a stable, converged stage, and based on other circumstances, as will be described.

Example Environment of a Video Conferencing Application

FIG.1illustrates a networked computer system with which an embodiment may be implemented. In one approach, a server computer140is coupled to a network130, which is also coupled to client computers100,110,120. For purposes of illustrating a clear example,FIG.1shows a limited number of elements, but in practical embodiments there may be any number of the elements shown inFIG.1. For example, the server computer140may represent an instance of a server computer running one or more application servers among a large plurality of instances of application servers in a data center, cloud computing environment, or other mass computing environment. There also may be hundreds, thousands or millions of client computers.
In an embodiment, the server computer140hosts a video conferencing meeting, transmits and receives video, image, and audio data to and from each of the client computers100,110,120. Each of the client computers100,110,120can be a computing device having a central processing unit (CPU), graphics processing unit (GPU), one or more buses, memory organized as volatile and/or nonvolatile storage, one or more data input devices, I/O interfaces and output devices such as loudspeakers, headphones, headsets, and LINE-OUT jack and associated software drivers. Each of the client computers100,110,120may include an integrated or separate display unit such as a computer screen, touch screen, TV screen or other display. Client computers100,110,120may comprise any of mobile or stationary computers including desktop computers, laptops, netbooks, ultrabooks, tablet computers, smartphones, etc. The GPU and CPU can each manage separate hardware memory spaces. For example, CPU memory may be used primarily for storing program instructions and data associated with application programs, whereas GPU memory may have a high-speed bus connection to the GPU and may be directly mapped to row/column drivers or driver circuits associated with a liquid crystal display (LCD), organic light emitting diode (OLED) or other display technology that serves as the display. In one embodiment, the network130is the Internet. Each of the client computers100,110,120hosts, in an embodiment, a video conferencing application that allows each of the client computers100,110,120to communicate with the server computer140. In an embodiment, the server computer140may maintain a plurality of user accounts, each associated with one of the client computers100,110,120and/or one or more users of the client computers. Among other functions, the video conferencing application running on client computers can capture audio and transmit it to the server computer140. The audio signal is generally captured having a variety of characteristics and parameters. The audio signal captured by the client device is converted into a digital audio signal, which can have a signal level. “Level” in an audio signal can be equivalent to an audio signal volume as perceived by a human. Digital signal level also relates to another characteristic of an audio signal called gain. Gain can refer to an amount of signal level added to or subtracted from an audio signal. Signal level, gain, and similar terminology, in this context can be expressed in units of decibel (dB). A related concept is dBov, or dBO, otherwise known as dB overload and can refer to a signal level or gain level, usually audio, that a device can handle before clipping occurs. Two-Stage AGC In video conferencing it can be desirable to maintain a stable audio signal level and/or audio signal level gain. This will help the computer server140or other recipient client computers to receive consistent audio and produce a more pleasant auditory experience for a recipient of the audio signal. To that end, an automatic gain control (AGC) module can be embedded in the video conferencing applications running on client computers100,110and120to control the gain and signal level of a captured audio signal. Below, several embodiments of AGC modules, which can be embedded in client computers100,110and120, will be described. FIG.2illustrates a two-stage automatic gain control (AGC)200according to an embodiment. 
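Before walking through the two stages ofFIG.2, a small numerical illustration of how a gain value in dB acts on digital samples, and how a block's level can be measured in dB relative to full scale, may be useful. This is a generic sketch under the assumption of samples normalized to a full scale of 1.0; the helper names are not part of the described embodiments.

```python
import math

def db_to_linear(gain_db):
    """Convert a gain in dB to a linear amplitude factor."""
    return 10.0 ** (gain_db / 20.0)

def apply_gain(samples, gain_db):
    """Scale normalized samples (full scale = 1.0) by a dB gain value."""
    factor = db_to_linear(gain_db)
    return [s * factor for s in samples]

def level_dbov(samples):
    """RMS level of a block of normalized samples, in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms) if rms > 0 else float("-inf")

# A block measured at roughly -32 dB needs about +12 dB of gain to reach a
# -20 dB target, since the required gain is simply target_level - measured_level.
```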
An input signal202of unknown and potentially variable speech level is processed through the AGC200, and an output signal224at a known target voice level is generated. The output signal224, generated at a known target level, can be transmitted to the server computer140and subsequently to downstream client computers100,110,120. The output signal224can be generated at a target level that is suitable for high quality transmission and for generating a pleasant auditory experience at the receiver end, without much noise or interference. In some embodiments, the target level within a video conferencing system can be determined empirically or by analyzing past recordings of video conferences across and between multiple devices, diverse microphones and varying conferencing parameters. In some embodiments, the target level for the output signal224can be −20 dB to −25 dB. This target level range in some embodiments can reduce or minimize the chance of transmitting a signal level that is too loud, and/or has undesirable clipping, echo or other characteristics. In some embodiments, the AGC200can be configured to intentionally miss the target level in order to reduce or minimize the chance of overamplification of the signal, which can be more deleterious to audio quality than under-amplification.

The input signal202can be fed through a first stage204. In some embodiments, a first voice activity detector (VAD1)206can receive the input signal202and filter out the noise components, allowing the speech component of the input signal202to pass through to a long-term level estimator208. The long-term level estimator208predicts the speech level for a future duration of time, for example for the next 2-5 seconds (s) of audio. In some embodiments, the result of the long-term level estimator208can be validated by a validation module218. If the long-term estimate is valid, a first stage gain G1can be generated and applied to the input signal202via a gain table set216. Validation module218may perform statistical analysis on an input buffer to determine whether a long-term estimate should be trusted, so as to generate a corresponding first stage gain G1to apply to the input signal202.

The first stage gain G1is generated based on an estimate of the unknown and variable speech level. As a result, the input signal amplified by the first stage gain G1may only gradually converge to a desired target level. For example, in the beginning of a video conference, the long-term level estimator208may not have enough incoming audio data to produce a speech level estimate. In other situations, the underlying data to be buffered and processed in order to make a long-term speech level estimation can take a period of time to accumulate and process. For these and other reasons, the first stage gain G1can take a period of time before it can amplify the input signal202to a desired target level of the output signal224. The period of time in which the first stage is gradually increasing the signal level to a target level can be termed an unconverged period of time, or an unconverged status or state, and the input signal level during the unconverged state can be termed the unconverged signal level. Alternatively, it can be said that the first stage gain has not stabilized or has not converged during the unconverged period of time. As will be described, a second stage gain G2produced by a second stage can assist the first stage gain to amplify the input signal202to the target level during the unconverged period of time.
When the first stage gain G1reaches a stable level where it can amplify the input signal202to the target level unassisted, the second stage gain G2can be clipped to a minimum value (e.g., less than 5 dB). The period of time during which (or after which) the first stage gain G1can amplify the input signal202to a desired target level, unassisted, can be termed the converged period of time, or the converged status or state, and the signal level during the converged period of time can be referred to as the converged level. Stated otherwise, the first stage gain G1is based on a long-term estimate calculation (e.g., 2-5 seconds of upcoming audio) and is, therefore, a value that is determined gradually, is applied to the input signal202gradually and therefore amplifies the input signal202gradually. As will be described, the second stage gain G2is determined based on a short-term signal level estimate (e.g., 200 milliseconds to 1.5 seconds), and as a result can convert the input signal202to the target level instantaneously, near-instantaneously, or with much less delay compared to the gain contribution from the first stage. The second stage gain G2can be applied based on the present value of the first stage gain G1to assist it in converting the input signal202to the target level. In other words, the sum of the first and second stage gains applied to the input signal202converts the input signal202to the target level.

As discussed earlier, the second stage210can be used to assist the first stage204during the unconverged period of time or in other cases where immediate or near immediate assistance to the first stage can be desirable to amplify the input signal202to a target level. As described earlier, the first stage gain G1can implement a gradual amplification of the input signal202before it can converge and reach a stable state. During the unconverged state, without the assistance of the second stage, the transmitted audio can be below or above the target level, resulting in an undesirable or unpleasant auditory experience on the receiving end. The second stage210can be configured to receive the input signal202and make a short-term signal level estimation upon which a second stage gain G2can be determined and applied to the input signal202. The second stage gain G2, along with the present value of the first stage gain G1, can effect an immediate, near immediate or short-term amplification of the input signal202to a desired target level. In some embodiments, this can be implemented via the gain table set216, which can receive the first and second stage gains G1, G2and apply the sum of them to the input signal202to generate the output signal224at a desired target level. In some embodiments, the second stage210can include a second voice activity detector (VAD2)212, which filters out noise and interference and feeds the speech component of the input signal202to a short-term level estimator214. To estimate a short-term speech level, the short-term level estimator214can analyze an interval of speech of a short duration of time, relative to the interval of speech which the long-term level estimator208processes. As described earlier, a short duration of time can be 200 milliseconds to 1.5 seconds. The second stage gain G2can be generated based on the output of the short-term level estimator214. In some embodiments, the short-term signal level estimate can be the mean of the speech levels in the speech interval analyzed by the short-term level estimator214.
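A minimal sketch of how the two contributions might be combined follows, assuming (as stated above) that the sum of the first and second stage gains should convert the short-term signal level to the target level and that G2 is clipped to a minimum value once the first stage has converged. The function name and the clipping limits are illustrative assumptions, not part of the disclosure.

```python
def second_stage_gain(target_db, short_term_level_db, g1_now_db,
                      converged, g2_min_db=0.0, g2_max_db=30.0):
    """Given the present first stage gain g1_now_db, return a second stage
    gain g2 such that g1 + g2 brings the short-term level to the target."""
    if converged:
        return g2_min_db                                 # first stage works unassisted
    g2 = (target_db - short_term_level_db) - g1_now_db   # gap still to be filled now
    return max(g2_min_db, min(g2, g2_max_db))

# Example: with a -20 dB target, a short-term level estimate of -45 dB and a
# present first stage gain of 10 dB, the second stage contributes 15 dB so the
# total applied gain is 25 dB.
```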
First and Second Voice Activity Detection Modules

As described earlier, both the first stage and second stage can utilize voice activity detectors to filter out noise and feed the voice component of the incoming audio to the short- and long-term level estimators. The first stage is configured to make a long-term estimate of the incoming audio, upon which a first stage gain G1can be generated and applied for a relatively long duration of time, compared to the second stage gain contribution. For example, as will be described, once the first stage gain stabilizes and can convert the input signal202to a target level, unassisted, the second stage gain G2is clipped to a minimum value. This is to maintain an overall stable gain (overall gain G=G1+G2). Once converged, the first stage gain G1contributes all or the majority of the needed gain to amplify the input signal202to the target level and is applied for relatively longer time periods during the audio conference. The first stage gain G1is based on the long-term signal level of the voice component as outputted by the first voice activity detector (VAD1)206. As a result, it is more desirable to configure the first voice activity detector (VAD1) to have a high noise rejection ratio and to produce an output that is more confidently voice (rather than noise). A VAD1with a high noise rejection ratio can reduce or minimize the chance that the long-term estimate and the resulting gain would be based on noise. This might cause VAD1to miss some voice data, but its output is speech data with a high confidence (e.g., above 90%). By contrast, the short-term level estimator214responds quickly to speech data. As a result, it is more desirable that VAD2212have a high degree of speech sensitivity. While this can cause VAD2to mistakenly characterize noise as speech data in some cases, it can also mean that VAD2misses less (e.g., less than 10%) of the speech data. Thus, in an embodiment, VAD1is configured or chosen to be of a kind that has high noise rejection (more inclined to rejecting noise) and VAD2is configured or chosen to be of a kind that has high voice sensitivity (more inclined to detecting speech). For example, VAD1can be above 90% sensitive to rejecting noise, or its output is voice with a 90% or more measure of confidence. VAD2can be above 90% sensitive to detecting speech data, or its output is guaranteed or near guaranteed to capture 90% or more of the voice data in its input audio stream. The stated specifications and sensitivities are examples intended to illustrate the relationship between VAD1and VAD2, and their respective configurations. A person of ordinary skill in the art can configure the AGC200with sensitivities and thresholds that differ from those stated above, but do not depart from the spirit of the disclosed technology.

Estimating a Long-Term Signal Level

FIG.3illustrates a block diagram of a long-term level estimator300according to an embodiment. Feed data302can be a portion of the input signal202, for example, a portion of the input signal202processed by the first voice activity detector206, as discussed in relation to the embodiment ofFIG.2. In this scenario, the feed data302can include the speech component of the input signal202as processed by the first voice activity detector206. The feed data302can accumulate in a buffer, such as the ring buffer304. In some embodiments, the ring buffer304stores a predetermined window size of the incoming speech, such as the previously or most recently received 5 seconds of speech.
The size of the ring buffer304can be determined empirically or based on analysis of other recorded video conferences to determine an appropriate buffer size that allows for efficient processing, transmission, and auditory experience. In other embodiments, the size of the ring buffer can be determined by existing buffers within a video conferencing application. An example range of the sizes of the ring buffer304, which can alternatively be referred to as the long-term buffer, can be 2 to 5 seconds of the most recently received speech. The feed data302may be received in intervals of speech shorter than the size of the ring buffer. For example, the feed data302can be received in 10 millisecond (ms) intervals and accumulate in the ring buffer304on a rolling basis (e.g., first in, first out or FIFO). A statistical analysis unit306performs statistical analysis on the data in the ring buffer304and, based on that analysis, can generate a long-term signal level estimate (or it can update a previously generated signal level estimate). Alternatively, the statistical analysis unit306can maintain a previously generated long-term level estimate based on the result of its statistical analysis of the ring buffer304. In one embodiment, a statistical analysis module308can generate a histogram of the levels of audio signal data in the ring buffer304. One method for obtaining the histogram levels is to divide the data in the ring buffer304into predetermined intervals (e.g., 2 ms) and calculate the root-mean-square (RMS) of each interval. The histogram levels can be used to obtain a Gaussian distribution of the signal levels in the ring buffer304. The Gaussian distribution can be used to determine the validity of a long-term level estimate that can be obtained from the data in the ring buffer304. The Gaussian distribution can indicate a new signal level estimate (e.g., the signal level at the mean). The Gaussian distribution can also be used to validate a recently derived estimate, as will be further described in relation toFIG.4. At blocks312,314, the long-term level estimator300determines whether a previously determined signal level estimate is to be maintained (block314) or whether a new signal level estimate should be used to update a previously estimated signal level (block312). FIG.4illustrates graphs400of some example Gaussian distributions that can exist in the ring buffer304data. The horizontal axis indicates signal level in dB, and the vertical axis represents the distribution of the signal data having the signal level shown on the horizontal axis. Referencing bothFIGS.3and4, in graph402, the feed data302is sparse (perhaps in the beginning of a video call session). In graphs404and406, the feed data302begins to accumulate more in the ring buffer304and a more bell-shaped curve is observed in the Gaussian distribution of the data in the ring buffer304. In some embodiments, the graph406, showing a distinct peak around −30 dB, can indicate that a valid long-term estimate can be obtained from the data in the ring buffer304. The graphs408and410illustrate situations where the Gaussian distribution is skewed to the left or right, respectively, indicating perhaps that a speaker has moved away from or toward a microphone. The first and second stages can be configured to respond to the situations presented in graphs408and410, depending on whether the first stage is in the converged state or not.
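The interval-RMS histogram step can be made concrete with a short sketch. The following assumes NumPy, a 48 kHz sample rate, and floating-point samples, none of which are specified by the disclosure; it simply shows one way to obtain per-interval levels and their distribution from a buffered window of speech.

    import numpy as np

    SAMPLE_RATE = 48000                      # assumed sample rate for illustration
    INTERVAL = int(0.002 * SAMPLE_RATE)      # 2 ms intervals, as in the example above

    def level_histogram(ring_buffer, num_bins=40):
        """Split buffered speech into 2 ms intervals, compute each interval's RMS
        level in dB, and histogram those levels."""
        n = (len(ring_buffer) // INTERVAL) * INTERVAL
        frames = ring_buffer[:n].reshape(-1, INTERVAL)
        rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
        levels_db = 20.0 * np.log10(rms)
        counts, edges = np.histogram(levels_db, bins=num_bins)
        return counts, edges, levels_db

    # A rough long-term estimate can then be taken from the distribution, e.g. its mean.
    buffer = 0.05 * np.random.randn(5 * SAMPLE_RATE)   # stand-in for 5 s of buffered speech
    counts, edges, levels_db = level_histogram(buffer)
    long_term_estimate_db = float(np.mean(levels_db))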
For example, in the unconverged state, the signal level corresponding to a majority of the ring buffer304data can be chosen as the long-term estimate. In some embodiments, the mean or the mode can be chosen as the long-term signal level estimate. In the unconverged state, the long-term level estimator300can be configured with logic that responds to the left-skewed graph (e.g.,408) by not updating the signal level estimate, so as to reduce or minimize the chance of unintended overamplification. For a right-skewed Gaussian distribution detected when the AGC200is in the converged state, the AGC200can be configured to transition back to the unconverged state and determine a second stage gain that returns the signal level to the target level. Graph412can occur when audio streams from multiple speakers have been received in the ring buffer304. In this scenario, the long-term level estimator300may be configured to disregard the new change in level estimation and not update the signal level estimate (block314). Alternatively, it can use another statistically desirable parameter, such as the mean or the mode of the Gaussian distribution, to update the signal level estimate (block312). Graph414illustrates a situation that might occur if noise or interference is present, in which multiple distinct peaks in the Gaussian distribution can exist. In some embodiments, the statistical analysis unit306can determine if the difference between the mean and the mode in a multi-peak Gaussian distribution exceeds a predetermined threshold (e.g., 5 dB) and refrain from generating a new long-term level estimate and/or refrain from updating the long-term level estimate when the distance between the mean and the mode exceeds that threshold. In this scenario, the long-term level estimator300executes the block314and maintains a previously determined long-term signal level estimate. In other words, a distance greater than a threshold between the mean and the mode in the Gaussian distribution can indicate that the integrity of the voice data in the ring buffer304is not adequate to warrant changing a previously determined long-term signal level estimate. Example Operations of the AGC FIG.5illustrates a graph500of various speech levels relative to a desired target level in a near ideal scenario, where the real speech level and the estimated speech level are the same and remain constant during the period shown. The horizontal axis shows time, and the vertical axis shows speech (signal) level in dB. The graph500illustrates a situation where a positive gain is applied to the input signal202to amplify it to the target level, but the described embodiments can also apply to situations and input signals where the speech level (real speech level) is decreased by applying a negative gain, in order to reach a target speech level. During the unconverged state, from 0 to Tc (converged time), the first stage gain G1gradually increments the input signal level (real speech level) toward the target level. During this period, the second stage gain G2makes up the remaining amount of gain needed to amplify the input signal level to the target level. In other words, the sum of the first and second stage gains (G1+G2) applied to the input signal level would convert the input signal level to the target level.
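The decision of whether to accept a new long-term estimate, as described above, can be sketched as follows. The 5 dB figure mirrors the example threshold given above, while the function and argument names are assumptions for illustration.

    MEAN_MODE_THRESHOLD_DB = 5.0   # example threshold from the text above

    def updated_long_term_estimate(previous_estimate_db, mean_db, mode_db):
        """Keep the previous estimate when the level distribution looks unreliable
        (mean and mode far apart, e.g. due to noise or multiple peaks)."""
        if abs(mean_db - mode_db) > MEAN_MODE_THRESHOLD_DB:
            return previous_estimate_db          # block 314: maintain the previous estimate
        return mean_db                           # block 312: update with the new estimate

    # Example: a noisy, multi-peak distribution (mean -28 dB, mode -36 dB) is rejected.
    estimate = updated_long_term_estimate(previous_estimate_db=-30.0,
                                          mean_db=-28.0, mode_db=-36.0)
    # estimate == -30.0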
During the unconverged period (0 to Tc), the first stage gain G1gradually increases in absolute value, and the second stage gain G2gradually decreases in absolute value, as more and more of the total gain (G) is made up by the contribution from the first stage gain. Near or around the converged time (Tc), the second stage gain approaches zero or a small value (e.g., less than 5 dB), while the gain due to the first stage G1reaches a stable level that amplifies the input signal level to the target level, unassisted. The term amplification in the context of this description also encompasses depressing (attenuating) a signal, and is used in the context of digital signal amplification, which is achieved by digitally adding an appropriate amount of the first stage and/or second stage gains G1and G2to a signal, or subtracting such an amount from it. In some embodiments, the AGC200can be set in a converged state using the following technique. When the difference between the amplified signal and the target level is within or less than a threshold, for example 5 dB, a convergence counter is incremented. If the convergence counter reaches a value above a predetermined threshold (e.g., greater than 10-30), the AGC200and the first stage are put in converged mode. The mode can be stored in a state machine, memory, or other storage component. The second stage gain G2can be prone to wide variation because it is based on a short-term estimate, which can be impacted by noise or interference. During the converged mode, the first stage gain G1can become stable but the second stage gain G2can still vary widely if unchecked. In some embodiments, when the AGC200is in the converged mode, the second stage gain G2can be clipped to a value less than 5 dB to prevent or reduce the chance of the second stage gain G2destabilizing the overall gain. Additionally, to prevent multiple convergences of the overall gain, the last converged signal level can be recorded locally and compared with a new converged level before the corresponding gain is applied to the input signal. This can help prevent or reduce divergence of the overall gain in a noisy environment. FIG.6illustrates a graph600of the processing of variable-level speech through the AGC200. The real speech level602is a variable speech level as may be encountered in a typical video conference, where the speaker's voice level rises and falls. Before convergence time Tc, the first and second stage gains G1and G2together convert the input signal level (real speech level)602to the target level. The estimated signal level can closely track the real speech level, from which the first stage gain G1is generated and applied to the input signal level (real speech level)602. The second stage gain G2makes up the difference between the target level and the signal amplified by the first stage gain G1. When convergence time Tc is reached, the first stage gain G1has reached a stable value and can maintain the amplification of the input signal level602to the target level, entirely or almost entirely unassisted. The second stage gain G2can be clipped to a value less than a threshold, such as 5 dB, to keep the overall gain stable. In some scenarios, it may be desirable to avoid amplifying the input signal to the target level.
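A possible form of the convergence test described above is sketched below. The counter threshold of 20 is one value from the 10-30 example range, the 5 dB window and clip mirror the examples above, and the class and attribute names (as well as resetting the counter when the signal leaves the window) are assumptions for this sketch.

    CONVERGENCE_WINDOW_DB = 5.0     # amplified signal within 5 dB of the target
    CONVERGENCE_COUNT = 20          # example value from the 10-30 range above
    G2_CLIP_DB = 5.0

    class ConvergenceTracker:
        """Tracks whether the first stage gain has converged."""
        def __init__(self):
            self.counter = 0
            self.converged = False

        def update(self, amplified_level_db, target_db):
            if abs(amplified_level_db - target_db) < CONVERGENCE_WINDOW_DB:
                self.counter += 1
            else:
                self.counter = 0     # resetting here is an assumption of this sketch
            if self.counter > CONVERGENCE_COUNT:
                self.converged = True    # stored state; G2 is clipped from now on
            return self.converged

        def clip_g2(self, g2_db):
            """Clip the second stage gain once converged to keep the overall gain stable."""
            if self.converged:
                return max(min(g2_db, G2_CLIP_DB), -G2_CLIP_DB)
            return g2_db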
For example, when the AGC200has already reached convergence (time>Tc) and a speaker moves away from the microphone, the first and second stage gains G1, G2can react by increasing the applied gain to amplify the now lower-level input signal to the target level. However, in some cases this scenario can lead to re-convergence to a loud signal level above the target level. In general, loud signals and over-amplified signals can create more unpleasant auditory experiences and may be more challenging to correct. For this scenario, where the AGC200is already in converged mode and the input signal level drops, the AGC200can be configured not to react to the drop in signal level and not to increase the gains G1and/or G2. For example, the long-term level estimator300can be configured to not update the estimated speech level when the AGC200is in converged mode and the speech level drops below the converged level. This scenario is illustrated inFIG.7. FIG.7illustrates a graph700of a scenario when the AGC200is in a converged state and the input signal starts to fall off below its most-recently estimated speech level (converged level). In this example, the first stage gain stabilizes at convergence time Tc and the processed signal (input signal plus G1+G2) is at the target level at the convergence time. A short time thereafter, the real speech level702starts falling below the converged level. In this scenario, an exception can be coded in the operation of the long-term level estimator300, where the long-term estimate is not updated when the speech level begins to fall off during an already-converged period of time. For example, the long-term level estimator300can discard the new long-term estimate corresponding to the falling speech level and, consequently, the AGC200would continue to apply the previous value of the first stage gain G1, which had previously stabilized at the convergence time Tc. Without coding this exception, the long-term level estimator300would update the long-term speech level estimate, causing the AGC200to transition out of the converged state and generate new values for G1and G2until the fallen speech level is restored to the target level. In other words, without the exception, the AGC200would try to reach the target level by applying newly calculated gain values, which can lead to multiple convergences and, if the speaker quickly moves back to a prior position, to loud audio and overamplification of the input signal. When the exception described above is coded, the processed signal (real speech level plus the gain) is allowed to fall below the target level. This is acceptable since a signal level below the target level is less prone to audio processing problems than a potentially loud signal. In some embodiments, the exception related to the falling signal level can be coded in the logic implementing the behavior of the long-term level estimator300. For example, the source code implementing the functionality of the long-term level estimator300can include conditional lines of code that detect conditions, such as the AGC200being in a converged state and the input signal falling below the converged level. When the conditions are satisfied, the long-term level estimator300would not update and output new speech level estimates; instead, it maintains the previous speech level estimate, leading the AGC200not to change the values of the first and second stage gains G1, G2.
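The falling-level exception can be expressed as a small conditional, along the lines sketched below; the dictionary-based state and the function name are assumptions for illustration rather than the actual source code of the long-term level estimator300.

    def maybe_update_long_term_estimate(state, new_estimate_db):
        """Apply the falling-level exception: when already converged and the new
        estimate falls below the converged level, keep the previous estimate so
        the gains are not increased."""
        if state["converged"] and new_estimate_db < state["converged_level_db"]:
            return state["estimate_db"]          # exception: do not update
        state["estimate_db"] = new_estimate_db   # normal path: accept the new estimate
        return new_estimate_db

    # Example: converged at -30 dB; the speaker moves away and the level drops to -40 dB.
    state = {"converged": True, "converged_level_db": -30.0, "estimate_db": -30.0}
    est = maybe_update_long_term_estimate(state, -40.0)   # est stays -30.0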
Another exception can be coded that relates to the real speech level rising above the converged level while the AGC200is in the converged state. In this instance, the AGC200can be configured to restore the amplified signal to the target level by applying appropriate, newly calculated values of the first and second stage gains G1and G2. FIG.8illustrates a graph800of a scenario when the AGC200has reached a converged state at the first convergence time Tc1. The real speech level802is at the converged level804. The converged level804is the same as or based on the most recently estimated speech level. During the converged state, the first stage gain G1amplifies the input signal (real speech level) to the target level. The second stage gain G2is clipped to a minimum value (e.g., below 5 dB). Consequently, the majority of the gain contribution is from the first stage gain G1. At Tr (rise time), the real speech level802(the input signal) starts to rise above the converged level804, its most-recently estimated speech level. Consequently, the first stage gain amplifies the input signal to a level higher than the target level. The AGC200can be configured to detect these circumstances and transition from the converged mode to the unconverged mode. Transitioning to the unconverged mode allows the second stage gain to be unclipped and the AGC200to start applying second stage gain G2values that restore the input signal to the target level. The graph800illustrates a momentary rise in the processed signal level above the target level between times Tr (rise time) and Tu (unconverged time). At Tu, the second stage gain G2comes back online and starts applying gain values to restore the input signal to the target level, which is accomplished at the second converged time (Tc2). During this time, the first stage gain G1will be recalculated and updated to reflect the real speech level802. Subsequently, the first stage gain G1will gradually stabilize, and at or near the second convergence time Tc2, the first stage gain G1contributes substantially all of the overall gain, amplifying the signal to the target level unassisted. The second stage gain G2can be clipped again to a minimum value (e.g., less than 5 dB) at or near the second convergence time Tc2. Without changing the mode of the AGC200from converged to unconverged, the second stage gain G2would remain clipped to a minimum value and the first stage gain would push the input signal above the target level. As described earlier, a loud or over-amplified signal is undesirable in some circumstances. An AGC200configured as described herein can avoid or reduce the chance of signal over-amplification. The mode switching between the converged state and the unconverged state can be accomplished by a state machine implemented in the AGC200. FIG.9illustrates a flow chart of a method900of automatic gain control in a video conferencing application according to an embodiment. The method900starts at step902. At step904, the method includes receiving an input signal. At step906, the method includes estimating a long-term signal level. At step908, the method includes generating a first stage gain based on the long-term estimate, wherein the first stage gain applied to the input signal gradually converts the input signal to a target level, wherein the conversion occurs over a duration of time comprising an unconverged duration of time. At step910, the method includes estimating a short-term signal level.
At step912, the method includes generating a second stage gain, wherein the second stage gain, summed with the first stage gain and applied to the input signal during the unconverged duration of time, converts the input signal to the target level during the unconverged duration of time. The method ends at step914. Example Implementation Mechanism Hardware Overview Some embodiments are implemented by a computer system or a network of computer systems. A computer system may include a processor, a memory, and a non-transitory computer-readable medium. The memory and non-transitory medium may store instructions for performing methods, steps, and techniques described herein. According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices may be server computers, cloud computing computers, desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques. For example,FIG.10is a block diagram that illustrates a computer system1000upon which an embodiment can be implemented. Computer system1000includes a bus1002or other communication mechanism for communicating information, and a hardware processor1004coupled with bus1002for processing information. Hardware processor1004may be, for example, a special-purpose microprocessor optimized for handling audio and video streams generated, transmitted or received in video conferencing architectures. Computer system1000also includes a main memory1006, such as a random access memory (RAM) or other dynamic storage device, coupled to bus1002for storing information and instructions to be executed by processor1004. Main memory1006also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor1004. Such instructions, when stored in non-transitory storage media accessible to processor1004, render computer system1000into a special-purpose machine that is customized to perform the operations specified in the instructions. Computer system1000further includes a read only memory (ROM)1008or other static storage device coupled to bus1002for storing static information and instructions for processor1004. A storage device1010, such as a magnetic disk, optical disk, or solid state disk, is provided and coupled to bus1002for storing information and instructions. Computer system1000may be coupled via bus1002to a display1012, such as a cathode ray tube (CRT), liquid crystal display (LCD), organic light-emitting diode (OLED), or a touchscreen for displaying information to a computer user.
An input device1014, including alphanumeric and other keys (e.g., in a touch screen display), is coupled to bus1002for communicating information and command selections to processor1004. Another type of user input device is cursor control1016, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor1004and for controlling cursor movement on display1012. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. In some embodiments, the user input device1014and/or the cursor control1016can be implemented in the display1012, for example, via a touch-screen interface that serves as both output display and input device. Computer system1000may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware, and/or program logic which, in combination with the computer system, causes or programs computer system1000to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system1000in response to processor1004executing one or more sequences of one or more instructions contained in main memory1006. Such instructions may be read into main memory1006from another storage medium, such as storage device1010. Execution of the sequences of instructions contained in main memory1006causes processor1004to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical, magnetic, and/or solid-state disks, such as storage device1010. Volatile media includes dynamic memory, such as main memory1006. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus1002. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor1004for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system1000can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus1002.
Bus1002carries the data to main memory1006, from which processor1004retrieves and executes the instructions. The instructions received by main memory1006may optionally be stored on storage device1010either before or after execution by processor1004. Computer system1000also includes a communication interface1018coupled to bus1002. Communication interface1018provides a two-way data communication coupling to a network link1020that is connected to a local network1022. For example, communication interface1018may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface1018may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface1018sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link1020typically provides data communication through one or more networks to other data devices. For example, network link1020may provide a connection through local network1022to a host computer1024or to data equipment operated by an Internet Service Provider (ISP)1026. ISP1026in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet”1028. Local network1022and Internet1028both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link1020and through communication interface1018, which carry the digital data to and from computer system1000, are example forms of transmission media. Computer system1000can send messages and receive data, including program code, through the network(s), network link1020and communication interface1018. In the Internet example, a server1030might transmit a requested code for an application program through Internet1028, ISP1026, local network1022and communication interface1018. The received code may be executed by processor1004as it is received, and/or stored in storage device1010, or other non-volatile storage for later execution. While the invention has been particularly shown and described with reference to specific embodiments thereof, it should be understood that changes in the form and details of the disclosed embodiments may be made without departing from the scope of the invention. Although various advantages, aspects, and objects of the present invention have been discussed herein with reference to various embodiments, it will be understood that the scope of the invention should not be limited by reference to such advantages, aspects, and objects. Rather, the scope of the invention should be determined with reference to patent claims. | 44,441 |
11863144 | To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one aspect may be beneficially utilized on other aspects without specific recitation. DETAILED DESCRIPTION Certain aspects of the present disclosure generally relate to techniques and apparatus for monitoring an oscillation signal in a non-invasive manner. In this scheme, oscillation signal amplitude and automatic gain control (AGC) circuit gain may be evaluated without directly monitoring the oscillation signal. For example, a main automatic gain control (AGC) circuit and a replica AGC circuit may be used to evaluate oscillation signal amplitude and AGC circuit gain without sensing the oscillation signal amplitude. Constructed of transistors fabricated with the same semiconductor process and powered by the same voltage rail, the main and replica AGC circuits may each use a constant transconductance bias generator to self-generate a reference that is dependent on process, voltage, and temperature (PVT). In this manner, a process-tracking threshold (e.g., an AGC output current amplitude threshold) is effectively generated, against which the main AGC circuit is tested, rather than using a fixed threshold. As a result, weak failures (where the oscillation signal amplitude has just started to decrease, but may still allow the apparatus using the oscillation signal to function) may be detected earlier (and in some cases corrected by switching to a backup oscillator) before a more serious failure occurs (e.g., where the apparatus cannot function). Therefore, certain aspects of the present disclosure may provide a more robust oscillation signal. Various aspects of the disclosure are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings herein, one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure disclosed herein, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. As used herein, the term “connected with” in the various tenses of the verb “connect” may mean that element A is directly connected to element B or that other elements may be connected between elements A and B (i.e., that element A is indirectly connected with element B).
In the case of electrical components, the term “connected with” may also be used herein to mean that a wire, trace, or other electrically conductive material is used to electrically connect elements A and B (and any components electrically connected therebetween). An Example Wireless System FIG.1illustrates a wireless communications system100with access points110and user terminals120, in which aspects of the present disclosure may be practiced. For simplicity, only one access point110is shown inFIG.1. An access point (AP) is generally a fixed station that communicates with the user terminals and may also be referred to as a base station (BS), an evolved Node B (eNB), or some other terminology. A user terminal (UT) may be fixed or mobile and may also be referred to as a mobile station (MS), an access terminal, user equipment (UE), a station (STA), a client, a wireless device, or some other terminology. A user terminal may be a wireless device, such as a cellular phone, a personal digital assistant (PDA), a handheld device, a wireless modem, a laptop computer, a tablet, a personal computer, etc. Access point110may communicate with one or more user terminals120at any given moment on the downlink and uplink. The downlink (i.e., forward link) is the communication link from the access point to the user terminals, and the uplink (i.e., reverse link) is the communication link from the user terminals to the access point. A user terminal may also communicate peer-to-peer with another user terminal. A system controller130couples to and provides coordination and control for the access points. Wireless communications system100employs multiple transmit and multiple receive antennas for data transmission on the downlink and uplink. Access point110may be equipped with a number Napof antennas to achieve transmit diversity for downlink transmissions and/or receive diversity for uplink transmissions. A set Nuof selected user terminals120may receive downlink transmissions and transmit uplink transmissions. Each selected user terminal transmits user-specific data to and/or receives user-specific data from the access point. In general, each selected user terminal may be equipped with one or multiple antennas (i.e., Nut≥1). The Nuselected user terminals can have the same or different number of antennas. Wireless communications system100may be a time division duplex (TDD) system or a frequency division duplex (FDD) system. For a TDD system, the downlink and uplink share the same frequency band. For an FDD system, the downlink and uplink use different frequency bands. Wireless communications system100may also utilize a single carrier or multiple carriers for transmission. Each user terminal120may be equipped with a single antenna (e.g., to keep costs down) or multiple antennas (e.g., where the additional cost can be supported). In certain aspects of the present disclosure, the access point110and/or user terminal120may include a circuit for monitoring an oscillation signal, as described in more detail herein. FIG.2shows a block diagram of access point110and two user terminals120mand120xin wireless communications system100. Access point110is equipped with Napantennas224athrough224ap. User terminal120mis equipped with Nut,mantennas252mathrough252mu, and user terminal120xis equipped with Nut,xantennas252xathrough252xu. Access point110is a transmitting entity for the downlink and a receiving entity for the uplink. Each user terminal120is a transmitting entity for the uplink and a receiving entity for the downlink. 
As used herein, a “transmitting entity” is an independently operated apparatus or device capable of transmitting data via a frequency channel, and a “receiving entity” is an independently operated apparatus or device capable of receiving data via a frequency channel. In the following description, the subscript “dn” denotes the downlink, the subscript “up” denotes the uplink, Nupuser terminals are selected for simultaneous transmission on the uplink, Ndnuser terminals are selected for simultaneous transmission on the downlink, Nupmay or may not be equal to Ndn, and Nupand Ndnmay be static values or can change for each scheduling interval. Beam-steering or some other spatial processing technique may be used at the access point and user terminal. On the uplink, at each user terminal120selected for uplink transmission, a TX data processor288receives traffic data from a data source286and control data from a controller280. TX data processor288processes (e.g., encodes, interleaves, and modulates) the traffic data {dup} for the user terminal based on the coding and modulation schemes associated with the rate selected for the user terminal and provides a data symbol stream {sup} for one of the Nut,mantennas. A transceiver front end (TX/RX)254(also known as a radio frequency front end (RFFE)) receives and processes (e.g., converts to analog, amplifies, filters, and frequency upconverts) a respective symbol stream to generate an uplink signal. The transceiver front end254may also route the uplink signal to one of the Nut,mantennas for transmit diversity via a radio-frequency (RF) switch, for example. The controller280may control the routing within the transceiver front end254. Memory282may store data and program codes for the user terminal120and may interface with the controller280. A number Nupof user terminals120may be scheduled for simultaneous transmission on the uplink. Each of these user terminals transmits its set of processed symbol streams on the uplink to the access point. At access point110, Napantennas224athrough224apreceive the uplink signals from all Nupuser terminals transmitting on the uplink. For receive diversity, a transceiver front end222may select signals received from one of the antennas224for processing. The signals received from multiple antennas224may be combined for enhanced receive diversity. The access point's transceiver front end222also performs processing complementary to that performed by the user terminal's transceiver front end254and provides a recovered uplink data symbol stream. The recovered uplink data symbol stream is an estimate of a data symbol stream {sup} transmitted by a user terminal. An RX data processor242processes (e.g., demodulates, deinterleaves, and decodes) the recovered uplink data symbol stream in accordance with the rate used for that stream to obtain decoded data. The decoded data for each user terminal may be provided to a data sink244for storage and/or a controller230for further processing. In certain aspects, the transceiver front end (TX/RX)222of access point110and/or the transceiver front end254of user terminal120may include one or more frequency synthesizers to generate oscillating signals used for signal transmission and/or reception. In certain aspects, the controller230of access point110and/or the controller280of user terminal120may include or be coupled to an oscillation circuit for generating oscillating signals used for clocking synchronous logic. 
At least one of the frequency synthesizers and/or at least one of the oscillation circuits may include or be coupled to a circuit for monitoring an oscillation signal, as described in more detail herein. On the downlink, at access point110, a TX data processor210receives traffic data from a data source208for Ndnuser terminals scheduled for downlink transmission, control data from a controller230, and possibly other data from a scheduler234. The various types of data may be sent on different transport channels. TX data processor210processes (e.g., encodes, interleaves, and modulates) the traffic data for each user terminal based on the rate selected for that user terminal. TX data processor210may provide downlink data symbol streams for one or more of the Ndnuser terminals to be transmitted from one of the Napantennas. The transceiver front end222receives and processes (e.g., converts to analog, amplifies, filters, and frequency upconverts) the symbol stream to generate a downlink signal. The transceiver front end222may also route the downlink signal to one or more of the Napantennas224for transmit diversity via an RF switch, for example. The controller230may control the routing within the transceiver front end222. Memory232may store data and program codes for the access point110and may interface with the controller230. At each user terminal120, Nut,mantennas252receive the downlink signals from access point110. For receive diversity at the user terminal120, the transceiver front end254may select signals received from one of the antennas252for processing. The signals received from multiple antennas252may be combined for enhanced receive diversity. The user terminal's transceiver front end254also performs processing complementary to that performed by the access point's transceiver front end222and provides a recovered downlink data symbol stream. An RX data processor270processes (e.g., demodulates, deinterleaves, and decodes) the recovered downlink data symbol stream to obtain decoded data for the user terminal. FIG.3is a block diagram of an example transceiver circuit300, such as transceiver front ends222,254inFIG.2, in which aspects of the present disclosure may be practiced. The transceiver circuit300includes at least one transmit (TX) path302(also known as a "transmit chain") for transmitting signals via one or more antennas and at least one receive (RX) path304(also known as a "receive chain") for receiving signals via the antennas. When the TX path302and the RX path304share an antenna303, the paths may be connected with the antenna via an interface306, which may include any of various suitable RF devices, such as a duplexer, a switch, a diplexer, and the like. Receiving in-phase (I) or quadrature (Q) baseband analog signals from a digital-to-analog converter (DAC)308, the TX path302may include a baseband filter (BBF)310, a mixer312, a driver amplifier (DA)314, and a power amplifier (PA)316. The BBF310, the mixer312, and the DA314may be included in one or more radio frequency integrated circuits (RFICs). The PA316may be external to the RFIC(s) for some implementations. The BBF310filters the baseband signals received from the DAC308, and the mixer312mixes the filtered baseband signals with a transmit local oscillator (LO) signal to convert the baseband signal of interest to a different frequency (e.g., upconvert from baseband to RF). This frequency-conversion process produces the sum and difference frequencies of the LO frequency and the frequency of the signal of interest.
The sum and difference frequencies are referred to as “beat frequencies.” The beat frequencies may be in the RF range, such that the signals output by the mixer312may be RF signals, which may be amplified by the DA314and/or by the PA316before transmission by the antenna303. While one mixer312is illustrated, several mixers may be used to upconvert the filtered baseband signals to one or more intermediate frequencies and to thereafter upconvert the intermediate frequency (IF) signals to a frequency for transmission. The RX path304includes a low noise amplifier (LNA)322, a mixer324, and a baseband filter (BBF)326. The LNA322, the mixer324, and the BBF326may be included in one or more RFICs, which may or may not be the same RFIC(s) that include the TX path components. RF signals received via the antenna303may be amplified by the LNA322, and the mixer324mixes the amplified RF signals with a receive local oscillator (LO) signal to convert the RF signal of interest to a different baseband frequency (i.e., downconvert). The baseband signals output by the mixer324may be filtered by the BBF326before being converted by an analog-to-digital converter (ADC)328to digital I or Q signals for digital signal processing. Certain transceivers may employ frequency synthesizers with a voltage-controlled oscillator (VCO) to generate a stable, tunable LO with a particular tuning range. Thus, the transmit LO frequency may be produced by a TX frequency synthesizer318, which may be buffered or amplified by amplifier320before being mixed with the baseband signals in the mixer312. Similarly, the receive LO frequency may be produced by an RX frequency synthesizer330, which may be buffered or amplified by amplifier332before being mixed with the RF signals in the mixer324. In certain aspects, the TX frequency synthesizer318and/or the RX frequency synthesizer330may include or be coupled to a circuit for monitoring an oscillation signal, as described in more detail herein. WhileFIGS.1-3provide a wireless communication system as an example application in which certain aspects of the present disclosure may be implemented to facilitate understanding, certain aspects described herein may be used for monitoring an oscillation signal in any of various other suitable systems. An Example Oscillation Circuit with Improved Failure Detection Electronic devices (e.g., wireless communication devices) may include oscillation circuits and performance monitoring systems to ensure robust operation in the oscillation circuits and to switch to backup circuitry, if indicated. In certain oscillation circuits (e.g., oscillation circuits in automotive safety systems), non-invasive performance monitoring systems are desirable. One or more automatic gain control (AGC) circuits may serve as a first-order monitor of the oscillation circuit. The AGC circuit may monitor an amplitude of the output signal of an oscillator in the oscillation circuit and regulate a bias current of the oscillator based on the amplitude of the output signal. FIG.4Ais a block diagram of an example oscillation circuit400A. The oscillation circuit400A may include a first AGC circuit402, a second AGC circuit408, a safety monitor416, a resonator418, a first oscillator420(e.g., a main oscillator), and a second oscillator422(e.g., a backup oscillator). The first oscillator420may include a first oscillator core circuit404and a first current source406, and the second oscillator422may include a second oscillator core circuit410and a second current source412. 
The first and second oscillator core circuits404,410may be coupled to the resonator418and configured to generate an oscillation signal to enable the resonator418to resonate. The first and second oscillator core circuits404,410may include any suitable type of oscillator core, such as a crystal oscillator core, for driving the resonator. An output node426of the oscillation circuit400A may provide a clock signal to one or more other systems. As shown, the first AGC circuit402may have an input coupled to an output of the first oscillator420and may have an output coupled to a control input of the first current source406. Similarly, the second AGC circuit408may have an input coupled to an output of the second oscillator422and may have an output coupled to a control input of the second current source412. The first current source406and the second current source412may be adjustable current sources. The first AGC circuit402may be configured to monitor the amplitude of the signal output from the first oscillator420. In one example case, if the first AGC circuit402senses that the amplitude of the signal output by the first oscillator420is too low, the first AGC circuit402may control the first current source406to increase the current supplied to the first oscillator core circuit404. The second AGC circuit408may be configured to monitor the amplitude of the signal output from the second oscillator422and control the second current source412in a similar manner. If one of the AGC circuits402and408fails, the oscillation circuit400A may be left without a first-order monitoring system to monitor the health of the oscillators420and422. Accordingly, the first AGC circuit402and the second AGC circuit408may both be communicatively coupled to the safety monitor416. The safety monitor416may be configured to monitor the performance (e.g., the health) of the first AGC circuit402and the second AGC circuit408. In certain aspects, the safety monitor416may send an inquiry to (e.g., ping) each of the first AGC circuit402and the second AGC circuit408. In response to the inquiry, the first AGC circuit402and/or the second AGC circuit408may respond with a status signal indicating whether each AGC circuit is functional and operating correctly. For other aspects, the first AGC circuit402and/or the second AGC circuit408may periodically or continuously send such a status signal to the safety monitor416. The oscillation circuit400A (and more specifically, the safety monitor416) may select between the signal generated by the first oscillator420and the signal generated by the second oscillator422as the output signal at the output node426. In certain aspects, the safety monitor416may output (e.g., via a bus) a first enable signal for enabling and/or disabling the first oscillator420and a second enable signal for enabling and/or disabling the second oscillator422, as shown inFIG.4A. In certain aspects, for example, the first oscillator420may be initially enabled, while the second oscillator422may be initially disabled. If the first AGC circuit402fails (e.g., as indicated by its associated status signal), or if the first AGC circuit402detects a failure from the first oscillator420, the safety monitor416may indicate (e.g., via the enable signals) for the oscillation circuit400A to switch to using the signal generated by the second oscillator422as the output signal (e.g., the safety monitor416may disable the first oscillator420and enable the second oscillator422). 
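For illustration, the switch-over behavior of the safety monitor416can be summarized as control logic like the following. The function name, the polling structure, and the fallback behavior when neither path is healthy are assumptions for this sketch; the actual safety monitor416is a hardware block and its selection policy may differ.

    # Illustrative control logic for the safety monitor (assumed names and structure).
    def select_oscillator(agc1_status_ok, agc2_status_ok, osc1_failure_detected):
        """Return enable flags (enable_osc1, enable_osc2) based on the AGC status
        signals and on whether the first AGC circuit reported an oscillator failure."""
        if agc1_status_ok and not osc1_failure_detected:
            return True, False       # keep the main oscillator 420 enabled
        if agc2_status_ok:
            return False, True       # switch to the backup oscillator 422
        return True, False           # assumed fallback: no healthy backup path, stay on the main oscillator

    # Example: the first AGC circuit detects a failing main oscillator.
    enable_osc1, enable_osc2 = select_oscillator(agc1_status_ok=True,
                                                 agc2_status_ok=True,
                                                 osc1_failure_detected=True)
    # enable_osc1 == False, enable_osc2 == True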
Accordingly, the first oscillator420may be considered as the main oscillator, and the second oscillator422may be considered as the backup oscillator. In an example, the oscillation circuit400A may include additional circuitry (not shown) configured to monitor the output signal of the first oscillator420at the output node426. In response to a detected failure at this location, the oscillation circuit400A may switch to using the signal generated by the second oscillator422as the output signal. However, sensing the output node426of the oscillation circuit400A may introduce frequency shift and/or phase noise degradation into the oscillation signal. In one example, the additional circuitry may include a clock halt detector. However, halt detectors are generally configured to monitor for complete clock failures (e.g., after the clock signal has already been lost), which may lead to the failure detection occurring too late. FIG.4Bis a graph400B illustrating an example of detecting a failure after an output signal (e.g., the signal at the output node426) of an oscillator (e.g. the first oscillator420) is lost. As shown, before a time430, the system may operate using a main oscillator (e.g., the first oscillator420). At time430, the main oscillator core circuit (e.g., the first oscillator core circuit404) may fail, and the output signal may begin to droop (e.g., the amplitude of the output signal may decrease). As illustrated, the system (and more specifically, circuitry such as a clock halt detector) may not detect the failure until a time440when the output signal has drooped significantly (e.g., has been lost). Once the failure is detected, the system may switch to a backup oscillator (e.g., the second oscillator422). However, once the output signal is lost, even if the system switches to the backup oscillator, other systems that use the output signal may be reset, restarted, or in an unintentional state, which is undesirable. Accordingly, certain aspects of the present disclosure provide a system with improved failure detection where the failure may be detected before the output signal is lost.FIG.4Cis a graph400C illustrating an example of improved failure detection, in accordance with certain aspects of the present disclosure. As shown, when the main oscillator fails at time430, the system with improved failure detection detects the failure at a time450, before the output signal is lost. As a result, the system is able to switch to the backup oscillator before the output signal is lost, thereby avoiding undesirable consequences for other components that use the output signal as a clock signal. To achieve this improved failure detection, aspects of the present disclosure provide apparatus and techniques for using an output of an AGC circuit (as opposed to an output of the oscillator) and logic (as opposed to a halt detector, for example) to monitor the health of the oscillator core circuit. Several methods exist for monitoring the health of the oscillator core circuit and the AGC circuit based on an output of the AGC circuit. However, these methods tend to be invasive, unreliable, and/or negatively affect performance of the oscillation circuit. For example, one method involves breaking the feedback loop between the oscillator and the AGC circuit, and verifying the gain of the AGC circuit using a test input signal with a known amplitude. 
However, this invasive method may only be used when the oscillation system is in a factory test mode (as opposed to a mission mode) and does not provide real-time failure detection in mission mode. Another method involves using an analog-to-digital converter (ADC) to measure the amplitude of the AGC input signal, but the ADC may generate kickback noise that can cause a frequency shift and/or phase noise degradation in the oscillation circuit. The ADC also consumes additional power and occupies additional area. Yet another method involves monitoring only the amplitude of the output signal of the AGC circuit. In this case, only the output amplitude (not the input amplitude) is known, so the gain cannot be accurately calculated. This method may not be reliable because the amplitude of the AGC output signal is sensitive to variations in process, voltage, and temperature (PVT) conditions, so a monitoring system may not be able to catch all failures across PVT variations. For example,FIG.5Ais a graph500A illustrating a constant failure threshold for an AGC circuit output that does not change in response to PVT variations. In this case, a fixed threshold value for the AGC output may be set high enough to avoid false failures across PVT variations. As illustrated, when a constant threshold is used to indicate a failure, the system may be unable to detect the failure across all PVT variations. That is, an amplitude of the output of an AGC circuit may be below the failure threshold (a passing indication) at certain PVT conditions, even when the oscillation circuit (and, more specifically, the AGC circuit and/or an oscillator core circuit of the oscillation circuit) is failing. The constant failure threshold may therefore cause the monitoring system to generate a false “passing” indication. Furthermore, using a constant threshold value cannot detect a situation referred to as a “weak failure”—in which the oscillation circuit is still functional, but performance has been degraded (e.g., in terms of phase noise and/or frequency accuracy)—across PVT variations. Accordingly, certain aspects of the present disclosure provide techniques and apparatus for detecting failure based on the output of the AGC circuit that takes into consideration the PVT variations of the system and effectively provide a PVT-tracking threshold for oscillation monitoring.FIG.5Bis a graph500B illustrating a failure threshold for the AGC circuit output that changes in response to PVT variations, in accordance with certain aspects of the present disclosure. The PVT-tracking threshold may allow the system to avoid false passes and to detect even “weak” failures (e.g., before the clock signal is lost, as at time450inFIG.4C) regardless of PVT variations. The PVT-tracking threshold may effectively be self-generated by a constant-transconductance bias generator, discussed further below. FIG.6is a block diagram of an example oscillation circuit600, in accordance with certain aspects of the present disclosure. The oscillation circuit600may be similar to the oscillation circuit400A, but with a process monitor604(also referred to herein as “logic”). In addition to the process monitor604, the oscillation circuit600may generally include an oscillator health monitor602, an AGC circuit606, an oscillator core circuit608, a current source610, and a resonator618. The oscillator core circuit608and the current source610may compose an oscillator, such as oscillator420or422inFIG.4A. 
In certain aspects, at least a portion of the process monitor604may be integrated with the AGC circuit606. The oscillator core circuit608may be configured to generate an oscillation signal to enable the resonator618to resonate. The current source610may be adjustable and may be configured to control an amplitude of the oscillation signal by providing an adjustable bias current. The AGC circuit606may have an input coupled to an output of the oscillator core circuit608. The AGC circuit606may be configured to output, at614, a first output signal to control the current source610as described above with reference toFIG.4A. Additionally, as illustrated at616, the first output signal may also be used as a first input of the process monitor604. The AGC circuit606may be further configured to output, at612, a second output to be used as a second input of the process monitor. The second output of the AGC circuit606may be a replica output that is produced by a replica AGC circuit configured to replicate a primary AGC circuit in the AGC circuit606, as described below. For example, the replica AGC circuit may have the same or a similar topology as the main AGC circuit and may be fabricated using the same semiconductor process as the main AGC circuit. The oscillator health monitor602may be configured to check (e.g., periodically, intermittently, or continuously) if the oscillator is functioning properly (e.g., is generating an adequate output signal). For example, the oscillator health monitor602may send a request to (e.g., ping) the process monitor604for an indication regarding the health of the oscillator. The process monitor604may be configured to compare the first output signal (e.g., at616) and the second output signal (e.g., at612) of the AGC circuit606, and report the comparison to the oscillator health monitor602. Accordingly, the process monitor604may be configured to effectively monitor the oscillation signal generated by the oscillator based on the two outputs of the AGC circuit606. In this manner, AGC amplitude and gain are evaluated without monitoring the AGC input amplitude. Instead, the AGC output amplitude is monitored with process-tracking amplitude sensing, as explained below. FIG.7Ais a block diagram of an example implementation of the process monitor604and the AGC circuit606ofFIG.6, in accordance with certain aspects of the present disclosure. The AGC circuit606may include a first AGC circuit702and a second AGC circuit704configured to replicate the first AGC circuit702. The process monitor604may include a comparator706, which may be tunable. The comparator706may have a first input (e.g., the negative input) coupled to the output of the first AGC circuit702and may have a second input (e.g., the positive input) coupled to the output of the second AGC circuit704. The comparator706may be configured to compare the output signals from the first AGC circuit702and the second AGC circuit704to determine whether the oscillation circuit600(and, more specifically, the AGC circuit606and/or the oscillator core circuit608) is failing. The output of the first AGC circuit702(labeled “OUT”) may be determined as OUT=(REF−IN)*A where IN is an input signal of the first AGC circuit702(the output signal from the oscillator), and A is a gain of the first AGC circuit702. REF may be a self-generated reference signal. That is, the oscillation circuit (e.g., the first AGC circuit702and the second AGC circuit704) may generate REF via a constant-transconductance bias generator. 
When the oscillation circuit is functioning properly, the amplitude of the input signal IN will be sufficiently high, and the amplitude of the output signal OUT will be relatively low (due to the decreased difference between REF and IN). Accordingly, the health of the oscillation circuit may be monitored in some cases by monitoring the output of the first AGC circuit702and determining when the output signal OUT is sufficiently high. One example method for monitoring when the output signal OUT is sufficiently high involves comparing the output signal OUT with another signal (e.g., a threshold). As explained with respect toFIGS.5A and5B, having the threshold vary according to PVT variations is desirable so that weak failures may be detected across PVT conditions. Accordingly, certain aspects of the present disclosure provide a method for comparing the output signal OUT with a replica output signal from the second AGC circuit704. The output of the second AGC circuit704(labeled “OUTREP”) may be determined as OUTREP=REF*AREP where AREPis a gain of the second AGC circuit704. Because the second AGC circuit704is configured to replicate the first AGC circuit702, the output signal OUTREPwill vary according to PVT in a manner similar to the output signal OUT, thereby providing a PVT-dependent threshold. Thus, a relative comparison between OUT and OUTREPis performed, as opposed to an absolute comparison to a fixed threshold. The comparator706may be configured to effectively monitor the oscillation signal generated by the oscillator (e.g., the oscillator core circuit608and the current source610inFIG.6) based on the output of the first AGC circuit702and the output of the second AGC circuit704. Accordingly, the oscillation circuit may be able to effectively monitor the oscillation signal generated by the oscillator without directly sensing an amplitude of the oscillation signal. According to certain aspects, the comparator706may be configured to compare the output signal OUT with the output signal OUTREP, and output (labeled “AGC_OK”) a logic high signal (e.g., logic 1) when an amplitude of OUTREPis greater than or equal to an amplitude of OUT, thereby indicating that the oscillation circuit is functioning properly. Based on the above equations and this circuit setup, the comparator706should indicate the oscillation circuit is okay (AGC_OK=1) when OUTREPis greater than or equal to OUT. In this situation, REF*AREP≥(REF−IN)*A. Rearranging this inequality to solve for IN leads to the following: IN≥REF*(1−AREP/A). Therefore, the gain ratio of AREPto A may be set such that this inequality is true when the amplitude of IN is acceptable (e.g., when the oscillation circuit, and more specifically the oscillator core circuit and the AGC circuit, are functioning properly). The ratio of the gain of the second AGC circuit704to the gain of the first AGC circuit702(e.g., the ratio of AREPto A) may be calculated as AREP/A=m/n, where n depends on a transistor size ratio between transistors in the first AGC circuit702and in the comparator706, and m depends on a transistor size ratio between transistors in the second AGC circuit704and in the comparator706, as discussed further below. Accordingly, the gain ratio of AREPto A may be set by designing (and in some cases adjusting during factory calibration) transistor sizes in the first AGC circuit702, the second AGC circuit704, and the comparator706, as discussed further below. 
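By way of illustration only, the threshold relationship above can be checked numerically. The following sketch is not the disclosed circuit; the reference level REF, the gains A and AREP, and the function names are assumptions chosen solely to show that comparing OUT with OUTREP is equivalent to comparing IN against the PVT-tracking threshold REF*(1−AREP/A).

```python
# Numeric sketch (not the disclosed circuit): checks that comparing
# OUT = (REF - IN) * A against OUTREP = REF * AREP is equivalent to testing
# IN >= REF * (1 - AREP / A). REF, A, and AREP below are example values only.

def agc_ok(amp_in, ref, gain_a, gain_arep):
    """Return True (logic 1) when OUTREP >= OUT, i.e., no failure is indicated."""
    out = (ref - amp_in) * gain_a      # output of the first (main) AGC circuit
    out_rep = ref * gain_arep          # output of the replica AGC circuit
    return out_rep >= out

def threshold(ref, gain_a, gain_arep):
    """Equivalent input-referred threshold: IN >= REF * (1 - AREP/A)."""
    return ref * (1.0 - gain_arep / gain_a)

if __name__ == "__main__":
    REF, A, AREP = 1.0, 4.0, 1.0       # example values; AREP/A = m/n = 1/4
    th = threshold(REF, A, AREP)       # 0.75 * REF for these numbers
    for amp_in in (0.5, 0.74, 0.75, 0.9):
        assert agc_ok(amp_in, REF, A, AREP) == (amp_in >= th)
        print(f"IN={amp_in:.2f}  AGC_OK={agc_ok(amp_in, REF, A, AREP)}")
```

With the example values REF=1, A=4, and AREP=1 (i.e., an assumed m/n of 1/4), the input-referred threshold is 0.75*REF, and the comparator decision flips exactly at that input amplitude.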
FIG.7Bis a schematic diagram of an oscillation circuit700with an example implementation of the first AGC circuit702, the second AGC circuit704, and the comparator706ofFIG.7A, in accordance with certain aspects of the present disclosure. As shown, the oscillation circuit700also includes an oscillator701coupled to the first AGC circuit702. The oscillator701may be similar to the oscillator420or422inFIG.4A, and may include an adjustable current source (implemented by transistor708, for example), and an oscillator core circuit705having an input coupled to a drain of transistor708and having an output coupled to a resonator707. The oscillator701may be configured to generate an oscillation signal. More specifically, the oscillator core circuit705may be configured to generate the oscillation signal to enable the resonator707to resonate. The adjustable current source (implemented by transistor708) may be configured to provide an adjustable bias current (labeled “IOUT”) to control an amplitude of the oscillation signal. The comparator706may include a first transistor718, a second transistor720, and an inverter730. The comparator706may have a first input (e.g., a gate of transistor718) coupled to an output of the first AGC circuit702(e.g., a node711coupled to a gate of transistor708), and a second input (e.g., a gate of transistor720) coupled to an output of the second AGC circuit704(a node721). The transistor718may be a p-type transistor and may have a source coupled to a power rail (labeled “VDD”) of the oscillation circuit700. The transistor720may be an n-type transistor, which may have a source coupled to a reference potential node (e.g., electrical ground) for the oscillation circuit700and may have a drain coupled to a drain of the transistor718. In some examples, at least one of the transistor718and the transistor720may be tunable. The inverter730may have an input coupled to the drain of the transistor718and to the drain of the transistor720, and an output coupled to an output of the comparator706(labeled “AGC_OK”). The first AGC circuit702may have an input coupled to an output of the oscillator701and may have an output (e.g., at node711) coupled to a control input of the adjustable current source (e.g., a gate of the transistor708). The first AGC circuit702may include a first transistor710having a source coupled to the power rail. The transistor710may be a p-type transistor. The first AGC circuit702may also include a second transistor712having a source coupled to the reference potential node and having a drain coupled to a drain of the transistor710. The transistor712may be an n-type transistor. The first AGC circuit702may also include a third transistor714having a source coupled to the power supply rail and having a gate coupled to a drain of the transistor714, to a gate of the transistor710, and to a gate of the transistor718of the comparator706. The transistor714may be a p-type transistor. The first AGC circuit702may also include a fourth transistor716having a drain coupled to the drain and the gate of the transistor714. The transistor716may be an n-type transistor. The first AGC circuit702may also include a first resistive element R1coupled between a source of the transistor716and the reference potential node. In some examples, the first AGC circuit702may further include a second resistive element R2coupled between a gate of the transistor712and the drain of the transistor712, and a third resistive element R3coupled between the drain of the transistor712and a gate of the transistor716. 
In some examples, the first AGC circuit702may also include a first capacitive element C1coupled between the gate of the transistor716and the reference potential node. According to certain aspects, the transistor710, the transistor712, the transistor714, the transistor716, the first resistive element R1, the second resistive element R2, and the third resistive element R3form at least part of a constant transconductance bias generator configured to generate a reference current (labeled “IREF”) that is dependent on process, voltage, and temperature (PVT). The reference current is represented by REF in the equations above described with respect toFIG.7A. According to certain aspects, the oscillation circuit700may also include a second capacitive element C2coupled between the gate of the transistor712and the output of the oscillator701. The second capacitive element C2may be used to AC couple the output of the oscillator701to the input of the first AGC circuit702. The second AGC circuit704may be configured to replicate the first AGC circuit702. For example, the second AGC circuit704may have the same or a similar topology as the first AGC circuit702and may be fabricated using the same semiconductor process as the first AGC circuit702. The second AGC circuit704may include a first transistor722having a source coupled to the power supply rail, and a second transistor726having a source coupled to the reference potential node and a drain coupled to a drain of the transistor722, to a gate of the transistor726, and to a gate of the transistor720(e.g., to node721). The transistor722may be a p-type transistor, and the transistor726may be an n-type transistor. The second AGC circuit704may also include a third transistor724having a source coupled to the power supply rail and having a drain coupled to a gate of the transistor724and to a gate of the transistor722. The transistor724may be a p-type transistor. The second AGC circuit704may also include a fourth transistor728having a drain coupled to the drain and the gate of the transistor724and having a gate coupled to the gate and the drain of transistor726, and a resistive element R4coupled between a source of the transistor728and the reference potential node. The transistor728may be an n-type transistor. The transistor722, the transistor726, the transistor724, and the transistor728form at least part of another constant transconductance bias generator configured to generate another reference current (also labeled “IREF”). According to certain aspects and as shown inFIG.7B, an input of the second AGC circuit704is open-circuited, such that the input of the second AGC circuit704is configured to have zero current. In this manner, the second AGC circuit704amplifies the difference between REF and 0, as illustrated inFIG.7A. According to certain aspects, a transistor size ratio between the transistor720, the transistor726, and the transistor728may be m:1:1 (where m≥1), and a transistor size ratio between the transistor718, the transistor710, and the transistor714may be n:1:1 (where n≥1). The value of m may represent the gain of the second AGC circuit704, and the value of n may represent the gain of the first AGC circuit702(or at least the ratio of m to n may be considered to represent the ratio of AREPto A). 
Accordingly, as mentioned above, the ratio of the gain of the second AGC circuit704(e.g., AREP) to the gain of the first AGC circuit702(e.g., A) may be set by adjusting transistor sizes in the first AGC circuit702, the second AGC circuit704, and the comparator706. As described above, the first AGC circuit702may include a constant transconductance bias generator configured to generate a first reference current (e.g., IREF) that is dependent on PVT conditions of the first AGC circuit702, and the second AGC circuit704may include another constant transconductance bias generator configured to generate a second reference current. If the transistors in the first and second AGC circuits are fabricated using the same semiconductor process, receive the same power supply rail voltage (Vdd), and are subjected to the same temperature, the second reference current should be equal to the first reference current. In this case, the ratio of m to n may be set such that the comparator706is configured to output a logic high signal when the amplitude of the oscillation signal is estimated to be greater than or equal to an amplitude of the reference current multiplied by (1−m/n). In certain aspects, the adjustable current source may be implemented using the transistor708. As shown, transistor708may have a source coupled to the power supply rail, a drain coupled to the oscillator core circuit705, and a gate coupled to the gate of the transistor710and to the transistor714. In certain aspects, the transistor708may be a p-type transistor. In certain aspects, a transistor size ratio between the transistor708, the transistor710, and the transistor714is x:1:1, where x≥1. According to certain aspects, the oscillation circuit700may also include a backup oscillator (e.g., as discussed with respect toFIG.4Aand analogous to oscillator701) configured to generate another oscillation signal, a third AGC circuit (analogous to the first AGC circuit702), a fourth AGC circuit (analogous to the second AGC circuit704), and another comparator (analogous to the comparator706). In certain aspects, the backup oscillator may include another oscillator core circuit for coupling to the resonator, and another adjustable current source coupled to the other oscillator core circuit and configured to control an amplitude of the other oscillation signal. In this case, the third AGC circuit may have an input coupled to an output of the backup oscillator and may have an output coupled to a control input of the other adjustable current source. In some aspects, the process monitor604(or the other comparator analogous to comparator706) may have a third input coupled to the output of the third AGC circuit and may have a fourth input coupled to the output of the fourth AGC circuit. In certain aspects, the fourth AGC circuit may be configured to replicate (e.g., replicate a topology of) the third AGC circuit. Example Operations for Oscillation Monitoring FIG.8is a flow diagram illustrating example operations800for oscillation monitoring, in accordance with certain aspects of the present disclosure. The operations800may be performed by an oscillation circuit, such as the oscillation circuit600ofFIG.6or the oscillation circuit700ofFIG.7B. The flow diagram includes blocks representing the operations800. The operations800may begin, at block802, by generating an oscillation signal (e.g., IN) with an oscillator (e.g., oscillator701) driving a resonator (e.g., resonator618,707). 
At block804, a first automatic gain control (AGC) circuit (e.g., the first AGC circuit702) may generate a first signal (e.g., OUT) based on the oscillation signal. At block806, the first AGC circuit may control a bias current (e.g., IOUT) for the oscillator based on the first signal. At block808, a second AGC circuit (e.g., the second AGC circuit704) may generate a second signal (e.g., OUTREP). The second AGC circuit may replicate the first AGC circuit. For example, the second AGC circuit may have the same or a similar topology as the first AGC circuit and may be fabricated using the same semiconductor process as the first AGC circuit. At block810, the oscillation circuit may effectively monitor the oscillation signal (e.g., using logic such as the comparator706or the process monitor604) based on the first signal and the second signal. According to certain aspects, effectively monitoring the oscillation signal at block810may involve effectively monitoring the oscillation signal without directly sensing an amplitude of the oscillation signal. According to certain aspects, effectively monitoring the oscillation signal at block810may involve comparing the first signal and the second signal, and outputting a status signal indicating a failure (e.g., AGC_OK is logic low) when an amplitude of the second signal is lower than an amplitude of the first signal. According to certain aspects, the operations800may further involve generating another oscillation signal with a backup oscillator (e.g., oscillator422ofFIG.4A) and, in response to the status signal indicating the failure, switching to using the other oscillation signal instead of the oscillation signal. According to certain aspects, a first transistor size ratio between a first n-type transistor (e.g., the transistor720) of the comparator and a second n-type transistor (e.g., the transistor726or the transistor728) of the second AGC circuit is m:1, and a second transistor size ratio between a first p-type transistor (e.g., the transistor718) of the comparator to a second p-type transistor (e.g., the transistor710or the transistor714) of the first AGC circuit is n:1. In certain aspects, the second signal from the second AGC circuit controls the first n-type transistor of the comparator and the second n-type transistor of the second AGC circuit, and the first signal from the first AGC circuit controls the first p-type transistor of the comparator and the second p-type transistor of the first AGC circuit. According to certain aspects, a ratio of m to n is equal to a ratio of a gain of the second AGC circuit (e.g., gain AREP) to a gain of the first AGC circuit (e.g., gain A). According to certain aspects, generating the first signal may involve generating, with a constant transconductance bias generator (e.g., transistors710,712,714, and716) of the first AGC circuit, a first reference current (e.g., IREFthrough transistor710) that is dependent on process, voltage, and temperature (PVT) of the first AGC circuit. In some aspects, generating the second signal may involve generating a second reference current (e.g., IREFthrough transistor726), with the second AGC circuit, that is equal to the first reference current. According to certain aspects, effectively monitoring the oscillation signal at block810may involve outputting a logic high signal (e.g., AGC_OK is logic high) from the comparator when an amplitude of the oscillation signal is estimated to be greater than or equal to an amplitude of the first reference current multiplied by (1−m/n). 
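The following behavioral sketch restates the monitoring flow of the operations800in software-style pseudologic. The disclosed implementation is analog circuitry (e.g., the comparator706ofFIG.7B); the sampling functions, the polling period, and the backup-switch callback below are illustrative assumptions only.

```python
# Behavioral sketch of the monitoring flow of operations 800; names and the
# polling interval are assumptions, not part of the disclosure.
import time

def agc_ok(out, out_rep):
    # Block 810: AGC_OK is logic high when OUTREP >= OUT (no failure detected).
    return out_rep >= out

def health_check(sample_main_agc, sample_replica_agc):
    # The oscillator health monitor "pings" the process monitor for a status.
    return agc_ok(sample_main_agc(), sample_replica_agc())

def run_monitor(sample_main_agc, sample_replica_agc, switch_to_backup, period_s=0.001):
    # Periodic checking; could equally be intermittent or continuous.
    while True:
        if not health_check(sample_main_agc, sample_replica_agc):
            switch_to_backup()   # e.g., select the backup oscillator on a failure
            break
        time.sleep(period_s)
```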
Example Aspects In addition to the various aspects described above, specific combinations of aspects are within the scope of the disclosure, some of which are detailed below: Aspect 1: An oscillation circuit comprising: an oscillator configured to generate an oscillation signal, the oscillator comprising: an oscillator core circuit for coupling to a resonator and configured to generate the oscillation signal to enable the resonator to resonate; and an adjustable current source coupled to the oscillator core circuit and configured to control an amplitude of the oscillation signal; a first automatic gain control (AGC) circuit having an input coupled to an output of the oscillator and having an output coupled to a control input of the adjustable current source; a second AGC circuit configured to replicate the first AGC circuit; and logic having a first input coupled to the output of the first AGC circuit and having a second input coupled to an output of the second AGC circuit. Aspect 2: The oscillation circuit of Aspect 1, wherein the logic is configured to effectively monitor the oscillation signal based on the output of the first AGC circuit and the output of the second AGC circuit. Aspect 3: The oscillation circuit of Aspect 1 or 2, wherein the oscillation circuit is configured to effectively monitor the oscillation signal without directly sensing an amplitude of the oscillation signal. Aspect 4: The oscillation circuit of any of the preceding Aspects, wherein the logic comprises a comparator having a first input coupled to the output of the first AGC circuit and having a second input coupled to the output of the second AGC circuit. Aspect 5: The oscillation circuit of Aspect 4, wherein the comparator comprises: a first p-type transistor having a source coupled to a power supply rail; a first n-type transistor having a source coupled to a reference potential node for the oscillation circuit and having a drain coupled to a drain of the first p-type transistor, wherein at least one of the first p-type transistor or the first n-type transistor is tunable; and an inverter having an input coupled to the drain of the first p-type transistor and to the drain of the first n-type transistor and having an output coupled to an output of the comparator. Aspect 6: The oscillation circuit of Aspect 5, wherein the first AGC circuit comprises: a second p-type transistor having a source coupled to the power supply rail; a second n-type transistor having a source coupled to the reference potential node and having a drain coupled to a drain of the second p-type transistor; a third p-type transistor having a source coupled to the power supply rail and having a gate coupled to a drain of the third p-type transistor, to a gate of the second p-type transistor, and to a gate of the first p-type transistor; a third n-type transistor having a drain coupled to the drain and the gate of the third p-type transistor; and a first resistive element coupled between a source of the third n-type transistor and the reference potential node. Aspect 7: The oscillation circuit of Aspect 6, wherein the first AGC circuit further comprises: a second resistive element coupled between a gate of the second n-type transistor and the drain of the second n-type transistor; a third resistive element coupled between the drain of the second n-type transistor and a gate of the third n-type transistor; and a first capacitive element coupled between the gate of the third n-type transistor and the reference potential node. 
Aspect 8: The oscillation circuit of Aspect 7, further comprising a second capacitive element coupled between the gate of the second n-type transistor and the output of the oscillator. Aspect 9: The oscillation circuit of Aspect 7 or 8, wherein the second n-type transistor, the third n-type transistor, the first resistive element, the second resistive element, and the third resistive element form at least part of a constant transconductance bias generator configured to generate a reference current that is dependent on process, voltage, and temperature (PVT). Aspect 10: The oscillation circuit of any of Aspects 6-9, wherein a transistor size ratio between the first p-type transistor, the second p-type transistor, and the third p-type transistor is n:1:1, where n≥1. Aspect 11: The oscillation circuit of any of Aspects 6-10, wherein: the adjustable current source comprises a fourth p-type transistor having a source coupled to the power supply rail, having a drain coupled to the oscillator core circuit, and having a gate coupled to the gate of the second p-type transistor and to the third p-type transistor; and a transistor size ratio between the fourth p-type transistor, the second p-type transistor, and the third p-type transistor is x:1:1, where x≥1. Aspect 12: The oscillation circuit of any of Aspects 6-10, wherein the second AGC circuit comprises: a fourth p-type transistor having a source coupled to the power supply rail; a fourth n-type transistor having a source coupled to the reference potential node and having a drain coupled to a drain of the fourth p-type transistor, to a gate of the fourth n-type transistor, and to a gate of the first n-type transistor; a fifth p-type transistor having a source coupled to the power supply rail and having a drain coupled to a gate of the fifth p-type transistor and to a gate of the fourth p-type transistor; a fifth n-type transistor having a drain coupled to the drain and the gate of the fifth p-type transistor; and a second resistive element coupled between a source of the fifth n-type transistor and the reference potential node. Aspect 13: The oscillation circuit of Aspect 12, wherein a transistor size ratio between the first n-type transistor, the fourth n-type transistor, and the fifth n-type transistor is m:1:1, where m≥1. Aspect 14: The oscillation circuit of Aspect 13, wherein a transistor size ratio between the first p-type transistor, the second p-type transistor, and the third p-type transistor is n:1:1, where n≥1. Aspect 15: The oscillation circuit of Aspect 14, wherein a ratio of m/n is equal to a ratio of a gain of the second AGC circuit to a gain of the first AGC circuit. Aspect 16: The oscillation circuit of Aspect 15, wherein: the first AGC circuit comprises a constant transconductance bias generator configured to generate a reference current; and the ratio of m/n is set such that the comparator is configured to output a logic high signal when the amplitude of the oscillation signal is estimated to be greater than or equal to an amplitude of the reference current multiplied by (1−m/n). Aspect 17: The oscillation circuit of any of Aspects 4-16, wherein the comparator is configured to output a logic high signal when an amplitude of a signal at the output of the second AGC circuit is greater than or equal to an amplitude of a signal at the output of the first AGC circuit. 
Aspect 18: The oscillation circuit of any of the preceding Aspects, wherein the first AGC circuit comprises a constant transconductance bias generator configured to generate a first reference current that is dependent on process, voltage, and temperature (PVT) of the first AGC circuit. Aspect 19: The oscillation circuit of Aspect 18, wherein the second AGC circuit is configured to generate a second reference current that is equal to the first reference current. Aspect 20: The oscillation circuit of any of the preceding Aspects, wherein an input of the second AGC circuit is open-circuited, such that the input of the second AGC circuit is configured to have zero current. Aspect 21: The oscillation circuit of any of the preceding Aspects, further comprising: a backup oscillator configured to generate another oscillation signal, the backup oscillator comprising: another oscillator core circuit for coupling to the resonator; and another adjustable current source coupled to the other oscillator core circuit and configured to control an amplitude of the other oscillation signal; and a third AGC circuit having an input coupled to an output of the backup oscillator and having an output coupled to a control input of the other adjustable current source, wherein the logic has a third input coupled to the output of the third AGC circuit. Aspect 22: The oscillation circuit of Aspect 21, further comprising a fourth AGC circuit configured to replicate a topology of the third AGC circuit, wherein the logic has a fourth input coupled to an output of the fourth AGC circuit. Aspect 23: A method of oscillation monitoring, comprising: generating an oscillation signal with an oscillator driving a resonator; generating a first signal with a first automatic gain control (AGC) circuit based on the oscillation signal; controlling a bias current for the oscillator based on the first signal; generating a second signal with a second AGC circuit, the second AGC circuit replicating the first AGC circuit; and effectively monitoring the oscillation signal based on the first signal and the second signal. Aspect 24: The method of Aspect 23, wherein the effectively monitoring comprises effectively monitoring the oscillation signal without directly sensing an amplitude of the oscillation signal. Aspect 25: The method of Aspect 23 or 24, wherein the effectively monitoring comprises comparing the first signal and the second signal and outputting a status signal indicating a failure when an amplitude of the second signal is lower than an amplitude of the first signal. Aspect 26: The method of Aspect 25, further comprising generating another oscillation signal with a backup oscillator, and in response to the status signal indicating the failure, switching to using the other oscillation signal instead of the oscillation signal. 
Aspect 27: The method of any of Aspects 23-26, wherein: a first transistor size ratio between a first n-type transistor of a comparator and a second n-type transistor of the second AGC circuit is m:1; a second transistor size ratio between a first p-type transistor of the comparator to a second p-type transistor of the first AGC circuit is n:1; the second signal from the second AGC circuit controls the first n-type transistor of the comparator and the second n-type transistor of the second AGC circuit; the first signal from the first AGC circuit controls the first p-type transistor of the comparator and the second p-type transistor of the first AGC circuit; and a ratio of m/n is equal to a ratio of a gain of the second AGC circuit to a gain of the first AGC circuit. Aspect 28: The method of any of Aspects 23-27, wherein generating the first signal comprises generating, with a constant transconductance bias generator of the first AGC circuit, a first reference current that is dependent on process, voltage, and temperature (PVT) of the first AGC circuit. Aspect 29: The method of Aspect 28, wherein generating the second signal comprises generating a second reference current, with the second AGC circuit, that is equal to the first reference current. Aspect 30: The method of Aspect 28 or 29, wherein the effectively monitoring comprises outputting a logic high signal from a comparator when an amplitude of the oscillation signal is estimated to be greater than or equal to an amplitude of the first reference current multiplied by (1−m/n). Aspect 31: An apparatus with oscillation monitoring, comprising: means for generating an oscillation signal; means for generating a first automatic gain control (AGC) signal based on the oscillation signal; means for controlling a bias current for the means for generating the oscillation signal, based on the first AGC signal; means for generating a second AGC signal, the means for generating the second AGC signal replicating the means for generating the first AGC signal; and means for effectively monitoring the oscillation signal based on the first AGC signal and the second AGC signal. Additional Considerations The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application-specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. For example, means for generating an oscillation signal may include an oscillator (e.g., oscillator420or422depicted inFIG.4Aor oscillator701shown inFIG.7B) driving a resonator (e.g., resonator418depicted inFIG.4A, resonator618portrayed inFIG.6, or resonator707shown inFIG.7B). Means for generating a first AGC signal may include a main AGC circuit (e.g., the first AGC circuit702depicted inFIGS.7A and7B). Means for controlling a bias current for the means for generating the first signal may include an adjustable current source (e.g., the current source406or412shown inFIG.4A, the current source610portrayed inFIG.6, the transistor708illustrated inFIG.7B). Means for generating the second AGC signal may include a replica AGC circuit (e.g., the second AGC circuit704depicted inFIGS.7A and7B). 
Means for effectively monitoring may include logic, such as a processor or process monitor (e.g., the safety monitor416depicted inFIG.4A, the process monitor604and/or the oscillator health monitor602illustrated inFIG.6, or the comparator706shown inFIGS.7A and7B). As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database, or another data structure), ascertaining, and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), and the like. Also, “determining” may include resolving, selecting, choosing, establishing, and the like. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). The various illustrative logical blocks, modules, and circuits described in connection with the present disclosure may be implemented or performed with discrete hardware components designed to perform the functions described herein. The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes, and variations may be made in the arrangement, operation, and details of the methods and apparatus described above without departing from the scope of the claims. | 64,264 |
11863145 | DETAILED DESCRIPTION Automatic gain control (AGC) may be accomplished in a radio frequency (RF) amplifier in a hybrid fiber-coaxial (HFC) network, consistent with embodiments of the present disclosure, using a wideband RF tuner to select multiple pilot channels (e.g., frequencies in lower and upper portions of an RF signal spectrum) for use in measuring power and determining a correction to be applied to the RF amplifier. The power may be measured, for example, using a received signal strength indicator (RSSI) from the wideband RF tuner or using a power detector circuit after the wideband RF tuner. Using the wideband RF tuner allows selectable gain and/or tilt control across a wideband spectrum, such as a channel spectrum of a CATV downstream RF signal, to maintain stable RF output levels as the RF amplifier performance or input level varies due to, for example, higher frequency operation or temperature induced changes. In the illustrated embodiments described herein, the RF amplifier is a line extender amplifier used in a CATV HFC network to amplify a wideband RF spectrum of up to 1.8 GHz; however, the AGC systems and methods using a wideband RF tuner described herein may be used in other types of RF amplifiers in HFC networks and at other frequency ranges. As used herein, “channel” refers to a sub-range of frequencies within a spectrum of frequencies, which are capable of being modulated to carry information. A “channel” may be identified as a single frequency in the sub-range of frequencies, and as used herein, “selecting a channel” may include selecting a single frequency that identifies the channel. As used herein, a “downstream RF signal” (also referred to as a forward RF signal) is an RF signal being sent from a source, such as a CATV headend/hub, to a destination, such as a CATV subscriber. As used herein, “composite power” refers to the total power of multiple frequencies from an RF signal. As used herein, “channel spectrum” refers to a predefined range of radio frequencies divided into a plurality of sub-ranges of frequencies (referred to as physical channels) and capable of being modulated to carry information. A “CATV channel spectrum” is a channel spectrum used for delivering video and/or data in a CATV network and is not limited to a particular range of frequencies. As used herein, “module” is a structural term referring to a self-contained assembly of components (e.g., electronic, optical or opto-electronic components) that together perform a dedicated function. The “modules” discussed herein (e.g., optical receiver module and RF amplifier module) are used as the names for structure and thus the term “module” is not being used as a nonce word in the present application. As used herein, the terms “circuit” and “circuitry” refer to physical electronic components (i.e., hardware) and any software and/or firmware (i.e., code), which may configure the hardware, be executed by the hardware, and/or otherwise be associated with the hardware. A particular processor and memory, for example, may comprise a first “circuit” when executing a first portion of code to perform a first function and may comprise a second “circuit” when executing a second portion of code to perform a second function. As used herein, the term “coupled” refers to any connection, coupling, link or the like between elements. Such “coupled” elements are not necessarily directly connected to one another and may be separated by intermediate components. 
Referring toFIG.1, an example of a CATV network100implementing automatic gain control (AGC) using a wideband RF tuner, consistent with embodiments of the present disclosure, is described in greater detail. The system and method for AGC using a wideband RF tuner may be implemented, for example, in line extender RF amplifiers119in the CATV network100, as described in greater detail below. In general, the CATV network100is a hybrid fiber-coaxial (HFC) network capable of delivering both cable television programming (i.e., video) and IP data services (e.g., internet and voice over IP) to customers or subscribers102through the same fiber optic cables and coaxial cables (i.e., trunk lines). Such a CATV network100is commonly used by service providers, such as Comcast Corporation, to provide combined video, voice and broadband internet services to the subscribers102. Although example embodiments of CATV networks are described herein based on various standards (e.g., Data over Cable Service Interface Specification or DOCSIS), the concepts described herein may be applicable to other embodiments of CATV networks using other standards. Multiple cable television channels and IP data services (e.g., broadband internet and voice over IP) may be delivered together simultaneously in the CATV network100by transmitting signals using frequency division multiplexing over a plurality of physical channels across a CATV channel spectrum. One example of the CATV downstream channel spectrum (also referred to as forward spectrum) includes channels from 650 MHz to 1794 MHz, but the CATV channel spectrum may be expanded even further to increase bandwidth for data transmission. In a CATV channel spectrum, some of the physical channels may be allocated for cable television channels and other physical channels may be allocated for IP data services. Other channel spectrums and bandwidths may also be used and are within the scope of the present disclosure. In addition to the signals being carried downstream (also referred to as forward signals) to deliver the video and IP data to the subscribers102, the CATV network100may also carry signals (e.g., IP data or control signals) upstream from the subscribers (also referred to as reverse signals), thereby providing bi-directional communication over the trunks. According to one example, the signal spectrum for the reverse signals carried upstream may be up to 600 MHz. The CATV network100generally includes a headend/hub110connected via optical fiber trunk lines112to one or more optical nodes114, which are connected via a coaxial cable distribution network116to customer premises equipment (CPE)118at subscriber locations102. The headend/hub110receives, processes and combines the content (e.g., broadcast video, narrowcast video, and internet data) to be carried over the optical fiber trunk lines112as optical signals. The optical fiber trunk lines112include forward path optical fibers111for carrying downstream optical signals from the headend/hub110and return or reverse path optical fibers113for carrying upstream optical signals to the headend/hub110. The optical nodes114provide an optical-to-electrical interface between the optical fiber trunk lines112and the coaxial cable distribution network116. The optical nodes114thus receive downstream optical signals and transmit upstream optical signals and transmit downstream (forward) RF electrical signals and receive upstream (reverse) RF electrical signals. 
The cable distribution network116includes coaxial cables115including trunk coaxial cables connected to the optical nodes114and feeder coaxial cables connected to the trunk coaxial cables. Subscriber drop coaxial cables are connected to the distribution coaxial cables using taps117and are connected to customer premises equipment118at the subscriber locations102. The customer premises equipment118may include set-top boxes for video and cable modems for data. One or more line extender RF amplifiers119may also be coupled to the coaxial cables116for amplifying the forward signals (e.g., CATV signals) being carried downstream to the subscribers102and for amplifying the reverse signals being carried upstream from the subscribers102. In this embodiment, the line extender RF amplifiers119may include an automatic gain control (AGC) system using a wideband RF tuner, as described herein, for controlling the gain of at least the downstream or forward RF signals. In other embodiments, the system and method for AGC using a wideband RF tuner may be used in other RF amplifiers. Referring toFIG.2, an RF amplifier200including automatic gain control (AGC) using a wideband RF tuner to select pilot channels, is shown and described in greater detail. In one example, the controlled RF amplifier200may be a line extender amplifier that supports DOCSIS 4.0 FDD (frequency division duplex) capabilities with downstream operation at frequencies up to 1794 MHz and upstream operation at frequencies up to 684 MHz. FDD refers to bi-directional broadband RF communication where the downstream and upstream each have their own dedicated, non-overlapping frequency spectrums. The RF amplifier200may be a line extender RF amplifier such as the line extender RF amplifier119used in the CATV network100described above. The RF amplifier200includes at least first and second ports202,204configured to be coupled to an electrical path carrying forward and reverse RF signals206,208, such as the coaxial cable115carrying forward RF signals downstream and carrying reverse RF signals upstream in the CATV network100. The RF amplifier200may be located inside an amplifier housing201, such as a weatherproof housing configured for an outdoor environment, with the ports202,204located on the outside of the amplifier housing201. The first port202provides an input for forward signals206and an output for reverse signals208, and the second port204provides an input for reverse signals208and an output for forward signals206. The RF amplifier200may include forward and reverse test point circuits212,214coupled to the respective ports202,204via respective directional couplers232,234. The forward and reverse test point circuits212,214allow testing of the forward and reverse signals206,208before and after amplification, for example, as described in U.S. Pat. No. 6,769,133, which is fully incorporated herein by reference. The RF amplifier200further includes a first diplex filter222coupled to the port202, a second diplex filter224coupled to the port204, and forward and reverse gain stages242,244coupled between the diplex filters222,224. The diplex filters222,224separate the forward and reverse signals that travel on the same electrical path at the ports202,204. The first diplex filter222separates and passes the forward signals206received on the first port202for amplification by the forward gain stage242, and the second diplex filter224separates and passes the reverse signals208received on the second port204for amplification by the reverse gain stage244. 
The diplex filters and gain stages may be implemented using known circuit components in RF amplifiers. The RF amplifier200may also include other circuit components (not shown) such as attenuators, equalizers, highpass filters, lowpass filters, system trim circuits, and reverse 6 dB switching circuits. In this embodiment, an automatic gain control (AGC) system250is coupled to at least the forward gain stage242that amplifies the downstream RF signal206. The AGC system250provides automatic gain control (AGC) and/or automatic level/slope control (ALSC) based on selected pilot channels in the downstream RF signal206. AGC or ALSC is used to maintain stable RF output levels of the amplifier gain stage242as RF input levels vary, for example, due to temperature induced changes and coaxial and passive losses. As will be described in greater detail below, the AGC system250uses a wideband RF tuner252to select multiple pilot channels (also referred to as test channels) from a sample of the downstream or forward RF signal206and determines the appropriate gain and/or tilt compensation. The AGC system250may take the sample of the downstream RF signal206at the input, intermediate stage, output or any other location within the controlled RF amplifier200. Referring toFIG.3, an embodiment of an amplifier circuit300and an AGC system350is shown in greater detail. The amplifier circuit300is configured to receive an input downstream RF signal306aand to amplify the input downstream RF signal306ato provide an amplified output downstream RF signal306b. In one example, the operational gain at 1794 MHz may be in the 46 to 50 dB range. In this embodiment, the amplifier circuit300includes a gain stage340to provide amplification across the wideband RF spectrum and a variable tilt compensation network342to provide tilt compensation across the wideband RF spectrum. The variable tilt compensation network342may include attenuators and/or equalizers as known to those of ordinary skill in the art for use in AGC and/or ALSC, for example, to correct for frequency response movement over temperature. The attenuators may include, for example, a variable attenuator, such as an adjustable pad that has a flat response. The equalizers may include a variable equalizer that has a tilted response, such as a flat or linear tilted response or a bowed or cable tilted response. The AGC system350includes a wideband RF tuner352, a power detector circuit354, and a controller356such as a microcontroller. The wideband RF tuner352may include a commercially available terrestrial TV tuner, such as the Si2141 tuner available from Skyworks Solutions, Inc. or the MXL608 tuner available from MaxLinear, Inc. The wideband RF tuner352receives a portion or sample of the downstream RF signal306a,306band selects the pilot channels for use in AGC. Although the illustrated embodiment shows the sample taken at the output, the sample of the downstream RF signal306a,306bmay be taken at the input, intermediate stage, output or any other location within the controlled RF amplifier. In one example, the wideband RF tuner circuit352selects a first pilot channel in a lower portion of the frequency range of the RF spectrum (i.e., lower pilot channel) and a second pilot channel in an upper portion of the frequency range of the RF spectrum (i.e., upper pilot channel). 
In an embodiment of a 1.8 GHz RF amplifier where the forward band or RF spectrum frequency range is 54 MHz to 1.8 GHz, for example, the lower pilot channel may be below 750 MHz and the upper pilot channel may be above 1.2 GHz. In an embodiment of a 1.2 GHz RF amplifier where the forward band is 54 MHz to 1.2 GHz, the lower pilot channel may be below 500 MHz and the upper pilot channel may be above 700 MHz. The pilot channels of the wideband RF tuner352may be selected based on amplifier configuration or user input. For example, the number and/or location of the pilot channels can be set depending on the operating frequency range of the amplifier or set to particular customer defined locations. This flexibility in the location of the wideband tuner pilot channels allows the AGC operation to be adjusted as the forward bandwidth changes. In some embodiments, a lower pilot channel or frequency may be a legacy pilot channel and an upper pilot channel or frequency may be the highest channel utilized. Selecting more than two pilot channels and/or selecting pilot channels across a wider range of frequencies may allow more accurate tilt compensation, for example, to compensate for the non-linear tilt in a coaxial cable. The power detector circuit354provides a detector voltage derived from the signal level of the pilot channels selected by the wideband tuner and passed to it. Where multiple pilot channels are used, the power detector circuit354measures the power of each channel separately and generates the detector voltage for each. The power detector circuit354may include known power detector circuits such as the LMH2110 available from TI or the LT5537 available from Linear Technologies. The controller356receives the individual pilot channel power measurements made by the power detector circuit354and determines the required amplifier correction based on those measurements and/or user input. The controller356then generates the required control voltage or voltages to be applied to the amplifier circuit300for AGC purposes. The amplifier circuit300gain and/or tilt is adjusted in response to the control voltage or voltages to achieve the amplifier correction. The controller356may include a microcontroller, such as the STM32G0B1RCT6 available from ST Microelectronics. The controller356may be capable of receiving user input including AGC parameters such as target gain and target tilt. To adjust tilt, the controller356may sense the response level change at various frequencies (e.g., multiple pilot channels) and adjust the variable attenuators and/or equalizers that compensate for the tilt (e.g., to achieve the target tilt set by the user). The user may access the controller356(e.g., through application software) and instruct the controller356to change the variable attenuators and/or equalizers as needed. If a user looks at an input test point in the RF amplifier, for example, and determines that an attenuator and/or equalizer needs to be adjusted, the user may access the controller356to make that adjustment. The user may also instruct the controller356to set the amplifier to a predetermined configuration and the controller356will follow an algorithm to accomplish the user configuration. Referring toFIG.4, another embodiment of an amplifier circuit400and AGC system450is shown and described in greater detail. 
Similar to the embodiment described above, the amplifier circuit400includes a gain stage440to provide gain and a variable tilt compensation network442to provide tilt compensation across the wideband RF spectrum. In this embodiment, the AGC system450also includes a wideband RF tuner452to select the pilot channels, but an RSSI (received signal strength indication) signal454from the wideband RF tuner452is used, instead of the power detector circuit, to provide an indication of the signal level of the selected pilot channels. A controller456, such as a microcontroller, may then generate an AGC control voltage or voltages derived from the RSSI signals454of each tuned pilot channel for adjusting the gain and/or tilt in the amplifier circuit400. In this embodiment, the controller456or a separate RSSI monitoring circuit may be configured to receive the RSSI signal454. In both embodiments, the AGC system350,450may also be configured to prevent a loss of RF signal at a selected pilot channel from causing the AGC to rail the amplifier to a full gain condition. For example, the wideband RF tuner352,452may be configured to select a different pilot channel if a loss of signal is detected at the previously selected pilot channel. If first and second pilot channels in lower and upper frequency ranges are used, the different pilot channel may be selected in the respective lower and upper frequency ranges. In other embodiments, the AGC system350,450may be configured to revert to a thermistor-based gain control where the gain is adjusted in response to a thermal measurement. In another embodiment, the AGC system350,450may be configured to lock the gain settings to the level or state prior to the loss of the RF signal at the selected pilot channel. In some embodiments, the wideband RF tuner (e.g., a terrestrial TV tuner) may not be capable of selecting pilot channels at the higher forward band frequencies supported by the RF amplifier. Some TV tuners, for example, may only work up to 1.0 GHz, whereas newer RF amplifiers operate with a forward band up to 1.8 GHz. In these embodiments, the AGC system may also include a block conversion circuit to shift the higher frequencies down to lower frequencies that are in the range of the wideband tuner, as will be described in greater detail below. FIGS.5-7illustrate different embodiments of a block conversion circuit560,560′,560″ that may be used in an AGC system, as described above. In the illustrated embodiments, similar toFIG.3discussed above, the AGC system includes a wideband RF tuner552to select the pilot channels, a power detector554to measure the power of the selected pilot channels, and a controller556(e.g., microprocessor) to provide the AGC control signal to the amplifier circuitry (e.g., to the variable attenuators and equalizers). As shown inFIG.5, an AGC coupler508may be used to direct a sample of the RF signal506into the AGC circuit. An attenuator561may be used to attenuate the sampled RF signal, and a splitter562divides the sampled RF signal and routes the sampled RF signal to low and high frequency paths. The low frequency path includes a low pass filter567and passes RF signals within the range of the wideband RF tuner552. The high frequency path includes a high pass filter to pass the RF signals at the high frequencies outside the range of the RF tuner552. 
The high frequency path block converts those RF signals above the tuning range of the RF tuner552to lower frequencies within the range of the wideband RF tuner552using a mixer564coupled to a local oscillator565followed by a low pass filter566. A switch568, such as a single pole double throw (SPDT) switch, selects which of the two paths is routed to the wideband RF tuner552at any one time. The wideband RF tuner552selects a particular pilot channel and converts the pilot channel to an intermediate frequency (IF) channel, which is filtered by a band pass filter569and sent to the power detector554. The power detector554measures the level of the IF channel and outputs a voltage representing that level. This process may be repeated at various points throughout the RF signal band to select additional pilot channels. The controller556then uses the power level information from the power detector554and other information programmed into the controller556to determine the correction needed in the amplifier circuitry, for example to determine which variable attenuators and/or equalizers to adjust and by how much. The embodiment of the block conversion circuit560′ shown inFIG.6is similar toFIG.5, but the locations of the splitter562and switch568have been reversed to improve path isolation and provide better terminations within the circuit. The low pass filter566is also moved to the common signal path following the splitter562to eliminate the need for multiple low pass filters. The embodiment of the block conversion circuit560″ shown inFIG.7is similar toFIG.6but uses a second switch568b(e.g., SPDT switch) instead of the splitter, which provides even better path isolation. Other variations of the block conversion circuit may also be used, if necessary, to shift the higher frequencies to the range of the wideband RF tuner. In other embodiments, the wideband RF tuner may be capable of selecting channels in the higher frequencies of the RF amplifier and the block conversion circuit may not be necessary. Referring toFIG.8, a method800for providing automatic gain control of an RF amplifier in an HFC network, consistent with the present disclosure, is shown and described. This method may be performed using any embodiment of the AGC system described above. According to the method800, an input downstream RF signal having a wideband RF spectrum (e.g., 1.8 GHz) is received810in an RF amplifier, such as a line extender RF amplifier in the HFC network. The input downstream RF signal is amplified812in the amplifier to produce an output downstream RF signal having the wideband RF spectrum. Amplifying the RF signal may include increasing the operational gain at 1794 MHz, for example, in a range of 46 to 50 dB. At least first and second pilot channels in the downstream RF signal (e.g., lower and upper pilot channels) are selected814using a wideband RF tuner, as described above. Power of the at least first and second pilot channels is measured816, for example, using a power detector circuit or an RSSI circuit in the wideband RF tuner. A correction is then determined818based, at least in part, on the measured power of the at least first and second pilot channels, and a control voltage (or voltages) is (are) sent820to the amplifier circuitry based on the correction (e.g., to the variable tilt compensation network and/or gain stage). The RF amplifier gain and/or tilt is adjusted in response to the control voltage or voltages determined from the measured power of the at least first and second selected pilot channels. 
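As a rough illustration of the method800, one correction cycle may be sketched as follows. The pilot frequencies, target levels, loop gain, block-conversion local oscillator frequency, and interface names in this sketch are assumptions made for illustration and are not taken from the disclosure; an actual controller (e.g., the controller356or456) may use a different correction algorithm.

```python
# Minimal sketch of one AGC correction cycle (steps 814-820), assuming two pilot
# channels and a simple linear tilt model; all numeric values are examples only.

def down_convert_if_needed(pilot_hz, tuner_max_hz=1.0e9, lo_hz=1.0e9):
    """Block-convert a pilot above the assumed tuner range down by the LO frequency."""
    return pilot_hz if pilot_hz <= tuner_max_hz else pilot_hz - lo_hz

def agc_cycle(measure_dbmv, set_gain_db, set_tilt_db,
              low_pilot_hz=108e6, high_pilot_hz=1.5e9,
              target_low_dbmv=46.0, target_high_dbmv=50.0, loop_gain=0.5):
    # Steps 814-816: tune each pilot (down-converting if necessary) and measure power.
    p_low = measure_dbmv(down_convert_if_needed(low_pilot_hz))
    p_high = measure_dbmv(down_convert_if_needed(high_pilot_hz))

    # Step 818: band-edge errors give a flat-gain term and a tilt term.
    err_low = target_low_dbmv - p_low
    err_high = target_high_dbmv - p_high
    gain_correction_db = loop_gain * (err_low + err_high) / 2.0
    tilt_correction_db = loop_gain * (err_high - err_low)

    # Step 820: apply the corrections (e.g., variable attenuator and equalizer).
    set_gain_db(gain_correction_db)
    set_tilt_db(tilt_correction_db)
    return gain_correction_db, tilt_correction_db
```

In this sketch the average of the two band-edge errors drives the flat gain adjustment and their difference drives the tilt adjustment, which is one simple way to map two pilot measurements onto a variable attenuator and equalizer.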
Accordingly, using a wideband RF tuner to select pilot channels for automatically controlling gain across a wideband RF spectrum (e.g., 1.8 GHz) avoids the need to re-space RF amplifiers in an HFC network and allows RF amplifiers in the HFC network, such as a CATV network, to maintain stable RF output levels as the RF amplifier performance and/or input levels vary, for example, due to temperature induced changes. The wideband RF tuner also enables an adjustable AGC system that facilitates user configuration and adjustment of the desired gain and/or tilt, for example, to account for changes in the forward frequency band of the RF signals and the non-linear frequency response of coaxial cables. While the principles of the invention have been described herein, it is to be understood by those skilled in the art that this description is made only by way of example and not as a limitation as to the scope of the invention. Other embodiments are contemplated within the scope of the present invention in addition to the exemplary embodiments shown and described herein. Modifications and substitutions by one of ordinary skill in the art are considered to be within the scope of the present invention, which is not to be limited except by the following claims. | 25,391 |
11863146 | DETAILED DESCRIPTION The present disclosure may be understood more readily by reference to the following detailed description of the disclosure taken in connection with the accompanying drawing figures, which form a part of this disclosure. It is to be understood that this disclosure is not limited to the specific devices, methods, conditions or parameters described and/or shown herein, and that the terminology used herein is for the purpose of describing particular embodiments by way of example only and is not intended to be limiting of the claimed disclosure. Also, as used in the specification and including the appended claims, the singular forms “a,” “an,” and “the” include the plural, and reference to a particular numerical value includes at least that particular value, unless the context clearly dictates otherwise. Ranges may be expressed herein as from “about” or “approximately” one particular value and/or to “about” or “approximately” another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. The phrases “at least one”, “one or more”, and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together. FIG.1is a view illustrating a synchronization among nodes10and20associated with sirens and a main node15according to an embodiment of the present disclosure. Referring toFIG.1, the main node15may include an internal timer (not shown) used for synchronizing the nodes globally. The main node15may be a main box in a vehicle in communication with each node10or20over e.g., a controller area network (CAN) bus network13. It is noted that in the present disclosure, a node is associated with a siren. For example, a siren may be located in a node, or in some aspects, the siren is not located in the node, but is controlled by a processing unit (e.g., processor110ofFIG.2B) belonging to the node. Still, for example, the siren can be the node itself. For the sake of description, only cases where the siren is located in each node or is the node itself are assumed. However, embodiments of the present disclosure are not limited thereto. In addition, the nodes (e.g.,10and20) may be located in a vehicle or in some aspects, may be located over two or more vehicles. For the sake of description, only a case where the nodes are located in a vehicle is assumed. However, embodiments of the present disclosure are not limited thereto. Each node10and20is in communication with the main node15over e.g., the CAN bus network13which allows the main node15and the nodes10and20to communicate with each other in applications without a host computer. The present disclosure will be described to address three technical issues as follows:(a) Controlling of a tone and a volume level to a single siren;(b) Synchronization Between More Than One Sirens—Tone Synchronization; and(c) Asynchronous Mode Tone Synchronization. Controlling of a Tone and a Volume Level to a Single Siren In this section, a method or system for controlling a tone and a volume level to a single siren is described.
FIG.2Ais a view illustrating a method for an emergency sound output from a node.FIG.2Bis a block diagram illustrating an example system10aassociated with the method ofFIG.2A. Referring toFIGS.2A and2B, the system10aincludes a processor110, a memory150, a digital-to-analog converter (DAC)120, an amplifier130, and a speaker140. The memory150includes tone information152and volume level information154. In step S210, the processor110reads the tone information152and the volume level information154, which a node of the system10auses as a basis to generate a sound141. A tone is defined using one or more frequency lists arranged in a certain time period. Thus, the tone information152is associated with frequency variation over a predetermined period (e.g., 1 second). In addition, the volume level information154is associated with volume level variation over the same predetermined period. These predetermined periods can be independent of the volume and frequency definitions and can be associated with any tone either defined using a frequency list, or as pulse-code modulation (PCM)/MP3 data which can be made locally on the device at a later time or downloaded to the device at a later time. This allows a definition of how a tone will ramp up and down in volume without knowing what will play or its period; this association can be determined at runtime. Referring toFIG.3, illustrated is a mapping table300among nodes (or sirens), tones, frequency variation, and volume level variation. For example, for a certain node SR1, a tone1can be preset, and the frequency variation of the tone1can be defined as [f1s, f1e], where f1sand f1erepresent a start frequency and an end frequency over the predetermined period. Referring further toFIG.3illustrating example frequency variation of the tone1and volume level variation both associated with the node SR1, the predetermined period starts at a t1and ends at a t1+Tp, where Tp is a duration of the predetermined period. As shown inFIG.4A, when the tone1is read in regard to the node SR1, the frequency may ramp linearly from f1sto f1e. By way of example only, if the frequency variation is defined as [1 kHz, 2 kHz], the frequency of a sound (or a tone signal) output from the node SR1linearly increases from 1 kHz to 2 kHz. While one frequency segment is illustrated and described in the present disclosure for the sake of description, embodiments of the present disclosure are not limited thereto. For example, the frequency table may consist of a single frequency segment, or three or more frequency segments, each of which is linearly ramped between its start and end frequencies, not allowing for jumps in frequency, while the volume table can have such jumps due to volumes being defined in pairs consisting of start and end volumes. For example, the end frequency in the previous frequency segment may act as the start frequency of the next frequency segment. The predetermined period Tp can be divided into a plurality of time points at each of which a corresponding frequency and volume level is read by the processor110. In addition, referring still toFIGS.2B,3and4B, the volume level variation can be defined as [v11s, v11e] [v12s, v12e] where each bracket represents a volume segment. While two volume segments are illustrated and described in the present disclosure for the sake of description, embodiments of the present disclosure are not limited thereto. For example, the volume level variation may consist of a single volume segment, or three or more volume segments.
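One way to picture the mapping table300is as a small data structure keyed by node. The field names and numeric values below are illustrative assumptions; only the general shape (a tone per node, a [start, end] frequency pair over the period Tp, and a list of [start, end] volume segment pairs) follows the description above.

```python
# Sketch of the mapping table 300: each node (siren) is associated with a
# tone, a [start, end] frequency pair over the period Tp, and a list of
# [start, end] volume segments over the same period. Names and numbers
# are illustrative assumptions, not values from the disclosure.

MAPPING_TABLE = {
    "SR1": {
        "tone": "tone1",
        "period_s": 1.0,                     # predetermined period Tp
        "freq_hz": (1000.0, 2000.0),         # [f1s, f1e]: linear ramp
        "volume": [(0.2, 0.8), (0.8, 0.3)],  # [v11s, v11e][v12s, v12e]
    },
    "SR2": {
        "tone": "tone2",
        "period_s": 1.0,
        "freq_hz": (500.0, 1500.0),
        "volume": [(0.5, 0.5)],              # a single volume segment
    },
}
```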
Referring further toFIGS.4A and4Billustrating an example frequency variation of the tone1and volume level variation both associated with the node SR1, the predetermined period starts at a ta and ends at a ta+Tp. For the example volume level variation [v11s, v11e] [v12s, v12e], the volume level will ramp from v11sto v11ein the first half period of Tp and then ramp from v12sto v12ein the second half period. A similar description can be applied to other nodes including a node SR2. Thus, a duplicate description thereof will be omitted for the sake of simplicity. As shown inFIG.3, the volume level variation data can be associated with (or mapped to) any tone whether it is a predefined tone or a yet to be recorded on the fly tone, and the predetermined period Tp over which the volume level variation data are defined will be updated accordingly. In addition, since segments of the volume level variation data over the predetermined period Tp are defined in pairs, the volume level of a tone signal can vary in a discrete manner without smooth ramping requiring a number of steps to reach a target level, as shown by way of example inFIG.4B. However, the frequency lists of a tone signal might not be defined in more than one segment and in pairs. For example, if the frequency of a tone signal is defined in a single segment [0, 20 Hz] over the predetermined period Tp, the frequency may linearly ramp from 0 Hz to 20 Hz. This might not allow for instantaneous jumps between frequencies. Thus, to address this issue, multiple tones can be defined to make the instantaneous jumps between frequencies possible. For example, one tone can be defined as [0, 0] over a first predetermined period and another tone can be defined as [10 Hz, 10 Hz] over a second predetermined period following the first predetermined period, so that the frequencies can be placed in sequence with one another to instantly jump from 0 to 10 Hz. Referring toFIG.2A, in step S220, the processor110generates a tone signal111based on the tone information152and the volume level information154. The tone signal111has a corresponding frequency and a volume level for each of the plurality of time points of the predetermined period Tp, so that the tone signal111will have the frequency variation and the volume level variation as defined in the mapping table300ofFIG.3. In addition, the tone signal111has a pattern repeated for every predetermined period Tp. In step S230, the processor110generates a sound (wave) based on the tone signal111. In one embodiment, the tone signal111may be a digital signal. The system10amay further include a DAC120which converts the digital tone signal111to an analog signal121. The converted analog signal121can be provided to an amplifier130and a speaker140for generating the emergency sound141.
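A minimal sketch of steps S210-S230, assuming the hypothetical table layout shown earlier, is given below: the instantaneous frequency is ramped linearly across the period Tp and the volume is looked up from the paired segments. The sample rate and the equal division of the period among volume segments are assumptions made only for the example.

```python
import math

def volume_at(volume_segments, t_frac):
    """Look up the volume at a fractional position (0..1) within Tp.
    Each segment covers an equal share of the period and ramps linearly
    from its start value to its end value, so level jumps can occur at
    segment boundaries (as in FIG. 4B)."""
    n = len(volume_segments)
    idx = min(int(t_frac * n), n - 1)
    seg_frac = t_frac * n - idx
    v_start, v_end = volume_segments[idx]
    return v_start + (v_end - v_start) * seg_frac

def synthesize_tone(entry, sample_rate=8000):
    """Generate one period Tp of the tone signal for a mapping-table entry."""
    period = entry["period_s"]
    f_start, f_end = entry["freq_hz"]
    num_samples = int(period * sample_rate)
    samples, phase = [], 0.0
    for i in range(num_samples):
        t_frac = i / num_samples
        freq = f_start + (f_end - f_start) * t_frac   # linear frequency ramp
        phase += 2.0 * math.pi * freq / sample_rate   # accumulate phase
        samples.append(volume_at(entry["volume"], t_frac) * math.sin(phase))
    return samples
```

For example, synthesize_tone({"period_s": 1.0, "freq_hz": (1000.0, 2000.0), "volume": [(0.2, 0.8), (0.8, 0.3)]}) returns one period of samples; repeating that buffer gives the pattern that recurs every Tp, and handing it to a DAC stage corresponds to step S230.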
Referring now toFIG.6B, the first node6100includes a processor6110, a memory6140coupled to the processor6110, a communication device6130, and a timer6120. Further, the second node6200includes a processor6210, a memory6240coupled to the processor6210, a communication device6230, and a timer6220. While it is illustrated inFIG.6Bthat each node has its own processor and memory for the sake of description, embodiments of the present disclosure are not limited thereto. For example, there may be a processor (not shown) and a memory (not shown), each of which is shared to control or perform processing jobs associated with each of the nodes6100and6200. For example, operations of each node ofFIG.6Bare approximately the same as the node10aofFIG.2Bexcept that they are required to perform a time synchronization therebetween to make a final output sound synchronized in tone. Thus, duplicate description thereof will be omitted for the sake of simplicity. Referring now toFIG.6A, the processor6110of the first node6100reads tone information6142and the volume level information6144from the memory6140(S610). The tone information6142and the volume level information6144both correspond to the first node6100(e.g., particularly a siren associated with the first node6100). Further, the processor6110generates a first tone signal6111based on the tone information6142and the volume level information6144(S620). Referring still toFIG.6A, the processor6210of the second node6200reads tone information6242and volume level information6244from the memory6240(S630). The tone information6242and the volume level information6244both correspond to the second node6200(e.g., particularly a siren associated with the second node6200). Further, the processor6210generates a second tone signal6211based on the tone information6242and the volume level information6244(S640). Referring still toFIG.6A, in step S650, the two tone signals6111and6211are mixed to generate a mixed tone signal6301using, but not limited to, a firmware implementation of a signal mixer6300. The mixed tone signal6301is a basis for generating an emergency sound6601which is compliant to preset requirements and noticeable to other drivers. When the tone signals6111and6211are mixed, a phase offset φ therebetween is an important factor in order to control aspects of a final output sound being generated from each node based on, e.g., whether they are in phase (e.g., φ=0) or out of phase (e.g., φ=π). In the present disclosure, “mixing tone signals” may be understood as “adding, superimposing, or multiplying the tone signals”. Thus, for the precise control of the phase offset φ, each of the nodes6100and6200is required to be timely synchronized to a global time of a reference node (e.g., main node600) which will be obtained by a method explained below. In addition, in step S660, the mixed tone signal6301is provided to a DAC6400, an amplifier6500and a speaker6600for generating the emergency sound6601. The global time being acquired can be used to synchronize tones as well as their corresponding variable volume levels defined in the mapping table300ofFIG.3. By way of example only, using the modulo of a tone time (e.g., predetermined period Tp) with the global time stamp, a corresponding frequency and an associated volume level that should be played through a speaker (e.g.,6600ofFIG.6B) can be found.
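The mixing of step S650can be sketched as sample-wise addition with a phase offset, which is one of the interpretations the passage allows (adding, superimposing, or multiplying the tone signals). The sample rotation used to realize φ and the equal 1/N scaling of the two tones are assumptions made for the example.

```python
import math

def mix_tones(signal_a, signal_b, phase_offset_rad=0.0):
    """Mix two equal-length, periodic tone signals with a phase offset φ.

    The offset is applied by rotating signal_b by the equivalent number of
    samples within its period; each tone is scaled by 1/NumberOfTones (here
    2) so the mixed signal stays within the original amplitude range.
    """
    n = len(signal_a)
    shift = int((phase_offset_rad / (2.0 * math.pi)) * n) % n
    rotated_b = signal_b[shift:] + signal_b[:shift]
    return [(a + b) / 2.0 for a, b in zip(signal_a, rotated_b)]
```

With phase_offset_rad=0 the tones reinforce each other, while math.pi places them out of phase, which is why the nodes must first agree on the global time tg before mixing.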
In addition, the output volume level of each node (or siren) can be scaled based on the volume level determined based on the mapping table300ofFIG.3, so that a dynamic volume level scaling over the course of a tone time (e.g., the scaling can be made for a predetermined time tone) can be achieved when tone signals are mixed (or multiplied). For example, dynamically scaled multiple tones are mixed together in phase by having the voltage at each time multiplied by 1/NumberOfTones. This allows for sounding to other drivers as if there are multiple vehicles approaching from distances away. Further, for example, the mixed tone signal (e.g.,6301ofFIG.6B) can be scaled globally by taking the resulting voltage level found after all the mixing and dynamic scaling of the respective tone signals (e.g.,6111and6211ofFIG.6B) and by mapping the resulting voltage level to a digital signal (e.g., defined by a byte from 0 to 255) that corresponds to a percentage of the peak volume level of the unscaled mixed tone signal. The scaled mixed tone signal can be output to the DAC6400, the amplifier6500, and the speaker6600which are sequentially arranged. This allows the system to set the output volume level of an emergency sound to be compliant to preset volume level requirements defined in associated regulation specifications and to change the volume level between the specifications. Further, by way of example only, the resulting volume level can be provided over the CAN bus and can be a value upon which the resultant mixed tone signal is based. If a tone is created on the fly, a timing associated with the created tone cannot be known until it has been created. Therefore, after it is created, the time of the on the fly tone may be substituted and scaled using the volume intensity list. Also, the entirety of the resultant tone signal may be scaled after it has been mixed and/or dynamically scaled over the period of time without using a volume intensity list. Referring still toFIG.6A, prior to the steps S610to S660, the nodes6100and6200each perform a time synch to a global time tg based on a couple of synch messages transmitted from a main node600. The main node600is in communication with each of the nodes via a CAN bus network. FIG.5is a view illustrating an example algorithm to be performed by each node to synchronize its local time to a global time tg of a main node15aaccording to an embodiment of the present disclosure. Referring toFIG.5, the main node15atransmits a first synch message201to a certain node10b(e.g., node6100or6200) for the time synch between the nodes. The main node15atransmits the first synch message201at t1. Then, the node10breceives the first synch message201using a communication device (e.g.,6130) and acquires a first receive time tr1upon receipt of the first synch message201using a timer (e.g.,6120). Next, a second synch message202is sent, which is received at a second receive time tr2and contains the transmit complete time t1based on the master's internal clock, and a processor determines a time difference Δt11between this transmit complete time and the first receive time tr1of the slave. The time difference Δt11may be understood as a propagation time over a channel between the node10band the main node15a. The node10bthen determines a global time tg by adding the time difference Δt11to its current internal time. For instance, the node10bmay replace the second receive time tr2by t_local+Δt11(e.g., tg=t_local+Δt11).
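The two-message exchange ofFIG.5can be summarized in a short sketch. It follows the description above literally (Δt11 is the difference between the master's transmit complete time t1 and the slave's receive time tr1, and tg is the local time plus Δt11); the timestamps and message handling are illustrative assumptions, not the actual firmware.

```python
# Sketch of the two-message synchronization described above. The main node
# sends a first sync message, then a second message carrying the transmit-
# complete time t1 of the first message (in the main node's clock). The
# slave records its own receive time tr1 and uses the difference to map its
# local clock onto the global time tg. Timestamps are illustrative floats
# in seconds; a real node would read a hardware timer.

class SlaveNode:
    def __init__(self):
        self.delta = 0.0      # offset Δt11 between master clock and local clock
        self.tr1 = 0.0        # local receive time of the first sync message

    def on_first_sync(self, local_receive_time):
        self.tr1 = local_receive_time

    def on_second_sync(self, master_transmit_complete_t1):
        # Δt11: how far the master's clock is ahead of (or behind) ours,
        # as seen through the first message.
        self.delta = master_transmit_complete_t1 - self.tr1

    def global_time(self, local_now):
        # tg = t_local + Δt11
        return local_now + self.delta

# Example exchange with assumed timestamps:
node = SlaveNode()
node.on_first_sync(local_receive_time=12.000)             # tr1 on the slave clock
node.on_second_sync(master_transmit_complete_t1=50.000)   # t1 on the master clock
print(node.global_time(local_now=12.010))                 # -> 50.010
```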
The aforementioned time synchronization method can be applied to each of the nodes6100and6200, so that the local time of each node will be synchronized to the global time tg of the main node. Once each node is set to be synchronized to the global time tg, the global time may not be modified by other emergency signaling units such as light bars using the aforementioned method. For example, the frequency or intensity variations of light bars may be based on the synchronized global time. The time synchronization using the global time allows the system to instantaneously jump in frequency and volume level of a mixed tone (analog) signal or square wave6301being played through the amplifier (e.g.,6500) to resync the tone globally to other sirens playing that same pattern in the same phase; this can allow light patterns of the same length to sync to flash patterns, making them more noticeable to the human eye. Alternatively, a reload time of the internal timer (e.g., 80-microsecond timer) of the siren node (e.g.,600) can be reduced to allow that siren to catch up to and track the global time of the main node. For example, if each node is behind the global time of the main node by a predetermined time (e.g., a few milliseconds), the reload time of the timer of that node can be reduced for the node to catch itself up smoothly by running the threads in the real-time operating system (RTOS) more often. This can further be enhanced using a proportional-integral-derivative (PID) controller to catch up more quickly or slowly based on whether a local time of the node is far from or close to the global time accordingly. Referring back toFIGS.5and6B, in one embodiment, the transmit times t1and t2included in the respective synch messages201and202can be obtained using the internal timer of the main node (e.g.,600) based on the length of time that has elapsed since the system has powered up. In another embodiment, the transmit times t1and t2can be obtained from a node containing a global positioning system (GPS) module which tracks and relays a GPS Time which began counting on midnight between Jan. 5, 1980 and Jan. 6, 1980, which is the same on any GPS module allowing nodes (or sirens) between any of two or more systems (e.g., Whelen products) to be synchronized globally. In one embodiment, once a global time is synchronized and tracked at all the nodes, a time into the tone tintcan be determined by the following equation (1): tint=(degreeOfPhase/360×timeOfTone)+(currentGlobalAbsoluteTime Mod timeOfTone) Equation (1), where the time into the tone tintis where a tone should be currently positioned in the tone cycle that simulates its repetitive pattern since the main box booted up. The degreeOfPhase represents a phase in degrees where the playback of the tone would have been at a global time zero. The currentGlobalAbsoluteTime represents a current global time. The “Mod” represents the modulo operator. The timeOfTone represents the duration of one tone period. For example, if the tone is being played at a degreeOfPhase of 0, at the global time zero, the tone will start at the beginning at this point in time. Further, if the tone is being played at a degreeOfPhase of 180, the tone would start halfway through its period at global time zero.
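Equation (1) can be evaluated directly, as sketched below with all times in milliseconds. The outer wrap-around modulo on the result is an assumption added so the phase term cannot push tint past one tone period; Equation (1) itself does not state it explicitly.

```python
def time_into_tone(degree_of_phase, time_of_tone_ms, current_global_time_ms):
    """Equation (1): where playback should currently sit within the tone cycle.

    tint = (degreeOfPhase / 360 * timeOfTone)
           + (currentGlobalAbsoluteTime mod timeOfTone)
    """
    phase_part = degree_of_phase / 360.0 * time_of_tone_ms
    modulo_part = current_global_time_ms % time_of_tone_ms
    return (phase_part + modulo_part) % time_of_tone_ms

# Example values matching the timing diagram of FIG. 4C discussed below:
# a 1000 ms tone, played at 0 degrees of phase, with a global time of 2600 ms.
print(time_into_tone(0, 1000, 2600))    # -> 600.0 ms into the current cycle
print(time_into_tone(180, 1000, 2600))  # -> 100.0 ms (tone starts halfway through)
```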
Therefore, any tones or patterns with this as their period (e.g., timeOfTone) will be synchronized with one another because they are based on the same global time (e.g., zero) and the same repeating periodic time base to figure out where they are into the pattern. FIG.4Cis a view illustrating an example timing diagram for determining a time into tone tintaccording to an embodiment of the present disclosure. For example, referring toFIG.4C, the tone period (e.g., timeOfTone) repeats for every 1000 ms, the tone is played at a degreeOfPhase of 0, at the global time zero, and the global time is 2600 ms, so the time into the tone tintis (0/360×1000 ms)+(2600 ms Mod 1000 ms)=600 ms. Asynchronous Mode Tone Synchronization In this section, an example method or system for synchronizing multiple sirens in an asynchronous mode is described with reference toFIGS.7A and7B. For example, in case nodes7100and7200ofFIG.7Acannot be synchronized using the method based on the global time stamping, as described with reference toFIGS.5,6A and6Bof the previous section, a tone signal7111output from one node7100may be output to an aux input of another node7200, which allows the nodes7100and7200to quickly change between tones being generated locally on the respective nodes. In addition, the node7200receiving the tone signal7111through an aux input from the node7100may be configured to selectively output the tone signal to the amplifier7220and the speaker7230sequentially arranged. Referring toFIGS.7A and7B, the synchronization system7000can be described in association with a processor7310, a memory7320coupled to the processor7310, a DAC7330, a first node7100including a first mux7110, an amplifier7120and a speaker7130, and a second node7200including a second mux7210, an amplifier7220, and a speaker7230. In step S710, the first node7100generates a tone signal7111having a single tone or mixed multiple tones using the processor7310. Similar to what is described with reference toFIG.2B, the processor7310reads tone information and volume level information both corresponding to the first node7100and provides a digital tone signal to the DAC7330which converts the digital tone signal to an analog tone signal7331. The output analog tone signal7331of the DAC7330is provided to the first mux7110. For example, the analog tone signal7331provided by the DAC7330can be a sine wave or a square wave depending on, e.g., whether the analog tone signal consists of a single tone or multiple tones. In one embodiment, the first mux7110may be an electric switch and may be configured to selectively output one of the sine wave or the square wave to an aux output thereof. For the sake of simplicity, the sine wave and the square wave are collectively referred to as an (analog) tone signal (e.g.,7331). In addition, the first mux7110also provides the tone signal7111to the amplifier7120and the speaker7130sequentially arranged through a regular port7112and to the second mux7210through an aux port7113(S720). Further, in step S740, the second mux7210receives the tone signal7111from the first mux7110and provides the received tone signal7111to the amplifier7220and the speaker7230sequentially arranged through a regular port and to an input port of another node (not shown) which will be synchronized to the nodes7100and7200. In the first node7100, the tone signal7111outputted through the regular port7112can be a basis for generating an emergency sound7131using the amplifier7120and the speaker7130(S730).
Similarly, in the second node7200, the tone signal7111outputted through the regular port can be a basis for generating an emergency sound7231using the amplifier7220and the speaker7230(S760). It is noted that the emergency sounds7131and7231are synchronized in time (or phase) and/or tone. It is illustrated and described with reference toFIG.7Athat the first node7100includes the first mux7110, the amplifier7120, and the speaker7130; the second node7200includes the second mux7210, the amplifier7220, and the speaker7230; and the processing part7300including the processor7310, the memory7320, and the DAC7330is separated from each of the nodes7100and7200. However, these configurations are only examples for the sake of description, and thus embodiments of the present disclosure are not limited thereto. For example, the processing part7300can be part of the first node7100. Still, for example, the amplifiers7120and7220(or the speakers7130and7230) may be implemented with a single amplifier (or a single speaker) which can be shared by the first and second nodes7100and7200. The asynchronous mode synchronization method described in this section may be particularly useful when tones (or tone signals) generated by respective nodes cannot be synchronized, at least because the tones do not have a constant or known length at an amplifier, or a tone plays a message at the beginning once and then repeats the second half of the tone exclusively. These tones can be synchronized between nodes by using an auxiliary output from one node (e.g.,7100ofFIG.7A) to an auxiliary input of a next node (e.g.,7200ofFIG.7A) and stringing them together. The auxiliary output can then be selected and played on each node individually, resulting in the same tone with no phase offset therebetween or only a slight phase shift due to capacitance in the hardware. Regarding the processors110ofFIG.2B,6110and6210ofFIG.6B, and7310ofFIG.7A, at least one of the processors can be implemented with an M4 microcontroller which allows for floating point calculations. Further, a real-time operating system (RTOS) can be used to allocate computing times to the proper process in a node in a timely manner. Regarding the amplifiers130ofFIG.2B,6500ofFIG.6B, and7120and7220of FIG.7A, at least one of the amplifiers can be implemented with a class-D amplifier to vary the volume level of a tone signal. This may allow an amplifier to be made that is much more power efficient, instead of dissipating much power in the form of heat over the output transistors. This would eliminate the need for a transformer as well, which would bring costs down significantly. This is because class-AB amplifiers may be inefficient, particularly when they work with an input analog tone signal; a substantial amount of output power from each class-AB amplifier may be dropped over the output transistors in the form of heat. In one embodiment, the aforementioned global synchronization can be used for noise cancellation when it is combined with the ability to shift the phase in a sine wave lookup table that is a basis for finding appropriate frequency and volume level values. For example, if an amplifier outputs a sine wave of a 1 kHz frequency having a phase of 0 degrees, a cabin speaker can output a sine wave of the same frequency having a phase of 180 degrees, so the sine wave output from the amplifier can be canceled out due to the sine wave output from the cabin speaker.
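The 180-degree cancellation idea at the end of the preceding paragraph can be shown with a small lookup-table sketch; the table size, sample rate, and 1 kHz test frequency are assumptions, and real in-cabin cancellation would only be partial.

```python
import math

# Sketch of anti-phase generation from a sine lookup table. The siren output
# uses phase 0; the cabin speaker reads the same table with a half-table
# (180-degree) offset so the two waves ideally sum to zero inside the cabin.
TABLE_SIZE = 256
SINE_TABLE = [math.sin(2.0 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def sample(step, index, phase_degrees=0.0):
    """Read the lookup table at a given sample index with a phase offset."""
    offset = int(phase_degrees / 360.0 * TABLE_SIZE)
    return SINE_TABLE[(index * step + offset) % TABLE_SIZE]

# 1 kHz tone at an 8 kHz sample rate -> the table index advances 32 steps/sample.
step = TABLE_SIZE * 1000 // 8000
residual = max(abs(sample(step, i) + sample(step, i, 180.0)) for i in range(64))
print(residual)   # -> ~0 (floating-point epsilon); physical cancellation is only partial
```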
Example embodiments regarding the noise cancellation in a vehicle are disclosed in Applicant's copending U.S. patent application Ser. No. 16/295,236, filed on Mar. 7, 2019, entitled “SYSTEM AND METHOD FOR NOISE CANCELLATION IN EMERGENCY RESPONSE VEHICLES”, the entire disclosure of which is incorporated by reference herein. FIG.8is a block diagram of a computing system4000according to an exemplary embodiment of the present disclosure. Referring toFIG.8, the computing system4000may be used as a platform for performing: the functions or operations described hereinabove with respect to at least one of the systems10aofFIG.2B,6000ofFIG.6B and7000ofFIG.7Aand/or the method described with reference toFIGS.2A,6A and7B. Referring toFIG.8, the computing system4000may include a processor4010, I/O devices4020, a memory system4030, a display device4040, and/or a network adaptor4050. The processor4010may drive the I/O devices4020, the memory system4030, the display device4040, and/or the network adaptor4050through a bus4060. The computing system4000may include a program module for performing: the functions or operations described hereinabove with respect to at least one of the systems10aofFIG.2B,6000ofFIG.6B and7000ofFIG.7Aand/or the method described with reference toFIGS.2A,6A and7B. For example, the program module may include routines, programs, objects, components, logic, data structures, or the like, for performing particular tasks or implement particular abstract data types. The processor (e.g.,4010) of the computing system4000may execute instructions written in the program module to perform: the functions or operations described hereinabove with respect to at least one of the systems10aofFIG.2B,6000ofFIG.6B and7000ofFIG.7Aand/or the method described with reference toFIGS.2A,6A and7B. The program module may be programmed into the integrated circuits of the processor (e.g.,4010). In an exemplary embodiment, the program module may be stored in the memory system (e.g.,4030) or in a remote computer system storage media. The computing system4000may include a variety of computing system readable media. Such media may be any available media that is accessible by the computer system (e.g.,4000), and it may include both volatile and non-volatile media, removable and non-removable media. The memory system (e.g.,4030) can include computer system readable media in the form of volatile memory, such as RAM and/or cache memory or others. The computer system (e.g.,4000) may further include other removable/non-removable, volatile/non-volatile computer system storage media. The computer system (e.g.,4000) may communicate with one or more devices using the network adapter (e.g.,4050). The network adapter may support wired communications based on Internet, local area network (LAN), wide area network (WAN), or the like, or wireless communications based on code division multiple access (CDMA), global system for mobile communication (GSM), wideband CDMA, CDMA-2000, time division multiple access (TDMA), long term evolution (LTE), wireless LAN, Bluetooth, Zig Bee, or the like. FIG.9is a view illustrating an example algorithm to be performed for synchronization according to another embodiment of the present disclosure. This embodiment is preferred for sirens to prevent audible jumps in tones, but is also applicable to other components requiring synchronization, e.g., lights. 
When syncing using a PID controller as opposed to just instantaneously jumping to a new timestamp using the method already specified above, three consecutive sync messages901-903are required as opposed to two (seeFIG.5). This is due to the need for the PID controller to be tuned before use, allowing for the same PID controller with different numbers to be used across varying temperature ranges and varying drifts between the main and external node. The first two synch messages and the processes performed are the same as those described inFIG.5. That is, the global time tg is calculated using two synch messages901and902, i.e., tg=t2+Δt11. Once this difference is added to the internal clock in node10c, the two devices (15cand10c) are in theory perfectly synced for that instant in time, but slow drift is still occurring relative to the main node15cbased on the difference in running rates between the oscillators on the two nodes (15cand10c). To prevent the need for further jumps, which are more perceptible in a siren, this variance in running rates must be eliminated, which is where the PID controller is utilized. One more message is therefore needed in order to see the difference of how far the slave node was from the main node in time initially as compared to how far it is from the main node's clock one sync message later. The main node15ctransmits a third synch message903to node10c(e.g., node6100or6200). The main node15ctransmits the third synch message903which includes the second transmit complete time tr2. The third synch message903is received at node10cand the receive time tr3is recorded. For example, if the time difference was initially 50 ms and it is now 55 ms one sync message later, this indicates that drift is occurring at a rate of 5 ms per sync interval. The PID controller is not necessarily meant to eliminate the 5 ms difference, but to adjust for the continuous 5 ms drift which is occurring after the jump. Thus, the P, I, and D values can be tuned based on a percentage of this drift to slow down or speed up the clock in node10cto bring the devices into synchronization without an abrupt jump. In addition, a maximum allowable drift can be preset, which, if exceeded, will trigger the need for a new jump, and thereafter a new tuning will be needed in order to allow the PID controller the ability to compensate for the new drift, which is often caused by temperature variations or some other external factor. The PID controller and the values used for tuning can then be used to alter the auto reload register time of that clock in node10cas to what constitutes 1 ms, for instance, and speed it up or slow it down so that its 1 ms now happens at a slightly different rate as measured by external devices; however, the internal tasks of node10cstill treat this time as if 1 ms has occurred, slowing down or speeding up their tasks in real time to compensate.
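The drift compensation described above, which measures how much the offset to the main node grew between sync messages and trims the timer's auto reload value instead of jumping, can be sketched as follows. The gains, the nominal 80-microsecond reload value, and the drift figures are illustrative assumptions.

```python
class TimerDriftPid:
    """Trim a node's timer reload value so its clock tracks the main node.

    The error fed in is the additional offset accumulated since the last sync
    message (e.g., 5 ms per interval in the example above). Gains and the
    nominal reload value are illustrative assumptions only.
    """
    def __init__(self, kp=0.5, ki=0.1, kd=0.05, nominal_reload_us=80.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.nominal = nominal_reload_us
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, drift_ms_per_interval):
        self.integral += drift_ms_per_interval
        derivative = drift_ms_per_interval - self.prev_error
        self.prev_error = drift_ms_per_interval
        correction = (self.kp * drift_ms_per_interval
                      + self.ki * self.integral
                      + self.kd * derivative)
        # A positive drift means the local clock is falling behind, so the
        # reload time is shortened to make local "1 ms" ticks come sooner.
        return self.nominal - correction

pid = TimerDriftPid()
for drift in (5.0, 4.0, 2.0, 0.5):   # drift shrinking as the clock catches up
    print(round(pid.update(drift), 2))
```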
A method for synchronizing a local node to a global time, comprising: receiving, using one or more processors, a first time at which a main node transmits a first synch message including the first time; determining, using the one or more processors, a second time at which the local node receives the first synch message; determining, using the one or more processors, a difference between the first time and the second time; receiving, using the one or more processors, a third time at which the main node transmits a second synch message including the third time; determining, using the one or more processors, a fourth time at which the local node receives the second synch message; and determining, using the one or more processors, the global time by adding the difference between the first time and the second time to the third time. The method may further comprise: receiving, using the one or more processors, a fifth time at which the main node transmits a third synch message; determining, using the one or more processors, a sixth time at which the local node receives the third synch message; determining, using the one or more processors, a difference between the fifth time and the sixth time; adjusting the values of a proportional-integral-derivative (PID) controller based on a percentage of the difference between the fifth time and the sixth time; and adjusting a clock of a controlled device based on the adjusted values of the PID controller. The method may further comprise: determining if the adjusting of the values exceeds a preset limit; and if the adjusting of the values exceeds the preset limit, resetting the global time. Exemplary embodiments of the present disclosure may include a system, a method, and/or a non-transitory computer readable storage medium. The non-transitory computer readable storage medium (e.g., the memory system4030) has computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, or the like, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to the computing system4000from the computer readable storage medium or to an external computer or external storage device via a network.
The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card (e.g.,4050) or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the computing system. Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the computing system (e.g.,4000) through any type of network, including a LAN or a WAN, or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In an exemplary embodiment, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, system (or device), and computer program products (or computer readable medium). It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. 
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The corresponding structures, materials, acts, and equivalents of all means or step plus function elements, if any, in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description but is not intended to be exhaustive or limited to the present disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present disclosure. The embodiment was chosen and described in order to best explain the principles of the present disclosure and the practical application, and to enable others of ordinary skill in the art to understand the present disclosure for various embodiments with various modifications as are suited to the particular use contemplated. While the present invention has been particularly shown and described with respect to preferred embodiments thereof, it will be understood by those skilled in the art that the foregoing and other changes in forms and details may be made without departing from the spirit and scope of the present invention. It is therefore intended that the present invention not be limited to the exact forms and details described and illustrated but fall within the scope of the appended claims. | 43,627 |
11863147 | DETAILED DESCRIPTION Various embodiments will be described in detail with reference to the drawings, wherein like reference numerals represent like parts and assemblies throughout the several views. People spend a significant amount of time travelling in vehicles. Many of them find that time to be more enjoyable when they are listening to music, watching videos, or otherwise consuming media content. Media content includes audio and video content. Examples of audio content include songs, albums, playlists, radio stations, podcasts, audiobooks, and other audible media content items. Examples of video content include movies, music videos, television programs, and other visible media content items. In many cases, video content also includes audio content. As used herein, the term “vehicle” can be any machine that is operable to transport people or cargo. Vehicles can be motorized or non-motorized. Vehicles can be for public or private transport. Examples of vehicles include motor vehicles (e.g., cars, trucks, buses, motorcycles), rail vehicles (e.g., trains, trams), tracked vehicles, watercraft (e.g., ships, boats), aircraft, human-powered vehicles (e.g., bicycles), wagons, and other transportation means. A user can drive a vehicle or ride in as a passenger for travelling. As used herein, the term “travel” and variants thereof refers to any activity in which a user is in transit between two locations. Consuming media content in a vehicle presents many challenges. In general, a user in a moving vehicle may have limited attention available for interacting with a media playback device due to the need to concentrate on travel related activities, such as driving and navigation. Therefore, while a vehicle is moving, it can be difficult for a user in the vehicle to interact with a media playback device without disrupting the driving or navigation. Further, the user interface of a media playback device can be overly complex, or may require such fine motor skills that it can be difficult to use while traveling in a vehicle. Voice-based user interfaces also encounter significant challenges to use in a vehicle environment. The passenger areas of a vehicle are often noisy due to engine noise, road noise, wind and weather noises, passenger noises, and the sound of any media content that may be playing on a media playback system in the vehicle. This noise hampers the ability of the voice-based user interface to interact with a user. Moreover, accessing media content while travelling may be difficult, expensive, or impossible depending on network availability or capacity along the route of travel. Further, accessing and playing media content can require significant amounts of electric power. Thus, use of a mobile device for media content playback during travel may be undesirable because it will drain the battery. It can also be challenging to connect a media playback device to a vehicle's built-in audio system because of the requirement to connect to auxiliary cables or undergo a complicated wireless pairing process. Embodiments disclosed herein address some or all of these challenges. It should be understood, however, that various aspects described herein are not limited to use of a media playback device during travel. On the other hands, many users desire a personalized media consuming experience. For example, a user can access almost limitless catalogs of media content through various free or fee-based media delivery services, such as media streaming services. 
Users can use mobile devices or other media playback devices to access large catalogs of media content. Due to such large collections of media content, it is desired to make it possible to customize a selection of media content to match users' individual tastes and preferences so that users can consume their favorite media content while traveling in a vehicle. Many vehicles include a built-in media playback device, such as a radio or a fixed media player, such as a player that can play media content from a CD, USB driver, or SD cards. However, the media content that is delivered using these built in vehicle media playback devices is greatly limited and is not flexible or customizable to the user. Alternatively, a mobile device, such as a smartphone or a tablet, can be used by a user to enjoy personalized and flexible music consuming experience in a vehicle by running music streaming applications thereon. However, mobile devices are not well suited for use in a vehicle environment for various reasons. For example, mobile devices are not readily accessible or controllable while driving or navigating. Further, connection between a mobile device and a vehicle audio system is often inconvenient and unreliable. Moreover, the music streaming application is not automatically ready to run and play media content, and the user needs to pick up the mobile device and open the music streaming application and control a sophisticated user interface to play media content. Additionally, many users have limited mobile data available via their mobile devices and are concerned about data usage while using the music streaming application in the vehicle. Battery drainage and legal restrictions on use while driving are further drawbacks to using mobile devices for playing media content in the vehicle. To address these challenges, the present disclosure provides a special-purpose personal appliance that can be used for streaming media in a vehicle. The appliance is also referred to herein as the personal media streaming appliance (PMSA). In some embodiments, the appliance is specially designed to be dedicated for media streaming purposes in a vehicle, and there is no other general use. Some embodiments of the appliance can operate to communicate directly with a media content server and receive streamed media content from the server via a cellular network. In these embodiments, other computing devices, such mobile devices, are not involved in this direct communication between the appliance and the media content server. Mobile data cost can be included in the subscription of the media streaming service or a purchase price of the personal appliance. Therefore, the customer's possible concern about mobile data usage can be eliminated. In other embodiments, the appliance can connect to another computing device, such as a mobile device, that provides a mobile hotspot to enable the appliance to communicate with the media content server rather than the appliance communicating with it directly. For example, a mobile device is used to assist in communication between the appliance and the media content server. Further, the appliance can be associated with a user account of the user for the media streaming service so that the user can enjoy personalized media content. In some embodiments, the appliance provides a simplified user interface so that a user can easily control playback of media content in a vehicle while maintaining his or her focus on other tasks such as driving or navigating. 
For example, the appliance has a limited set of physical control elements that are intuitively controllable for playback of media content with little (often only one) input from a user. Examples of such physical control elements include a rotatable knob and one or more physically-depressible buttons. Further, in some embodiments, the appliance is configured to be easily mounted to an interior structure of a vehicle, such as a dashboard, so that the user can easily reach the appliance. In some embodiments, the appliance also provides an output interface that can be easily connected to a vehicle audio system, such as via an auxiliary input port or Bluetooth. Therefore, the media content streamed to the appliance can then be transmitted from the appliance to the vehicle audio system for playback in the vehicle. In some embodiments, the appliance can includes a voice interaction system designed for voice interaction with a user in the noisy environment of a vehicle. In some embodiments, the appliance includes multiple microphones that reduce the effects of ambient noise in the passenger area of the vehicle. In an example, the appliance includes at least three microphones: two directed to the passenger area of the vehicle and another facing away from the passenger area of the vehicle to pick up vibrations and low frequency noise for cancellation. The appliance also applies spectral noise cancellation to reduce non-voice frequencies. In addition, omni-directional noise cancellation is applied in some embodiments to reduce omni-directional sound (e.g., vehicle noise). Directional noise is detected by determining a difference between audio input detected by the two microphones facing the passenger area. The difference is preserved as directional audio input. The appliance further cancels out audio that it is currently playing, allowing the appliance to detect voice commands even over loud music, for instance. In this manner, the appliance is arranged to provide an improved voice-based interface in a vehicle environment. Embodiments described herein are directed to playing media content with a media playback device in a vehicle. In these examples, a sound level in the vehicle can be measured. Based on that sound level, the playback of the media content can be modified (e.g., paused). SeeFIGS.7-9for additional details. As described herein, consuming media content may include one or more of listening to audio content, watching video content, or consuming other types of media content. For ease of explanation, the embodiments described in this application are presented using specific examples. For example, audio content (and in particular music) is described as an example of one form of media consumption. As another example, a vehicle is described as an example of an environment in which media content is consumed. Further, traveling (and in particular driving) in a vehicle is described as an example of an activity during which media content is consumed. However, it should be understood that the same concepts are similarly applicable to other forms of media consumption and to other environments or activities, and at least some embodiments include other forms of media consumption and/or are configured for use in other environments or during other activities. FIG.1illustrates an example system100for streaming media content for playback. The system100can be used in a vehicle80. The vehicle80includes a dashboard82or a head unit84. 
The system100includes one or more media playback devices104configured to play media content, such as a personal media streaming appliance (PMSA) system110, a media delivery system112, a vehicle media playback system114, and a mobile computing device118. The system100further includes a data communication network116and an in-vehicle wireless data communication network122. The PMSA system110operates to receive media content that is provided (e.g., streamed, transmitted, etc.) by a system external to the PMSA system110, such as the media delivery system112, and transmit the media content to the vehicle media playback system114for playback. In some embodiments, the PMSA system110is a portable device which can be carried into and used in the vehicle80. The PMSA system110can be mounted to a structure of the vehicle80, such as the dashboard82or the head unit84. In other embodiments, the PMSA system110can be configured to be built in a structure of the vehicle80. An example of the PMSA system110is illustrated and described in more detail with reference toFIGS.2and6. The media delivery system112operates to provide media content to one or more media playback devices104via the network116. In the illustrated example, the media delivery system112provides media content to the PMSA system110for playback of media content using the vehicle media playback system114. An example of the media delivery system112is illustrated and described in further detail herein, such as with reference toFIG.3. The vehicle media playback system114operates to receive media content from the PMSA system110and generates a media output124to play the media content in the vehicle80. An example of the vehicle media playback system114is illustrated and described in further detail herein, such as with reference toFIG.4. The network116is a data communication network that facilitates data communication between the PMSA system110and the media delivery system112. In some embodiments, the mobile computing device118can also communicate with the media delivery system112across the network116. The network116typically includes a set of computing devices and communication links between the computing devices. The computing devices in the network116use the links to enable communication among the computing devices in the network. The network116can include one or more routers, switches, mobile access points, bridges, hubs, intrusion detection devices, storage devices, standalone server devices, blade server devices, sensors, desktop computers, firewall devices, laptop computers, handheld computers, mobile telephones, vehicular computing devices, and other types of computing devices. In various embodiments, the network116includes various types of communication links. For example, the network116can include wired and/or wireless links, including cellular, Bluetooth, ultra-wideband (UWB), 802.11, ZigBee, and other types of wireless links. Furthermore, in various embodiments, the network116is implemented at various scales. For example, the network116can be implemented as one or more vehicle area networks, local area networks (LANs), metropolitan area networks, subnets, wide area networks (WAN) (such as the Internet), or can be implemented at another scale. Further, in some embodiments, the network116includes multiple networks, which may be of the same type or of multiple different types. 
In some embodiments, the network116can also be used for data communication between other media playback devices104(e.g., the mobile computing device118) and the media delivery system112. Because the network116is configured primarily for data communication between computing devices in the vehicle80and computing devices outside the vehicle80, the network116is also referred to herein as an out-of-vehicle network for out-of-vehicle data communication. Unlike the network116, the in-vehicle wireless data communication122can be used for direct data communication between computing devices (e.g., the media playback devices104) in the vehicle80. In some embodiments, the in-vehicle wireless data communication122is used for direct communication between the PMSA system110and the mobile computing device118. In other embodiments, the mobile computing device118can communicate with the PMSA system110via the data communication network116. In some embodiments, the in-vehicle wireless data communication122can also be used for data communication between the PMSA system110and the vehicle media playback system114. Various types of wireless communication interfaces can be used for the in-vehicle wireless data communication122. In some embodiments, the in-vehicle wireless data communication122includes Bluetooth® technology. In other embodiments, the in-vehicle wireless data communication122includes Wi-Fi® technology. In yet other embodiments, other suitable wireless communication interfaces can be used for the in-vehicle wireless data communication122, such as near field communication (NFC) and an ultrasonic data transmission. In some embodiments, the mobile computing device118is configured to play media content independently from the PMSA system110. In some embodiments, the mobile computing device118is a standalone computing device that, without the PMSA system110involved, can communicate with the media delivery system112and receive media content from the media delivery system112for playback in the vehicle80. An example of the mobile computing device118is illustrated and described in further detail herein, such as with reference toFIG.5. FIG.2is a block diagram of an example embodiment of the PMSA system110of the media streaming system100shown inFIG.1. In this example, the PMSA system110includes a user input device130, a display device132, a wireless data communication device134, a movement detection device136, a location determining device138, a media content output device140, an in-vehicle wireless communication device142, a power supply144, a power input device146, a processing device148, and a memory device150. In some embodiments, the PMSA system110is a system dedicated for streaming personalized media content in a vehicle environment. At least some embodiments of the PMSA system110have limited functionalities specifically selected for streaming media content from the media delivery system112at least via the network116and/or for providing other services associated with the media content streaming service. The PMSA system110may have no other general use such as is found in other computing devices, such as smartphones, tablets, and other smart devices. For example, in some embodiments, when the PMSA system110is powered up, the PMSA system110is configured to automatically activate a software application that is configured to perform the media content streaming and media playback operations of the PMSA system110using at least one of the components, devices, and elements of the PMSA system110.
In some embodiments, the software application of the PMSA system110is configured to continue running until the PMSA system110is powered off or powered down to a predetermined level. In some embodiments, the PMSA system110is configured to be free of any user interface control that would allow a user to disable the automatic activation of the software application on the PMSA system110. As described herein, the PMSA system110provides various structures, features, and functions that improve the user experience of consuming media content in a vehicle. As illustrated, the PMSA system110can communicate with the media delivery system112to receive media content via the network116and enable the vehicle media playback system114to play the media content in the vehicle. In some embodiments, the PMSA system110can communicate with the mobile computing device118that is in data communication with the media delivery system112. As described herein, the mobile computing device118can communicate with the media delivery system112via the network116. The user input device130operates to receive a user input152from a user U for controlling the PMSA system110. As illustrated, the user input152can include a manual input154and a voice input156. In some embodiments, the user input device130includes a manual input device160and a sound detection device162. The manual input device160operates to receive the manual input154for controlling playback of media content via the PMSA system110. In addition, in some embodiments, the manual input154is received for managing various pieces of information transmitted via the PMSA system110and/or controlling other functions or aspects associated with the PMSA system110. In some embodiments, the manual input device160includes one or more manual control elements configured to receive various manual control actions, such as pressing actions and rotational actions. As described herein, the manual input device160includes a manual control knob510and one or more physical buttons512, which are further illustrated and described with reference toFIG.6. The sound detection device162operates to detect and record sounds from proximate the PMSA system110. For example, the sound detection device162can detect sounds including the voice input156. In some embodiments, the sound detection device162includes one or more acoustic sensors configured to detect sounds proximate the PMSA system110. For example, acoustic sensors of the sound detection device162include one or more microphones. Various types of microphones can be used for the sound detection device162of the PMSA system110. In some embodiments, the voice input156is a user's voice (also referred to herein as an utterance) for controlling playback of media content via the PMSA system110. In addition, the voice input156is a user's voice for managing various data transmitted via the PMSA system110and/or controlling other functions or aspects associated with the PMSA system110. In some embodiments, the sound detection device162is configured to cancel noises from the received sounds so that a desired sound (e.g., the voice input156) is clearly identified. For example, the sound detection device162can include one or more noise-canceling microphones which are configured to filter ambient noise from the voice input156.
In addition or alternatively, a plurality of microphones of the sound detection device162are arranged at different locations in a body of the PMSA system110and/or oriented in different directions with respect to the body of the PMSA system110, so that ambient noise is effectively canceled from the voice input156or other desired sounds being identified. In some embodiments, the sounds detected by the sound detection device162can be processed by the sound processing engine180of the PMSA system110as described below. Referring still toFIG.2, the display device132operates to display information to the user U. Examples of such information include media content playback information, notifications, and other information. In some embodiments, the display device132operates as a display screen only and is not capable of receiving a user input. By receiving the manual input154only via the manual input device160and disabling receipt of manual input via the display device132, the user interface of the PMSA system110is simplified so that the user U can control the PMSA system110while maintaining focus on other activities in the vehicle80. It is understood, however, that in other embodiments, the display device132is configured as a touch-sensitive display screen that operates as both a display screen and a user input device. In yet other embodiments, the PMSA system110does not include a display device. As described herein, in some embodiments, the display device132is arranged at the manual input device160. In other embodiments, the display device132is arranged separate from the manual input device160. The wireless data communication device134operates to enable the PMSA system110to communicate with one or more computing devices at a remote location that is outside the vehicle80. In the illustrated example, the wireless data communication device134operates to connect the PMSA system110to one or more networks outside the vehicle80, such as the network116. For example, the wireless data communication device134is configured to communicate with the media delivery system112and receive media content from the media delivery system112at least partially via the network116. The wireless data communication device134can be a wireless network interface of various types which connects the PMSA system110to the network116. Examples of the wireless data communication device134include wireless wide area network (WWAN) interfaces, which use mobile telecommunication cellular network technologies. Examples of cellular network technologies include LTE, WiMAX, UMTS, CDMA2000, GSM, cellular digital packet data (CDPD), and Mobitex. In some embodiments, the wireless data communication device134is configured as a cellular network interface to facilitate data communication between the PMSA system110and the media delivery system112over a cellular network. The movement detection device136can be used to detect movement of the PMSA system110and the vehicle80. In some embodiments, the movement detection device136is configured to monitor one or more factors that are used to determine movement of the vehicle80. The movement detection device136can include one or more sensors that are configured to detect movement, position, and/or orientation of the PMSA system110. As an example, the movement detection device136is operable to determine an orientation of the PMSA system110. The movement detection device136can detect changes in the determined orientation and interpret those changes as indicating movement of the PMSA system110.
In some embodiments, the movement detection device136includes an accelerometer. In other embodiments, the movement detection device136includes a gyroscope. Other sensors can also be used for the movement detection device136, such as a magnetometer, a GPS receiver, an altimeter, an odometer, a speedometer, a shock detector, a vibration sensor, a proximity sensor, and an optical sensor (e.g., a light sensor, a camera, and an infrared sensor). The location determining device138is a device that determines the location of the PMSA system110. In some embodiments, the location determining device138uses one or more of Global Positioning System (GPS) technology (which may receive GPS signals), Global Navigation Satellite System (GLONASS), cellular triangulation technology, network-based location identification technology, Wi-Fi positioning systems technology, and combinations thereof. The media content output device140is an interface that enables the PMSA system110to transmit media content to the vehicle media playback system114. Some embodiments of the PMSA system110do not have a speaker and thus cannot play media content independently. In these embodiments, the PMSA system110is not regarded as a standalone device for playing media content. Instead, the PMSA system110transmits media content to another media playback device, such as the vehicle media playback system114, to enable the other media playback device to play the media content, such as through the vehicle stereo system. As illustrated, the PMSA system110(e.g., a media content processing engine176thereof inFIG.2) can convert media content to a media content signal164, and the media content output device140transmits the media content signal164to the vehicle media playback system114. The vehicle media playback system114can play the media content based on the media content signal164. For example, the vehicle media playback system114operates to convert the media content signal164into a format that is readable by the vehicle media playback system114for playback. In some embodiments, the media content output device140includes an auxiliary (AUX) output interface166and a wireless output interface168. The AUX output interface166is configured to connect the PMSA system110to the vehicle media playback system114via a cable (e.g., a media content output line550inFIG.6) of the PMSA system110. In some embodiments, as illustrated inFIG.6, the media content output line550extending from the PMSA system110is connected to an input connector340(e.g., an auxiliary input jack or port) of the vehicle media playback system114. As illustrated herein, the media content output line550can be of various types, such as an analog audio cable or a USB cable. The wireless output interface168is configured to connect the PMSA system110to the vehicle media playback system114via a wireless communication protocol. In some embodiments, the wireless output interface168is configured for Bluetooth connection. In other embodiments, the wireless output interface168is configured for other types of wireless connection. In some embodiments, the wireless output interface168is incorporated into, or implemented with, the in-vehicle wireless communication device142. For example, when the media content output device140wirelessly transmits media content to the vehicle media playback system114, the in-vehicle wireless communication device142can be used to implement the wireless output interface168of the media content output device140.
Referring still toFIG.2, the in-vehicle wireless communication device142operates to establish a wireless data communication, such as the in-vehicle wireless data communication122, between computing devices in a vehicle80. In the illustrated example, the in-vehicle wireless communication device142is used to enable the PMSA system110to communicate with other computing devices, such as the mobile computing device118, in the vehicle80. Various types of wireless communication interfaces can be used for the in-vehicle wireless communication device142, such as Bluetooth® technology, Wi-Fi® technology, a near field communication (NFC), and an ultrasound data transmission. The in-vehicle wireless communication is also referred to herein as a short-range wireless communication. The power supply144is included in the example PMSA system110and is configured to supply electric power to the PMSA system110. In some embodiments, the power supply144includes at least one battery. The power supply144can be rechargeable. For example, the power supply144can be recharged using the power input device146that is connected to an external power supply. In some embodiments, the power supply144is included inside the PMSA system110and is not removable from the PMSA system110. In other embodiments, the power supply144is removable by the user from the PMSA system110. The power input device146is configured to receive electric power to maintain activation of components of the PMSA system110. As described herein, the power input device146is connected to a power source of the vehicle80(e.g., a vehicle power supply540inFIG.6) and uses the electric power from the vehicle80as a primary power source to maintain activation of the PMSA system110over an extended period of time, such as longer than several minutes. The processing device148, in some embodiments, comprises one or more central processing units (CPU). In other embodiments, the processing device148additionally or alternatively includes one or more digital signal processors, field-programmable gate arrays, or other electronic circuits. The memory device150typically includes at least some form of computer-readable media. Computer readable media includes any available media that can be accessed by the PMSA system110. By way of example, computer-readable media include computer readable storage media and computer readable communication media. Computer readable storage media includes volatile and nonvolatile, removable and non-removable media implemented in any device configured to store information such as computer readable instructions, data structures, program modules, or other data. Computer readable storage media includes, but is not limited to, random access memory, read only memory, electrically erasable programmable read only memory, flash memory and other memory technology, compact disc read only memory, Blu-ray discs, digital versatile discs or other optical storage, magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by the PMSA system110. In some embodiments, computer readable storage media is non-transitory computer readable storage media. Computer readable communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
The term “modulated data signal” refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, computer readable communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency, infrared, and other wireless media. Combinations of any of the above are also included within the scope of computer readable media. The memory device150operates to store data and instructions. In some embodiments, the memory device150stores instructions for a media content cache172, a caching management engine174, a media content processing engine176, a manual input processing engine178, a sound processing engine180, and a voice interaction engine182. Some embodiments of the memory device150include the media content cache172. The media content cache172stores media content items, such as media content items that have been received from the media delivery system112. The media content items stored in the media content cache172may be stored in an encrypted or unencrypted format. In some embodiments, the media content cache172also stores metadata about media content items such as title, artist name, album name, length, genre, mood, era, etc. The media content cache172can further store playback information about the media content items and/or other information associated with the media content items. The caching management engine174is configured to receive and cache media content in the media content cache172and manage the media content stored in the media content cache172. In some embodiments, when media content is streamed from the media delivery system112, the caching management engine174operates to cache at least a portion of the media content into the media content cache172so that at least a portion of the cached media content can be transmitted to the vehicle media playback system114for playback. In other embodiments, the caching management engine174operates to cache at least a portion of media content into the media content cache172while online so that the cached media content is retrieved for playback while the PMSA system110is offline. The media content processing engine176is configured to process the media content that is received from the media delivery system112, and generate the media content signal164usable for the vehicle media playback system114to play the media content. The media content signal164is transmitted to the vehicle media playback system114using the media content output device140, and then decoded so that the vehicle media playback system114plays the media content in the vehicle80. The manual input processing engine178operates to receive the manual input154via the manual input device160. In some embodiments, when the manual input device160is actuated (e.g., pressed or rotated) upon receiving the manual input154, the manual input device160generates an electric signal representative of the manual input154. The manual input processing engine178can process the electric signal and determine the user input (e.g., command or instruction) corresponding to the manual input154to the PMSA system110. In some embodiments, the manual input processing engine178can perform a function requested by the manual input154, such as controlling playback of media content. The manual input processing engine178can cause one or more other engines to perform the function associated with the manual input154. 
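The translation performed by the manual input processing engine178can be illustrated with a short sketch. The following Python example is only an illustration of the general idea of mapping physical control events to playback commands; the event names, command names, and mapping are hypothetical and are not taken from the PMSA system110or any actual implementation.

# Minimal sketch of a manual-input processing step that maps physical control
# events (derived from the electric signals of a knob and buttons) to playback
# commands. All event and command names here are hypothetical illustrations.
from enum import Enum, auto


class ManualEvent(Enum):
    KNOB_ROTATE_CW = auto()
    KNOB_ROTATE_CCW = auto()
    KNOB_PRESS_MIDDLE = auto()
    PRESET_BUTTON_1 = auto()


class Command(Enum):
    VOLUME_UP = auto()
    VOLUME_DOWN = auto()
    PLAY_PAUSE = auto()
    PLAY_PRESET_1 = auto()


# One possible static mapping from manual events to playback commands.
EVENT_TO_COMMAND = {
    ManualEvent.KNOB_ROTATE_CW: Command.VOLUME_UP,
    ManualEvent.KNOB_ROTATE_CCW: Command.VOLUME_DOWN,
    ManualEvent.KNOB_PRESS_MIDDLE: Command.PLAY_PAUSE,
    ManualEvent.PRESET_BUTTON_1: Command.PLAY_PRESET_1,
}


def process_manual_input(event: ManualEvent) -> Command:
    """Translate an event derived from an electric signal into a playback command."""
    return EVENT_TO_COMMAND[event]


if __name__ == "__main__":
    print(process_manual_input(ManualEvent.KNOB_ROTATE_CW))  # Command.VOLUME_UP

In this sketch, the lookup table stands in for whatever decision logic an actual manual input processing engine would apply before dispatching the resulting command to other engines.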
The sound processing engine180is configured to receive sound signals obtained from the sound detection device162and process the sound signals to identify different sources of the sounds received via the sound detection device162. In some embodiments, the sound processing engine180operates to filter the user's voice input156from noises included in the detected sounds. Various noise cancellation technologies, such as active noise control or cancelling technologies or passive noise control or cancelling technologies, can be used to filter the voice input from ambient noise. In examples, the sound processing engine180filters out omni-directional noise and preserves directional noise (e.g., an audio input difference between two microphones) in audio input. In examples, the sound processing engine180removes frequencies above or below human speaking voice frequencies. In examples, the sound processing engine180subtracts audio output of the device from the audio input to filter out the audio content being provided by the device (e.g., to reduce the need for the user to shout over playing music). In examples, the sound processing engine180performs echo cancellation. By using one or more of these techniques, the sound processing engine180provides sound processing customized for use in a vehicle environment. In other embodiments, the sound processing engine180operates to process the received sound signals to identify the sources of particular sounds of the sound signals, such as people's conversation in the vehicle, the vehicle engine sound, or other ambient sounds associated with the vehicle. In some embodiments, a recording of sounds captured using the sound detection device162can be analyzed using speech recognition technology to identify words spoken by the user. The words may be recognized as commands from the user that alter the playback of media content and/or other functions or aspects of the PMSA system110. In some embodiments, the words and/or the recordings may also be analyzed using natural language processing and/or intent recognition technology to determine appropriate actions to take based on the spoken words. Additionally or alternatively, the sound processing engine180may determine various sound properties about the sounds proximate the PMSA system110such as volume, dominant frequency or frequencies, etc. These sound properties may be used to make inferences about the environment proximate to the PMSA system110. The voice interaction engine182operates to cooperate with the media delivery system112(e.g., a voice interaction server204thereof as illustrated inFIG.3) to identify a command (e.g., a user intent) that is conveyed by the voice input156. In some embodiments, the voice interaction engine182transmits the user's voice input156that is detected by the sound processing engine180to the media delivery system112so that the media delivery system112operates to determine a command intended by the voice input156. In other embodiments, at least some of the determination process of the command can be performed locally by the voice interaction engine182. In addition, some embodiments of the voice interaction engine182can operate to cooperate with the media delivery system112(e.g., the voice interaction server204thereof) to provide a voice assistant that performs various voice-based interactions with the user, such as voice feedback, voice notifications, voice recommendations, and other voice-related interactions and services.
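One plausible reading of the vehicle-oriented sound processing described above for the sound processing engine180is sketched below in Python. The function name, the two-microphone arrangement, the sample rate, the speech band edges, and the simple playback subtraction are illustrative assumptions only; a production system would use calibrated echo-path estimation and more sophisticated spectral noise reduction.

# Minimal sketch: subtract the appliance's own playback, keep the difference of
# two cabin-facing microphones (directional content), and suppress frequencies
# outside a rough speech band. Values and names are illustrative assumptions.
import numpy as np


def extract_voice(mic_left: np.ndarray,
                  mic_right: np.ndarray,
                  playback: np.ndarray,
                  sample_rate: int = 16_000,
                  band_hz: tuple = (80.0, 4_000.0)) -> np.ndarray:
    """Return a rough estimate of the directional voice signal from two microphones."""
    # Crude playback/echo cancellation: subtract the audio the appliance is
    # currently playing from each microphone signal.
    left = mic_left - playback[: len(mic_left)]
    right = mic_right - playback[: len(mic_right)]

    # Omni-directional sound (e.g., vehicle noise) reaches both microphones
    # similarly, so the inter-microphone difference preserves mostly
    # directional audio such as a passenger's voice.
    directional = left - right

    # Simple spectral noise reduction: zero out frequency bins outside a
    # rough human speech band.
    spectrum = np.fft.rfft(directional)
    freqs = np.fft.rfftfreq(len(directional), d=1.0 / sample_rate)
    spectrum[(freqs < band_hz[0]) | (freqs > band_hz[1])] = 0.0
    return np.fft.irfft(spectrum, n=len(directional))


if __name__ == "__main__":
    t = np.arange(16_000) / 16_000.0
    voice = np.sin(2 * np.pi * 300 * t)           # directional "voice" at 300 Hz
    rumble = 0.5 * np.sin(2 * np.pi * 30 * t)     # omni-directional low-frequency noise
    music = 0.3 * np.sin(2 * np.pi * 1_000 * t)   # audio the appliance is playing
    left = voice + rumble + music
    right = 0.2 * voice + rumble + music
    cleaned = extract_voice(left, right, music)
    print(round(float(np.abs(cleaned).max()), 3))  # close to 0.8 (the voice difference)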
FIG.3is a block diagram of an example embodiment of the media delivery system112ofFIG.1. The media delivery system112includes a media content server200, a personal media streaming appliance (PMSA) server202, and a voice interaction server204. The media delivery system112comprises one or more computing devices and provides media content to the PMSA system110and, in some embodiments, other media playback devices, such as the mobile computing device118, as well. In addition, the media delivery system112interacts with the PMSA system110to provide the PMSA system110with various functionalities. In at least some embodiments, the media content server200, the PMSA server202, and the voice interaction server204are provided by separate computing devices. In other embodiments, the media content server200, the PMSA server202, and the voice interaction server204are provided by the same computing device(s). Further, in some embodiments, at least one of the media content server200, the PMSA server202, and the voice interaction server204is provided by multiple computing devices. For example, the media content server200, the PMSA server202, and the voice interaction server204may be provided by multiple redundant servers located in multiple geographic locations. AlthoughFIG.3shows a single media content server200, a single PMSA server202, and a single voice interaction server204, some embodiments include multiple media servers, multiple PMSA servers, and/or multiple voice interaction servers. In these embodiments, each of the multiple media servers, multiple PMSA servers, and multiple voice interaction servers may be identical or similar to the media content server200, the PMSA server202, and the voice interaction server204, respectively, as described herein, and may provide similar functionality with, for example, greater capacity and redundancy and/or services from multiple geographic locations. Alternatively, in these embodiments, some of the multiple media servers, the multiple PMSA servers, and/or the multiple voice interaction servers may perform specialized functions to provide specialized services. Various combinations thereof are possible as well. The media content server200transmits stream media210(FIG.2) to media playback devices such as the PMSA system110. In some embodiments, the media content server200includes a media server application212, a processing device214, a memory device216, and a network access device218. The processing device214and the memory device216may be similar to the processing device148and the memory device150, respectively, which have each been previously described. Therefore, the description of the processing device214and the memory device216is omitted for brevity purposes. The network access device218operates to communicate with other computing devices over one or more networks, such as the network116. Examples of the network access device218include one or more wired network interfaces and wireless network interfaces. Examples of such wireless network interfaces of the network access device218include wireless wide area network (WWAN) interfaces (including cellular networks) and wireless local area network (WLAN) interfaces. In other examples, other types of wireless interfaces can be used for the network access device218. In some embodiments, the media server application212is configured to stream media content, such as music or other audio, video, or other suitable forms of media content.
The media server application212includes a media stream service222, a media application interface224, and a media data store226. The media stream service222operates to buffer media content, such as media content items230A,230B, and230N (collectively230), for streaming to one or more streams232A,232B, and232N (collectively232). The media application interface224can receive requests or other communication from media playback devices or other systems, such as the PMSA system110, to retrieve media content items from the media content server200. For example, inFIG.2, the media application interface224receives communication from the PMSA system110, such as the caching management engine174thereof, to receive media content from the media content server200. In some embodiments, the media data store226stores media content items234, media content metadata236, media contexts238, user accounts240, and taste profiles242. The media data store226may comprise one or more databases and file systems. Other embodiments are possible as well. As discussed herein, the media content items234(including the media content items230) may be audio, video, or any other type of media content, which may be stored in any format for storing media content. The media content metadata236operates to provide various pieces of information associated with the media content items234. In some embodiments, the media content metadata236includes one or more of title, artist name, album name, length, genre, mood, era, etc. In some embodiments, the media content metadata236includes acoustic metadata, cultural metadata, and explicit metadata. The acoustic metadata, which may be derived from analysis of the track, refers to a numerical or mathematical representation of the sound of a track. Acoustic metadata may include temporal information such as tempo, rhythm, beats, downbeats, tatums, patterns, sections, or other structures. Acoustic metadata may also include spectral information such as melody, pitch, harmony, timbre, chroma, loudness, vocalness, or other possible features. Acoustic metadata may take the form of one or more vectors, matrices, lists, tables, and other data structures. Acoustic metadata may be derived from analysis of the music signal. One form of acoustic metadata, commonly termed an acoustic fingerprint, may uniquely identify a specific track. Other forms of acoustic metadata may be formed by compressing the content of a track while retaining some or all of its musical characteristics. The cultural metadata refers to text-based information describing listeners' reactions to a track or song, such as styles, genres, moods, themes, similar artists and/or songs, rankings, etc. Cultural metadata may be derived from expert opinion such as music reviews or classification of music into genres. Cultural metadata may be derived from listeners through websites, chatrooms, blogs, surveys, and the like. Cultural metadata may include sales data, shared collections, lists of favorite songs, and any text information that may be used to describe, rank, or interpret music. Cultural metadata may also be generated by a community of listeners and automatically retrieved from Internet sites, chat rooms, blogs, and the like.
Cultural metadata may take the form of one or more vectors, matrices, lists, tables, and other data structures. A form of cultural metadata particularly useful for comparing music is a description vector. A description vector is a multi-dimensional vector associated with a track, album, or artist. Each term of the description vector indicates the probability that a corresponding word or phrase would be used to describe the associated track, album or artist. The explicit metadata refers to factual or explicit information relating to music. Explicit metadata may include album and song titles, artist and composer names, other credits, album cover art, publisher name and product number, and other information. Explicit metadata is generally not derived from the music itself or from the reactions or opinions of listeners. At least some of the metadata236, such as explicit metadata (names, credits, product numbers, etc.) and cultural metadata (styles, genres, moods, themes, similar artists and/or songs, rankings, etc.), for a large library of songs or tracks can be evaluated and provided by one or more third party service providers. Acoustic and cultural metadata may take the form of parameters, lists, matrices, vectors, and other data structures. Acoustic and cultural metadata may be stored as XML files, for example, or any other appropriate file type. Explicit metadata may include numerical, text, pictorial, and other information. Explicit metadata may also be stored in an XML or other file. All or portions of the metadata may be stored in separate files associated with specific tracks. All or portions of the metadata, such as acoustic fingerprints and/or description vectors, may be stored in a searchable data structure, such as a k-D tree or other database format. Referring still toFIG.3, each of the media contexts238is used to identify one or more media content items234. In some embodiments, the media contexts238are configured to group one or more media content items234and provide a particular context to the group of media content items234. Some examples of the media contexts238include albums, artists, playlists, and individual media content items. By way of example, where a media context238is an album, the media context238can represent that the media content items234identified by the media context238are associated with that album. As described above, the media contexts238can include playlists239. The playlists239are used to identify one or more of the media content items234. In some embodiments, the playlists239identify a group of the media content items234in a particular order. In other embodiments, the playlists239merely identify a group of the media content items234without specifying a particular order. Some, but not necessarily all, of the media content items234included in a particular one of the playlists239are associated with a common characteristic such as a common genre, mood, or era. In some embodiments, a user can listen to media content items in a playlist239by selecting the playlist239via a media playback device104, such as the PMSA system110. The media playback device104then operates to communicate with the media delivery system112so that the media delivery system112retrieves the media content items identified by the playlist239and transmits data for the media content items to the media playback device104for playback. In some embodiments, the playlist239includes a playlist title and a list of content media item identifications. 
The playlist title is a title of the playlist, which can be provided by a user using the media playback device104. The list of content media item identifications includes one or more media content item identifications (IDs) that refer to respective media content items170. Each media content item is identified by a media content item ID and includes various pieces of information, such as a media content item title, artist identification (e.g., individual artist name or group name, or multiple artist names or group names), and media content item data. In some embodiments, the media content item title and the artist ID are part of the media content metadata236, which can further include other attributes of the media content item, such as album name, length, genre, mood, era, etc. as described herein. At least some of the playlists239may include user-created playlists. For example, a user of a media streaming service provided using the media delivery system112can create a playlist239and edit the playlist239by adding, removing, and rearranging media content items in the playlist239. A playlist239can be created and/or edited by a group of users together to make it a collaborative playlist. In some embodiments, user-created playlists can be available to a particular user only, a group of users, or to the public based on a user-definable privacy setting. In some embodiments, when a playlist is created by a user or a group of users, the media delivery system112operates to generate a list of media content items recommended for the particular user or the particular group of users. In some embodiments, such recommended media content items can be selected based at least on the taste profiles242as described herein. Other information or factors can be used to determine the recommended media content items. Examples of determining recommended media content items are described in U.S. patent application Ser. No. 15/858,377, titled MEDIA CONTENT ITEM RECOMMENDATION SYSTEM, filed Dec. 29, 2017, the disclosure of which is hereby incorporated by reference in its entirety. In addition or alternatively, at least some of the playlists239are created by a media streaming service provider. For example, such provider-created playlists can be automatically created by the media delivery system112. In some embodiments, a provider-created playlist can be customized to a particular user or a particular group of users. By way of example, a playlist for a particular user can be automatically created by the media delivery system112based on the user's listening history (e.g., the user's taste profile) and/or listening history of other users with similar tastes. In other embodiments, a provider-created playlist can be configured to be available for the public in general. Provider-created playlists can also be sharable with other users. The user accounts240are used to identify users of a media streaming service provided by the media delivery system112. In some embodiments, a user account240allows a user to authenticate to the media delivery system112and enables the user to access resources (e.g., media content items, playlists, etc.) provided by the media delivery system112. In some embodiments, the user can use different devices (e.g., the PMSA system110and the mobile computing device118) to log into the user account and access data associated with the user account in the media delivery system112.
User authentication information, such as a username, email account information, a password, and other credentials, can be used for the user to log into his or her user account. The taste profiles242contain records indicating media content tastes of users. A taste profile can be associated with a user and used to maintain an in-depth understanding of the music activity and preference of that user, enabling personalized recommendations, taste profiling and a wide range of social music applications. Libraries and wrappers can be accessed to create taste profiles from a media library of the user, social website activity and other specialized databases to mine music preferences. In some embodiments, each taste profile242is a representation of musical activities, such as user preferences and historical information about the users' consumption of media content, and can include a wide range of information such as artist plays, song plays, skips, dates of listen by the user, songs per day, playlists, play counts, start/stop/skip data for portions of a song or album, contents of collections, user rankings, preferences, or other mentions received via a client device, or other media plays, such as websites visited, book titles, movies watched, playing activity during a movie or other presentations, ratings, or terms corresponding to the media, such as "comedy", "sexy", etc. In addition, the taste profiles242can include other information. For example, the taste profiles242can include libraries and/or playlists of media content items associated with the user. The taste profiles242can also include information about the user's relationships with other users (e.g., associations between users that are stored by the media delivery system112or on a separate social media site). The taste profiles242can be used for a number of purposes. One use of taste profiles is for creating personalized playlists (e.g., personal playlisting). An API call associated with personal playlisting can be used to return a playlist customized to a particular user. For example, the media content items listed in the created playlist are constrained to the media content items in a taste profile associated with the particular user. Another example use case is for event recommendation. A taste profile can be created, for example, for a festival that contains all the artists in the festival. Music recommendations can be constrained to artists in the taste profile. Yet another use case is for personalized recommendation, where the contents of a taste profile are used to represent an individual's taste. This API call uses a taste profile as a seed for obtaining recommendations or playlists of similar artists. Yet another example taste profile use case is referred to as bulk resolution. A bulk resolution API call is used to resolve taste profile items to pre-stored identifiers associated with a service, such as a service that provides metadata about items associated with the taste profile (e.g., song tempo for a large catalog of items). Yet another example use case for taste profiles is referred to as user-to-user recommendation. This API call is used to discover users with similar tastes by comparing the similarity of taste profile item(s) associated with users. A taste profile242can represent a single user or multiple users. Conversely, a single user or entity can have multiple taste profiles242.
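The personal playlisting use case described above can be sketched as follows. This Python example is a simplified illustration only: the TasteProfile structure, the play-count proxy for preference, and the personalized_playlist function are hypothetical and do not correspond to an actual API of the media delivery system112.

# Minimal sketch of constraining a candidate playlist to the media content
# items in a user's taste profile. Data shapes and names are assumptions.
from dataclasses import dataclass, field


@dataclass
class TasteProfile:
    user_id: str
    # media content item ID -> play count (a crude proxy for preference)
    play_counts: dict = field(default_factory=dict)

    def likes(self, item_id: str, min_plays: int = 1) -> bool:
        return self.play_counts.get(item_id, 0) >= min_plays


def personalized_playlist(candidates: list, profile: TasteProfile) -> list:
    """Return candidate item IDs constrained to the user's taste profile,
    ordered by how often the user has played them."""
    kept = [item for item in candidates if profile.likes(item)]
    return sorted(kept, key=lambda item: profile.play_counts[item], reverse=True)


if __name__ == "__main__":
    profile = TasteProfile("user-1", {"track-a": 12, "track-b": 3})
    print(personalized_playlist(["track-a", "track-b", "track-c"], profile))
    # ['track-a', 'track-b']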
For example, a single user can have one taste profile generated in connection with the user's media content play activity, whereas another separate taste profile can be generated for the same user based on the user's selection of media content items and/or artists for a playlist. Referring still toFIG.3, the PMSA server202operates to provide various functionalities to the PMSA system110. In some embodiments, the PMSA server202includes a personal media streaming appliance (PMSA) server application250, a processing device252, a memory device254, and a network access device256. The processing device252, the memory device254, and the network access device256may be similar to the processing device214, the memory device216, and the network access device218, respectively, which have each been previously described. In some embodiments, the PMSA server application250operates to interact with the PMSA system110and enable the PMSA system110to perform various functions, such as receiving a user manual input, displaying information, providing notifications, performing power management, providing location-based services, and authenticating one or more users for the PMSA system110. The PMSA server application250can interact with other servers, such as the media content server200and the voice interaction server204, to execute such functions. Referring still toFIG.3, the voice interaction server204operates to provide various voice-related functionalities to the PMSA system110. In some embodiments, the voice interaction server204includes a voice interaction server application270, a processing device272, a memory device274, and a network access device276. The processing device272, the memory device274, and the network access device276may be similar to the processing device214, the memory device216, and the network access device218, respectively, which have each been previously described. In some embodiments, the voice interaction server application270operates to interact with the PMSA system110and enable the PMSA system110to perform various voice-related functions, such as voice feedback and voice notifications. In some embodiments, the voice interaction server application270is configured to receive data (e.g., speech-to-text (STT) data) representative of a voice input received via the PMSA system110and process the data to determine a user command (e.g., a user request or instruction). In some embodiments, at least one of the media content server200, the PMSA server202, and the voice interaction server204may be used to perform one or more functions corresponding to the determined user command. FIG.4is a block diagram of an example embodiment of the vehicle media playback system114. In this example, the vehicle media playback system114includes a vehicle head unit302, an amplifier304, and a speaker306. The vehicle head unit302is configured to receive a user input and generate media content from various sources. In this example, the vehicle head unit302includes a receiver310, a wireless communication device312, a wired input device314, a processing device316, a memory device318, a user input assembly320, a display device322, and a stored media interface assembly324. The receiver310operates to receive media content signals from various external sources. The received signals can then be used to generate media output by the vehicle media playback system114. Some embodiments of the receiver310include one or more tuners for receiving radio signals such as FM or AM radio signals.
Other embodiments of the receiver310include a receiver for receiving satellite radio signals and/or a receiver for receiving internet radio signals. The wireless communication device312operates to communicate with other devices using wireless data signals. The wireless communication device312can include one or more of a Bluetooth transceiver and a Wi-Fi transceiver. The wireless data signal may comprise a media content signal such as an audio or video signal. In some embodiments, the wireless communication device312is used to enable the vehicle media playback system114to wirelessly communicate with the PMSA system110and receive the media content signal164(FIG.2) from the PMSA system110via an in-vehicle wireless network. The in-vehicle wireless network between the PMSA system110and the vehicle media playback system114can be configured similarly to the in-vehicle wireless data communication122(FIG.2). The wired input device314provides an interface configured to receive a cable for providing media content and/or commands. The wired input device314includes an input connector340configured to receive a plug extending from a media playback device for transmitting a signal for media content. In some embodiments, the wired input device314can include an auxiliary input jack (AUX) for receiving a plug from a media playback device that transmits analog audio signals. The wired input device314can also include different or multiple input jacks for receiving plugs from media playback devices that transmit other types of analog or digital signals (e.g., USB, HDMI, Composite Video, YPbPr, DVI). In some embodiments, the wired input device314is also used to receive instructions from other devices. In some embodiments, the wired input device314provides the input connector340(e.g., an AUX port) for receiving a connector552extending from the PMSA system110, as illustrated inFIG.6. The media content signal164is then transmitted from the PMSA system110to the vehicle media playback system114via the cable550, the connector552, and the input connector340. The processing device316operates to control various devices, components, and elements of the vehicle media playback system114. The processing device316can be configured similar to the processing device148(FIG.2) and, therefore, the description of the processing device316is omitted for brevity purposes. In some embodiments, the processing device316operates to process the media content signal164received from the PMSA system110and convert the signal164to a format readable by the vehicle media playback system114for playback. The memory device318is configured to store data and instructions that are usable to control various devices, components, and elements of the vehicle media playback system114. The memory device318can be configured similar to the memory device150(FIG.2) and, therefore, the description of the memory device318is omitted for brevity purposes. The user input assembly320includes one or more input devices for receiving user input from users for controlling the vehicle media playback system114. In some embodiments, the user input assembly320includes multiple knobs, buttons, and other types of input controls for adjusting volume, selecting sources and content, and adjusting various output parameters. In some embodiments, the various input devices are disposed on or near a front surface of the vehicle head unit302. The various input devices can also be disposed on the steering wheel of the vehicle or elsewhere. 
Additionally or alternatively, the user input assembly320can include one or more touch sensitive surfaces, which can be incorporated in the display device322. The display device322displays information. In some embodiments, the display device322includes a liquid crystal display (LCD) panel for displaying textual information about content and/or settings of the vehicle media playback system114. The display device322can also include other types of display panels such as a light emitting diode (LED) panel. In some embodiments, the display device322can also display image or video content. The stored media interface assembly324reads media content stored on a physical medium. In some embodiments, the stored media interface assembly324comprises one or more devices for reading media content from a physical medium such as a compact disc or cassette tape. The amplifier304operates to amplify a signal received from the vehicle head unit302and transmits the amplified signal to the speaker306. In this manner, the media output124can be played back at a greater volume. The amplifier304may include a power source to power the amplification. The speaker306operates to produce an audio output (e.g., the media output124) based on an electronic signal. The speaker306can include one or more vehicle embedded speakers330disposed at various locations within the vehicle80. In some embodiments, separate signals are received for at least some of the speakers (e.g., to provide stereo or surround sound). In other embodiments, the speaker306can include one or more external speakers332which are arranged within the vehicle80. Users may bring one or more external speakers332into the vehicle80and connect the external speakers332to the vehicle head unit302using a wired interface or a wireless interface. In some embodiments, the external speakers332can be connected to the vehicle head unit302using Bluetooth. Other wireless protocols can be used to connect the external speakers332to the vehicle head unit302. In other embodiments, a wired connection (e.g., a cable) can be used to connect the external speakers332to the vehicle head unit302. Examples of the wired connection include an analog or digital audio cable connection and a universal serial bus (USB) cable connection. The external speaker332can also include a mechanical apparatus for attachment to a structure of the vehicle. FIG.5is a block diagram of an example embodiment of the mobile computing device118ofFIG.1. Similar to the PMSA system110, the mobile computing device118can also be used to play media content. For example, the mobile computing device118is configured to play media content that is provided (e.g., streamed or transmitted) by a system external to the mobile computing device118, such as the media delivery system112, another system, or a peer device. In other examples, the mobile computing device118operates to play media content stored locally on the mobile computing device118. In yet other examples, the mobile computing device118operates to play media content that is stored locally as well as media content provided by other systems. In some embodiments, the mobile computing device118is a handheld or portable entertainment device, smartphone, tablet, watch, wearable device, or any other type of computing device capable of playing media content. In other embodiments, the mobile computing device118is a laptop computer, desktop computer, television, gaming console, set-top box, network appliance, Blu-ray or DVD player, media player, stereo, or radio.
As described herein, the mobile computing device118is distinguished from the PMSA system110in various aspects. For example, unlike the PMSA system110, the mobile computing device118is not limited to playing media content, but is configured for a wide range of functionalities in various situations and places. The mobile computing device118is capable of running a plurality of different software applications for different purposes. The mobile computing device118enables the user to freely start or stop activation of such individual software applications. In at least some embodiments, the mobile computing device118includes a location-determining device402, a display screen404, a processing device406, a memory device408, a content output device410, and a network access device412. Other embodiments may include additional, different, or fewer components. For example, some embodiments may include a recording device such as a microphone or camera that operates to record audio or video content. The location-determining device402is a device that determines the location of the mobile computing device118. In some embodiments, the location-determining device402uses one or more of Global Positioning System (GPS) technology (which may receive GPS signals), Global Navigation Satellite System (GLONASS), cellular triangulation technology, network-based location identification technology, Wi-Fi positioning systems technology, and combinations thereof. The display screen404is configured to display information. In addition, the display screen404is configured as a touch sensitive display and includes a user interface420for receiving a user input from a selector (e.g., a finger, stylus, etc.) controlled by the user U. In some embodiments, therefore, the display screen404operates as both a display device and a user input device. The touch sensitive display screen404operates to detect inputs based on one or both of touches and near-touches. In some embodiments, the display screen404displays a graphical user interface for interacting with the mobile computing device118. Other embodiments of the display screen404do not include a touch sensitive display screen. Some embodiments include a display device and one or more separate user interface devices. Further, some embodiments do not include a display device. In some embodiments, the processing device406comprises one or more central processing units (CPU). In other embodiments, the processing device406additionally or alternatively includes one or more digital signal processors, field-programmable gate arrays, or other electronic circuits. The memory device408operates to store data and instructions. In some embodiments, the memory device408stores instructions for a media playback engine430. The memory device408may be configured similarly to the memory device150(FIG.2) and, therefore, the description of the memory device408is omitted for brevity purposes. The media playback engine430operates to play media content to the user U. As described herein, the media playback engine430is configured to communicate with the media delivery system112to receive one or more media content items (e.g., through the stream media232). In other embodiments, the media playback engine430is configured to play media content that is locally stored in the mobile computing device118. In some embodiments, the media playback engine430operates to retrieve one or more media content items that are either locally stored in the mobile computing device118or remotely stored in the media delivery system112.
In some embodiments, the media playback engine430is configured to send a request to the media delivery system112for media content items and receive information about such media content items for playback. Referring still toFIG.5, the content output device410operates to output media content. In some embodiments, the content output device410generates media output450for the user U. In some embodiments, the content output device410includes one or more embedded speakers452which are incorporated in the mobile computing device118. Therefore, the mobile computing device118can be used as a standalone device that generates the media output450. In addition, some embodiments of the mobile computing device118include an external speaker interface454as an alternative output of media content. The external speaker interface454is configured to connect the mobile computing device118to another system having one or more speakers, such as headphones, portable speaker assemblies, and the vehicle media playback system114, so that the media output450is generated via the speakers of the other system external to the mobile computing device118. Examples of the external speaker interface454include an audio output jack, a Bluetooth transmitter, a display panel, and a video output jack. Other embodiments are possible as well. For example, the external speaker interface454is configured to transmit a signal through the audio output jack or Bluetooth transmitter that can be used to reproduce an audio signal by a connected or paired device such as headphones or a speaker. The network access device412operates to communicate with other computing devices over one or more networks, such as the network116and the in-vehicle wireless data communication122. Examples of the network access device412include wired network interfaces and wireless network interfaces. Wireless network interfaces include infrared, BLUETOOTH® wireless technology, 802.11 a/b/g/n/ac, and cellular or other radio frequency interfaces in at least some possible embodiments. FIG.6schematically illustrates an example embodiment of the PMSA system110ofFIG.1. In this example, the PMSA system110includes a personal media streaming appliance (PMSA)500and a docking device502. As described herein, the PMSA system110is sized to be relatively small so that the PMSA system110can be easily mounted to a structure (e.g., a dashboard or head unit) of the vehicle80where the user can conveniently manipulate the PMSA system110. By way of example, the PMSA system110is configured to be smaller than a typical mobile computing device, such as a smartphone. Further, the PMSA500provides a simplified user interface for controlling playback of media content. For example, the PMSA500has a limited set of physical control elements, such as a single rotary knob and one or more physical buttons as described below, so that the user can easily control the PMSA system110in the vehicle80(FIG.1). The PMSA500is configured to include at least some of the devices of the PMSA system110as illustrated with reference toFIG.2. In some embodiments, the PMSA500includes all of the devices of the PMSA system110as illustrated inFIG.2. As illustrated also inFIG.2, some embodiments of the PMSA500include the user input device130that includes the manual input device160and the sound detection device162. Some embodiments of the manual input device160include a control knob510and one or more physical buttons512. In some embodiments, the control knob510is configured to be maneuverable in multiple ways.
For example, the control knob510provides a plurality of regions on a knob face514that are independently depressible upon receiving a user's pressing action against the knob face514. In the illustrated example, the control knob510has five regions516(e.g., up, down, left, right, and middle) that are separately depressible. At least some of the regions516are configured to receive inputs of different user commands (e.g., requests or instructions). In other embodiments, the control knob510is configured to be manipulated in different ways, such as tilting in multiple directions or sliding in multiple directions. In addition, the control knob510is configured to be rotatable. For example, the user can hold the control knob510and rotate with respect to a body520of the PMSA500. The control knob510can be rotatable in both directions522(e.g., clockwise and counterclockwise). In other embodiments, the control knob510is configured to rotate in only one direction. The control knob510is used to receive user inputs for controlling playback of media content. In addition or alternatively, the control knob510can be used to receive user inputs for other purposes or functions. The physical buttons512are configured to be depressed upon receiving a user's pressing action against the physical buttons512. In the illustrated example, the PMSA500has four physical buttons512A-512D. In some embodiments, each of the physical buttons512is configured to receive a single user command. In other embodiments, at least one of the physical buttons512is configured to receive multiple user commands. In some embodiments, the physical buttons512are used as buttons that are preset to be associated with particular media content, thereby facilitating playback of such media content. In these embodiments, the physical buttons512are also referred to as preset buttons512. In addition, the PMSA500also includes the display screen132. In some embodiments, the display screen132is arranged at the knob face514of the control knob510. As described herein, in some embodiments, the display screen132does not include a touch sensitive display screen, and is configured as a display device only. In other embodiments, however, the display screen132can be configured to be touch sensitive and receive a user input through the display screen132as well. Referring still toFIG.6, the docking device502is configured to mount the PMSA500to a structure of the vehicle80. The docking device502is configured to removably mount the PMSA500thereto. The docking device502is further configured to attach to a structure of the vehicle80(FIG.1) so that the PMSA500is positioned at the structure of the vehicle80. In some embodiments, an interface between the PMSA500and the docking device502is configured to prevent the PMSA500from rotating relative to the docking device502when the control knob510is manipulated by a user. For example, the docking device502has a portion (e.g., a front portion of the docking device502) configured to interlock a corresponding portion of the PMSA500(e.g., a rear portion of the PMSA500) when the PMSA500is mounted to the docking device502such that the portion of the docking device502and the corresponding portion of the PMSA500form the interface therebetween. In addition or alternatively, the PMSA500and the docking device502include magnetic materials at the interface therebetween so that the PMSA500and the docking device502are magnetically coupled to each other. 
In some embodiments, the docking device502includes one or more electrical contacts530that are electrically connected to corresponding electrical contacts (not shown inFIG.6) of the PMSA500when the PMSA500is mounted to the docking device502. Such electrical connection between the PMSA500and the docking device502is provided for various functions. First, as described herein, the PMSA500does not include a battery sufficient for a prolonged use without an external power supply. In some embodiments, the PMSA500is primarily powered by a vehicle power supply540. In some embodiments, the docking device502has a power receiving line544for connection to the vehicle power supply540. For example, the power receiving line544extends from the docking device502and has a power connector546at a free end that is configured to mate with a vehicle power outlet542(e.g., a 12V auxiliary power outlet) of the vehicle power supply540. As such, the docking device502receives electric power from the vehicle power supply540via the power receiving line544, and the electrical connection between the PMSA500and the docking device502is configured to deliver electric power from the docking device502to the PMSA500. Second, as described herein, the PMSA500does not have a speaker and is designed to transmit media content signals to the vehicle media playback system114so that the media content is played through the vehicle media playback system114. In some embodiments, the docking device502includes a media content output line550(also referred to herein as a media content output cable) (e.g., an auxiliary (AUX) output) configured to connect with the vehicle media playback input connector340(e.g., an auxiliary (AUX) port) of the vehicle media playback system114. The docking device502is configured to receive media content signals from the PMSA500via the electrical connection between the PMSA500and the docking device502, and transmit the signals to the vehicle media playback system114via the media content output line550. In the illustrated embodiment, the power receiving line544and the media content output line550are combined to be a single line extending from the docking device502until the power connector546, and the media content output line550further extends (or branches out) from the power connector546and terminates at a media output connector552. The media output connector552is configured to connect to the vehicle media playback input connector340of the vehicle media playback system114. In other embodiments, the media content output line550and the power receiving line544extend separately from the docking device502. In other embodiments, one or more of the power receiving line544and the media content output line550are directly connected to, and extend from, the PMSA500so that electric power is directly supplied to the PMSA500without the docking device502involved, and that the media content is directly transmitted to the vehicle media playback system114without passing through the docking device502. Third, the electrical connection between the PMSA500and the docking device502can be used to detect connection between the PMSA500and the docking device502. Referring now toFIGS.7-9, when the PMSA system110is playing back media content, the PMSA system110can also monitor the sound level of the media content in the vehicle102to estimate whether or not the user U in the vehicle is actually listening to the media content. 
For example, the user U can lower the volume of the vehicle media playback system114that is playing the media content for various reasons, such as to answer a telephone call, to talk to another passenger, and/or to switch to another source of media content on the vehicle media playback system (e.g., radio) instead of listening to the media content from the PMSA system. In such cases, although the PMSA system110continues playing the media content (i.e., streaming the media content to the vehicle media playback system114), a sound level of the media content in the vehicle102is low or non-existent, as detected by the sound detection device162. When this happens, the PMSA system110operates to modify the playback of the media content. For example, the PMSA system110can pause or otherwise stop playback of the media content. Further, an elapsed playback time of the media content (or other marker indicating when playback ceased) can be stored so that, when the user U comes back and plays the media content again, the media content can play from the point at which the playback was paused or otherwise stopped. Referring now toFIG.7, additional details of the sound processing engine180of the PMSA system110are provided. Generally, the sound processing engine180is programmed to determine a sound level of the media content playback within the vehicle102using, for example, the sound detection device162. Based upon the sound level, the sound processing engine180is programmed to pause the playback of the media content by the PMSA system110for certain scenarios. As illustrated, the sound processing engine180includes a sound level determination application702, a sound level deviation application704, and an automatic mode application706. The example sound level determination application702receives input from the sound detection device162. As noted, the sound detection device162can, in one example, be one or more microphones positioned in the vehicle102to determine sound within the vehicle. For example, the one or more microphones can be arranged as a component of the PMSA system110. The sound level determination application702is programmed to receive sound data from the sounds in the vehicle102and use that sound data to determine a sound level associated with the playback of the media content by the PMSA system110in the vehicle, such as through the vehicle media playback system114. For example, various filtering techniques can be used to remove other sources of sound (e.g., humans, road noise, etc.) from the sound data to determine the sound level for the playback of the media content. The sound level is passed to the sound level deviation application704. The sound level deviation application704is programmed to determine a change in the sound level for the playback of the media content. For example, various changes can occur in the sound level of the playback of the media content, such as: (i) a lowering of the sound level by the user below a threshold; and/or (ii) a substantial change in the sound level by the user at a given point in time. In one instance, the sound level deviation application704is programmed with a threshold that represents a sound level below which it would be difficult for a user to hear the playback of the media content. The threshold could be preset, such as set at a specific absolute level below which it is difficult for the user U to hear. 
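As an illustration only, the following minimal Python sketch shows one conventional way such a sound level could be estimated from microphone samples (an RMS level expressed in dB); the function name and the normalization of the samples are assumptions made for this example, and the filtering of speech and road noise described above is not shown.

    import math

    def playback_sound_level_db(samples):
        # Estimate a sound level from microphone samples normalized to [-1.0, 1.0].
        # Removal of non-media sources (voices, road noise) is assumed to have been
        # applied upstream and is not shown here.
        if not samples:
            return float("-inf")
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        return 20.0 * math.log10(rms) if rms > 0.0 else float("-inf")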
In another instance, the threshold can be dynamic, such as by providing a variable threshold that is modified based upon the behavior of the user U. For example, the sound level deviation application704can use artificial intelligence and/or other machine learning techniques to determine an appropriate threshold for the user U. This could be developed, for example, by examining user behavior. For instance, when the user reduces the sound to a particular sound level and later rewinds the playback of the media content, the sound level deviation application704can use that particular sound level when determining the threshold. In another instance, the sound level of the playback of media content can vary naturally over time depending on the media content. For example, some songs have portions that are softer and other portions that are louder. To accommodate such variations in the media content, the sound level deviation application704can be programmed to delay a duration of time with the sound level below the threshold before determining a deviation in the sound level. For example, the sound level deviation application704can require that the sound level be below the threshold for a given duration (e.g., five seconds, ten seconds, 30 seconds, one minute, five minutes, etc.) before a determination is made as described further below. A threshold is just one example of how the sound level deviation application704could be programmed to determine when the sound level is such that modification of playback is desirable. Multiple thresholds could be used, or other techniques with different metrics could be used. For example, the sound level deviation application704can also be programmed to vary the threshold or expected sound level based upon the actual expected sound level associated with the playback of the media content. For example, expected sound levels can be defined for various portions of media content, and the sound level deviation application704can be programmed to vary the expected sound level threshold based upon the portion of the media content currently being played back. In such a scenario, the threshold can be lowered for portions of the media content that are softer and increased for portions that are louder. This can allow the sound level deviation application704to accommodate media content having varying sound levels. The sound level deviation application704therefore must determine the appropriate threshold given the current point of playback of the media content and then apply that threshold to the sound data from the sound level determination application702. In yet other instances, the sound level deviation application704can, in addition or alternatively, be programmed to look at the rate of change of the sound level. When the rate of change is substantial over a period of time, the sound level deviation application704is programmed to identify that change. For example, when the user U reduces the sound level of the playback of the media content significantly in less than one second, such behavior could indicate that the user is no longer listening, but is instead performing another task, such as talking. The sound level deviation application704outputs a signal to the automatic mode application706when the sound level deviation application704determines that the user U is no longer listening to the playback of the media content. The automatic mode application706is programmed to pause or otherwise stop the playback of the media content by the PMSA system110. 
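A minimal sketch of such deviation logic is given below, assuming a fixed threshold, a minimum hold duration and a simple rate-of-change test; all names and numeric values are illustrative and are not taken from the disclosure.

    class SoundLevelDeviationDetector:
        # Flag a deviation only after the measured level stays below the threshold
        # for 'hold_seconds', or when the level drops very quickly between readings.
        def __init__(self, threshold_db=-35.0, hold_seconds=10.0, fast_drop_db=20.0):
            self.threshold_db = threshold_db
            self.hold_seconds = hold_seconds
            self.fast_drop_db = fast_drop_db
            self._below_since = None
            self._last_level = None

        def update(self, level_db, now_seconds):
            deviation = False
            # Rate-of-change check: a large, sudden drop suggests the user has
            # turned the volume down to do something else.
            if self._last_level is not None and (self._last_level - level_db) >= self.fast_drop_db:
                deviation = True
            # Duration check: ignore soft passages by requiring the level to stay
            # below the threshold for a minimum amount of time.
            if level_db < self.threshold_db:
                if self._below_since is None:
                    self._below_since = now_seconds
                elif now_seconds - self._below_since >= self.hold_seconds:
                    deviation = True
            else:
                self._below_since = None
            self._last_level = level_db
            return deviation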
As noted, the PMSA system110can also be programmed to save the location at which the media content is paused so that the user U can later resume playback of the media content at that location. The automatic mode application706can also be programmed to perform other activities. For example, the automatic mode application706can be programmed to change the mode of the PMSA system110to a lower power or standby mode. Other configurations are possible. Referring now toFIG.8, an example method800is shown for pausing playback of media content based upon a sound level of the playback of the media content within a vehicle. At operation810, a determination is made regarding the sound level of the playback of the media content. As noted, this determination can be made, for example, using one or more microphones positioned in the vehicle. Next, at operation820, the sensed sound level of the playback of the media content is compared to an expected sound level. For example,FIG.9illustrates additional details regarding one possible embodiment of such a comparison at operation820. At operation910, the expected volume threshold is determined. This determination can, as noted, simply be a preset threshold and/or a dynamic threshold that changes over time. Next, at operation920, the sensed volume is compared to the expected volume threshold. Other configurations are possible. Referring again toFIG.8, based on the comparison at operation820, a determination is made at operation830as to whether or not a deviation in the sound level has occurred. If so, the playback of the media content can automatically be paused at operation840. There can be various advantages associated with such a configuration. For example, automatically pausing the media content allows the user to resume listening at a later point without having to rewind or otherwise hunt for a desired playback location. Further, the automatic pausing of the content can avoid wasting power and bandwidth and reduce computational cost. Where user data is used, it can be handled according to a defined user privacy policy and can be used to the extent allowed by the user. Where the data of other users is used, it can be handled in an anonymized manner so the user does not learn of the details of other users generally or specifically. The various examples and teachings described above are provided by way of illustration only and should not be construed to limit the scope of the present disclosure. Those skilled in the art will readily recognize various modifications and changes that may be made without following the examples and applications illustrated and described herein, and without departing from the true spirit and scope of the present disclosure. | 89,833 |
11863148 | DETAILED DESCRIPTION OF THE INVENTION The overall concept of the very low frequency active impedance tuner (VLFT) for base-band load/source pull testing is demonstrated inFIG.6: any base-band signal component <b> at a frequency Fi, wherein Fi can be any of a multitude of frequency components, chosen either randomly, following a linear law Fi=(N+1)*δF, or exponentially Fi=2^N*δF, etc., with N=0, 1, 2, 3 . . . 5, enters the tuner into the input port of the coupling module60. Since multi-octave directional couplers in the kHz or low MHz range are not available, the coupling module is implemented as a signal combiner/divider (SCD)97(FIG.9). This type of coupling module has one common (CP) port (the test port of the active tuner) and two input/output (I/O) ports. A first I/O port is used as the output port of the coupling module and is connected with an up-conversion module61. This module transfers the very low frequency component Fi of the base-band to RF frequencies around the local oscillator62frequency FLO, which is in the GHz frequency range. The frequency products of this up-conversion are injected into the digital electronic tuning module63, of which the operation mechanism is outlined inFIG.5. This digital electronic tuning module modifies amplitude and phase of this entering signal by reflection and re-injection at port 2 of the circulator50, using a selection of tuning states of the digital electronic tuner51, and injects it from port 3 of the circulator into a down-conversion module64. The outgoing spectrum includes a low frequency component in the base-band area and a high frequency component around the double local oscillation frequency; the latter is cut off using the low pass filter65, which will be required if the mixers used are not image rejection mixers (see ref.15). The outgoing base-band signal is then amplified by the amplifier66and injected into the output (I/O) port of the coupling module60. The output port of module60is the second I/O port of the signal combiner divider97(seeFIG.9). The load/source pull implementation of the very low frequency active impedance tuner (VLFT) is shown inFIGS.2and3: The DC bias line leading to the input (or output) port304of the DUT36is connected at node X to the test port of the active tuner (shown in detail inFIG.4) through the inductor305and an optional low pass filter LPF39. The input signal originates at the signal source30and includes a carrier frequency32and low-level sidebands (base-band signals)31; this signal traverses the capacitor33, is injected into the input port304of the DUT, is amplified by the DUT and is extracted from the DUT output port (not shown). The (nonlinear) DUT creates up- and down-converted306base-band signals both at its input and output terminals, which signals37containing the reduced carrier32travel38along the DC bias network inductor305through the low pass filter39to the VLFT302; on the way the inductor305suppresses the carrier from a 32 level amplitude to a 37 level amplitude and after the LPF39only the baseband301signal components remain. 
The VLFT tuner creates an active reflection303which transforms the base-band signal301to the amplitude and phase modulated base-band signal35; this modified base-band signal35travels back34, reaches the main line at node X and interacts with the original base-band signals and is injected into port304of the DUT36, because the low value capacitor33prevents the low frequency base-band signal35from travelling back to the signal source30; this injected-back base-band signal35interacts with the internal cross-modulation mechanisms of the DUT. If this returning signal is properly modified in amplitude and phase, its interaction with the internal nonlinear mixing of the DUT affects spectral phenomena such as ACPR and EVM that can this way be controlled and optimized. This is the essence of the base-band load/source pull measurement method. The concept is applicable both to the input and output side of the DUT. This is shown inFIG.2. The DUT sees two admittances Yin at its input and Yout at its output port. At the input the admittance is composed of a parallel connection of the RF admittance YinRF and the base-band admittance YinBB; the YinRF admittance is created by the impedance tuners205and204cascaded with the (quasi transparent at RF) low value capacitor20and the (typically) 50 Ohm internal impedance of the source. The YinBB admittance is created by the inductors23and21, the capacitor26and the active VLFT tuner27, since the low value capacitor20represents a quasi-open circuit at base-band frequencies. The low value capacitor20is transparent for the RF frequency signals but blocks the stand-alone base-band signals travelling between the capacitor20and the DUT24through the (transparent for very low frequencies) slide screw impedance tuner205(see ref.2). The base-band signal traverses the inductor23and the capacitor26and allows the VLFT active tuner27to modify and reflect it back (as shown as item303inFIG.3) and send it back to the DUT. Inductor21is a choke that is dimensioned to block the base-band signal from escaping, but to allow the DC current coming from the bias supply (or battery)22to pass through. The same mechanism repeats on the output side of the DUT24; the output admittance Yout is composed of the parallel sum of YoutRF and YoutBB, wherein YoutRF is created by tuner204and YoutBB by the combined inductors201and203, the capacitors25and29and the active VLFT tuner28; again capacitor25and inductor203block the base-band frequencies and capacitor29blocks the DC current. DC bias is controlled by the bias supply202. This is important for the calibration of the system, since inFIG.7only the input side at reference plane A is shown for the economy of the presentation, whereas all procedures apply equally to the output side at reference plane B. The complete very low frequency active tuner is shown in a basic embodiment inFIG.4: it includes a signal divider/combiner, two simple or image rejection mixers (IRM)1and2, a digital electronic tuning module, an optional low pass filter, and an amplifier. Qualitatively the apparatus works as follows: knowing that the base-band signal includes a multitude of signal components Fi, we choose to select a representative number of base-band frequencies following either a linear relation Fi=(N+1)*δF or an exponential relation Fi=2^N*δF, wherein N=0, 1, 2, 3 . . . 5 and δF typically 0.05 to 0.1 MHz; the exponential law covers a wider frequency band without increasing the number of frequencies Fi. 
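A short Python sketch of the two frequency selection laws mentioned above (linear and exponential), with δF and N as parameters, is given below; the function name is illustrative only.

    def baseband_frequencies(delta_f_hz, n_max, law="exponential"):
        # Linear law: Fi = (N+1)*dF; exponential law: Fi = 2^N * dF, for N = 0..n_max.
        if law == "linear":
            return [(n + 1) * delta_f_hz for n in range(n_max + 1)]
        return [(2 ** n) * delta_f_hz for n in range(n_max + 1)]

    # Example: dF = 50 kHz and N up to 5 gives 50, 100, 200, 400, 800 and 1600 kHz.
    print(baseband_frequencies(50e3, 5))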
A low frequency, or base-band signal component <b(2^N*δF)> entering into the test port is split in two parts: one part travels through to mixer IRM1, is up-converted with the signal of the local oscillator FLO to FLO±2^N*δF and enters the digital tuning module; processing inside the digital tuning module is shown inFIG.5: the signal enters into port 1 of the circulator50and exits from port 2; there it is reflected back by the digital electronic tuner51, which is calibrated at the frequency FLO+2^N*δF; the reflected signal exits from port 3 and is injected into mixer2(IRM2)52. Mixer2generates two sidebands: F1=2FLO±2^N*δF and F2=±2^N*δF; the low pass filter53eliminates F1 and allows only signal power at F2 to proceed to the amplifier98and, through the signal combiner97, back to the test port, where it arrives as <a(2^N*δF)> to generate the virtual reflection factor Γ(2^N*δF)=<a(2^N*δF)>/<b(2^N*δF)> for each specific base-band signal component 2^N*δF. FIG.5is a comprehensive presentation of the frequency conversion and processing mechanism showing how impedances can be controlled at the very low base-band frequency components (N+1)*δF or 2^N*δF, wherein δF is of the order of 0.1 to 1 MHz and N varies usually from 0 to 5, for which usual RF components, like directional couplers, circulators and automatic impedance tuners, simply do not exist. Instead, the base-band frequency Fi=(N+1)*δF or Fi=2^N*δF can be controlled by transferring it to, and operating in, a frequency range where such components are readily available, rendering the solution realistic and practical. Last but not least, a less than one octave 1 GHz frequency bandwidth at the local oscillator frequency range, which is quite ordinary in a range from 2-3 GHz, may generate control over several decades of base-band frequency bandwidth in the range between 100 kHz and 100 MHz (0.1-1 MHz, 1-10 MHz, 10-100 MHz); this kind of base-band bandwidth is required when dealing with high speed modulated telecommunication signal processing. As an example: a base-band frequency component 2^N*δF=1 MHz and a local oscillator frequency of FLO=2,000 MHz (2 GHz) is upconverted by mixer1to Fi=2,000±1 MHz, processed by the circulator and the digital electronic tuner at this frequency Fi and is sent to mixer2to be down-converted, after low pass filtering, back to 2^N*δF=1 MHz. Since the electronic tuner is calibrated at one of the two sidebands, this is the information carried forward: mixer2creates the sum and the difference between Fi and FLO: 2,000±1 MHz−2,000 MHz=±1 MHz and 2,000±1 MHz+2,000 MHz=4,000±1 MHz; the low pass filter easily suppresses the 4 GHz component and the remaining signal is the baseband 1 MHz with controlled amplitude and phase to be amplified and sent back to the DUT. 
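The up- and down-conversion arithmetic of this example can be checked with the following sketch, assuming ideal mixers that only produce sum and difference frequencies; all values are in MHz and the function names are illustrative.

    def up_convert(f_if_mhz, f_lo_mhz):
        # Ideal mixer: produces the sum and difference of the two input frequencies.
        return (f_lo_mhz - f_if_mhz, f_lo_mhz + f_if_mhz)

    def down_convert_and_filter(f_rf_mhz, f_lo_mhz, lpf_cutoff_mhz):
        # Second mixer followed by the low pass filter: keep only products below the cutoff.
        products = (abs(f_rf_mhz - f_lo_mhz), f_rf_mhz + f_lo_mhz)
        return [f for f in products if f <= lpf_cutoff_mhz]

    # Example from the text: a 1 MHz base-band component and FLO = 2,000 MHz.
    lower, upper = up_convert(1.0, 2000.0)                  # 1999 MHz and 2001 MHz
    print(down_convert_and_filter(upper, 2000.0, 100.0))    # [1.0]; the 4,001 MHz product is rejected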
This RF signal (FLO±2^N*δF) is injected into the first port 1 of the circulator50(FIG.5); in a circulator the signal is transferred quasi without insertion loss from port 1 to port 2, from port 2 to port 3 and back from port 3 to port 1; there is no transmission from port 2 to port 1 or from port 3 to port 2; the signal is then transferred, with minimal loss, from port 1 to port 2 of the circulator, where it is injected and reflected back at the test port of the digital electronic tuner51, whose idle port is terminated with the characteristic impedance Zo (50 Ohm). The modified signal with controlled amplitude and phase is then injected into mixer52and cleaned up from the higher mixing products using the low pass filter53to be injected into the amplifier98and back to the DUT. With regard to bandwidth, this architecture allows several octaves of base-band operation: for instance, a simple one octave wide 1-2 GHz circulator and a standard 1-2 GHz electronic tuner, combined with a 995 MHz local oscillator, will cover 5-1005 MHz IF operation bandwidth, or 7.7 octaves (1005/5=201≈2^7.7). This has never been expected to be feasible up to now; of course, the practical setup will also be associated with the availability of adjustable or exchangeable low-pass or band-pass filters in the up- and down-conversion paths. The signal flow and processing, controlled by PC96, are schematically shown inFIG.9: a base-band frequency component <b(2^N*δF)> enters the test port90; it splits in two equal parts at the signal combiner/divider (SCD)97. One part, <b1>=<b>*LD, attenuated by the splitting factor LD (≈0.5 or −3 dB), proceeds to the up-conversion module91, powered by the local oscillator93, where it is up-converted to the range of the local oscillator frequency FLO. This conversion reduces the signal by the conversion loss CL1 (typically a factor 0.2 or ≈−7 dB) to <b1>*CL1; the digital tuning module92modulates the amplitude and phase of this signal and the outcome is approximately <b1>*CL1*S21^2*ΓET; herein it is assumed that the transmission factors of the circulator50between its ports 1 and 2 (S21) and 2 and 3 (S32) are quasi equal and considered here to be equal to S21. The reflection factor ΓET at the test port of the digital electronic tuner51is spread over the Smith chart as shown inFIG.8. The processed signal <b1>*CL1*S21^2*ΓET then enters down-conversion module94and exits reduced by the conversion loss CL2 to <b1>*CL1*CL2*S21^2*ΓET; the high frequency components are previously suppressed by the low pass filter95, of which the low insertion loss is here included in the factors S21. The remaining signal is amplified by the gain G of the amplifier98and reaches the DUT after losing again half of its amplitude at the SCD97as: <a>≈<b>*LD^2*CL1*CL2*S21^2*ΓET*G, leading to Γ≈LD^2*CL1*CL2*S21^2*G*ΓET; assuming LD=0.5, S21=0.95 and CL1≈CL2≈0.2, this leads to Γ≈0.009*G*ΓET. This means two things: (1) that the digital tuner ΓET fully controls Γ and (2) that a moderate amplifier gain of 20 dB (100) to 25 dB (300) is able to compensate for the factor 0.009 and generate a |Γ| up to 1, which are the objectives of any active load/source pull system. The digital electronic tuner51used in the digital electronic tuning module (FIG.5) has the major advantage of allowing high tuning speed between the irregular (FIG.8), but still sufficiently spread over the Smith chart, tuner states in the range of milli-seconds, compared with several seconds required by mechanical tuners. 
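The signal flow budget above can be reproduced with the sketch below, which uses the factors quoted in the text (LD=0.5, CL1≈CL2≈0.2, S21=0.95); the function names are illustrative.

    def loop_reflection(gamma_et, gain, ld=0.5, cl1=0.2, cl2=0.2, s21=0.95):
        # Virtual reflection factor generated by the active loop:
        # Gamma ~ LD^2 * CL1 * CL2 * S21^2 * G * Gamma_ET (all factors linear, not dB).
        return (ld ** 2) * cl1 * cl2 * (s21 ** 2) * gain * gamma_et

    def gain_for_full_reflection(ld=0.5, cl1=0.2, cl2=0.2, s21=0.95):
        # Amplifier gain needed to reach |Gamma| = 1 when |Gamma_ET| = 1.
        return 1.0 / ((ld ** 2) * cl1 * cl2 * (s21 ** 2))

    print(loop_reflection(1.0, 1.0))      # ~0.009, as quoted in the text
    print(gain_for_full_reflection())     # ~111, i.e. roughly 20 dB in the text's convention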
The use of the heterodyne concept also allows using a single, inexpensive, fixed frequency internal or external local oscillator, instead of an external variable frequency signal generator. At medium or high-power signal operation, active tuners cannot be used in an open-loop environment, i.e., they cannot be calibrated ahead of time for the calibration data to be used a posteriori in actual measurements. The reason for that is that active tuners include amplifiers, and the gain of amplifiers becomes notoriously non-linear in amplitude and phase when compressed, i.e. when the input or output power exceeds a certain level. In the case of the very low-level signal components at the base-band frequencies discussed here, though, this is not the case. The base-band active tuner can be calibrated and the calibration can be used for upcoming measurements, since the amplifier is presumed to remain linear throughout the test sequence. The active base-band tuner to be used in base-band load pull (FIG.2) is calibrated as follows: the input A (or output B) terminal of the test setup facing the DUT as seen inFIG.2is connected to a pre-calibrated VNA (FIG.7). FIG.7shows the computer77controlled calibration of the VLFT impedance tuner at reference plane and DUT terminal A, since the impedance tuner71and the bias network including items70,72,73and74are part of the base-band network. The calibration data will remain valid only if the system operates in a small signal, strictly linear domain and in a steady state non-drifting condition. Changes in gain or phase of the amplifier or LO power will affect the accuracy of the calibration data. The configuration is exactly symmetrical for reference plane and terminal B (seeFIG.2). During calibration the tuner71is initialized, i.e. the tuning probe(s) is/are withdrawn from the slabline. The signal source shown inFIG.2is replaced by its internal impedance Zo (50 Ohm)76(seeFIG.7); the capacitors70and74and the inductors73and72are the same as inFIG.2. The DC supply22(battery or electronic) is replaced by its internal resistance of 0 Ohm (short circuit). Calibration consists of measuring, by the pre-calibrated VNA78, the reflection factor S11 (Γ(Fi)) at point A at a multitude of frequencies {Fi} with Fi=(N+1)*δF or Fi=2^N*δF for a multitude of up to P=2^M permutations of the M diodes Di in the digital electronic VLFT tuner75. Throughout this work the index N starts at N=0 and could reach typical maximum values between 5 and 10. Using the exponential format 2^N is a practical choice of frequencies, leading to factors 1, 2, 4, 8, 16, 32 . . . because, realistically, a bias network will also have a spread-out frequency response. δF on the other hand can be any number, preferably 50 kHz, 100 kHz etc. Such a choice would adequately cover the base-band frequency spectrum; this choice also determines the number of frequencies to be calibrated. As a rule, and because the digital electronic tuner states (permutations of diodes Di) are always the same, the calibration shall execute broadband; i.e., if N=5 and δF=50 kHz, one should calibrate at 50, 100, 200, 400, 800, 1600 kHz at once and save the data in frequency blocks. If the electronic tuner has 10 diodes (FIG.8) then each frequency sweep, including 32 frequencies (linearly from 50 kHz to 1600 kHz), will have to execute 1024 times, leading to a total calibration time of (typically) 0.04 sec*1024*32≈1,311 sec, or roughly 22 minutes, assuming each frequency triggering lasts 40 ms (the number is experimentally verified). 
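The calibration bookkeeping can be sketched as the nested loop below, where 'vna' and 'tuner' stand for instrument drivers whose method names (set_diode_state, measure_s11) are assumptions made only for this illustration.

    def calibrate_vlft(vna, tuner, delta_f_hz=50e3, n_max=5, num_diodes=10):
        # Broadband calibration sketch: for every permutation of the M tuner diodes,
        # measure the reflection factor at each base-band frequency and store the
        # data in per-frequency blocks.
        freqs = [(2 ** n) * delta_f_hz for n in range(n_max + 1)]
        cal = {f: {} for f in freqs}
        for state in range(2 ** num_diodes):          # P = 2^M diode permutations
            tuner.set_diode_state(state)
            for f in freqs:
                cal[f][state] = vna.measure_s11(f)    # Gamma(Fi) at reference plane A
        return cal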
If the VNA can be set to operate in a concrete STEP mode, then only the 6 frequencies in the list would have to be measured, reducing the calibration time by a factor of 32/6 to roughly 246 seconds, or around 4 minutes. This application discloses the concept of a very low frequency heterodyne, broadband, high-speed active impedance tuner. Obvious alternatives shall not impinge on the originality of the concept. | 17,399 |
11863149 | DESCRIPTION OF EMBODIMENTS The term “couple (or connect)” throughout the specification (including the claims) of this application is used broadly and encompasses direct and indirect connection or coupling means. For instance, if the disclosure describes a first apparatus being coupled (or connected) to a second apparatus, then it should be interpreted that the first apparatus can be directly connected to the second apparatus, or the first apparatus can be indirectly connected to the second apparatus through other devices or by a certain coupling means. In addition, terms such as “first” and “second” mentioned throughout the specification (including the claims) of this application are only used for naming the elements or distinguishing different embodiments or scopes, and are not intended to limit the upper limit or the lower limit of the number of the elements, nor are they intended to limit the sequences of the elements. Moreover, elements/components/steps with the same reference numerals represent same or similar parts in the drawings and embodiments. Elements/components/notations with the same reference numerals in different embodiments may be referenced to the related description. Please refer toFIG.1, which illustrates a schematic diagram of a signal transmitter according to an embodiment of the present disclosure. The signal transmitter100includes a plurality of driver slices SC[N:1], wherein the driver slices SC[N:1] are coupled in parallel and generate an output signal OUT commonly. Each of the driver slices SC[N:1] includes a driving circuit110, a plurality of transistors MP1˜MPN, a plurality of transistors MN1˜MNM, and a resistor R1(termination resistor). The driving circuit110includes transistors MPD and MND, and is used to receive an input signal IN and output the output signal OUT. The transistors MPD and MND are coupled in series between the transistors MP1˜MPN and the transistors MN1˜MNM. Gate terminals of the transistors MPD and MND form an input terminal of the driving circuit110for receiving the input signal IN, and a coupling terminal of the transistors MPD and MND forms an output terminal of the driving circuit110for generating the output signal OUT. The transistors MP1˜MPN are coupled in parallel. Each of the transistors MP1˜MPN is coupled between a power terminal PT of the signal transmitter100and the driving circuit110, wherein the power terminal PT of the signal transmitter100is used to receive a power voltage VTERM. The driver slices SC[N:1] are used to provide an impedance of a pull-up branch of the driving circuit110, wherein the impedance of the pull-up branch is adjusted to be close to a first impedance setting value based on adjusting the resistance of each of the parallel-coupled transistors MP1˜MPN in each driver slice. For example, when the first impedance setting value (the impedance of the pull-up branch) is 50 Ohm and the quantity of the driver slices SC[N:1] is twenty driver slices, each driver slice has to contribute 1K Ohm such that the twenty driver slices in parallel can make the impedance of the pull-up branch reach 50 Ohm. In such a case, the 1K Ohm is contributed by the parallel-coupled transistors MP1˜MPN, the transistor MPD and the termination resistor R1of each driver slice. Therefore, the first impedance setting value is reached by adjusting the resistance of each of the transistors MP1˜MPN to compensate for the process variation of transistors. The transistors MN1˜MNM are coupled in parallel. 
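The impedance budget in this example can be illustrated with the following sketch; the individual resistance values are arbitrary placeholders, and only the series/parallel arithmetic reflects the description above.

    def per_slice_resistance(r_total_ohm, num_slices):
        # Identical driver slices in parallel: each slice must present
        # num_slices * r_total so that the combined branch equals r_total.
        return r_total_ohm * num_slices

    def slice_branch_resistance(r_term, r_driver, enabled_rds):
        # One slice's pull-up (or pull-down) branch: the termination resistor and
        # the driver transistor in series with the parallel combination of the
        # enabled calibration transistors (their drain-to-source resistances).
        r_parallel = 1.0 / sum(1.0 / r for r in enabled_rds)
        return r_term + r_driver + r_parallel

    print(per_slice_resistance(50.0, 20))                              # 1000 Ohm per slice
    print(slice_branch_resistance(300.0, 200.0, [1000.0, 1000.0]))     # 1000 Ohm (illustrative values)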
Each of the transistors MN1˜MNM is coupled between a reference ground terminal GT of the signal transmitter100and the driving circuit110, wherein the reference ground terminal GT of the signal transmitter100is used to receive a reference ground voltage GND. The driver slices SC[N:1] are also used to provide an impedance of a pull-down branch of the driving circuit110, wherein the impedance of the pull-down branch is adjusted to be close to a second impedance setting value based on adjusting the resistance of each of the parallel-coupled transistors MN1˜MNM in each driver slice. In detail, gate terminals of transistors MP1˜MPN respectively receive a plurality of control voltages PC1˜PCN. Each of the transistors MP1˜MPN is set to operate in a triode region or in a deep triode region according to a corresponding one of the control voltages PC1˜PCN applied to a gate terminal of each of the transistors MP1˜MPN. Any one of the transistors MP1˜MPN which operates in the deep triode region (a voltage drop between drain and source (Vds) thereof is approximately zero) behaves as a switch. In the present embodiment, a part of the transistors MP1˜MPN may be selected to behave as the switches and another part of the transistors MP1˜MPN may be selected to operate in the triode region. Take the transistor MP1as an example. When the transistor MP1is selected to behave as the switch according to the control voltage PC1applied to the gate terminal of the transistor MP1, the transistor MP1can be turned on or cut off according to the control voltage PC1. If the transistor MP1is turned on and behaves as the switch, the impedance provided by the transistor MP1is a drain-to-source resistance of the transistor MP1in the deep triode region. Furthermore, if the transistor MP1is selected to operate in the triode region according to the control voltage PC1, the impedance provided by the transistor MP1can be adjusted by the voltage level of the control voltage PC1; in other words, the impedance provided by the transistor MP1is a drain-to-source resistance of the transistor MP1in the triode region. The transistor MP1operating in the triode region behaves like a voltage-controlled resistor. Therefore, the first impedance setting value is reached by adjusting the resistance of each of the transistors MP1˜MPN to compensate for the process variation of transistors. It should be noted here that the number of the transistors MP1˜MPN selected to behave as the switches is not limited. In this embodiment, one to all of the transistors MP1˜MPN may be selected to behave as the switches. Similarly, the number of the transistors MP1˜MPN selected to operate in the triode region is not limited. One to all of the transistors MP1˜MPN may be selected to operate in the triode region. On the other hand, gate terminals of transistors MN1˜MNM respectively receive a plurality of control voltages NC1˜NCM. Each of the transistors MN1˜MNM is set to operate in the triode region or in a deep triode region according to a corresponding one of the control voltages NC1˜NCM applied to a gate terminal of each of the transistors MN1˜MNM. Any one of the transistors MN1˜MNM which operates in the deep triode region (Vds is approximately zero) behaves as a switch. In the present embodiment, a part of the transistors MN1˜MNM may be selected to behave as the switches and another part of the transistors MN1˜MNM may be selected to operate in the triode region. Take the transistor MN1as an example. 
When the transistor MN1is selected to behave as the switch according to the control voltage NC1applied to the gate terminal of the transistor MN1, the transistor MN1can be turned on or cut off according to the control voltage NC1. If the transistor MN1is turned on and behaves as the switch, the impedance provided by the transistor MN1is a drain-to-source resistance of the transistor MN1in the deep triode region. Furthermore, if the transistor MN1is selected to operate in the triode region according to the control voltage NC1, the impedance provided by the transistor MN1can be adjusted by the voltage level of the control voltage NC1. The transistor MN1operating in the triode region behaves like a voltage-controlled resistor. Therefore, the second impedance setting value is reached by adjusting the resistance of each of the transistors MN1˜MNM to compensate for the process variation of transistors. It should be noted here that the number of the transistors MN1˜MNM selected to behave as the switches is not limited. In this embodiment, one to all of the transistors MN1˜MNM may be selected to behave as the switches. Similarly, the number of the transistors MN1˜MNM selected to operate in the triode region is not limited. Furthermore, in this embodiment, the number of the transistors MP1˜MPN and the number of the transistors MN1˜MNM may be the same or different. It is noted that the first impedance setting value and the second impedance setting value may be individually predetermined. Since the impedance of the pull-up branch and the impedance of the pull-down branch are individually adjusted, PMOS and NMOS mismatch may be compensated. Please refer toFIG.2, which illustrates a schematic diagram of a signal transmitter according to another embodiment of the present disclosure. The signal transmitter200includes a driver slice201, and the driver slice201includes a driving circuit210, a plurality of transistors MP1˜MPN, a plurality of transistors MN1˜MNM, a resistor R1, a plurality of selectors221˜22N, a plurality of selectors231˜23M and a bias voltage generator240. The selectors221˜22N and selectors231˜23M form a control voltage generator. The selectors221˜22N are respectively coupled to gate terminals of the transistors MP1˜MPN, and the selectors231˜23M are respectively coupled to gate terminals of the transistors MN1˜MNM. The selectors221˜22N respectively receive a plurality of bias voltages PBV1˜PBVN, and also respectively receive a plurality of predetermined voltages PPV1˜PPVN. Each of the bias voltages PBV1˜PBVN is used for controlling a transistor to operate in the triode region, and each of the predetermined voltages PPV1˜PPVN is used for either cutting off a transistor or turning on a transistor to behave as a switch (i.e., operate in the deep triode region). The selectors221˜22N are controlled by selection signals SP1˜SPN. Each of the selectors221˜22N respectively selects the received predetermined voltage or the received bias voltage according to one of the selection signals SP1˜SPN and outputs the selected voltage as a corresponding one of a plurality of control voltages PC1˜PCN to the gate terminal of a corresponding one of the transistors MP1˜MPN. Taking the selector221as an example, the selector221generates the control voltage PC1according to either the bias voltage PBV1or the predetermined voltage PPV1, which controls the drain-to-source resistance of the transistor MP1. 
As such, an equivalent resistance of a pull-up branch of the signal transmitter200can be adjusted to be close to the first impedance setting value by applying the control voltages PC1˜PCN to the gate terminals of the transistors MP1˜MPN. On the other hand, the selectors231˜23M respectively receive a plurality of bias voltages NBV1˜NBVM and also respectively receive a plurality of predetermined voltages NPV1˜NPVM. Each of the bias voltages NBV1˜NBVM is used for controlling a transistor to operate in the triode region, and each of the predetermined voltages NPV1˜NPVM is used for either cutting off a transistor or turning on a transistor to behave as a switch (i.e., operate in the deep triode region). The selectors231˜23M are controlled by selection signals SN1˜SNM. The selectors231˜23M respectively generate a plurality of control voltages NC1˜NCM to the gate terminals of the transistors MN1˜MNM. Each of the selectors231˜23M respectively selects the received predetermined voltage or the received bias voltage according to one of the selection signals SN1˜SNM and outputs the selected voltage as a corresponding one of a plurality of control voltages NC1˜NCM to the gate terminal of a corresponding one of the transistors MN1˜MNM. Taking the selector231as an example, the selector231generates the control voltage NC1according to either the bias voltage NBV1or the predetermined voltage NPV1, which controls the drain-to-source resistance of the transistor MN1. As such, an equivalent resistance of a pull-down branch of the signal transmitter200can be adjusted to be close to the second impedance setting value by applying the control voltages NC1˜NCM to the gate terminals of the transistors MN1˜MNM. The driving circuit210includes transistors MPD and MND. The transistors MPD and MND are coupled in series between the transistors MP1˜MPN and the transistors MN1˜MNM. Gate terminals of the transistors MPD and MND form an input terminal of the driving circuit210for receiving the input signal IN, and a coupling terminal of the transistors MPD and MND forms an output terminal of the driving circuit210for generating the output signal OUT. The bias voltage generator240is coupled to the selectors221˜22N and231˜23M. The bias voltage generator240is used to generate the bias voltages PBV1˜PBVN and NBV1˜NBVM. The bias voltage generator240may be implemented by any voltage generator well known to a person skilled in the art, and there is no special limitation here. In this embodiment, for setting each of the transistors MP1˜MPN to operate in the triode region, each of the bias voltages PBV1˜PBVN may be in a range between 0V and a half of the power voltage VTERM of the signal transmitter200. Also, for setting each of the transistors MN1˜MNM to operate in the triode region, each of the bias voltages NBV1˜NBVM may be in a range between the power voltage VTERM of the signal transmitter200and a reference voltage VDD. The power voltage VTERM and the reference voltage VDD are two supply voltages separately applied to the signal transmitter200. In some embodiments, the reference voltage VDD may be larger than the power voltage VTERM. For example, the power voltage VTERM=0.8V and the reference voltage VDD=1.5V. 
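One way to picture the selector behaviour described above is the sketch below; the function and its arguments are hypothetical, and the voltage ranges follow the FIG.2 description (PMOS bias between 0V and VTERM/2, NMOS bias between VTERM and VDD, predetermined voltages of VDD or 0V).

    def control_voltage(selection, bias_v, vdd=1.5, vterm=0.8, is_pmos=True):
        # Illustrative selector behaviour for one calibration transistor:
        #   "off"    -> cut the transistor off,
        #   "switch" -> turn it fully on so it behaves as a switch (deep triode),
        #   "bias"   -> pass the analog bias so it acts as a voltage-controlled resistor.
        if selection == "off":
            return vdd if is_pmos else 0.0
        if selection == "switch":
            return 0.0 if is_pmos else vdd
        # Analog bias range as described above: 0..VTERM/2 for PMOS, VTERM..VDD for NMOS.
        lo, hi = (0.0, vterm / 2.0) if is_pmos else (vterm, vdd)
        return min(max(bias_v, lo), hi)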
On the other hand, in this embodiment, for setting each of the transistors MP1˜MPN to behave as the switch, each of the predetermined voltages PPV1˜PPVN may be a first predetermined voltage or a second predetermined voltage, wherein the first predetermined voltage and the second predetermined voltage may be constant voltages and the first predetermined voltage may be larger than the second predetermined voltage. In this embodiment, the first predetermined voltage may be the reference voltage VDD, and the second predetermined voltage may be a ground voltage (=0V). Also, for setting each of the transistors MN1˜MNM to behave as the switch, each of the predetermined voltages NPV1˜NPVM may be the reference voltage VDD or the ground voltage. Please refer toFIG.3, which illustrates a schematic diagram of a signal transmitter according to another embodiment of present disclosure. The signal transmitter300includes a plurality of driver slices for generating an output signal OUT according to an input signal IN. Each of the driver slices includes a driving circuit310, a plurality of transistors MP1˜MPN, a plurality of transistors MN1˜MNM and a control voltage generator350and a termination resistor R1. The transistors MP1˜MPN are coupled in parallel, and provide an equivalent resistance of a pull-up branch of the signal transmitter300. The transistors MN1˜MNM are coupled in parallel, and provide an equivalent resistance of a pull-down branch of the signal transmitter300. The transistors MP1˜MPN respectively receive a plurality of control voltages PC1˜PCN to control the equivalent resistance of the pull-up branch close to a first impedance setting value. The transistors MN1˜MNM respectively receive a plurality of control voltages NC1˜NCM to control the equivalent resistance of the pull-down branch close to a second impedance setting value. The control voltage generator350is coupled to gate terminals of the transistors MP1˜MPN and MN1˜MNM. The control voltage generator350is configured to provide the control voltages PC1˜PCN to the gate terminals of the transistors MP1˜MPN, and provide a plurality of control voltages NC1˜NCM to the gate terminals of the transistors MN1˜MNM. In this embodiment, each of the transistors MP1˜MPN may operate in a triode region or behave as a switch according to each of the control voltages PC1˜PCN, and each of the transistors MN1˜MNM may operate in the triode region or behave as the switch according to each of the control voltages NC1˜NCM. In this embodiment, the control voltage generator350includes a plurality of selectors SS11˜SS1N and SS21˜SS2M. The selectors SS11˜SS1N are configured to provide the control voltages PC1˜PCN and the selectors SS21˜SS2M are configured to provide the control voltages NC1˜NCM. Each of the selectors SS11˜SS1N may include three switches, and the three switches of each of the selectors SS11˜SS1N may respectively receive corresponding bias voltage PBV1˜PBVN, a reference voltage VDD and a ground voltage (=0V). Merely one of the three switches of each of the selectors SS11˜SS1N can be turned-on to select one of the bias voltages PBV1˜PBVN, the reference voltage VDD or the ground voltage to generate each of the control voltages PC1˜PCN. Each of the selectors SS21˜SS2M may include three switches, and the three switches of each of the selectors SS21˜SS2M may respectively receive corresponding bias voltage NBV1˜NBVM, the reference voltage VDD and the ground voltage (=0V). 
Merely one of the three switches of each of the selectors SS21˜SS2M can be turned on to select one of the bias voltages NBV1˜NBVM, the reference voltage VDD or the ground voltage to generate each of the control voltages NC1˜NCM. It should be noted that the transistors MP1˜MPN may be P-type transistors, and the transistors MN1˜MNM may be N-type transistors. Please refer toFIG.3andFIG.4together, whereinFIG.4illustrates a schematic diagram of an impedance adjustment scheme of a signal transmitter according to an embodiment of the present disclosure. InFIG.4, a vertical axis represents the impedance of the pull-down branch of the signal transmitter300, and a horizontal axis represents voltage states of the control voltages NC1˜NC3. In this embodiment, three transistors MN1˜MN3in each driver slice are used to form the pull-down branch. For adjusting the resistance of the pull-down branch of the signal transmitter300, in a fully digital adjustment manner, each of the control voltages NC1˜NC3may be the reference voltage VDD or the ground voltage (=0V), and inFIG.4the control voltages (NC3, NC2, NC1) may respectively equal (0V, 0V, VDD), (0V, VDD, VDD), and (VDD, VDD, VDD) as examples. When the control voltages (NC3, NC2, NC1) respectively equal (0V, 0V, VDD), the resistance of the pull-down branch may be a resistance RV3which is approximately equal to the termination resistor R1, the drain-to-source resistance of the transistor MND, and the drain-to-source resistance of the transistor MN1coupled in series; when the control voltages (NC3, NC2, NC1) respectively equal (0V, VDD, VDD), the resistance of the pull-down branch may be a resistance RV2which is approximately equal to the termination resistor R1, the drain-to-source resistance of the transistor MND, and a parallel-coupled resistance coupled in series, wherein the parallel-coupled resistance is formed by the drain-to-source resistance of the transistor MN1in parallel with the drain-to-source resistance of the transistor MN2(in the deep triode region); and when the control voltages (NC3, NC2, NC1) respectively equal (VDD, VDD, VDD), the resistance of the pull-down branch may be a resistance RV1which is approximately equal to the termination resistor R1, the drain-to-source resistance of the transistor MND, and a parallel-coupled resistance coupled in series, wherein the parallel-coupled resistance is formed by the drain-to-source resistance of the transistor MN1in parallel with the drain-to-source resistance of the transistor MN2and in parallel with the drain-to-source resistance of the transistor MN3, wherein the resistance RV3>the resistance RV2>the resistance RV1. When an impedance setting value RSV with a relatively larger value is set, in an adjusting operation, the control voltages (NC3, NC2, NC1) may be respectively set to (0V, 0V, VDD), and the resistance RV3, which is close to the impedance setting value RSV, can be obtained. Please refer toFIG.3andFIG.5together, whereinFIG.5illustrates a schematic diagram of another impedance adjustment scheme of a signal transmitter according to an embodiment of the present disclosure. InFIG.5, a vertical axis represents the impedance of the pull-down branch of the signal transmitter300, and a horizontal axis represents voltage states of the control voltages NC1˜NC3. In this embodiment, three transistors MN1˜MN3are used to form the pull-down branch. 
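Assuming, purely for illustration, identical on-resistances for the switch transistors, the three digital settings of FIG.4 can be reproduced as follows.

    def pull_down_resistance(r_term, r_mnd, rds_on, num_enabled):
        # Digital adjustment of FIG.4: the branch resistance is R1 plus the driver
        # transistor MND plus 'num_enabled' identical switch transistors in parallel.
        return r_term + r_mnd + rds_on / num_enabled

    # Illustrative values only: one, two and three enabled transistors give RV3 > RV2 > RV1.
    for k in (1, 2, 3):
        print(k, pull_down_resistance(300.0, 200.0, 1500.0, k))
        # k=1 -> 2000.0 (RV3), k=2 -> 1250.0 (RV2), k=3 -> 1000.0 (RV1)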
A curve510represents all the possible impedances of the pull-down branch of the signal transmitter300that are obtained by configuring each of the control voltages NC1˜NC3to be in a range from a half of the power voltage VTERM to the reference voltage VDD; these impedances do not reach the impedance setting value RSV. A curve520represents all the possible impedances of the pull-down branch of the signal transmitter300which are obtained by configuring the control voltage NC3to be 0V and configuring the control voltages NC2and NC1to be in a range from the half of the power voltage VTERM to the reference voltage VDD, and in such a condition, a proper setting of the control voltages NC2and NC1may help the impedance of the pull-down branch reach the impedance setting value RSV. A curve530represents all the possible impedances of the pull-down branch of the signal transmitter300which are obtained by configuring the control voltages NC3and NC2to be 0V and configuring the control voltage NC1to be in a range from the half of the power voltage VTERM to the reference voltage VDD, and in such a condition, a proper setting of the control voltage NC1may help the impedance of the pull-down branch reach the impedance setting value RSV. For adjusting the resistance of the pull-down branch of the signal transmitter300, each of the control voltages NC1˜NC3may be the reference voltage VDD or the ground voltage (=0V) initially. In an analog adjustment manner, at least one of the control voltages NC1˜NC3is gradually reduced from the reference voltage VDD to a half of the power voltage VTERM of the signal transmitter300, and the resistance of the pull-down branch can be increased gradually. In a curve510, the control voltages NC1˜NC3are all initially set to the reference voltage VDD. In an adjusting operation (which may be an offline or online adjusting operation), all of the control voltages NC1˜NC3may be gradually reduced to half of the power voltage VTERM, and the resistance of the pull-down branch can be increased gradually. In a curve520, the control voltages (NC3, NC2, NC1) are respectively initially set to (0V, VDD, VDD). In the adjusting operation, the control voltages NC2and NC1may be gradually reduced to half of the power voltage VTERM and the control voltage NC3is kept at 0V, and the resistance of the pull-down branch can be increased gradually. In a curve530, the control voltages (NC3, NC2, NC1) are respectively initially set to (0V, 0V, VDD). In the adjusting operation, the control voltage NC1may be gradually reduced to half of the power voltage VTERM and the control voltages NC2and NC3are kept at 0V, and the resistance of the pull-down branch can also be increased gradually. For adjusting the resistance of the pull-down branch close to the impedance setting value RSV, in this embodiment, the curve530can be selected. By setting the control voltages NC2and NC3to 0V, and setting the control voltage NC1in a range between the reference voltage VDD and half of the power voltage VTERM, the resistance of the pull-down branch can be close to (or equal to) the impedance setting value RSV. It should be noted here that, as a person skilled in the art knows, a resistance of the pull-up branch of the signal transmitter300can also be adjusted according to the embodiments ofFIG.4andFIG.5, and no repeated description is given here. Please refer toFIG.6, which illustrates a schematic diagram of a signal transmitter according to another embodiment of the present disclosure. 
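A coarse-digital plus fine-analog search of this kind could be sketched as below; the triode-region model Rds(Vgs)=k/(Vgs−Vth) and every numeric value are assumptions made only for the example, not values taken from the disclosure.

    def tune_pull_down(target_ohm, r_fixed, rds_switch, k=400.0, vth=0.4, vdd=1.5, vterm=0.8):
        # First decide how many transistors stay fully on as switches (coarse, digital),
        # then sweep the gate voltage of one more transistor toward half of VTERM
        # (fine, analog) to approach the target resistance.
        best = None
        for n_switch in range(0, 3):                 # transistors kept at VDD (deep triode)
            g_fixed = n_switch / rds_switch
            vgs = vdd
            while vgs >= vterm / 2.0:
                g_total = g_fixed + max(vgs - vth, 0.0) / k
                if g_total > 0.0:
                    r_branch = r_fixed + 1.0 / g_total
                    if best is None or abs(r_branch - target_ohm) < abs(best[0] - target_ohm):
                        best = (round(r_branch, 1), n_switch, round(vgs, 2))
                vgs -= 0.01
        return best                                  # (resistance, #switches, analog gate voltage)

    print(tune_pull_down(target_ohm=1100.0, r_fixed=500.0, rds_switch=1500.0))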
The signal transmitter600includes a plurality of driver slices, and each of the driver slices includes a plurality of transistors MP1˜MP3, a plurality of transistors MN1˜MN3, a driver circuit610, and a termination resistor R1. In this embodiment, different from the signal transmitter300, the transistor MP2is formed by a plurality of sub-transistors Mrp1[1], Mrp2[1], Mrp3[1], Mrp4[1], and the transistor MP3is formed by two sub-transistors Mrp1[2], Mrp2[2]. The sub-transistors Mrp1[1], Mrp2[1], Mrp3[1], Mrp4[1] are coupled in series between the power voltage VTERM and the driver circuit610to form the transistor MP2. The sub-transistors Mrp1[2], Mrp2[2] are coupled in series between the power voltage VTERM and the driver circuit610to form the transistor MP3. Gate terminals of the sub-transistors Mrp1[1], Mrp2[1], Mrp3[1], Mrp4[1] can be tied together to receive the same control voltage. Gate terminals of the sub-transistors Mrp1[2], Mrp2[2] can be tied together to receive the same control voltage, too. The transistor MN2is formed by a plurality of sub-transistors Mrn1[1], Mrn2[1], Mrn3[1], Mrn4[1], and the transistor MN3is formed by two sub-transistors Mrn1[2], Mrn2[2]. The sub-transistors Mrn1[1], Mrn2[1], Mrn3[1], Mrn4[1] are coupled in series between the reference ground voltage GND and the driver circuit610to form the transistor MN2. The sub-transistors Mrn1[2], Mrn2[2] are coupled in series between the reference ground voltage GND and the driver circuit610to form the transistor MN3. Gate terminals of the sub-transistors Mrn1[1], Mrn2[1], Mrn3[1], Mrn4[1] can be tied together to receive the same control voltage. Gate terminals of the sub-transistors Mrn1[2], Mrn2[2] can be tied together to receive the same control voltage, too. In this embodiment, the number of sub-transistors of each of the transistors MP1˜MP3and MN1˜MN3is not particularly limited. A designer can decide to implement each of the transistors MP1˜MP3and MN1˜MN3by any number of sub-transistors. The driver circuit610includes transistors MPD and MND coupled in series. The driver circuit610receives an input signal IN and generates an output signal OUT. Please refer toFIG.7, which illustrates a flow chart of an impedance adjustment method according to an embodiment of the present disclosure. The impedance adjustment method is adapted for a signal transmitter having a plurality of first transistors, a driving circuit, and a plurality of second transistors coupled in series. In a step S710, the impedance adjustment method can be executed by selectively providing a bias voltage to each of the gate terminals of a first part of the first transistors and a first part of the second transistors to control the corresponding first transistor or second transistor to operate in a triode region. In a step S720, the impedance adjustment method can also be executed by selectively providing a predetermined voltage to each of the gate terminals of a second part of the first transistors and a second part of the second transistors to control the corresponding first transistor or second transistor to behave as a switch. Detailed operations of the steps S710and S720have been described in detail in the embodiments mentioned above, and are not repeated here. In summary, the present disclosure provides a signal transmitter that has a constant number of driver slices and can accurately control a de-emphasis level thereof.
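As a compact restatement of the impedance adjustment method ofFIG.7, the two steps S710and S720can be sketched as follows. The data structures and helper names are hypothetical and only mirror the two steps described above; they are not code from the disclosure.

```python
# Compact restatement of the impedance adjustment method of FIG.7 (steps S710
# and S720).  The data structures and helper names are hypothetical; the sketch
# only mirrors the two steps described above.

def adjust_impedance(first_transistors, second_transistors, bias_voltage,
                     predetermined_voltage):
    """Apply the two-step gate-voltage assignment to the pull-up (first) and
    pull-down (second) transistors of one driver slice."""
    for group in (first_transistors, second_transistors):
        analog_part, digital_part = group["analog"], group["digital"]

        # Step S710: a first part of the transistors receives a bias voltage so
        # that each of them operates in the triode region (analog trimming).
        for transistor in analog_part:
            transistor["gate"] = bias_voltage

        # Step S720: a second part receives a predetermined voltage (for example
        # VDD or 0V) so that each of them simply behaves as a switch.
        for transistor in digital_part:
            transistor["gate"] = predetermined_voltage

# Example: in each group, one transistor is trimmed in the triode region while
# the others are switched fully on.
pull_up = {"analog": [{"name": "MP1"}], "digital": [{"name": "MP2"}, {"name": "MP3"}]}
pull_down = {"analog": [{"name": "MN1"}], "digital": [{"name": "MN2"}, {"name": "MN3"}]}
adjust_impedance(pull_up, pull_down, bias_voltage=0.7, predetermined_voltage=1.0)
print(pull_up["analog"][0], pull_down["digital"][0])
```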
In the present disclosure, the resistances of the pull-up branch and the pull-down branch can be adjusted individually, and the resistances of the pull-up branch and the pull-down branch can be adjusted in both a digital manner and an analog manner. As such, a resolution of the impedance adjustment can be increased without increasing the number of driver circuits, and a circuit size of the signal transmitter can be reduced. Also, the resistances of the pull-up branch and the pull-down branch can be adjusted precisely to effectively improve a bandwidth for signal transmission. It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the disclosed embodiments without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the disclosure cover modifications and variations of this disclosure provided they fall within the scope of the following claims and their equivalents. | 28,440
11863150 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS An embodiment of the present invention will now be described in detail with reference to the drawings. First, the configuration of a multilayer electronic component (hereinafter simply referred to as electronic component)1according to the embodiment of the invention will be outlined with reference toFIG.1.FIG.1shows a branching filter (diplexer) as an example of the electronic component1. The branching filter includes a first filter10that selectively passes a first signal of a frequency within a first passband, and a second filter20that selectively passes a second signal of a frequency within a second passband higher than the first passband. The electronic component1further includes a common port2, a first signal port3, a second signal port4, a first signal path5connecting between the common port2and the first signal port3, and a second signal path6connecting between the common port2and the second signal port4. In the circuit configuration, the first filter10is provided between the common port2and the first signal port3, and the second filter20is provided between the common port2and the second signal port4. The first signal path5is a path leading from the common port2to the first signal port3via the first filter10. The second signal path6is a path leading from the common port2to the second signal port4via the second filter20. The first signal of a frequency within the first passband selectively passes through the first signal path5on which the first filter10is provided. The second signal of a frequency within the second passband selectively passes through the second signal path6on which the second filter20is provided. In such a manner, the electronic component1separates the first signal and the second signal. Next, an example of the configuration of the first filter10will be described with reference toFIG.1. The first filter10includes inductors L11, L12, and L13, and capacitors C11, C12, C13, C14, C15, and C16. In the circuit configuration, the inductors L11and L12are provided on the first signal path5. In the circuit configuration, the inductor L11is provided at a position closer to the first signal port3than the inductor L12. One end of the inductor L11is connected to the first signal port3. The other end of the inductor L11is connected to one end of the inductor L12. The other end of the inductor L12is connected to the common port2. The capacitor C11is connected in parallel with the inductor L11. The capacitor C12is connected in parallel with the inductor L12. One end of the capacitor C13is connected to the one end of the inductor L11. The other end of the capacitor C13is connected to the other end of the inductor L12. One end of the capacitor C14is connected to the one end of the inductor L11. One end of the capacitor C15is connected to a connection point between the inductor L11and the inductor L12. The other ends of the capacitors C14and C15are connected to one end of the inductor L13. The other end of the inductor L13is connected to the ground. The capacitor C16is connected in parallel with the inductor L13. In the circuit configuration, the inductor L13is provided between the first signal path5and the ground. Next, an example of the configuration of the second filter20will be described with reference toFIG.2. The second filter20includes inductors L21and L22, and capacitors C21, C22, C23, C24, C25, C26, C27, C28, C29, C30, and C31. One end of the capacitor C21is connected to the second signal port4.
The other end of the capacitor C21is connected to one end of the capacitor C22. The other end of the capacitor C22is connected to one end of the capacitor C23. The other end of the capacitor C23is connected to the common port2. One end of the capacitor C24is connected to the one end of the capacitor C21. The other end of the capacitor C24is connected to the other end of the capacitor C22. One end of the capacitor C25is connected to a connection point between the capacitor C22and the capacitor C23. In the circuit configuration, the inductor L21is provided between the second signal path6and the ground. The inductor L21includes inductor portions211and212. One end of the inductor portion211is connected to a connection point between the capacitor C21and the capacitor C22. The other end of the inductor portion211is connected to one end of the inductor portion212. The other end of the inductor portion212is connected to the ground. In the circuit configuration, the inductor L22is provided between the second signal path6and the ground. Furthermore, in the circuit configuration, the inductor L22is provided at a position closer to the common port2than the inductor L21. The inductor L22includes inductor portions221and222. One end of the inductor portion221is connected to the other end of the capacitor C25. The other end of the inductor portion221is connected to one end of the inductor portion222. The other end of the inductor portion222is connected to the ground. The inductor portion211of the inductor L21and the inductor portion221of the inductor L22are magnetically coupled to each other. The inductor portion212of the inductor L21and the inductor portion222of the inductor L22are not magnetically coupled to each other. The capacitor C26is connected in parallel with the inductor portion211of the inductor L21. The capacitor C27is connected in parallel with the inductor portion212of the inductor L21. One end of the capacitor C28is connected to the one end of the inductor portion211. The other end of the capacitor C28is connected to the other end of the inductor portion212. The capacitor C29is connected in parallel with the inductor portion221of the inductor L22. The capacitor C30is connected in parallel with the inductor portion222of the inductor L22. One end of the capacitor C31is connected to the one end of the inductor portion221. The other end of the capacitor C31is connected to the other end of the inductor portion222. Next, other configurations of the electronic component1will be described with reference toFIG.3.FIG.3is a perspective view showing an appearance of the electronic component1. The electronic component1further includes a stack50including a plurality of dielectric layers and a plurality of conductors stacked together. The stack50is intended to integrate the common port2, the first signal port3, the second signal port4, the inductors L11, L12, L13, L21, and L22, and the capacitors C11to C16and C21to C31. The first filter10and the second filter20are each constituted by using a plurality of conductors. The stack50has a bottom surface50A and a top surface50B located at both ends in a stacking direction T of the plurality of dielectric layers, and four side surfaces50C to50F connecting the bottom surface50A and the top surface50B. The side surfaces50C and50D are opposite to each other. The side surfaces50E and50F are opposite to each other. The side surfaces50C to50F are perpendicular to the top surface50B and the bottom surface50A. Here, X, Y, and Z directions are defined as shown inFIG.3. 
The X, Y, and Z directions are orthogonal to one another. In the present embodiment, a direction parallel to the stacking direction T will be referred to as the Z direction. The opposite directions to the X, Y, and Z directions are defined as −X, −Y, and −Z directions, respectively. As shown inFIG.3, the bottom surface50A is located at the end of the stack50in the −Z direction. The top surface50B is located at the end of the stack50in the Z direction. The bottom surface50A and the top surface50B each have a rectangular shape extending in the X direction. The side surface50C is located at the end of the stack50in the −X direction. The side surface50D is located at the end of the stack50in the X direction. The side surface50E is located at the end of the stack50in the −Y direction. The side surface50F is located at the end of the stack50in the Y direction. A planar shape of the stack50, in other words, the shape of the bottom surface50A (the shape of the top surface50B) in a view in the Z direction is a rectangle. Long sides of the rectangle are parallel to the X direction, and short sides of the rectangle are parallel to the Y direction. The electronic component1further includes signal terminals112,113, and114provided on the bottom surface50A of the stack50, and ground terminals111,115,116,117,118, and119connected to the ground. The ground terminal111is disposed near a corner at a position where the bottom surface50A, the side surface50D, and the side surface50E intersect one another. The signal terminal113is disposed near a corner at a position where the bottom surface50A, the side surface50D, and the side surface50F intersect one another. The signal terminal114is disposed near a corner at a position where the bottom surface50A, the side surface50C, and the side surface50F intersect one another. The ground terminal115is disposed near a corner at a position where the bottom surface50A, the side surface50C, and the side surface50E intersect one another. The signal terminal112is located between the ground terminal111and the ground terminal115. The ground terminal116is located between the ground terminal111and the signal terminal113. The ground terminal117is located between the signal terminal113and the signal terminal114. The ground terminal118is located between the signal terminal114and the ground terminal115. The ground terminal119is disposed at a center of the bottom surface50A. The terminal112corresponds to the common port2, the signal terminal113to the first signal port3, and the signal terminal114to the second signal port4. The common port2, the first signal port3, and the second signal port4are thus provided on the bottom surface50A of the stack50. Next, an example of the plurality of dielectric layers and the plurality of conductors constituting the stack50will be described with reference toFIG.4AtoFIG.9B. In this example, the stack50includes twenty-four dielectric layers stacked together. The twenty-four dielectric layers will be referred to as a first to a twenty-fourth dielectric layer in the order from bottom to top. The first to twenty-fourth dielectric layers are denoted by reference numerals51to74, respectively. InFIG.4AtoFIG.8C, each circle represents a through hole. The dielectric layers51to72each have a plurality of through holes. The through holes are each formed by filling a hole intended for a through hole with a conductive paste. Each of the through holes is connected to a conductor layer or another through hole. 
FIG.4Ashows the patterned surface of the first dielectric layer51. The terminals111to119are formed on the patterned surface of the dielectric layer51.FIG.4Bshows the patterned surface of the second dielectric layer52. Conductor layers521,522,523,524, and525are formed on the patterned surface of the dielectric layer52. FIG.4Cshows the patterned surface of the third dielectric layer53. Conductor layers531,532,533,534,535,536,537,538,539,5310,5311, and5312are formed on the patterned surface of the dielectric layer53. One end of the conductor layer531is connected to the conductor layer5311. The other end of the conductor layer531is connected to the conductor layer5312. InFIG.4C, the boundary between the conductor layer531and the conductor layer5311and the boundary between the conductor layer531and the conductor layer5312are indicated by dotted lines. FIG.5Ashows the patterned surface of the fourth dielectric layer54. Conductor layers541,542,543,544,545,546,547, and548are formed on the patterned surface of the dielectric layer54. The conductor layers541and543are connected to the conductor layer542.FIG.5Bshows the patterned surface of the fifth dielectric layer55. Conductor layers551,552,553, and554are formed on the patterned surface of the dielectric layer55. The conductor layer554is connected to the conductor layer553.FIG.5Cshows the patterned surface of the sixth dielectric layer56. Conductor layers561and562are formed on the patterned surface of the dielectric layer56. FIG.6Ashows the patterned surface of the seventh dielectric layer57. Conductor layers571and572are formed on the patterned surface of the dielectric layer57. The conductor layer572is connected to the conductor layer571.FIG.6Bshows the patterned surface of the eighth dielectric layer58. No conductor layer is formed on the patterned surface of the dielectric layer58.FIG.6Cshows the patterned surface of the ninth dielectric layer59. A conductor layer591is formed on the patterned surface of the dielectric layer59. FIG.7Ashows the patterned surface of the tenth dielectric layer60. A conductor layer601is formed on the patterned surface of the dielectric layer60.FIG.7Bshows the patterned surface of the eleventh dielectric layer61. No conductor layer is formed on the patterned surface of the dielectric layer61.FIG.7Cshows the patterned surface of the twelfth dielectric layer62. Conductor layers621and622are formed on the patterned surface of the dielectric layer62. Shapes of the conductor layers621and622may be the same in a view in one direction (the Z direction) parallel to the stacking direction T. FIG.8Ashows the patterned surface of the thirteenth dielectric layer63. Conductor layers631and632are formed on the patterned surface of the dielectric layer63. Shapes of the conductor layers631and632may be the same in a view in one direction (the Z direction) parallel to the stacking direction T.FIG.8Bshows the patterned surface of each of the fourteenth to twenty-first dielectric layers64to71. No conductor layer is formed on the patterned surface of the dielectric layers64to71.FIG.8Cshows the patterned surface of the twenty-second dielectric layer72. Conductor layers721,722,723,724,725,726, and727are formed on the patterned surface of the dielectric layer72. Shapes of the conductor layers722,723, and724may be the same in a view in one direction (the Z direction) parallel to the stacking direction T. Shapes of the conductor layers726and727may be the same in a view in one direction (the Z direction) parallel to the stacking direction T. 
FIG.9Ashows the patterned surface of the twenty-third dielectric layer73. Conductor layers731,732,733,734,735,736, and737are formed on the patterned surface of the dielectric layer73. Shapes of the conductor layers732,733, and734may be the same in a view in one direction (the Z direction) parallel to the stacking direction T. Shapes of the conductor layers736and737may be the same in a view in one direction (the Z direction) parallel to the stacking direction T.FIG.9Bshows the patterned surface of the twenty-fourth dielectric layer74. A mark741made of a conductor layer is formed on the patterned surface of the dielectric layer74. The stack50shown inFIG.3is formed by stacking the first to twenty-fourth dielectric layers51to74such that the patterned surface of the first dielectric layer51serves as the bottom surface50A of the stack50and the surface of the twenty-fourth dielectric layer74opposite to the patterned surface thereof serves as the top surface50B of the stack50. When the first to twenty-second dielectric layers51to72are stacked, each of the plurality of through holes shown inFIG.4AtoFIG.8Cis connected to a conductor layer overlapping in the stacking direction T or to another through hole overlapping in the stacking direction T. Of the plurality of through holes shown inFIG.4AtoFIG.8C, the ones located within a terminal or a conductor layer are connected to the terminal or conductor layer. FIG.10andFIG.11show an inside of the stack50formed by stacking the first to twenty-fourth dielectric layers51to74. As shown inFIG.10andFIG.11, the plurality of conductor layers and the plurality of through holes shown inFIG.4AtoFIG.9Aare stacked together inside the stack50. InFIG.10andFIG.11, the mark741is omitted. For example, the stack50is fabricated by a low-temperature co-firing method, using ceramic as the material of the dielectric layers51to74. In this case, a plurality of ceramic green sheets, which eventually become the dielectric layers51to74, are fabricated first. Each ceramic green sheet has a plurality of unfired conductor layers formed thereon and a plurality of unfired through holes formed therein. The plurality of unfired conductor layers eventually become a plurality of conductor layers. The plurality of unfired through holes eventually become a plurality of through holes. Next, the plurality of ceramic green sheets are stacked together into a green sheet stack. The green sheet stack is then cut to form an unfired stack. The ceramic and conductor in the unfired stack are then fired by a low-temperature co-firing method to thereby complete the stack50. Next, configurations of the inductors L11, L12, L13, L21, and L22will be described in detail with reference toFIG.4AtoFIG.15.FIG.12toFIG.15are side views showing a part of the inside of the stack50.FIG.12shows the part of the inside of the stack50in a view from the side surface50D side and mainly shows the inductors L11, L12, and L13.FIG.13shows the part of the inside of the stack50in a view from the side surface50E side and mainly shows the inductors L12, L13, and L22.FIG.14shows the part of the inside of the stack50in a view from the side surface50C side and mainly shows the inductors L21and L22.FIG.15shows the part of the inside of the stack50in a view from the side surface50F side and mainly shows the inductors L11and L21. The inductors L11, L12, L13, L21, and L22are each integrated with the stack50. As described later, the inductors L11, L12, L21, and L22each include a plurality of through hole columns.
The plurality of through hole columns are each constituted by two or more through holes arranged in the stacking direction T and connected in series to each other. First, the configuration of the inductor L11will be described. As shown inFIG.12andFIG.15, the inductor L11is wound about an axis A11parallel to a direction orthogonal to the stacking direction T. In the present embodiment, in particular, the axis A11extends in a direction parallel to the Y direction. The inductor L11includes one conductor portion wound less than once about the axis A11. The conductor portion of the inductor L11includes a conductor layer portion11C1(refer toFIG.10andFIG.11). The conductor layer portion11C1has a shape that is long in a direction parallel to the X direction. The conductor layer portion11C1includes conductor layers721and731(refer toFIG.8CandFIG.9A) disposed at positions different from each other in the stacking direction T and connected in parallel to each other through four through holes. The conductor layers721and731each extend in the direction parallel to the X direction. The conductor portion of the inductor L11further includes two through hole columns11T1and two through hole columns11T2(refer toFIG.10andFIG.11). The two through hole columns11T1are connected in parallel to a part near one end of the conductor layer portion11C1in a longitudinal direction. The two through hole columns11T2are connected in parallel to a part near the other end of the conductor layer portion11C1in the longitudinal direction. Next, the configuration of the inductor L12will be described. As shown inFIG.12andFIG.13, the inductor L12is wound about an axis A12parallel to a direction orthogonal to the stacking direction T. In the present embodiment, in particular, the axis A12extends in a direction parallel to the X direction. The inductor L12includes conductor portions L12A, L12B, and L12C each wound less than once about the axis A12, a connection portion L12D connecting the conductor portions L12A and L12B in series, and a connection portion L12E connecting the conductor portions L12B and L12C in series. The conductor portions L12A, L12B, and L12C include conductor layer portions12C1,12C2, and12C3, respectively (refer toFIG.10andFIG.11). The conductor layer portions12C1,12C2, and12C3each have a shape that is long in the direction parallel to the Y direction. The conductor layer portion12C1includes conductor layers722and732(refer toFIG.8CandFIG.9A) disposed at positions different from each other in the stacking direction T and connected in parallel to each other through two through holes. The conductor layer portion12C2includes conductor layers723and733(refer toFIG.8CandFIG.9A) disposed at positions different from each other in the stacking direction T and connected in parallel to each other through two through holes. The conductor layer portion12C3includes conductor layers724and734(refer toFIG.8CandFIG.9A) disposed at positions different from each other in the stacking direction T and connected in parallel to each other through two through holes. The conductor layers722to724and732to734each extend in the direction parallel to the Y direction. The conductor portion L12A further includes through hole columns12T1and12T2(refer toFIG.10andFIG.11). The through hole column12T1is connected to a part near one end of the conductor layer portion12C1in a longitudinal direction. The through hole column12T2is connected to a part near the other end of the conductor layer portion12C1in the longitudinal direction.
The conductor portion L12B further includes through hole columns12T3and12T4(refer toFIG.10andFIG.11). The through hole column12T3is connected to a part near one end of the conductor layer portion12C2in a longitudinal direction. The through hole column12T4is connected to a part near the other end of the conductor layer portion12C2in the longitudinal direction. The conductor portion L12C further includes through hole columns12T5and12T6(refer toFIG.10andFIG.11). The through hole column12T5is connected to a part near one end of the conductor layer portion12C3in a longitudinal direction. The through hole column12T6is connected to a part near the other end of the conductor layer portion12C3in the longitudinal direction. The connection portion L12D connects the through hole column12T2of the conductor portion L12A and the through hole column12T3of the conductor portion L12B. The connection portion L12D includes a conductor layer portion12C4(refer toFIG.10). The conductor layer portion12C4includes the conductor layers621and631(refer toFIG.7CandFIG.8A) disposed at positions different from each other in the stacking direction T and connected in parallel to each other through two through holes. The connection portion L12E connects the through hole column12T4of the conductor portion L12B and the through hole column12T5of the conductor portion L12C. The connection portion L12E includes a conductor layer portion12C5(refer toFIG.10). The conductor layer portion12C5includes the conductor layers622and632(refer toFIG.7CandFIG.8A) disposed at positions different from each other in the stacking direction T and connected in parallel to each other through two through holes. The conductor layers542and552shown inFIG.5AandFIG.5Bare disposed at positions different from each other in the stacking direction T and connected in parallel to each other through three through holes. The conductor layers542and552connect the through hole columns11T3and11T4of the conductor portion of the inductor L11and the through hole column12T1of the conductor portion L12A of the inductor L12. Next, the configuration of the inductor L13will be described. The inductor L13is wound about an axis A13parallel to the stacking direction T. The inductor L13is constituted by the conductor layer531(refer toFIG.4C). Next, the configuration of the inductor L21will be described. As shown inFIG.14andFIG.15, the inductor L21is wound about an axis A21parallel to a direction orthogonal to the stacking direction T. In the present embodiment, in particular, the axis A21extends in a direction parallel to the Y direction. The inductor L21includes one conductor portion wound less than once about the axis A21. The conductor portion of the inductor L21includes a conductor layer portion21C1(refer toFIG.10andFIG.11). The conductor layer portion21C1includes conductor layers725and735(refer toFIG.8CandFIG.9A) disposed at positions different from each other in the stacking direction T and connected in parallel to each other through two through holes. The conductor layers725and735each include a first portion extending in the X direction and a second portion extending in the Y direction. The conductor portion of the inductor L21further includes through hole columns21T1and21T2(refer toFIG.10andFIG.11). The through hole column21T1is connected to a part near one end of the conductor layer portion21C1in a longitudinal direction. The through hole column21T2is connected to a part near the other end of the conductor layer portion21C1in the longitudinal direction.
The inductor L21further includes conductor layer portions21C2and21C3(refer toFIG.11). The conductor layer portion21C1connects one end of the through hole column21T1and one end of the through hole column21T2. The conductor layer portion21C2is connected to the other end of the through hole column21T1and extends close to the other end of the through hole column21T2. The conductor layer portion21C3is connected to the other end of the through hole column21T2and extends close to the other end of the through hole column21T1. The conductor layer portion21C2includes conductor layers561and571(refer toFIG.5CandFIG.6A) disposed at positions different from each other in the stacking direction T and connected in parallel to each other through two through holes. The conductor layer portion21C3includes conductor layers544and553(refer toFIG.5AandFIG.5B) disposed at positions different from each other in the stacking direction T and connected in parallel to each other through two through holes. The conductor layer portions21C1and21C2and the through hole columns21T1and21T2constitute the inductor portion211of the inductor L21. The conductor layer portion21C3constitutes the inductor portion212of the inductor L21. The conductor layer portion21C3(conductor layers544and553) is connected to the ground terminal117through the conductor layers526and5310(refer toFIG.4BandFIG.4C) and a plurality of through holes. Next, the configuration of the inductor L22will be described. As shown inFIG.13andFIG.14, the inductor L22is wound about an axis A22parallel to a direction orthogonal to the stacking direction T. In the present embodiment, in particular, the axis A22extends in a direction parallel to the Y direction. The inductor L22includes conductor portions L22A and L22B each wound less than once about the axis A22, and a connection portion L22C connecting the conductor portions L22A and L22B in series. The conductor portions L22A and L22B include conductor layer portions22C1and22C2, respectively (refer toFIG.10andFIG.11). The conductor layer portions22C1and22C2each have a shape that is long in the direction parallel to the X direction. The conductor layer portion22C1includes conductor layers726and736(refer toFIG.8CandFIG.9A) disposed at positions different from each other in the stacking direction T and connected in parallel to each other through four through holes. The conductor layer portion22C2includes conductor layers727and737(refer toFIG.8CandFIG.9A) disposed at positions different from each other in the stacking direction T and connected in parallel to each other through four through holes. The conductor layers726,727,736, and737each extend in the direction parallel to the X direction. The conductor portion L22A further includes two through hole columns22T1and two through hole columns22T2(refer toFIG.10andFIG.11). The two through hole columns22T1are connected in parallel to a part near one end of the conductor layer portion22C1in a longitudinal direction. The two through hole columns22T2are connected in parallel to a part near the other end of the conductor layer portion22C1in the longitudinal direction. The conductor portion L22B further includes two through hole columns22T3and two through hole columns22T4(refer toFIG.10andFIG.11). The two through hole columns22T3are connected in parallel to a part near one end of the conductor layer portion22C2in a longitudinal direction. 
The two through hole columns22T4are connected in parallel to a part near the other end of the conductor layer portion22C2in the longitudinal direction. The connection portion L22C connects the two through hole columns22T2of the conductor portion L22A and the two through hole columns22T3of the conductor portion L22B. The connection portion L22C includes a conductor layer portion22C3(refer toFIG.10andFIG.11). The conductor layer portion22C3includes the conductor layers591and601(refer toFIG.6CandFIG.7A) disposed at positions different from each other in the stacking direction T and connected in parallel to each other through four through holes. The conductor portion L22A constitutes the inductor portion221of the inductor L22. The conductor portion L22B constitutes the inductor portion222of the inductor L22. In the circuit configuration, the conductor portion L22B is provided between the conductor portion L22A and the ground. The two through hole columns22T4of the conductor portion L22B are connected to the ground terminals115and118through the conductor layers525and539(refer toFIG.4BandFIG.4C) and a plurality of through holes. Correspondences between the capacitors C11to C16and C21to C31and the internal components of the stack50shown inFIG.4AtoFIG.9Bwill now be described. The capacitor C11is composed of the conductor layers521,532,541, and551shown inFIG.4BtoFIG.5A,FIG.8C, andFIG.9A, and the dielectric layers52,53, and54each interposed between two of those conductor layers. The capacitor C12is composed of the conductor layers621,622,631,632,722to724, and732to734shown inFIG.7C,FIG.8A,FIG.8C, andFIG.9A, and the dielectric layers62and72each interposed between two of those conductor layers. The capacitor C13is composed of the conductor layers721to724and731to734. The capacitor C14is composed of the conductor layers5311and532shown inFIG.4C. The capacitor C15is composed of the conductor layer5311, the conductor layer542shown inFIG.5A, and the dielectric layer53interposed between two of those conductor layers. The capacitor C16is composed of the conductor layers5312and543shown inFIG.4CandFIG.5A, and the dielectric layer53interposed between two of those conductor layers. The capacitor C21is composed of the conductor layers533and545shown inFIG.4CandFIG.5A, and the dielectric layer53interposed between those conductor layers. The capacitor C22is composed of the conductor layers534and545shown inFIG.4C,FIG.5A, andFIG.5C, and the dielectric layer53interposed between those conductor layers. The capacitor C23is composed of the conductor layers535and546shown inFIG.4CandFIG.5A, and the dielectric layer53interposed between those conductor layers. The capacitor C24is composed of the conductor layers533and534. The capacitor C25is composed of the conductor layers536,546, and547shown inFIG.4C,FIG.5A, andFIG.5C, and the dielectric layer53interposed between two of those conductor layers. The capacitor C26is composed of the conductor layers561,571,725, and735shown inFIG.5C,FIG.6A,FIG.8C, andFIG.9A, and the dielectric layers56and72interposed between two of those conductor layers. The capacitor C27is composed of the conductor layers544and553shown inFIG.5AandFIG.5B, and the dielectric layer54interposed between two of those conductor layers. The capacitor C28is composed of the conductor layers554and572shown inFIG.5BandFIG.6A, and the dielectric layers55and56each interposed between two of those conductor layers.
The capacitor C29is composed of the conductor layers591,601,726, and736shown inFIG.6C,FIG.7A,FIG.8C, andFIG.9A, and the dielectric layers59and72each interposed between two of those conductor layers. The capacitor C30is composed of the conductor layers591and601, and the conductor layers727and737shown inFIG.8CandFIG.9A, and the dielectric layers59and72each interposed between those conductor layers. The capacitor C31is composed of the conductor layers537and548shown inFIG.4CandFIG.5A, and the dielectric layer53interposed between those conductor layers. Next, structural features of the electronic component1according to the present embodiment will be described with reference toFIG.10toFIG.17.FIG.16andFIG.17are plan views showing a part of an inside of the stack50shown inFIG.10andFIG.11. As shown inFIG.10toFIG.15, the inductor L12is disposed after the inductor L11in one direction orthogonal to the stacking direction T, in other words, the −Y direction. The inductor L21and the inductor L22are disposed after the inductor L11and the inductor L12, respectively, in one direction orthogonal to the stacking direction T, in other words, the −X direction. InFIG.12andFIG.15, a region surrounded by a dashed line denoted by a reference numeral S11shows a space including the axis A11and surrounded by the inductor L11. InFIG.12andFIG.13, a region surrounded by a dashed line denoted by a reference numeral S12shows a space including the axis A12and surrounded by the inductor L12. InFIG.14andFIG.15, a region surrounded by a dashed line denoted by a reference numeral S21shows a space including the axis A21and surrounded by the inductor L21. InFIG.13andFIG.14, a region surrounded by a dashed line denoted by a reference numeral S22shows a space including the axis A22and surrounded by the inductor L22. InFIG.15, the region surrounded by a dashed line denoted by the reference numeral S11is also a region obtained by vertically projecting the space S11onto a virtual plane (XZ plane) perpendicular to the axis A11. Hereinafter, the region is referred to as a projection region of the space S11. The area of the projection region of the space S11corresponds to the opening area of the inductor L11. InFIG.12, the region surrounded by a dashed line denoted by the reference numeral S12is also a region obtained by vertically projecting the space S12onto a virtual plane (YZ plane) perpendicular to the axis A12. Hereinafter, the region is referred to as a projection region of the space S12. The area of the projection region of the space S12corresponds to the opening area of the inductor L12. InFIG.15, the region surrounded by a dashed line denoted by the reference numeral S21is also a region obtained by vertically projecting the space S21onto a virtual plane (XZ plane) perpendicular to the axis A21. Hereinafter, the region is referred to as a projection region of the space S21. The area of the projection region of the space S21corresponds to the opening area of the inductor L21. InFIG.13, the region surrounded by a dashed line denoted by the reference numeral S22is also a region obtained by vertically projecting the space S22onto a virtual plane (XZ plane) perpendicular to the axis A22. Hereinafter, the region is referred to as a projection region of the space S22. The area of the projection region of the space S22corresponds to the opening area of the inductor L22. As shown inFIG.12andFIG.15, the area of the projection region of the space S11is larger than the area of the projection region of the space S12.
As shown inFIG.12andFIG.15, the area of the projection region of the space S21is larger than the area of the projection region of the space S12. As shown inFIG.12andFIG.13, the area of the projection region of the space S22is larger than the area of the projection region of the space S12. As shown inFIG.13andFIG.15, the area of the projection region of the space S21and the area of the projection region of the space S22are different from each other. In the present embodiment, in particular, the area of the projection region of the space S21is larger than the area of the projection region of the space S22. A dimension of the projection region of the space S21in the stacking direction T is larger than a dimension of the projection region of the space S22in the stacking direction T. The inductor L11is disposed such that part of the space S11overlaps at least part of the space S12in a view in one direction (the Y direction) parallel to the axis A11. The inductor L12is disposed such that at least part of the space S12overlaps the space S22in a view in one direction (the X direction) parallel to the axis A12. The inductor L12is disposed such that the axis A12is parallel to the long sides of the bottom surface50A of the stack50(the long sides of the top surface50B). The inductor L13is disposed such that the axis A13does not intersect the spaces S11, S21, and S22but intersects the space S12. In other words, the inductor L13is disposed such that the inductor L13overlaps the inductor L12in a view in the Z direction. No capacitor conductor layer used to constitute a capacitor is interposed between the inductor L12and the inductor L13, more specifically, between the conductor layer531(refer toFIG.4C) and the conductor layers621and622(refer toFIG.7C). The inductor L21is disposed such that at least part of the space S21overlaps at least part of the space S22in a view in one direction (the Y direction) parallel to the axis A21. In other words, the inductor L22is disposed such that at least part of the space S22overlaps part of the space S21in a view in one direction (the Y direction) parallel to the axis A22. The conductor layer portion21C3of the inductor L21is disposed between the conductor layer portion21C1of the inductor L21and the bottom surface50A. The conductor layer portion21C3extends across the signal terminal114in a view in one direction (the Z direction) parallel to the stacking direction T. The inductor L21is electrically connected to the ground terminal117. The inductor L22is electrically connected to the ground terminals115and118. The inductor L22includes the conductor portion L22A constituting the inductor portion221of the inductor L22, the conductor portion L22B constituting the inductor portion222of the inductor L22, and the connection portion L22C connecting the conductor portions L22A and L22B in series. The conductor portion L22A (inductor portion221) is magnetically coupled to the conductor layer portions21C1and21C2constituting the inductor portion211of the inductor L21, and the through hole columns21T1and21T2in the inductor L21. FIG.17shows the two conductor layers721and731constituting the conductor layer portion11C1of the inductor L11. As shown inFIG.17, area of the conductor layer721is larger than area of the conductor layer731. The conductor layer731is disposed inside an outer edge of the conductor layer721in a view in one direction (the Z direction) parallel to the stacking direction T. 
A shape of the conductor layer731in a view in the Z direction is similar to a shape of the conductor layer721in a view in the Z direction. The conductor layer721is disposed between the conductor layer731and the axis A11. The above description on the conductor layers721and731also applies to pairs of the conductor layers72xand73x(x is an integer of two to seven). Description on the conductor layers72xand73xis obtained by replacing the conductor layers721and731in the above description on the conductor layers721and731with the conductor layers72xand73x, respectively. In a case of description on pairs of the conductor layers72xand73xconstituting the inductor L12, the axis A11in the above description is replaced with the axis A12. In a case of description on pairs of the conductor layers725and735constituting the inductor L21, the axis A11in the above description is replaced with the axis A21. In a case of description on pairs of the conductor layers72xand73xconstituting the inductor L22, the axis A11in the above description is replaced with the axis A22. FIG.16shows the two conductor layers621and631constituting the conductor layer portion12C4of the inductor L12. As shown inFIG.16, area of the conductor layer631is larger than area of the conductor layer621. The conductor layer621is disposed inside an outer edge of the conductor layer631in a view in one direction (the Z direction) parallel to the stacking direction T. A shape of the conductor layer621in a view in the Z direction is similar to a shape of the conductor layer631in a view in the Z direction. The conductor layer631is disposed between the conductor layer621and the axis A12. The above description on the conductor layers621and631also applies to the pair of the conductor layers622and632, the pair of the conductor layers561and571, the pair of the conductor layers543and553, and the pair of the conductor layers591and601. Description on the conductor layers622and632is obtained by replacing the conductor layers621and631in the above description on the conductor layers621and631with the conductor layers622and632, respectively. Description on the conductor layers561and571or the conductor layers543and553is obtained by replacing the conductor layers621and631in the above description on the conductor layers621and631with the conductor layers561and571or the conductor layers543and553, respectively, and replacing the axis A12in the above description on the conductor layers621and631with the axis A21. Description on the conductor layers591and601is obtained by replacing the conductor layers621and631in the above description on the conductor layers621and631with the conductor layers591and601, respectively, and replacing the axis A12in the above description on the conductor layers621and631with the axis A22. Next, an example of the characteristics of the electronic component1according to the present embodiment will be described.FIG.18is a characteristic diagram showing a pass characteristic between the common port2and the first signal port3, in other words, a pass characteristic of the first filter10.FIG.19is a characteristic diagram showing a pass characteristic between the common port2and the second signal port4, in other words, a pass characteristic of the second filter20. InFIG.18andFIG.19, the horizontal axis indicates frequency, and the vertical axis the attenuation. InFIG.18, a reference numeral91denotes an attenuation pole formed by the inductor L11, and a reference numeral92denotes an attenuation pole formed by the inductor L12. 
The inductor L12forms the attenuation pole92on a high-pass side of the first passband in the pass characteristic of the first filter10. The inductor L11forms the attenuation pole91between the first passband and the attenuation pole92in the pass characteristic of the first filter10. In other words, in the pass characteristic of the first filter10, the attenuation pole91formed by the inductor L11is closer to the first passband than the attenuation pole92formed by the inductor L12. InFIG.19, a reference numeral93denotes an attenuation pole formed by the inductor L21, and a reference numeral94denotes an attenuation pole formed by the inductor L22. The inductor L21forms the attenuation pole93on a low-pass side of the second passband in the pass characteristic of the second filter20. The inductor L22forms the attenuation pole94between the attenuation pole93and the second passband in the pass characteristic of the second filter20. In other words, in the pass characteristic of the second filter20, the attenuation pole94formed by the inductor L22is closer to the second passband than the attenuation pole93formed by the inductor L21. Next, an example of inductance and a Q value of each of the inductors L11, L12, L13, L21, and L22will be described. In the example, the inductance of the inductor L11is 0.8 nH. The Q value of the inductor L11is 125. The inductance of the inductor L12is 3.4 nH. The Q value of the inductor L12is 113. The inductance of the inductor L13is 0.81 nH. The Q value of the inductor L13is 53. The inductance of the inductor L21is 1.5 nH. The Q value of the inductor L21is 73. The inductance of the inductor L22is 2.0 nH. The Q value of the inductor L22is 127. Now, the operation and effects of the electronic component1according to the present embodiment will be described. In the present embodiment, in the inductor L11, two through hole columns are connected in parallel to a part near each end of the conductor layer portion11C1in the longitudinal direction. In addition, in the inductor L22, two through hole columns are connected in parallel to a part near each end of the conductor layer portion22C1in the longitudinal direction, and two through hole columns are connected in parallel to a part near each end of the conductor layer portion22C2in the longitudinal direction. In the inductor L12, one through hole column is connected to a part near each end of the conductor layer portion12C1in the longitudinal direction, one through hole column is connected to a part near each end of the conductor layer portion12C2in the longitudinal direction, and one through hole column is connected to a part near each end of the conductor layer portion12C3in the longitudinal direction. In addition, in the inductor L21, one through hole column is connected to a part near each end of the conductor layer portion21C1in the longitudinal direction. As described above, in the present embodiment, in each of the inductors L11and L22, a plurality (two) of through hole columns are connected in parallel to one end of each conductor layer portion. Thus, according to the present embodiment, it is possible to increase the Q value of each of the inductors L11and L22. In the present embodiment, in each of the inductors L12and L21, one through hole column is connected to one end of each conductor layer portion. 
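The effect of the parallel-connected through hole columns on the Q value can be illustrated with a simple series-resistance model, Q = 2πfL/R. In the sketch below, only the 0.8 nH inductance of the inductor L11is taken from the example above; the evaluation frequency and the layer and via resistances are assumptions chosen only to show the trend, so the printed values are not the Q values quoted in the example, and the model ignores skin effect and dielectric loss.

```python
# Rough sketch of why connecting through hole columns in parallel raises an
# inductor's Q (Q = 2*pi*f*L / R_series).  Only the 0.8 nH inductance is taken
# from the example above; every other value is an assumption for illustration.

import math

L = 0.8e-9        # inductance of L11 from the example above (henries)
f = 2.0e9         # evaluation frequency (assumed, Hz)
R_layer = 0.05    # resistance of the conductor layer portion (assumed, ohms)
R_column = 0.15   # resistance of a single through hole column (assumed, ohms)

def q_value(columns_per_end):
    """Q of the inductor with 'columns_per_end' parallel via columns at each end."""
    r_series = R_layer + 2 * (R_column / columns_per_end)
    return 2 * math.pi * f * L / r_series

print("one column per end :", round(q_value(1), 1))   # higher series resistance, lower Q
print("two columns per end:", round(q_value(2), 1))   # paralleled vias, higher Q
```

Doubling the number of parallel columns roughly halves the via contribution to the series resistance, which is why the inductors L11and L22, with two columns per end, can have higher Q values, while the single columns used in the inductors L12and L21keep the via count, and hence the occupied area, small.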
Thus, according to the present embodiment, it is possible to downsize the electronic component1as compared to configuration in which a plurality of through hole columns are connected in parallel to one end of each conductor layer portion in each of the inductors L11, L12, L21, and L22. In the first filter10, it is preferable to increase the Q value of the inductor L11with which the attenuation pole91closest to the first passband is formed. In the second filter20, it is preferable to increase the Q value of the inductor L22with which the attenuation pole94closest to the second passband is formed. In the present embodiment, from such a viewpoint, in each of the inductors L11and L22, a plurality (two) of through hole columns are connected in parallel to one end of each conductor layer portion so that the Q value of each of the inductors L11and L22increases. In the present embodiment, the inductor L12is disposed after the inductor L11in the −Y direction, and the inductor L21and the inductor L22are disposed after the inductor L11and the inductor L12, respectively, in the −X direction. In other words, in the present embodiment, the inductors L11and L12are arranged in line, and the inductors L21and L22are arranged in line at positions different from the inductors L11and L12. Thus, according to the present embodiment, it is possible to reduce an unnecessary space generated in the stack50as compared to configuration in which the inductors L11and L22are arranged in line and the inductors L12and L21are arranged in line at positions different from the inductors L11and L22, and as a result, it is possible to downsize the electronic component1. With these effects, according to the present embodiment, it is possible to increase the Q value of each of the inductors L11and L22and also downsize the electronic component1. In the present embodiment, the axis A11about which the inductor L11is wound and the axis A22about which the inductor L22is wound are parallel to each other. In the present embodiment, in particular, the axes A11and A22each extend in the direction parallel to the Y direction. In the inductors L11and L22, each conductor layer portion has a shape that is long in the X direction. Thus, according to the present embodiment, it is possible to reduce a dimension of the stack50in the Y direction as compared to configuration in which the axis A11and the axis A22are orthogonal to each other. In the present embodiment, the direction parallel to the axis A12and the direction parallel to the axis A22are orthogonal to each other. In the present embodiment, in particular, the direction parallel to the axis A12is the direction parallel to the X direction, and the direction parallel to the axis A22is the direction parallel to the Y direction. In the present embodiment, the inductor L12is wound approximately three times about the axis A12parallel to the X direction. As described above, each conductor layer portion in the inductor L22has a shape that is long in the X direction. Thus, according to the present embodiment, it is possible to reduce an unnecessary space generated when the inductor L12is wound a plurality of times about the axis A12as compared to configuration in which the axis A22is parallel to the X direction and each conductor layer portion in the inductor L22has a shape that is short in the X direction. Next, other effects of the present embodiment will be described.
In the present embodiment, the area of the projection region of the space S11corresponding to the opening area of the inductor L11is larger than the area of the projection region of the space S12corresponding to the opening area of the inductor L12. In other words, in the present embodiment, the area of the projection region of the space S12corresponding to the opening area of the inductor L12is smaller than the area of the projection region of the space S11corresponding to the opening area of the inductor L11. Accordingly, a space for disposing another inductor can be formed near the inductor L12. In the present embodiment, the inductor L13is disposed in the above-described space. As described above, the inductor L13is disposed such that the axis A13does not intersect the space S11but intersects the space S12. In the present embodiment, the inductors L11, L12, and L13are wound about respective axes parallel to directions different from one another. In the present embodiment, in particular, the axes A11, A12, and A13are orthogonal to one another. Thus, according to the present embodiment, it is possible to prevent electromagnetic coupling among the inductors L11, L12, and L13and also downsize the electronic component1. In the present embodiment, the inductor L11is disposed such that part of the space S11overlaps at least part of the space S12in a view in one direction parallel to the axis A11. Thus, according to the present embodiment, it is possible to downsize the electronic component1as compared to configuration in which the space S11and the space S12do not overlap each other. According to the present embodiment, the first filter10includes the inductors L11, L12, and L13. According to the present embodiment, it is possible to reduce a region of the first filter10in the stack50because of the above-described characteristics of the inductors L11, L12, and L13, and as a result, it is possible to downsize the electronic component1. In the present embodiment, the area of the projection region of the space S12corresponding to the opening area of the inductor L12is smaller than the area of the projection region of the space S22corresponding to the opening area of the inductor L22. In the present embodiment, the inductors L12, L13, and L22are wound about respective axes parallel to directions different from one another. In the present embodiment, in particular, the axes A12, A13, and A22are orthogonal to one another. Thus, according to the present embodiment, it is possible to prevent electromagnetic coupling among the inductors L12, L13, and L22and also downsize the electronic component1. In the present embodiment, the inductor L12is disposed such that part of the space S12overlaps at least part of the space S22in a view in one direction parallel to the axis A12. Thus, according to the present embodiment, it is possible to downsize the electronic component1as compared to configuration in which the space S12and the space S22do not overlap each other. In the present embodiment, no capacitor conductor layer is interposed between the inductor L12and the inductor L13. Thus, according to the present embodiment, it is possible to downsize the electronic component1as compared to configuration in which a capacitor conductor layer is interposed between the inductor L12and the inductor L13. In the present embodiment, the first filter10includes the inductors L12and L13, and the second filter20includes the inductor L22. 
According to the present embodiment, it is possible to place the first filter10and the second filter20close to each other because of the above-described characteristics of the inductors L12, L13, and L22, and as a result, it is possible to downsize the electronic component1. Since the area of the projection region of the space S12corresponding to the opening area of the inductor L12is small, the inductance of the inductor L12is relatively small. However, in the present embodiment, the inductor L12includes the conductor portions L12A, L12B, and L12C each wound less than once about the axis A12. In other words, in the present embodiment, the inductor L12is wound approximately three times about the axis A12. Thus, according to the present embodiment, it is possible to increase the inductance of the inductor L12. Moreover, according to the present embodiment, it is possible to increase a dimension of the inductor L12in a direction parallel to the axis A12(the direction parallel to the X direction). Thus, according to the present embodiment, it is possible to increase the space for disposing the inductor L13. In the present embodiment, the inductor L12is disposed such that the axis A12is parallel to the long sides of the bottom surface50A of the stack50(the long sides of the top surface50B). Thus, according to the present embodiment, it is possible to dispose another inductor, specifically the inductor L22, in a direction parallel to the axis A12and also wind the inductor L12a plurality of times about the axis A12. In the present embodiment, the inductors L11and L12are provided on the first signal path5in the circuit configuration, and the inductor L13is provided between the first signal path5and the ground in the circuit configuration. The Q value of the inductor L13may be smaller than the Q values of the inductors L11and L12. As described above, in the example, the Q value of the inductor L11is 125, the Q value of the inductor L12is 113, and the Q value of the inductor L13is 53. In the present embodiment, the inductors L11and L12, which preferably have relatively large Q values, are inductors wound about an axis orthogonal to the stacking direction T, and the inductor L13, which may have a relatively small Q value, is an inductor wound about an axis parallel to the stacking direction T. The inductor L13, which may have a relatively small Q value, is disposed in the space formed near the inductor L12. In the present embodiment, the inductor L21is disposed such that part of the space S21overlaps at least part of the space S22in a view in one direction (the Y direction) parallel to the axis A21. In other words, the inductor L22is disposed such that at least part of the space S22overlaps part of the space S21in a view in one direction (the Y direction) parallel to the axis A22. Thus, in the present embodiment, the inductors L21and L22are disposed such that an opening of the inductor L21and an opening of the inductor L22face each other and the inductor L21and the inductor L22overlap each other in a view in the Y direction. Consider a case in which magnetic coupling between the inductor L21and the inductor L22is adjusted. For example, the magnetic coupling can be adjusted by displacing one of the inductors L21and L22in the X direction or the −X direction.
With this configuration, an unnecessary space is generated in the stack50, and a planar shape of the electronic component1(shape in a view in the Z direction) becomes large. However, in the present embodiment, the area of the projection region of the space S21and the area of the projection region of the space S22are different from each other. Thus, according to the present embodiment, it is possible to adjust the magnetic coupling without displacing one of the inductors L21and L22in the X direction or the −X direction. Consider a case in which a dimension of the inductor L21in the stacking direction T is increased to adjust the area of the projection region of the space S21. In this case, a distance from the bottom surface50A of the stack50to the inductor L21is shortened. When a ground terminal is provided near the inductor L21, floating capacitance is generated between the inductor L21and the ground terminal and a desired characteristic is potentially not obtained. However, in the present embodiment, the inductor L21includes the conductor layer portion21C2connected to the other end of the through hole column21T1and extending close to the other end of the through hole column21T2, and the conductor layer portion21C3connected to the other end of the through hole column21T2and extending close to the other end of the through hole column21T1. According to the present embodiment, with at least one of the conductor layer portions21C2and21C3, the inductor L21can be disposed such that the inductor L21does not overlap the ground terminal in a view in one direction (the Z direction) parallel to the stacking direction T. In the present embodiment, in particular, the conductor layer portion21C3extends across the signal terminal114in a view in one direction (the Z direction) parallel to the stacking direction T. Thus, according to the present embodiment, it is possible to adjust the area of the projection region of the space S21by increasing the dimension of the inductor L21in the stacking direction T. With these effects, according to the present embodiment, it is possible to adjust electromagnetic coupling between the inductors L21and L22and also downsize the electronic component1. In the present embodiment, the electronic component1includes the second filter20including the inductors L21and L22, and the first filter10including no inductors L21and L22. To increase isolation between the first filter10and the second filter20, it is conceivable to provide a ground terminal at a position sandwiched between the first filter10and the second filter20. In the present embodiment, the conductor layer portion21C3is connected to the ground terminal117provided at a position sandwiched between the first filter10and the second filter20. Thus, according to the present embodiment, it is possible to increase isolation between the first filter10and the inductor L21and also connect the inductor L21to the ground terminal117through the conductor layer portion21C3. In the present embodiment, the inductor L22includes the conductor portions L22A and L22B. The conductor portion L22A is magnetically coupled to the inductor L21. Specifically, in the present embodiment, part of the inductor L22is magnetically coupled to the inductor L21. According to the present embodiment, it is possible to adjust the magnetic coupling between the inductor L21and the inductor L22by configuring the inductors as described above. In the present embodiment, the conductor layer portion11C1of the inductor L11includes the two conductor layers721and731. 
As described above, in a manufacturing process of the stack50, ceramic green sheets on which a plurality of unfired conductor layers and a plurality of unfired through holes are formed are stacked, the plurality of unfired conductor layers eventually becoming a plurality of conductor layers, the plurality of unfired through holes eventually becoming a plurality of through holes. A characteristic of the inductor L11changes when the conductor layer721and the conductor layer731are displaced from each other due to displacement of the ceramic green sheets or the plurality of unfired conductor layers. However, in the present embodiment, the area of the conductor layer721is larger than the area of the conductor layer731. Thus, when the conductor layer731is displaced relative to the conductor layer721but a displacement amount is smaller than a certain amount, the conductor layer731does not extend out of the conductor layer721in a view in one direction (Z direction) parallel to the stacking direction T. Thus, according to the present embodiment, it is possible to reduce characteristic variation of the inductor L11due to mutual displacement of the conductor layer721and the conductor layer731. The above description of the conductor layers721and731also applies to pairs of the conductor layers72xand73x(x is an integer of two to seven), the pair of the conductor layers621and631, the pair of the conductor layers622and632, the pair of the conductor layers561and571, the pair of the conductor layers543and553, and the pair of the conductor layers591and601. Thus, according to the present embodiment, it is possible to reduce characteristic variation of each of the first filter10and the second filter20due to displacement of the ceramic green sheets or the plurality of unfired conductor layers, and as a result, it is possible to reduce characteristic variation of the electronic component1. The present invention is not limited to the foregoing embodiment, and various modifications may be made thereto. For example, the number of inductors included in each of the first filter10and the second filter20may be equal to or larger than three. The axis A11and the axis A12may intersect each other at an angle other than 90°. Similarly, the axis A21and the axis A22may intersect each other at an angle other than 90°. In each of the inductors L11and L22, three or more through hole columns may be connected in parallel to one end of each conductor layer portion. In each of the inductors L11, L12, L21, and L22, each conductor layer portion may include three or more conductor layers disposed at positions different from one another in the stacking direction T and connected in parallel to one another. When each conductor layer portion includes three conductor layers, a conductor layer having the smallest area among the three conductor layers may be interposed between the other two conductor layers. Alternatively, each conductor layer portion may be constituted by one conductor layer. Obviously, many modifications and variations of the present invention are possible in the light of the above teachings. Thus, it is to be understood that, within the scope of the appended claims and equivalents thereof, the invention may be practiced in other embodiments than the foregoing most preferable embodiment. | 62,683 |
11863151 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, preferred embodiments of the present invention will be described in detail by citing examples with reference to the accompanying drawings. The following preferred embodiments are general or specific examples. Details such as values, shapes, materials, components, and arrangements and connection patterns of the components in the following preferred embodiments are provided merely as examples and should not be construed as limiting the present invention. Of the components in the following preferred embodiments, those not mentioned in an independent claim are described as optional components. The sizes and the relative proportions of the components illustrated in the drawings are not necessarily to scale. FIG.1illustrates circuitry of a multiplexer1according to Preferred Embodiment 1. The multiplexer1includes a filter circuit10, a filter circuit20, an additional circuit30, a common terminal100, an input/output terminal110, and an input/output terminal120. The filter circuits10and20are both connected to the common terminal100. The common terminal100, the input/output terminal110(a first input/output terminal) and the input/output terminal120(a second input/output terminal) are interfaces through which radio-frequency signals are input to the multiplexer1or output from the multiplexer1. The filter circuit10is an example of a first filter circuit and is a filter including a pass band that is the first frequency band. The filter circuit10includes a first terminal and a second terminal. One of the first and second terminals is connected to the common terminal100, and the other terminal is connected to the input/output terminal110. The one of the first and second terminals of the filter circuit10in the present preferred embodiment is a node131. The filter circuit20is an example of a second filter circuit and is a filter including a pass band that is a second frequency band different from the first frequency band. The filter circuit20includes a third terminal and a fourth terminal. The third terminal is connected to the common terminal100, and the fourth terminal is connected to the input/output terminal120. The fourth terminal of the filter circuit20in the present preferred embodiment is a node132. The filter circuits10and20may be acoustic wave filters using surface acoustic waves (SAWs), acoustic wave filters using bulk acoustic waves (BAWs), LC resonant filters, or dielectric filters but are not limited thereto. The additional circuit30is connected between the nodes131and132. Alternatively, the additional circuit30may be connected between a node on a path connecting the filter circuit10to the input/output terminal110and a node on a path connecting the filter circuit20to the common terminal100. The additional circuit30in the present preferred embodiment includes at least one series-arm circuit and at least one parallel-arm circuit. The at least one series-arm circuit is on a series-arm path providing a connection between the nodes131and132. The at least one parallel-arm circuit is on a parallel-arm path providing a connection between the series-arm path and the ground. The parallel-arm circuit includes (1) only an inductor in series with the parallel-arm path, (2) only a capacitor in series with the parallel-arm path, or (3) only an LC parallel resonant circuit being in series with the parallel-arm path and including an inductor and a capacitor connected in parallel. 
The wording in (1), that is, the expression “only an inductor in series with the parallel-arm path” means the omission of circuit elements, for example, inductors, capacitors, switches, and resonators, except for the inductor in series with the parallel-arm path. The parallel-arm circuit may include, for example, traces, electrodes, and terminals. The wording in (2), that is, the expression “only a capacitor in series with the parallel-arm path” means the omission of circuit elements, for example, inductors, capacitors, switches, and resonators, except for the capacitor in series with the parallel-arm path. The parallel-arm circuit may include, for example, traces, electrodes, and terminals. The wording in (3), that is, the expression “only an LC parallel resonant circuit being in series with the parallel-arm path and including an inductor and a capacitor connected in parallel” means the omission of circuit elements, for example, inductors, capacitors, switches, and resonators, except for the LC parallel resonant circuit in series with the parallel-arm path. The parallel-arm circuit may include, for example, traces, electrodes, and terminals. Complex flows of unwanted signals may be present between the input/output terminals110and120of the multiplexer1, where signals may be transmitted on a route passing through the input/output terminal110, the filter circuit10, the common terminal100, the filter circuit20, and the input/output terminal120or signals (direct waves) may be transmitted directly between the input/output terminals110and120. As the strength of unwanted signals becomes higher, the isolation characteristics of the multiplexer1may degrade, and the quality of radio-frequency signals transmitted through the multiplexer1may degrade accordingly. Meanwhile, the circuit constant of the inductor(s) and the capacitor(s) of the additional circuit30of the multiplexer1according to the present preferred embodiment may be changed to adjust the phase and the amplitude width of signals transmitted between the nodes131and132. According to the circuitry described above, the additional circuit30is able to generate signals that cancel unwanted signals transmitted between the input/output terminals110and120. That is, unwanted signals transmitted between the input/output terminals110and120may be attenuated by the additional circuit30. The additional circuit30does not include an LC series resonant circuit on the parallel-arm path. Radio-frequency signals, which would otherwise suffer losses due to the resonance point of an LC series resonant circuit, may thus be transmitted with significantly reduced transmission loss. When the parallel-arm circuit includes an inductor and a capacitor, unwanted signals may be significantly reduced or prevented over a wide frequency band that is relatively far apart from the pass band of the filter circuit10or20. In the multiplexer1according to the present preferred embodiment, a radio-frequency signal in the first frequency band and a radio-frequency signal in the second frequency band may be simultaneously or substantially simultaneously transmitted through the filter circuit10and the filter circuit20, respectively. Accordingly, unwanted signals transmitted between the input/output terminals110and120may be significantly reduced or prevented by the additional circuit30.
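A minimal numerical sketch of this cancellation principle is given below; it is not taken from the preferred embodiments. It simply adds a hypothetical direct leakage path and a hypothetical canceling path, modeled as complex phasors, and reports the residual in dB; all amplitudes, phases, and the function name residual_leakage_db are illustration-only assumptions.

    import numpy as np

    def residual_leakage_db(a_leak, ph_leak_deg, a_cancel, ph_cancel_deg):
        # Sum of two paths between the input/output terminals, treated as phasors.
        # Perfect cancellation needs equal amplitude and a 180-degree phase offset.
        leak = a_leak * np.exp(1j * np.deg2rad(ph_leak_deg))
        cancel = a_cancel * np.exp(1j * np.deg2rad(ph_cancel_deg))
        return 20 * np.log10(abs(leak + cancel))

    print(residual_leakage_db(1.0, 0.0, 1.0, 179.0))   # ~-35 dB: near-perfect cancellation
    print(residual_leakage_db(1.0, 0.0, 0.9, 170.0))   # ~-14 dB: partial cancellation
    print(residual_leakage_db(1.0, 0.0, 1.0, 0.0))     # ~+6 dB: in-phase paths add up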
With the filter circuits10and20being connected to the common terminal100, entry of unwanted signals from one of the two filter circuits into the other filter circuit may be significantly reduced or prevented accordingly. The filter circuit10of the multiplexer1according to the present preferred embodiment may be a transmitting filter that passes transmission signals from the input/output terminal110to the common terminal100. The filter circuit20of the multiplexer1according to the present preferred embodiment may be a receiving filter that passes reception signals from the common terminal100to the input/output terminal120. The transmitting filter and the receiving filter define a duplexer. The circuitry described above may cause unwanted signals, for example, harmonic waves of high-power transmission signals on a route passing through the input/output terminal110and the common terminal100. The unwanted signals may flow into a reception path between the common terminal100and the input/output terminal120, and as a result, the reception sensitivity associated with radio-frequency signals in the second frequency band may degrade. As a workaround, the additional circuit30significantly reduces or prevents unwanted signals, for example, harmonic waves, transmitted between the input/output terminals110and120. Accordingly, the possibility that the reception sensitivity in the second frequency band will degrade is able to be significantly reduced or prevented. FIG.2Aillustrates, as a first example, circuitry of an additional circuit30A according to Preferred Embodiment 1. The additional circuit30A inFIG.2Ais an example for describing specific circuitry of the additional circuit30according to Preferred Embodiment 1. The additional circuit30A includes a series-arm circuit51, a series-arm circuit52, and a parallel-arm circuit53. The series-arm circuit51is an example of a first series-arm circuit and is on the series-arm path providing a connection between the nodes131and132. The series-arm circuit52is an example of a second series-arm circuit and is on the series-arm path providing a connection between the nodes131and132. The parallel-arm circuit53is on a parallel-arm path providing a connection between the ground and a connection node n1, at which the series-arm circuits51and52are connected to each other. The series-arm circuit51includes an inductor31, which is in series with the series-arm path. The series-arm circuit52includes an inductor32, which is in series with the series-arm path. The parallel-arm circuit53includes only a capacitor41, which is in series with the parallel-arm path. The additional circuit30A in this example is a T-shaped circuitry including inductors as series-arm circuits and including a capacitor as a parallel-arm circuit. FIG.2Billustrates, as a second example, circuitry of an additional circuit30B according to Preferred Embodiment 1. The additional circuit30B inFIG.2Bis an example for describing specific circuitry of the additional circuit30according to Preferred Embodiment 1. The additional circuit30B includes a series-arm circuit51, a series-arm circuit52, and a parallel-arm circuit53. The series-arm circuit51is an example of the first series-arm circuit and is on the series-arm path providing a connection between the nodes131and132. The series-arm circuit52is an example of the second series-arm circuit and is on the series-arm path providing a connection between the nodes131and132.
The parallel-arm circuit53is on a parallel-arm path providing a connection between the ground and a connection node n1, at which the series-arm circuits51and52are connected to each other. The series-arm circuit51includes a capacitor42, which is in series with the series-arm path. The series-arm circuit52includes a capacitor43, which is in series with the series-arm path. The parallel-arm circuit53includes only an inductor33, which is in series with the parallel-arm path. The additional circuit30B in this example is a T-shaped circuitry including capacitors as series-arm circuits and including an inductor as a parallel-arm circuit. FIG.2Cillustrates, as a third example, circuitry of an additional circuit30C according to Preferred Embodiment 1. The additional circuit30C inFIG.2Cis an example for describing specific circuitry of the additional circuit30according to Preferred Embodiment 1. The additional circuit30C includes a series-arm circuit51, a series-arm circuit52, and a parallel-arm circuit53. The series-arm circuit51is an example of the first series-arm circuit and is on the series-arm path providing a connection between the nodes131and132. The series-arm circuit52is an example of the second series-arm circuit and is on the series-arm path providing a connection between the nodes131and132. The parallel-arm circuit53is on a parallel-arm path providing a connection between the ground and a connection node n1, at which the series-arm circuits51and52are connected to each other. The series-arm circuit51includes a first LC parallel resonant circuit in series with the series-arm path. The series-arm circuit52includes a second LC parallel resonant circuit in series with the series-arm path. The parallel-arm circuit53includes only a third LC parallel resonant circuit in series with the parallel-arm path. The first LC parallel resonant circuit is a circuit including a parallel connection of a capacitor44and an inductor34. The second LC parallel resonant circuit is a circuit including a parallel connection of a capacitor45and an inductor35. The third LC parallel resonant circuit is a circuit including a parallel connection of a capacitor46and an inductor36. The additional circuit30C in this example is a T-shaped circuitry including LC parallel resonant circuits as series-arm circuits and including another LC parallel resonant circuit as a parallel-arm circuit. Each of the additional circuits30A to30C does not include an LC series resonant circuit on the parallel-arm path. Radio-frequency signals, which would otherwise suffer losses due to the resonance point of an LC series resonant circuit, may thus be transmitted with significantly reduced transmission loss. When the parallel-arm circuit53includes an inductor and a capacitor, unwanted signals may be significantly reduced or prevented over a wide frequency band that is relatively far apart from the pass band of the filter circuit10or20. FIG.3Aillustrates, as a fourth example, circuitry of an additional circuit30D according to Preferred Embodiment 1. The additional circuit30D inFIG.3Ais an example for describing specific circuitry of the additional circuit30according to Preferred Embodiment 1. The additional circuit30D includes a series-arm circuit54, a parallel-arm circuit55, and a parallel-arm circuit56. The series-arm circuit54is on the series-arm path providing a connection between the nodes131and132. 
The parallel-arm circuit55is an example of a first parallel-arm circuit and is on a first parallel-arm path providing a connection between the node131and the ground. The parallel-arm circuit56is an example of a second parallel-arm circuit and is on a second parallel-arm path providing a connection between the node132and the ground. The series-arm circuit54includes an inductor31, which is in series with the series-arm path. The parallel-arm circuit55includes only a capacitor41, which is in series with the first parallel-arm path. The parallel-arm circuit56includes only a capacitor42, which is in series with the second parallel-arm path. The additional circuit30D in this example is a π-shaped circuitry including an inductor as a series-arm circuit and including capacitors as parallel-arm circuits. FIG.3Billustrates, as a fifth example, circuitry of an additional circuit30E according to Preferred Embodiment 1. The additional circuit30E inFIG.3Bis an example for describing specific circuitry of the additional circuit30according to Preferred Embodiment 1. The additional circuit30E includes a series-arm circuit54, a parallel-arm circuit55, and a parallel-arm circuit56. The series-arm circuit54is on the series-arm path providing a connection between the nodes131and132. The parallel-arm circuit55is an example of the first parallel-arm circuit and is on the first parallel-arm path providing a connection between the node131and the ground. The parallel-arm circuit56is an example of the second parallel-arm circuit and is on the second parallel-arm path providing a connection between the node132and the ground. The series-arm circuit54includes a capacitor43, which is in series with the series-arm path. The parallel-arm circuit55includes only an inductor32, which is in series with the first parallel-arm path. The parallel-arm circuit56includes only an inductor33, which is in series with the second parallel-arm path. The additional circuit30E in this example is a π-shaped circuitry including a capacitor as a series-arm circuit and including inductors as parallel-arm circuits. The additional circuit30E may not include the capacitor when magnetic coupling or capacitive coupling is provided between the inductors32and33. FIG.3Cillustrates, as a sixth example, circuitry of an additional circuit30F according to Preferred Embodiment 1. The additional circuit30F inFIG.3Cis an example for describing specific circuitry of the additional circuit30according to Preferred Embodiment 1. The additional circuit30F includes a series-arm circuit54, a parallel-arm circuit55, and a parallel-arm circuit56. The series-arm circuit54is on the series-arm path providing a connection between the nodes131and132. The parallel-arm circuit55is an example of the first parallel-arm circuit and is on the first parallel-arm path providing a connection between the node131and the ground. The parallel-arm circuit56is an example of the second parallel-arm circuit and is on the second parallel-arm path providing a connection between the node132and the ground. The series-arm circuit54includes only a fourth LC parallel resonant circuit in series with the series-arm path. The parallel-arm circuit55includes only a fifth LC parallel resonant circuit in series with the first parallel-arm path. The parallel-arm circuit56includes only a sixth LC parallel resonant circuit in series with the second parallel-arm path. The fourth LC parallel resonant circuit is a circuit including a parallel connection of a capacitor44and an inductor34.
The fifth LC parallel resonant circuit is a circuit including a parallel connection of a capacitor45and an inductor35. The sixth LC parallel resonant circuit is a circuit including a parallel connection of a capacitor46and an inductor36. The additional circuit30F in this example is a π-shaped circuitry including an LC parallel resonant circuit as a series-arm circuit and including two other LC parallel resonant circuits as parallel-arm circuits. Each of the additional circuits30D to30F does not include an LC series resonant circuit on the parallel-arm path. Radio-frequency signals, which would otherwise suffer losses due to the resonance point of an LC series resonant circuit, may thus be transmitted with significantly reduced transmission loss. When the parallel-arm circuits55and56each include an inductor and a capacitor, unwanted signals may be significantly reduced or prevented over a wide frequency band that is relatively far apart from the pass band of the filter circuit10or20. The additional circuits30A to30F may include, as inductors and capacitors for frequency bands at or below 5 GHz, surface-mount components in a chip, planar electrodes in a multilayer substrate, and planar coil patterns in a multilayer substrate or may include electrode wiring as inductors and capacitors for frequency bands at or above 5 GHz or for millimeter-wave bands. The input/output terminals110and120are connected with, for example, a radio-frequency signal processing circuit (RFIC) or an amplifier circuit that amplifies radio-frequency signals. The common terminal100is connected to an antenna. A switching circuit may lie between the common terminal100and the antenna. An impedance matching inductor or an impedance matching capacitor may be provided between the common terminal100and the filter circuit10, between the common terminal100and the filter circuit20, or between the antenna and the common terminal100. The first frequency band may be higher or lower than the second frequency band. The filter circuit10may be a transmitting filter that passes transmission signals from the input/output terminal110to the common terminal100. Alternatively, the filter circuit10may be a receiving filter that passes reception signals from the common terminal100to the input/output terminal110. The filter circuit20may be a transmitting filter that passes transmission signals from the input/output terminal120to the common terminal100. Alternatively, the filter circuit20may be a receiving filter that passes reception signals from the common terminal100to the input/output terminal120.
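As a rough companion to the parallel-arm option that uses an LC parallel resonant circuit (as in the additional circuits30C and30F above), the sketch below evaluates the impedance of an ideal parallel LC over frequency. The element values, the target frequency near 2.65 GHz, and the function name parallel_lc_impedance are assumptions for illustration only; the point is that the impedance peaks at the resonant frequency rather than collapsing to zero, so such a parallel-arm circuit does not create the ground-shorting resonance point that an LC series resonant circuit would.

    import numpy as np

    def parallel_lc_impedance(freq_hz, L, C):
        # Ideal parallel LC: the impedance peaks (ideally toward infinity) at
        # f0 = 1 / (2*pi*sqrt(L*C)) instead of collapsing to zero there.
        w = 2 * np.pi * freq_hz
        zl = 1j * w * L
        zc = 1 / (1j * w * C)
        return zl * zc / (zl + zc)

    L, C = 2.0e-9, 1.8e-12                       # hypothetical values, peak near 2.65 GHz
    f0 = 1 / (2 * np.pi * np.sqrt(L * C))
    for f in (1.0e9, 0.999 * f0, 5.0e9):
        print(f"{f / 1e9:5.2f} GHz -> |Z| ~ {abs(parallel_lc_impedance(f, L, C)):9.1f} ohm")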
The filter circuit10includes a first terminal and a second terminal. One of the first and second terminals is connected to the input/output terminal110, and the other terminal is connected to the common terminal100. The one of the first and second terminals of the filter circuit10in the present preferred embodiment is a node133. The filter circuit20is an example of a second filter circuit and is a filter including a pass band that is a second frequency band different from the first frequency band. The filter circuit20includes a third terminal and a fourth terminal. The third terminal is connected to the common terminal100, and the fourth terminal is connected to the input/output terminal120. The fourth terminal of the filter circuit20in the present preferred embodiment is a node132. The additional circuit30is connected between the nodes133and132. The additional circuit30in the present preferred embodiment includes at least one series-arm circuit and at least one parallel-arm circuit. The at least one series-arm circuit is on a series-arm path providing a connection between the nodes133and132. The at least one parallel-arm circuit is on a parallel-arm path providing a connection between the series-arm path and the ground. The parallel-arm circuit includes (1) only an inductor in series with the parallel-arm path, (2) only a capacitor in series with the parallel-arm path, or (3) only an LC parallel resonant circuit being in series with the parallel-arm path and including an inductor and a capacitor connected in parallel. According to the circuitry described above, the additional circuit30is able to generate signals that cancel unwanted signals transmitted between the input/output terminals110and120. That is, unwanted signals transmitted between the input/output terminals110and120may be significantly reduced or prevented by the additional circuit30. The additional circuit30does not include an LC series resonant circuit on the parallel-arm path. Radio-frequency signals, which would otherwise suffer losses due to the resonance point of an LC series resonant circuit, may thus be transmitted with significantly reduced transmission loss. When the parallel-arm circuit includes an inductor and/or a capacitor, unwanted signals may thus be significantly reduced or prevented over a wide frequency band that is relatively far apart from the pass band of the filter circuit10or20. FIGS.5A and5Billustrate circuitry of a multiplexer2A according to Example 1. As illustrated inFIG.5A, the multiplexer2A according to Example 1 includes the filter circuits10and20, the additional circuit30B, the common terminal100, and the input/output terminals110and120. The multiplexer2A according to Example 1 is based on the multiplexer2according to Preferred Embodiment 2. The pass band of the filter circuit10is long term evolution (LTE) Band 40 (2,300 to 2,400 MHz), and the pass band of the filter circuit20is the reception band of LTE Band 7 (2,620 to 2,690 MHz). As illustrated inFIG.5B, the additional circuit of the multiplexer2A is the additional circuit30B. As in the preferred embodiment above, the additional circuit30B includes the capacitors42and43and the inductor33. The capacitors42and43are in series with a series-arm path providing a connection between a node133and a node132, and the inductor33is in series with a parallel-arm path providing a connection between a connection node n1and the ground.
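To make the role of the element values concrete, the following sketch, which is not part of Example 1, cascades ABCD matrices for a 30B-style T network (series capacitor, shunt inductor, series capacitor) and converts the result to a transmission coefficient in an assumed 50-ohm system. The capacitor and inductor values, the 2.65 GHz evaluation frequency, and the helper names series, shunt, and s21 are illustration-only assumptions; changing C and L shifts both the magnitude and the phase of the signal fed from the node133side to the node132side.

    import numpy as np

    Z0 = 50.0  # assumed reference impedance

    def series(z):
        # ABCD matrix of a series impedance z.
        return np.array([[1.0, z], [0.0, 1.0]], dtype=complex)

    def shunt(y):
        # ABCD matrix of a shunt admittance y to ground.
        return np.array([[1.0, 0.0], [y, 1.0]], dtype=complex)

    def s21(abcd, z0=Z0):
        # Transmission coefficient for equal reference impedances z0.
        a, b, c, d = abcd.ravel()
        return 2.0 / (a + b / z0 + c * z0 + d)

    f = 2.65e9                      # a frequency inside the Band 7 reception band
    w = 2 * np.pi * f
    C, L = 0.5e-12, 3.0e-9          # hypothetical element values
    t_net = series(1 / (1j * w * C)) @ shunt(1 / (1j * w * L)) @ series(1 / (1j * w * C))
    t = s21(t_net)
    print(f"|S21| = {abs(t):.3f}, phase = {np.degrees(np.angle(t)):.1f} deg")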
FIG.6is a graph that provides a comparison of bandpass characteristics of the multiplexer according to Example 1 and bandpass characteristics of a multiplexer according to Comparative Example 1. Unlike the multiplexer2A according to Example 1, the multiplexer according to Comparative Example 1 does not include the additional circuit30B.FIG.6illustrates bandpass characteristics of the filter circuit10according to Example 1 and bandpass characteristics of the filter circuit10according to Comparative Example 1. The comparison provided inFIG.6has revealed that the filter circuit10of the multiplexer2A according to Example 1 provides a significant improvement over the filter circuit10of the multiplexer according to Comparative Example 1, or more specifically, provides low-loss transmission in the pass band of the filter circuit10(Band 40) and increases the attenuation of signals in the attenuation band corresponding to the pass band of the filter circuit20(the reception band of Band 7). The additional circuit30B of the multiplexer2A according to Example 1 provides this significant improvement by significantly reducing unwanted signals lying in the pass band of the filter circuit20and transmitted between the input/output terminal110and the common terminal100. In this example, the frequency spacing between the pass band of the filter circuit10and the pass band of the filter circuit20is greater than one of the bandwidths of the filter circuits10and20that is smaller than the other; that is, a frequency spacing of 220 MHz between the Band 40 (2,300 to 2,400 MHz) and Band 7 Rx (2,620 to 2,690 MHz) is greater than the bandwidth (70 MHz) of the filter circuit20, which is smaller than the bandwidth (100 MHz) of the pass band of the filter circuit10. A case where the frequency spacing between the pass bands of the two filter circuits is greater than the band width of at least one of the filter circuits may be addressed by the parallel-arm circuit of the additional circuit including an inductor, for example. According to this circuitry, which is not an acoustic wave resonator, unwanted signals may be significantly reduced or prevented over a wide frequency band that is relatively far apart from the pass band of the filter circuit10or20. The following describes the relationship between the circuitry of the parallel-arm circuit of the additional circuit30and the filter bandpass characteristics. FIGS.7A to7Cillustrate circuitry of the multiplexer according to Example 1 and circuitry of a multiplexer according to Comparative Example 2.FIG.7Aillustrates circuitry of the multiplexer2A according to Example 1.FIG.7Billustrates circuitry of the additional circuit30B according to Example 1.FIG.7Cillustrates circuitry of an additional circuit500according to Comparative Example 2. The multiplexer according to Comparative Example 2 is provided by replacing the additional circuit30of the multiplexer2A according to Example 1 with the additional circuit500illustrated inFIG.7C. As illustrated inFIG.7C, the additional circuit500of the multiplexer according to Comparative Example 2 includes a capacitor42, a capacitor43, and an LC series resonant circuit71. The capacitors42and43are in series with the series-arm path providing a connection between the nodes133and132. The LC series resonant circuit71is in series with the parallel-arm path providing a connection between the connection node n1and the ground. The LC series resonant circuit71includes a capacitor47and an inductor33, which are connected in series. 
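The drawback of this comparative topology, discussed next, can be previewed with a small sketch: an ideal series LC presents an impedance that passes through zero at its resonant frequency, so a parallel-arm built this way shunts nearby in-band energy to ground. The element values and the function name series_lc_impedance below are hypothetical and are not values disclosed for Comparative Example 2.

    import numpy as np

    def series_lc_impedance(freq_hz, L, C):
        # Ideal series LC: the impedance passes through zero at
        # f0 = 1 / (2*pi*sqrt(L*C)), shorting that frequency to ground.
        w = 2 * np.pi * freq_hz
        return 1j * w * L + 1 / (1j * w * C)

    L, C = 6.0e-9, 0.6e-12                      # hypothetical values
    f0 = 1 / (2 * np.pi * np.sqrt(L * C))       # about 2.65 GHz
    print(f"resonance at {f0 / 1e9:.2f} GHz, |Z| there ~ {abs(series_lc_impedance(f0, L, C)):.4f} ohm")
    print(f"|Z| at 2.30 GHz ~ {abs(series_lc_impedance(2.3e9, L, C)):.1f} ohm")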
FIG.8is a graph that provides a comparison of bandpass characteristics of the filter circuit20according to Example 1 and bandpass characteristics of the filter circuit20according to Comparative Example 2. As illustrated inFIG.8, the insertion loss in the filter circuit20according to Example 1 is smaller than the insertion loss in the filter circuit20according to Comparative Example 2. As illustrated inFIGS.5and6, the additional circuit30B of the multiplexer2A according to Example 1 significantly reduces or prevents unwanted signals lying in the pass band of the filter circuit20and transmitted between the input/output terminal110and the common terminal100. More specifically, the additional circuit30B generates signals that are substantially in antiphase to unwanted signals lying in the pass band of the filter circuit20and transmitted between the input/output terminal110and the common terminal100, and the generated signals in turn cancel the unwanted signals. Similarly, the additional circuit500of the multiplexer according to Comparative Example 2 generates signals that are substantially in antiphase to unwanted signals lying in the pass band of the filter circuit20and transmitted between the input/output terminal110and the common terminal100, and the generated signals in turn cancel the unwanted signals. However, the multiplexer according to Comparative Example 2 has the following drawback: radio-frequency signals at or near the resonant frequency of the LC series resonant circuit71included as a parallel-arm circuit flow into the ground through the LC series resonant circuit71, and this may conceivably induce some of the radio-frequency signals in the pass band of the filter circuit20to flow into the parallel-arm path. The additional circuit30B of the multiplexer2A according to Example 1 does not include an LC series resonant circuit on the parallel-arm path. Radio-frequency signals, which would otherwise suffer losses due to the resonance point of an LC series resonant circuit, may thus be transmitted with significantly reduced transmission loss. When the parallel-arm circuit includes an inductor, unwanted signals may be significantly reduced or prevented over a wide frequency band that is relatively far apart from the pass band of the filter circuit10or20. The following describes how unwanted signals may be significantly reduced or prevented by the action of the additional circuit30. FIGS.9A and9Billustrate circuitry of a multiplexer2B according to Example 2.FIG.9Aillustrates circuitry of the multiplexer2B according to Example 2.FIG.9Billustrates circuitry of the additional circuit30A according to Example 2. The multiplexer2B according to Example 2 is provided by replacing the additional circuit30B of the multiplexer2A according to Example 1 with the additional circuit30A illustrated inFIG.2A. As illustrated inFIG.9B, the additional circuit30A of the multiplexer2B according to Example 2 includes the inductors31and32and the capacitor41. The inductors31and32are in series with the series-arm path providing a connection between the nodes133and132. The capacitor41is in series with the parallel-arm path providing a connection between the connection node n1and the ground.
FIGS.10A to10Dinclude graphs that provide comparisons of bandpass characteristics, isolation characteristics, and phase characteristics of the multiplexer according to Example 2 and those of the multiplexer according to Comparative Example 1.FIG.10Aillustrates bandpass characteristics of (the filter circuits10and20of) the multiplexer according to Example 2 and those of the multiplexer according to Comparative Example 1. With regard to the bandpass characteristics of the filter circuit10(see I inFIG.10A), the amount of attenuation in the attenuation band corresponding to the pass band of the filter circuit20is greater in Example 2 than in Comparative Example 1. FIG.10Billustrates the isolation characteristics (on a route passing through the input/output terminal110, the common terminal100, and the input/output terminal120) of the multiplexer according to Example 2 and those of the multiplexer according to Comparative Example 1. With regard to the isolation characteristics (see IV inFIG.10B), the insertion loss in the pass band of the filter circuit20is greater in Example 2 than in Comparative Example 1. That is, the multiplexer according to Example 2 has significantly improved isolation characteristics. FIG.10Dillustrates the phase characteristics of (the filter circuits10and20of) the multiplexer according to Comparative Example 1. A comparison of the phase characteristics of the filter circuit10(see I inFIG.10D) and the phase characteristics of the filter circuit20(see II inFIG.10D) has revealed that the phase of the filter circuit10and the phase of the filter circuit20coincide with each other in the pass band of the filter circuit20. This may be the reason why the isolation characteristics in Comparative Example 1 degrade as illustrated inFIG.10B. FIG.10Cillustrates the phase characteristics of (the filter circuits10and20and the additional circuit30A of) the multiplexer2B according to Example 2. A comparison of the phase characteristics of the filter circuit10(see I inFIG.10C) and the phase characteristics of the filter circuit20and the additional circuit30A (see II+III inFIG.10C) has revealed that the phase of the filter circuit10and the phase of the filter circuit20and the additional circuit30A do not coincide with each other in the pass band of the filter circuit20. Accordingly, Example 2 provides significantly improved isolation characteristics as illustrated inFIG.10B. The additional circuit30A of the multiplexer2B according to Example 2 generates signals that are out of phase with unwanted signals as mentioned above. Consequently, unwanted signals transmitted between the input/output terminals110and120may be canceled by the signals generated by the additional circuit30A. That is, unwanted signals transmitted between the input/output terminals110and120may be attenuated by the additional circuit30A. The additional circuit30A does not include an LC series resonant circuit on the parallel-arm path. Radio-frequency signals, which would otherwise suffer losses due to the resonance point of an LC series resonant circuit, may thus be transmitted with significantly reduced transmission loss. When the parallel-arm circuit includes an inductor, unwanted signals may be significantly reduced or prevented over a wide frequency band that is relatively far apart from the pass band of the filter circuit10or20. The following describes mounting of the multiplexer2B according to Example 2. 
The filter circuits10and20and the additional circuit30A of the multiplexer2B may, for example, be surface-mounted on a mounting substrate. Examples of the mounting substrate include: a low-temperature co-fired ceramic (LTCC) substrate or a high-temperature co-fired ceramic (HTCC) substrate including a plurality of dielectric layers stacked on one another; a substrate with embedded components; a substrate provided with a redistribution layer (RDL); and a printed circuit board. The common terminal100and the input/output terminals110and120may be provided on the mounting substrate. The surface of the mounting substrate, the filter circuits10and20, and the additional circuit30A may be overlaid with resin. The filter circuits10and20are, for example, surface-mount components. The filter circuits10and20may be acoustic wave filters using surface acoustic waves (SAWs), acoustic wave filters using bulk acoustic waves (BAWs), LC resonant filters, or dielectric filters but are not limited thereto. The inductors31and32and the capacitor41of the additional circuit30A are, for example, surface-mount components in a chip. The direction of the magnetic flux of the inductor31preferably coincides with the direction of the magnetic flux of the inductor32, for example. Accordingly, the mutual inductance between the inductors31and32may be used by the additional circuit30A, and thus the inductors31and32may be small in size. Alternatively, the additional circuit30A may be a dielectric filter that equivalently implements the circuitry of each of the inductors31and32and the capacitor41. In a case where the mounting substrate is a multilayer substrate, the inductors31and32may be planar coil patterns in the multilayer substrate, and the capacitor41may be a planar electrode pattern in the multilayer substrate. The following describes a multiplexer including three or more filter circuits connected to a common terminal.
That is, unwanted signals transmitted between the input/output terminals110and120may be significantly reduced or prevented by the additional circuit30. The additional circuit30does not include an LC series resonant circuit on the parallel-arm path. Radio-frequency signals, which would otherwise suffer losses due to the resonance point of an LC series resonant circuit, may thus be transmitted with significantly reduced transmission loss. When the parallel-arm circuit includes an inductor and/or a capacitor, unwanted signals may be significantly reduced or prevented over a wide frequency band that is relatively far apart from the pass band of the filter circuit10or20. The multiplexer3according to the present preferred embodiment preferably includes at least three filter circuits connected to the common terminal100, for example. The additional circuit30may be connected between the input/output terminal120and a node on a path connecting the common terminal100to the filter circuit10or may be connected between the input/output terminal110and a node on a path connecting the common terminal100to the filter circuit20. FIG.11Billustrates circuitry of a multiplexer4according to a modification of Preferred Embodiment 3. The multiplexer4includes the filter circuits10,20,40, and50, the additional circuit30, an additional circuit60, the common terminal100, and the input/output terminals110,120,140, and150. The filter circuits10,20,40, and50are all connected to the common terminal100. The multiplexer4according to this modification differs from the multiplexer3according to Preferred Embodiment 3 in that the multiplexer4includes the additional circuit60. Circuitry common to the multiplexer4according to this modification and the multiplexer3according to Preferred Embodiment 3 will be omitted from the following description, which will be provided while focusing on distinctive circuitry in the present preferred embodiment. The additional circuit60is connected between a node161and a node162. The node161is located on a path connecting the filter circuit40to the input/output terminal140. The node162is located on a path connecting the filter circuit50to the input/output terminal150. Alternatively, the additional circuit60may be connected between the node162and a node on a path connecting the common terminal100to the filter circuit40or may be connected between the node161and a node on a path connecting the common terminal100to the filter circuit50. The additional circuit60in this modification includes at least one series-arm circuit and at least one parallel-arm circuit. The at least one series-arm circuit is on a series-arm path providing a connection between the nodes161and162. The at least one parallel-arm circuit is on a parallel-arm path providing a connection between the series-arm path and the ground. The parallel-arm circuit includes (1) only an inductor in series with the parallel-arm path, (2) only a capacitor in series with the parallel-arm path, or (3) only an LC parallel resonant circuit being in series with the parallel-arm path and including an inductor and a capacitor connected in parallel. Examples of the specific circuitry of the additional circuit60are identical or similar to the additional circuits30A to30F respectively illustrated inFIGS.2A to2CandFIGS.3A to3C. According to the circuitry described above, the additional circuit30is able to generate signals that cancel unwanted signals transmitted between the input/output terminals110and120. 
That is, unwanted signals transmitted between the input/output terminals110and120may be significantly reduced or prevented by the additional circuit30. According to the circuitry described above, the additional circuit60is able to generate signals that cancel unwanted signals transmitted between the input/output terminals140and150. That is, unwanted signals transmitted between the input/output terminals140and150may be significantly reduced or prevented by the additional circuit60. Each of the additional circuits30and60does not include an LC series resonant circuit on the parallel-arm path. Radio-frequency signals, which would otherwise suffer losses due to the resonance point of an LC series resonant circuit, may thus be transmitted with significantly reduced transmission loss. When at least one of the parallel-arm circuit of the additional circuit30and the parallel-arm circuit of the additional circuit60includes an inductor and/or a capacitor, unwanted signals may be significantly reduced or prevented over a wide frequency band that is relatively far apart from the pass band of the filter circuit10,20,40, or50. The multiplexer4according to this modification may include three or more additional circuits. That is, each of the additional circuits may be connected between any two of the input/output terminals110,120,140, and150. The multiplexer1according to Preferred Embodiment 1 includes the common terminal100, the input/output terminals110and120, the filter circuits10and20, and the additional circuit30. Radio-frequency signals are input or output through the common terminal100, the input/output terminal110, or the input/output terminal120. The filter circuit10includes the first terminal connected to the common terminal100and the second terminal connected to the input/output terminal110. The filter circuit10is a filter including a pass band that is the first frequency band. The filter circuit20includes the third terminal connected to the common terminal100and the fourth terminal connected to the input/output terminal120. The filter circuit20is a filter including a pass band that is the second frequency band different from the first frequency band. The additional circuit30is connected between the fourth terminal and one of the first and second terminals. The additional circuit30includes at least one series-arm circuit and at least one parallel-arm circuit. The at least one series-arm circuit is on a series-arm path providing a connection between the fourth terminal and one of the first and the second terminals. The at least one parallel-arm circuit is on a parallel-arm path providing a connection between the series-arm circuit and ground. The at least one parallel-arm circuit includes (1) only an inductor in series with the parallel-arm path, (2) only a capacitor in series with the parallel-arm path, or (3) only an LC parallel resonant circuit being in series with the parallel-arm path and including an inductor and a capacitor connected in parallel. The circuit constant of the inductor(s) and the capacitor(s) of the additional circuit30may be changed to adjust the phase and the amplitude width of signals passing through the additional circuit30. According to the circuitry described above, the additional circuit30is able to generate signals that cancel unwanted signals transmitted between the input/output terminals110and120. That is, unwanted signals transmitted between the input/output terminals110and120may be attenuated by the additional circuit30. 
The additional circuit30does not include an LC series resonant circuit on the parallel-arm path. Radio-frequency signals, which would otherwise suffer losses due to the resonance point of an LC series resonant circuit, may thus be transmitted with significantly reduced transmission loss. When the at least one parallel-arm circuit includes an inductor and/or a capacitor, unwanted signals may be significantly reduced or prevented over a wide frequency band that is relatively far apart from the pass band of the filter circuit10or20. The additional circuit30A may be provided as follows. The additional circuit30A includes the series-arm circuits51and52and the parallel-arm circuit53. The series-arm circuits51and52are on the series-arm path. The parallel-arm circuit53is on the parallel-arm path providing a connection between the ground and the connection node n1at which the series-arm circuits51and52are connected to each other. The series-arm circuits51and52include the inductors31and32, respectively. The inductors31and32are in series with the series-arm path. The parallel-arm circuit53includes only the capacitor41in series with the parallel-arm path. The additional circuit30B may be provided as follows. The additional circuit30B includes the series-arm circuits51and52and the parallel-arm circuit53. The series-arm circuits51and52are on the series-arm path. The parallel-arm circuit53is on the parallel-arm path providing a connection between the ground and the connection node n1at which the series-arm circuits51and52are connected to each other. The series-arm circuits51and52include the capacitors42and43, respectively. The capacitors42and43are in series with the series-arm path. The parallel-arm circuit53includes only the inductor33in series with the parallel-arm path. The additional circuit30C may be provided as follows. The additional circuit30C includes the series-arm circuits51and52and the parallel-arm circuit53. The series-arm circuits51and52are on the series-arm path. The parallel-arm circuit53is on the parallel-arm path providing a connection between the ground and the connection node n1at which the series-arm circuits51and52are connected to each other. The series-arm circuits51and52each include an LC parallel resonant circuit being in series with the series-arm path and including an inductor and a capacitor connected in parallel. The parallel-arm circuit53includes only an LC parallel resonant circuit being in series with the parallel-arm path and including the inductor36and the capacitor46connected in parallel. Each of the additional circuits30A to30C does not include an LC series resonant circuit on the parallel-arm path. Radio-frequency signals, which would otherwise suffer losses due to the resonance point of an LC series resonant circuit, may thus be transmitted with significantly reduced transmission loss. When the parallel-arm circuit53includes an inductor and a capacitor, unwanted signals may be significantly reduced or prevented over a wide frequency band that is relatively far apart from the pass band of the filter circuit10or20. The additional circuit30D may be provided as follows. The additional circuit30D includes the series-arm circuit54and the parallel-arm circuits55and56. The series-arm circuit54is in series with the series-arm path. The parallel-arm circuit55is on the parallel-arm path providing a connection between the ground and one of the first and second terminals.
The parallel-arm circuit56is on the parallel-arm path providing a connection between the fourth terminal and the ground. The series-arm circuit54includes the inductor31in series with the series-arm path. The parallel-arm circuit55includes only the capacitor41in series with the first parallel-arm path. The parallel-arm circuit56includes only the capacitor42in series with the second parallel-arm path. The additional circuit30E may be provided as follows. The additional circuit30E includes the series-arm circuit54and the parallel-arm circuits55and56. The series-arm circuit54is in series with the series-arm path. The parallel-arm circuit55is on the parallel-arm path providing a connection between the ground and one of the first and second terminals. The parallel-arm circuit56is on the second parallel-arm path providing a connection between the fourth terminal and the ground. The series-arm circuit54includes the capacitor43in series with the series-arm path. The parallel-arm circuit55includes only the inductor32in series with the first parallel-arm path. The parallel-arm circuit56includes only the inductor33in series with the second parallel-arm path. The additional circuit30F may be provided as follows. The additional circuit30F includes the series-arm circuit54and the parallel-arm circuits55and56. The series-arm circuit54is in series with the series-arm path. The parallel-arm circuit55is on the parallel-arm path providing a connection between the ground and one of the first and second terminals. The parallel-arm circuit56is on the second parallel-arm path providing a connection between the ground and the fourth terminal. The series-arm circuit54includes an LC parallel resonant circuit being in series with the series-arm path and including the inductor34and the capacitor44connected in parallel. The parallel-arm circuit55includes only an LC parallel resonant circuit being in series with the first parallel-arm path and including the inductor35and the capacitor45connected in parallel. The parallel-arm circuit56includes only an LC parallel resonant circuit being in series with the second parallel-arm path and including the inductor36and the capacitor46connected in parallel. Each of the additional circuits30D to30F does not include an LC series resonant circuit on the parallel-arm path. Radio-frequency signals, which would otherwise suffer losses due to the resonance point of an LC series resonant circuit, may thus be transmitted with significantly reduced transmission loss. When the parallel-arm circuits55and56each include an inductor and a capacitor, unwanted signals may be significantly reduced or prevented over a wide frequency band that is relatively far apart from the pass band of the filter circuit10or20. In the multiplexer1, a radio-frequency signal in the first frequency band and a radio-frequency signal in the second frequency band may be simultaneously or substantially simultaneously transmitted through the filter circuit10and the filter circuit20, respectively. Accordingly, unwanted signals transmitted between the input/output terminals110and120may be significantly reduced or prevented by the additional circuit30. With the filter circuits10and20being connected to the common terminal100, entry of unwanted signals from one of the two filter circuits into the other filter circuit may be significantly reduced or prevented accordingly.
The frequency spacing between the first and second frequency bands may be greater than the smaller of the bandwidths of the first and second frequency bands. A case where the frequency spacing between the pass bands of the filter circuits10and20is greater than the band width of at least one of the filter circuits10and20may be addressed by the parallel-arm circuit of the additional circuit including an inductor and/or a capacitor, without an acoustic wave resonator, for example. Accordingly, unwanted signals may be significantly reduced or prevented over a wide frequency band that is relatively far apart from the pass band of the filter circuit10or20. The filter circuit10may be a transmitting filter that passes transmission signals from the input/output terminal110to the common terminal100. The filter circuit20may be a receiving filter that passes reception signals from the common terminal100to the input/output terminal120. The circuitry described above may generate unwanted signals, for example, harmonic waves of high-power transmission signals, on a route passing through the input/output terminal110and the common terminal100. The unwanted signals may flow into a reception path between the common terminal100and the input/output terminal120, and as a result, the reception sensitivity associated with radio-frequency signals in the second frequency band may degrade. As a workaround, the additional circuit30is able to significantly reduce or prevent unwanted signals, for example, harmonic waves transmitted between the input/output terminals110and120. Accordingly, the possibility that the reception sensitivity in the second frequency band will degrade is able to be significantly reduced or prevented. The additional circuit30generates signals that cancel unwanted signals lying in a predetermined frequency band and transmitted between the input/output terminals110and120. Consequently, unwanted signals transmitted between the input/output terminals110and120may be attenuated by the additional circuit30. Preferred embodiments, examples, and modifications thereof have been described so far as examples of the multiplexer according to the preferred embodiments of the present invention. However, the preferred embodiments of the present invention are not limited to the preferred embodiments, examples, and modifications. The present invention also includes other preferred embodiments implemented by varying combinations of components of the aforementioned preferred embodiments, examples, and modifications; other modifications provided by various alterations to the preferred embodiments above that may be conceived by those skilled in the art within a range not departing from the spirit of the present invention; and various types of apparatuses including the multiplexer according to the preferred embodiments of the present invention. For example, inductors and capacitors may be connected between the individual components of the multiplexer. An example of an inductor is a wire inductor including a wire that provides a connection between the individual components. The preferred embodiments of the present invention, or more specifically, a high-isolation multiplexer compliant with standards that are able to support multiple frequency bands, have wide applicability to communication apparatuses, for example, mobile phones.
While preferred embodiments of the present invention have been described above, it is to be understood that variations and modifications will be apparent to those skilled in the art without departing from the scope and spirit of the present invention. The scope of the present invention, therefore, is to be determined solely by the following claims.
11863152 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The present invention will be clarified by describing specific preferred embodiments of the present invention with reference to the drawings. FIG.1Ais a schematic elevational cross-sectional view of a surface acoustic wave device according to a first preferred embodiment of the present invention. A surface acoustic wave device1includes a supporting substrate2. A high-acoustic-velocity film3having a relatively high acoustic velocity is stacked on the supporting substrate2. A low-acoustic-velocity film4having a relatively low acoustic velocity is stacked on the high-acoustic-velocity film3. A piezoelectric film5is stacked on the low-acoustic-velocity film4. An IDT electrode6is stacked on the upper surface of the piezoelectric film5. Note that the IDT electrode6may be disposed on the lower surface of the piezoelectric film5. The supporting substrate2may be composed of an appropriate material as long as it can support the laminated structure including the high-acoustic-velocity film3, the low-acoustic-velocity film4, the piezoelectric film5, and the IDT electrode6. Examples of such a material that can be used include piezoelectrics, such as sapphire, lithium tantalate, lithium niobate, and quartz; various ceramics, such as alumina, magnesia, silicon nitride, aluminum nitride, silicon carbide, zirconia, cordierite, mullite, steatite, and forsterite; dielectrics, such as glass; semiconductors, such as silicon and gallium nitride; and resin substrates. In this preferred embodiment, the supporting substrate2is preferably composed of glass. The high-acoustic-velocity film3functions in such a manner that a surface acoustic wave is confined to a portion in which the piezoelectric film5and the low-acoustic-velocity film4are stacked and the surface acoustic wave does not leak into the structure below the high-acoustic-velocity film3. In this preferred embodiment, the high-acoustic-velocity film3is preferably composed of aluminum nitride. As the material for high-acoustic-velocity film3, as long as it is capable of confining the elastic wave, any of various high-acoustic-velocity materials can be used. Examples thereof include aluminum nitride, aluminum oxide, silicon carbide, silicon nitride, silicon oxynitride, a DLC film or diamond, media mainly composed of these materials, and media mainly composed of mixtures of these materials. In order to confine the surface acoustic wave to the portion in which the piezoelectric film5and the low-acoustic-velocity film4are stacked, it is preferable that the thickness of the high-acoustic-velocity film3be as large as possible. The thickness of the high-acoustic-velocity film3is preferably about 0.5 times or more, more preferably about 1.5 times or more, than the wavelength λ of the surface acoustic wave. In this description, the “high-acoustic-velocity film” is defined as a film in which the acoustic velocity of a bulk wave propagating therein is higher than the acoustic velocity of an elastic wave, such as a surface acoustic wave or a boundary acoustic wave, propagating in or along the piezoelectric film5. Furthermore, the “low-acoustic-velocity film” is defined as a film in which the acoustic velocity of a bulk wave propagating therein is lower than the acoustic velocity of a bulk wave propagating in the piezoelectric film5. Furthermore, elastic waves with various modes having different acoustic velocities are excited by an IDT electrode having a certain structure. 
The “elastic wave propagating in the piezoelectric film5” represents an elastic wave with a specific mode used for obtaining filter or resonator characteristics. The bulk wave mode that determines the acoustic velocity of the bulk wave is defined in accordance with the usage mode of the elastic wave propagating in the piezoelectric film5. In the case where the high-acoustic-velocity film3and the low-acoustic-velocity film4are isotropic with respect to the propagation direction of the bulk wave, correspondences are as shown in Table 1 below. That is, for the dominant mode of the elastic wave shown in the left column of Table 1, the high acoustic velocity and the low acoustic velocity are determined according to the mode of the bulk wave shown in the right column of Table 1. The P wave is a longitudinal wave, and the S wave is a transversal wave. In Table 1, U1 represents an elastic wave containing as a major component a P wave, U2 represents an elastic wave containing as a major component an SH wave, and U3 represents an elastic wave containing as a major component an SV wave.

TABLE 1
Correspondence of the elastic wave mode of the piezoelectric film to the bulk wave mode of the dielectric film (in the case where the dielectric film is composed of an isotropic material)

Dominant mode of the elastic wave propagating in the piezoelectric film | Mode of the bulk wave propagating in the dielectric film
U1 | P wave
U2 | S wave
U3 + U1 | S wave

In the case where the low-acoustic-velocity film4and the high-acoustic-velocity film3are anisotropic with respect to the propagation of the bulk wave, bulk wave modes that determine the high acoustic velocity and the low acoustic velocity are shown in Table 2 below. In addition, in the bulk wave modes, the slower of the SH wave and the SV wave is referred to as a slow transversal wave, and the faster of the two is referred to as a fast transversal wave. Which of the two is the slow transversal wave depends on the anisotropy of the material. In LiTaO3or LiNbO3cut in the vicinity of rotated Y cut, in the bulk wave modes, the SV wave is the slow transversal wave, and the SH wave is the fast transversal wave.

TABLE 2
Correspondence of the elastic wave mode of the piezoelectric film to the bulk wave mode of the dielectric film (in the case where the dielectric film is composed of an anisotropic material)

Dominant mode of the elastic wave propagating in the piezoelectric film | Mode of the bulk wave propagating in the dielectric film
U1 | P wave
U2 | SH wave
U3 + U1 | SV wave

In this preferred embodiment, the low-acoustic-velocity film4is preferably composed of silicon oxide, and the thickness thereof preferably is about 0.35λ, where λ is the wavelength of an elastic wave determined by the electrode period of the IDT electrode. As the material constituting the low-acoustic-velocity film4, it is possible to use any appropriate material having a bulk wave acoustic velocity that is slower than the acoustic velocity of the bulk wave propagating in the piezoelectric film5. Examples of such a material that can be used include silicon oxide, glass, silicon oxynitride, tantalum oxide, and media mainly composed of these materials, such as compounds obtained by adding fluorine, carbon, or boron to silicon oxide. The low-acoustic-velocity film and the high-acoustic-velocity film are each composed of an appropriate dielectric material capable of achieving a high acoustic velocity or a low acoustic velocity that is determined as described above.
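(Background relations, not stated in the description: for an isotropic dielectric film such as those covered by Table 1, the bulk wave acoustic velocities referred to above follow from the film's density ρ and elastic constants C11 and C44 by the standard expressions below.)

$$
V_{P}=\sqrt{\frac{C_{11}}{\rho}},\qquad V_{S}=\sqrt{\frac{C_{44}}{\rho}} .
$$

These expressions are consistent with the values tabulated later for the sixth preferred embodiment; for example, the silicon-oxide-equivalent level there has C44 = 3.12×10^10 N/m2 and ρ = 2.21×10^3 kg/m3, giving V_S = √(3.12×10^10 / 2.21×10^3) ≈ 3,757 m/sec.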
In this preferred embodiment, the piezoelectric film5is preferably composed of 38.5° Y cut LiTaO3, i.e., LiTaO3with Euler angles of (0°, 128.5°, 0°), and the thickness thereof preferably is about 0.25λ, where λ is the wavelength of a surface acoustic wave determined by the electrode period of the IDT electrode6. However, the piezoelectric film5may be composed of LiTaO3with other cut angles, or a piezoelectric single crystal other than LiTaO3. In this preferred embodiment, the IDT electrode6is preferably composed of Al. However, the IDT electrode6may be made of any appropriate metal material, such as Al, Cu, Pt, Au, Ag, Ti, Ni, Cr, Mo, W, or an alloy mainly composed of any one of these metals. Furthermore, the IDT electrode6may have a structure in which a plurality of metal films composed of these metals or alloys are stacked. Although schematically shown inFIG.1A, an electrode structure shown inFIG.1Bis disposed on the piezoelectric film5. That is, the IDT electrode6and reflectors7and8arranged on both sides in the surface acoustic wave electrode direction of the IDT electrode6are disposed. A one-port-type surface acoustic wave resonator is thus constituted. However, the electrode structure including the IDT electrode in the present invention is not particularly limited, and a modification is possible such that an appropriate resonator, a ladder filter in which resonators are combined, a longitudinally coupled filter, a lattice-type filter, or a transversal type filter is provided. The surface acoustic wave device1according to the present preferred embodiment preferably includes the high-acoustic-velocity film3, the low-acoustic-velocity film4, and the piezoelectric film5stacked on each other. Thereby, the Q factor can be increased. The reason for this is as follows. In the related art, it is known that, by disposing a high-acoustic-velocity film on the lower surface of a piezoelectric substrate, some portion of a surface acoustic wave propagates while distributing energy into the high-acoustic-velocity film, and therefore, the acoustic velocity of the surface acoustic wave can be increased. In contrast, in various preferred embodiments of the present invention of the present application, since the low-acoustic-velocity film4is disposed between the high-acoustic-velocity film3and the piezoelectric film5, the acoustic velocity of an elastic wave is decreased. Energy of an elastic wave essentially concentrates on a low-acoustic-velocity medium. Consequently, it is possible to enhance an effect of confining elastic wave energy to the piezoelectric film5and the IDT in which the elastic wave is excited. Therefore, in accordance with this preferred embodiment, the loss can be reduced and the Q factor can be enhanced compared with the case where the low-acoustic-velocity film4is not provided. Furthermore, the high-acoustic-velocity film3functions such that an elastic wave is confined to a portion in which the piezoelectric film5and the low-acoustic-velocity film4are stacked and the elastic wave does not leak into the structure below the high-acoustic-velocity film3. That is, in the structure of a preferred embodiment of the present invention, energy of an elastic wave of a specific mode used to obtain filter or resonator characteristics is distributed into the entirety of the piezoelectric film5and the low-acoustic-velocity film4and partially distributed into the low-acoustic-velocity film side of the high-acoustic-velocity film3, but is not distributed into the supporting substrate2. 
The mechanism of confining the elastic wave by the high-acoustic-velocity film is similar to that in the case of a Love wave-type surface acoustic wave, which is a non-leaky SH wave, and for example, is described in Kenya Hashimoto; “Introduction to simulation technologies for surface acoustic wave devices”; Realize; pp. 90-91. The mechanism is different from the confinement mechanism in which a Bragg reflector including an acoustic multilayer film is used. In addition, in this preferred embodiment, since the low-acoustic-velocity film4is preferably composed of silicon oxide, temperature characteristics can be improved. The elastic constant of LiTaO3has a negative temperature characteristic, and silicon oxide has a positive temperature characteristic. Consequently, in the surface acoustic wave device1, the absolute value of TCF can be decreased. In addition, the specific acoustic impedance of silicon oxide is lower than that of LiTaO3. Consequently, an increase in the electromechanical coupling coefficient, i.e., an enhancement in the band width ratio and an improvement in frequency temperature characteristics can be simultaneously achieved. Furthermore, by adjusting the thickness of the piezoelectric film5and the thickness of each of the high-acoustic-velocity film3and the low-acoustic-velocity film4, as will be described later, the electromechanical coupling coefficient can be adjusted in a wide range. Consequently, freedom of design can be increased. Specific experimental examples of the surface acoustic wave device according to the preferred embodiment described above will be described below to demonstrate the operation and advantageous effects of the preferred embodiment. A surface acoustic wave device1according to the first preferred embodiment and surface acoustic wave devices according to first and second comparative examples described below were fabricated. First preferred embodiment: Al electrode (thickness: 0.08λ)/38.5° Y cut LiTaO3thin film (thickness: 0.25λ)/silicon oxide film (thickness: 0.35λ)/aluminum nitride film (1.5λ)/supporting substrate composed of glass stacked in that order from the top. First comparative example: electrode composed of Al (thickness: 0.08λ)/38.5° Y cut LiTaO3substrate stacked in that order from the top. In the first comparative example, the electrode composed of Al was formed on the LiTaO3substrate with a thickness of 350 μm. Second comparative example: Al electrode (thickness: 0.08λ)/38.5° Y cut LiTaO3film with a thickness of 0.5λ/aluminum nitride film (thickness: 1.5λ)/supporting substrate composed of glass stacked in that order from the top. In each of the surface acoustic wave devices of the first preferred embodiment and the first and second comparative examples, the electrode had a one-port-type surface acoustic wave resonator structure shown inFIG.1B. The wavelength λ determined by the electrode period of the IDT electrode was 2 μm. The dominant mode of the surface acoustic wave propagating in the 38.5° Y cut LiTaO3is the U2 mode, and its acoustic velocity is about 3,950 m/sec. Furthermore, the acoustic velocity of the bulk wave propagating in a rotated Y cut LiTaO3is constant regardless of the rotation angle (Y cut). The acoustic velocity of the SV bulk wave (slow transversal wave) is 3,367 m/sec, and the acoustic velocity of the SH bulk wave (fast transversal wave) is 4,212 m/sec. 
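(Illustrative arithmetic only: converting the normalized thicknesses of the first preferred embodiment into absolute values, using the wavelength λ = 2 μm determined above by the electrode period of the IDT electrode.)

$$
0.08\lambda = 160\ \mathrm{nm},\qquad 0.25\lambda = 500\ \mathrm{nm},\qquad 0.35\lambda = 700\ \mathrm{nm},\qquad 1.5\lambda = 3\ \mu\mathrm{m}.
$$

These correspond, respectively, to the Al electrode, the LiTaO3thin film, the silicon oxide film, and the aluminum nitride film of the first preferred embodiment.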
Furthermore, in each of the first preferred embodiment and the second comparative example, the aluminum nitride film is an isotropic film, and the acoustic velocity of the bulk wave (S wave) in the aluminum nitride film is 6,000 m/sec. Furthermore, the silicon oxide film as the low-acoustic-velocity film4formed in the first preferred embodiment is an isotropic film, and the acoustic velocity of the bulk wave (S wave) in silicon oxide is 3,750 m/sec. Accordingly, since the dominant mode of the surface acoustic wave propagating in the piezoelectric film is the U2 mode, the following conditions are satisfied.
(1) Acoustic velocity of the bulk wave (S wave) in the high-acoustic-velocity film: 6,000 m/sec > Acoustic velocity of the dominant mode (U2) of the surface acoustic wave: 3,950 m/sec.
(2) Acoustic velocity of the bulk wave (S wave) in the low-acoustic-velocity film: 3,750 m/sec < Acoustic velocity of the bulk wave (SH) propagating in the piezoelectric film: 4,212 m/sec.
FIG.2shows the impedance-frequency characteristics of the surface acoustic wave devices of the first preferred embodiment and the first and second comparative examples, andFIG.3shows an impedance Smith chart. Furthermore, as shown in Table 3 below, in the surface acoustic wave devices of the first preferred embodiment and the first and second comparative examples, the Q factor at the resonant frequency, the Q factor at the antiresonant frequency, the band width ratio, and the TCF at the resonant frequency were obtained by actual measurement. The results are shown in Table 3 below.

TABLE 3

 | Q (Resonance) | Q (Antiresonance) | Band width ratio [%] | TCF [ppm/° C.] (Resonance)
First comparative example | 818 | 527 | 3.2 | −45
Second comparative example | 777 | 1285 | 4.1 | −45
First embodiment | 1026 | 2080 | 4.4 | −25

InFIGS.2and3, the solid line represents the results of the first preferred embodiment, the dashed line represents the results of the second comparative example, and the dotted-chain line represents the results of the first comparative example. As is clear fromFIGS.2and3, in the second comparative example and the first preferred embodiment, the top-to-valley ratio is higher than that in the first comparative example. The top-to-valley ratio is a ratio of the impedance at an antiresonant frequency to the impedance at a resonant frequency. As this value increases, it becomes possible to configure a filter having a higher Q factor and lower insertion loss. It is evident that, in particular, in the first preferred embodiment, the top-to-valley ratio is much higher than that in the second comparative example. Furthermore, it is also evident that according to the first preferred embodiment, the frequency difference between the resonant frequency and the antiresonant frequency, i.e., the band width ratio, can be increased compared with the second comparative example. Specifically, as is clear from Table 3, according to the first preferred embodiment, the Q factor at the resonant frequency can be increased, and in particular, the Q factor at the antiresonant frequency can be greatly increased compared with the first and second comparative examples. That is, since it is possible to configure a one-port-type surface acoustic wave resonator having a high Q factor, a filter having low insertion loss can be configured using the surface acoustic wave device1. Furthermore, the band width ratio is 3.2% in the first comparative example and 4.1% in the second comparative example. In contrast, the band width ratio increases to 4.4% in the first preferred embodiment.
In addition, as is clear from Table 3, according to the first preferred embodiment, since the silicon oxide film is disposed, the absolute value of TCF can be greatly decreased compared with the first and second comparative examples. FIGS.5and6show the results of FEM simulation, in which the dotted-chain line represents the first preferred embodiment, the dashed line represents the first comparative example, and the solid line represents the second comparative example. In the FEM simulation, a one-port resonator is assumed, in which duty=0.5, the intersecting width is 20λ, and the number of pairs is 100. As in the experimental results described above, in the FEM simulation results, as is clear fromFIG.6, the Q factor can also be increased compared with the first and second comparative examples. Consequently, as is clear from the experimental results and the FEM simulation results regarding the first preferred embodiment and the first and second comparative examples, it has been confirmed that, by disposing the low-acoustic-velocity film4composed of silicon oxide between the high-acoustic-velocity film3composed of aluminum nitride and the piezoelectric film5composed of LiTaO3, the Q factor can be enhanced. The reason for the fact that the Q factor can be enhanced is believed to be that energy of surface acoustic waves can be effectively confined to the piezoelectric film5, the low-acoustic-velocity film4, and the high-acoustic-velocity film3by the formation of the high-acoustic-velocity film3, and that the effect of suppressing leakage of energy of surface acoustic waves outside the IDT electrode can be enhanced by the formation of the low-acoustic-velocity film4. Consequently, since the effect is obtained by disposing the low-acoustic-velocity film4between the piezoelectric film5and the high-acoustic-velocity film3as described above, the material constituting the piezoelectric film is not limited to the 38.5° Y cut LiTaO3described above. The same effect can be obtained in the case where LiTaO3with other cut angles is used. Furthermore, the same effect can be obtained in the case where a piezoelectric single crystal such as LiNbO3other than LiTaO3, a piezoelectric thin film such as ZnO or AlN, or a piezoelectric ceramic such as PZT is used. Furthermore, the high-acoustic-velocity film3has a function of confining the majority of energy of surface acoustic waves to a portion in which the piezoelectric film5and the low-acoustic-velocity film4are stacked. Consequently, the aluminum nitride film may be a c-axis-oriented, anisotropic film. Furthermore, the material for the high-acoustic-velocity film3is not limited to the aluminum nitride film, and it is expected that the same effect can be obtained in the case where any of various materials that can constitute the high-acoustic-velocity film3described above is used. Furthermore, silicon oxide of the low-acoustic-velocity film is not particularly limited as long as the acoustic velocity of a bulk wave propagating therein is lower than the acoustic velocity of a bulk wave propagating in the piezoelectric film. Consequently, the material constituting the low-acoustic-velocity film4is not limited to silicon oxide. Therefore, any of the various materials described above as examples of a material that can constitute the low-acoustic-velocity film4can be used. 
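(Illustrative arithmetic only; the temperature excursion of ΔT = 40° C. below is an arbitrary example, not a value from the description.) Because TCF expresses the fractional frequency drift per degree, the Table 3 values imply

$$
\frac{\Delta f}{f}\approx \mathrm{TCF}\cdot\Delta T:\qquad
-45\ \mathrm{ppm/^{\circ}C}\times 40\ \mathrm{^{\circ}C}=-1800\ \mathrm{ppm},\qquad
-25\ \mathrm{ppm/^{\circ}C}\times 40\ \mathrm{^{\circ}C}=-1000\ \mathrm{ppm},
$$

so over the same temperature range the first preferred embodiment would drift by roughly half as much as either comparative example.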
Second Preferred Embodiment Characteristics of a surface acoustic wave device according to a second preferred embodiment having the structure described below were simulated by a finite element method. The electrode structure was the same as that shown inFIG.1B. An IDT electrode was an Al film with a thickness of 0.08λ. A piezoelectric film was composed of 38.5° Y cut LiTaO3film, and the thickness thereof was in a range of 0 to 3λ. A low-acoustic-velocity film was composed of silicon oxide, and the thickness thereof was 0 to 2λ. A high-acoustic-velocity film was composed of aluminum oxide, and the thickness thereof was 1.5λ. A supporting substrate was composed of alumina. The results are shown inFIGS.7to10. FIG.7is a graph showing the relationship between the LiTaO3film thickness, the acoustic velocity of the U2 mode which is the usage mode, and the normalized film thickness of the silicon oxide film. Furthermore,FIG.8is a graph showing the relationship between the LiTaO3film thickness, the electromechanical coupling coefficient k2of the U2 mode which is the usage mode, and the normalized film thickness of the silicon oxide film. As is clear fromFIG.7, by forming the silicon oxide film, the variations in acoustic velocity are small in the wide thickness range of 0.05, to 0.5λ of the piezoelectric film composed of LiTaO3, in comparison with the case where the normalized film thickness of the silicon oxide film is 0.0, i.e., no silicon oxide film is formed. Furthermore, as is clear fromFIG.8, when the silicon oxide film is formed, even in the case where the LiTaO3film thickness is small at 0.35λ or less, by controlling the silicon oxide film thickness, the electromechanical coupling coefficient k2can be increased to 0.08 or more, in comparison with the case where no silicon oxide film is formed. FIG.9is a graph showing the relationship between the LiTaO3film thickness, the temperature coefficient of frequency TCV, and the normalized film thickness of the silicon oxide film.FIG.10is a graph showing the relationship between the LiTaO3film thickness, the band width ratio, and the normalized film thickness of the silicon oxide film. Note that TCF=TCV−α, where α is the coefficient of linear expansion in the propagation direction. In the case of LiTaO3, α is about 16 ppm/° C. As is clear fromFIG.9, by forming the silicon oxide film, the absolute value of TCV can be further decreased in comparison with the case where no silicon oxide film is formed. In addition, as is clear fromFIG.10, even in the case where the thickness of the piezoelectric film composed of LiTaO3is small at about 0.35λ or less, by adjusting the silicon oxide film thickness, the band width ratio can be adjusted. Furthermore, when the thickness of the silicon oxide film is increased to more than about 2λ, stress is generated, resulting in problems, such as warpage of the surface acoustic wave device, which may cause handling difficulty. Consequently, the thickness of the silicon oxide film is preferably about 2λ or less. In the related art, it is known that, by using a laminated structure in which an IDT is disposed on LiTaO3and silicon oxide is further disposed on the IDT, the absolute value of TCF in the surface acoustic wave device can be decreased. 
However, as is clear fromFIG.11, when the absolute value of TCV is intended to be decreased, i.e., when the absolute value of TCF is intended to be decreased, it is not possible to simultaneously achieve an increase in the bandwidth ratio and a decrease in the absolute value of the TCF. In contrast, by using the structure of the present invention in which the high-acoustic-velocity film and the low-acoustic-velocity film are stacked, a decrease in the absolute value of TCF and an increase in the band width ratio can be achieved. This will be described with reference toFIGS.11and12. FIG.11is a graph showing the relationship between the band width ratio and the TCF in surface acoustic wave devices of third to fifth comparative examples described below as conventional surface acoustic wave devices. Third comparative example: laminated structure of electrode composed of Al/42° Y cut LiTaO3. SH wave was used. Fourth comparative example: laminated structure of silicon oxide film/electrode composed of Cu/38.5° Y cut LiTaO3substrate. SH wave was used. Fifth comparative example: laminated structure of silicon oxide film/electrode composed of Cu/128° Y cut LiNbO3substrate. SV wave was used. As is clear fromFIG.11, in any of the third comparative example to the fifth comparative example, as the band width ratio BW increases, the absolute value of TCF increases. FIG.12is a graph showing the relationship between the band width ratio BW (%) and the temperature coefficient of frequency TCV in the case where the normalized film thickness of LiTaO3was changed in the range of about 0.1λ to about 0.5λ in each of the thickness levels of the silicon oxide film of the second preferred embodiment. As is clear fromFIG.12, in this preferred embodiment, even in the case where the band width ratio BW is increased, the absolute value of TCV does not increase. That is, by adjusting the thickness of the silicon oxide film, the band width ratio can be increased, and the absolute value of the temperature coefficient of frequency TCV can be decreased. That is, by stacking the low-acoustic-velocity film4and the high-acoustic-velocity film3on the piezoelectric film composed of LiTaO3, and in particular, by forming a silicon oxide film as the low-acoustic-velocity film, it is possible to provide an elastic wave device having a wide band width ratio and good temperature characteristics. Preferably, the coefficient of linear expansion of the supporting substrate2is smaller than that of the piezoelectric film5. As a result, expansion due to heat generated in the piezoelectric film5is restrained by the supporting substrate2. Consequently, the frequency temperature characteristics of the elastic wave device can be further improved. FIGS.13and14are graphs showing changes in the acoustic velocity and changes in the band width ratio, respectively, with changes in the thickness of the piezoelectric film composed of LiTaO3in the structure of the second preferred embodiment. As is clear fromFIGS.13and14, when the LiTaO3thickness is about 1.5λ or more, the acoustic velocity and the band width ratio are nearly unchanged. The reason for this is that energy of surface acoustic waves is confined to the piezoelectric film and is not distributed into the low-acoustic-velocity film4and the high-acoustic-velocity film3. Consequently, the effects of the low-acoustic-velocity film4and the high-acoustic-velocity film3are not exhibited. Therefore, it is more preferable to set the thickness of the piezoelectric film to be about 1.5λ or less. 
Thereby, it is believed that energy of surface acoustic waves can be sufficiently distributed into the low-acoustic-velocity film4and the Q factor can be further enhanced. The results ofFIGS.7to14show that, by adjusting the thickness of the silicon oxide film and the thickness of the piezoelectric film composed of LiTaO3, the electromechanical coupling coefficient can be adjusted over a wide range. Furthermore, it is clear that when the thickness of the piezoelectric film composed of LiTaO3is in the range of about 0.05λ to about 0.5λ, the electromechanical coupling coefficient can be adjusted in a wider range. Consequently, the thickness of the piezoelectric film composed of LiTaO3is preferably in the range of about 0.05λ to about 0.5λ. Conventionally, it has been required to adjust cut angles of the piezoelectric used in order to adjust the electromechanical coupling coefficient. However, when the cut angles, i.e., Euler angles, are changed, other material characteristics, such as the acoustic velocity, temperature characteristics, and spurious characteristics, are also changed. Consequently, it has been difficult to simultaneously satisfy these characteristics, and optimization of design has been difficult. However, as is clear from the results of the second preferred embodiment described above, according to the present invention, even in the case where a piezoelectric single crystal with the same cut angles is used as the piezoelectric film, by adjusting the thickness of the silicon oxide film, i.e., the low-acoustic-velocity film, and the thickness of the piezoelectric film, the electromechanical coupling coefficient can be freely adjusted. Consequently, freedom of design can be greatly increased. Therefore, it is possible to simultaneously satisfy various characteristics, such as the acoustic velocity, the electromechanical coupling coefficient, frequency temperature characteristics, and spurious characteristics, and it is possible to easily provide a surface acoustic wave device having desired characteristics. Third Preferred Embodiment As a third preferred embodiment, surface acoustic wave devices the same as those of the first preferred embodiment were fabricated. The materials and thicknesses were as described below. A laminated structure included an Al film with a thickness of 0.08λ as an IDT electrode6/a LiTaO3film with a thickness of 0.25λ as a piezoelectric film5/a silicon oxide film with a thickness in the range of 0 to 2λ as a low-acoustic-velocity film4/a high-acoustic-velocity film. As the high-acoustic-velocity film, a silicon nitride film, an aluminum oxide film, or diamond was used. The thickness of the high-acoustic-velocity film3was 1.5λ. FIGS.15and16are graphs showing the relationship between the thickness of the silicon oxide film and the acoustic velocity and the relationship between the thickness of the silicon oxide film and the electromechanical coupling coefficient k2, respectively, in the third preferred embodiment. The acoustic velocity of the bulk wave (S wave) in the silicon nitride film is 6,000 m/sec, and the acoustic velocity of the bulk wave (S wave) in aluminum oxide is 6,000 m/sec. Furthermore, the acoustic velocity of the bulk wave (S wave) in diamond is 12,800 m/sec.
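(Consistency check, not additional disclosed data: the reference value of about 3,950 m/sec below is the U2-mode surface acoustic wave velocity quoted earlier for the 38.5° Y cut LiTaO3of the first preferred embodiment, whose stack the third preferred embodiment reuses.) All three candidate high-acoustic-velocity materials satisfy the high-acoustic-velocity condition, since the bulk wave (S wave) velocities quoted above exceed the velocity of the surface acoustic wave to be confined:

$$
6{,}000\ \mathrm{m/sec}\ (\text{silicon nitride}),\quad 6{,}000\ \mathrm{m/sec}\ (\text{aluminum oxide}),\quad 12{,}800\ \mathrm{m/sec}\ (\text{diamond})\;>\;\approx 3{,}950\ \mathrm{m/sec}.
$$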
As is clear fromFIGS.15and16, as long as the high-acoustic-velocity film3satisfies the conditions for the high-acoustic-velocity film3described earlier, even if the material for the high-acoustic-velocity film3and the thickness of the silicon oxide film are changed, the electromechanical coupling coefficient and the acoustic velocity are nearly unchanged. In particular, if the thickness of the silicon oxide film is about 0.1λ or more, the electromechanical coupling coefficient is nearly unchanged in the silicon oxide film thickness range of about 0.1λ to about 0.5λ regardless of the material for the high-acoustic-velocity film. Furthermore, as is clear fromFIG.15, in the silicon oxide film thickness range of about 0.3λ to about 2λ, the acoustic velocity is nearly unchanged regardless of the material for the high-acoustic-velocity film. Consequently, in the present invention, the material for the high-acoustic-velocity film is not particularly limited as long as the above conditions are satisfied. Fourth Preferred Embodiment In a fourth preferred embodiment, while changing the Euler angles (0°, θ, ψ) of the piezoelectric film, the electromechanical coupling coefficient of a surface acoustic wave containing as a major component the U2 component (SH component) was measured. A laminated structure was composed of IDT electrode6/piezoelectric film5/low-acoustic-velocity film4/high-acoustic-velocity film3/supporting substrate2. As the IDT electrode6, Al with a thickness of 0.08λ was used. As the piezoelectric film, LiTaO3with a thickness of 0.25λ was used. As the low-acoustic-velocity film4, silicon oxide with a thickness of 0.35λ was used. As the high-acoustic-velocity film3, an aluminum nitride film with a thickness of 1.5λ was used. As the supporting substrate2, glass was used. In the structure described above, regarding many surface acoustic wave devices with Euler angles (0°, θ, ψ) in which θ and ψ were varied, the electromechanical coupling coefficient was obtained by FEM. As a result, it was confirmed that in a plurality of regions R1 shown inFIG.17, the electromechanical coupling coefficient k2of the mode mainly composed of the U2 component (SH component) is about 2% or more. Note that the same results were obtained in the range of Euler angles (0°±5°, θ, ψ). That is, when LiTaO3with Euler angles located in a plurality of ranges R1 shown inFIG.17is used, the electromechanical coupling coefficient of the vibration mainly composed of the U2 component is about 2% or more. Therefore, it is clear that a bandpass filter with a wide band width can be configured using a surface acoustic wave device according to a preferred embodiment of the present invention. Fifth Preferred Embodiment Assuming the same structure as that in the fourth preferred embodiment, the electromechanical coupling coefficient of a surface acoustic wave mainly composed of the U3 component (SV component) was obtained by FEM. The range of Euler angles in which the electromechanical coupling coefficient of the mode mainly composed of the U2 (SH component) is about 2% or more, and the electromechanical coupling coefficient of the mode mainly composed of the U3 (SV component) is about 1% or less was obtained. The results are shown inFIG.18. In a plurality of ranges R2 shown inFIG.18, the electromechanical coupling coefficient of the mode mainly composed of the U2 (SH component) is about 2% or more, and the electromechanical coupling coefficient of the mode mainly composed of the U3 (SV component) is about 1% or less.
Consequently, by using LiTaO3with Euler angles located in any one of a plurality of regions R2, the electromechanical coupling coefficient of the U2 mode used can be increased and the electromechanical coupling coefficient of the U3 mode which is spurious can be decreased. Therefore, it is possible to configure a bandpass filter having better filter characteristics. Sixth Preferred Embodiment As in the second preferred embodiment, simulation was carried out on a surface acoustic wave device having the structure described below. As shown in Table 4 below, in the case where the transversal wave acoustic velocity of the low-acoustic-velocity film and the specific acoustic impedance of the transversal wave of the low-acoustic-velocity film were changed in 10 levels, characteristics of surface acoustic waves mainly composed of the U2 component were simulated by a finite element method. In the transversal wave acoustic velocity and specific acoustic impedance of the low-acoustic-velocity film, the density and elastic constant of the low-acoustic-velocity film were changed. Furthermore, as the material constants of the low-acoustic-velocity film not shown in Table 4, material constants of silicon oxide were used.

TABLE 4

Level | Specific gravity ρ [kg/m3] | Elastic constant C11 [N/m2] | Elastic constant C44 [N/m2] | Transversal wave acoustic velocity V [m/s] | Specific acoustic impedance of transversal wave Zs [N · s/m3] | Remarks
1 | 1.11E+03 | 4.73E+10 | 1.56E+10 | 3757 | 4.2E+06 |
2 | 2.21E+03 | 7.85E+10 | 3.12E+10 | 3757 | 8.3E+06 | Silicon oxide equivalent
3 | 3.32E+03 | 1.10E+11 | 4.68E+10 | 3757 | 1.2E+07 |
4 | 6.63E+03 | 2.03E+11 | 9.36E+10 | 3757 | 2.5E+07 |
5 | 1.11E+04 | 3.28E+11 | 1.56E+11 | 3757 | 4.2E+07 |
6 | 2.21E+03 | 3.17E+10 | 7.80E+09 | 1879 | 4.2E+06 |
7 | 4.42E+03 | 4.73E+10 | 1.56E+10 | 1879 | 8.3E+06 |
8 | 6.63E+03 | 6.29E+10 | 2.34E+10 | 1879 | 1.2E+07 |
9 | 1.33E+04 | 1.10E+11 | 4.68E+10 | 1879 | 2.5E+07 |
10 | 2.21E+04 | 1.72E+11 | 7.80E+10 | 1879 | 4.2E+07 |

Note that, in Table 4, 1.11E+03 means 1.11×10^3. That is, aE+b represents a×10^b. The electrode structure was the same as that shown inFIG.1B, and the surface acoustic wave device had a laminated structure of IDT electrode/piezoelectric film/low-acoustic-velocity film/high-acoustic-velocity film/supporting substrate. The IDT electrode was an Al film with a thickness of 0.08λ. The piezoelectric film was composed of 40° Y cut LiTaO3. In each of the cases where the thickness of the piezoelectric film was 0.1λ, 0.4λ, and 0.6λ, 10 levels shown in Table 4 were calculated. The thickness of the low-acoustic-velocity film was 0.4λ. The high-acoustic-velocity film was composed of aluminum oxide, and the thickness thereof was 1.5λ. The supporting substrate was composed of an alumina substrate. FIGS.19A to19Care graphs showing the relationships between the specific acoustic impedance of the low-acoustic-velocity film and the band width ratio in the sixth preferred embodiment. In the graphs, each level shows the behavior in the case where the acoustic velocity of the transversal wave in the low-acoustic-velocity film changes, and the band width ratio in each level is normalized to the band width ratio in the case where the specific acoustic impedance of the piezoelectric film is equal to the specific acoustic impedance of the low-acoustic-velocity film. The specific acoustic impedance is expressed as a product of the acoustic velocity of the bulk wave and the density of the medium. In the sixth preferred embodiment, the bulk wave of the piezoelectric film is the SH bulk wave, the acoustic velocity is 4,212 m/s, and the density is 7.454×103kg/m3.
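(Worked computation restating values already given above; no new data.) With the specific acoustic impedance defined as the product of the bulk wave acoustic velocity and the density of the medium, the SH bulk wave velocity of 4,212 m/s and the density of 7.454×10^3 kg/m3 of the LiTaO3piezoelectric film give

$$
Z_{s}=\rho V=7.454\times10^{3}\ \mathrm{kg/m^{3}}\times 4{,}212\ \mathrm{m/s}\approx 3.14\times10^{7}\ \mathrm{N\cdot s/m^{3}},
$$

which may be compared with, for example, the silicon-oxide-equivalent value Zs = 8.3E+06 N · s/m3 of level 2 in Table 4.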
Consequently, the specific acoustic impedance of the piezoelectric film is 3.14×107N·s/m3. Furthermore, regarding the acoustic velocity of the bulk wave used for calculating the specific acoustic impedance of each of the low-acoustic-velocity film and the piezoelectric film, for the dominant mode of the elastic wave shown in the left column of Table 1 or 2, the acoustic velocity is determined according to the mode of the bulk wave shown in the right column of Table 1 or 2. Furthermore,FIGS.20A to20Care graphs showing the relationships between the specific acoustic impedance of the transversal wave of the low-acoustic-velocity film and the acoustic velocity of the propagating surface acoustic wave in the sixth preferred embodiment. As is clear fromFIGS.19A to19C, regardless of the thickness of the piezoelectric film, the band width ratio increases as the specific acoustic impedance of the low-acoustic-velocity film becomes smaller than the specific acoustic impedance of the piezoelectric film. The reason for this is that since the specific acoustic impedance of the low-acoustic-velocity film is smaller than the specific acoustic impedance of the piezoelectric film, the displacement of the piezoelectric film under certain stress increases, thus generating a larger electric charge, and therefore, equivalently higher piezoelectricity can be obtained. That is, since this effect is obtained depending only on the magnitude of specific acoustic impedance, regardless of the vibration mode of the surface acoustic wave, the type of the piezoelectric film, or the type of the low-acoustic-velocity film, it is possible to obtain a surface acoustic wave device having a higher band width ratio when the specific acoustic impedance of the low-acoustic-velocity film is smaller than the specific impedance of the piezoelectric film. In each of the first to sixth preferred embodiments of the present invention, the IDT electrode6, the piezoelectric film5, the low-acoustic-velocity film4, the high-acoustic-velocity film3, and the supporting substrate2preferably are stacked in that order from the top, for example. However, within the extent that does not greatly affect the propagating surface acoustic wave and boundary wave, an adhesion layer composed of Ti, NiCr, or the like, an underlying film, or any medium may be disposed between the individual layers. In such a case, the same effect can be obtained. For example, a new high-acoustic-velocity film which is sufficiently thin compared with the wavelength of the surface acoustic wave may be disposed between the piezoelectric film5and the low-acoustic-velocity film4. In such a case, the same effect can be obtained. Furthermore, energy of the mainly used surface acoustic wave is not distributed between the high-acoustic-velocity film3and the supporting substrate2. Consequently, any medium with any thickness may be disposed between the high-acoustic-velocity film3and the supporting substrate2. In such a case, the same advantageous effects can be obtained. The seventh and eighth preferred embodiments described below relate to surface acoustic wave devices provided with such a medium layer. Seventh Preferred Embodiment In a surface acoustic wave device21according to a seventh preferred embodiment shown inFIG.23, a medium layer22is disposed between a supporting substrate2and a high-acoustic-velocity film3. The structure other than this is the same as that in the first preferred embodiment. 
Therefore, the description of the first preferred embodiment is incorporated herein. In the surface acoustic wave device21, an IDT electrode6, a piezoelectric film5, a low-acoustic-velocity film4, the high-acoustic-velocity film3, the medium layer22, and the supporting substrate2are stacked in that order from the top. As the medium layer22, any material, such as a dielectric, a piezoelectric, a semiconductor, or a metal, may be used. Even in such a case, the same effect as that of the first preferred embodiment can be obtained. In the case where the medium layer22is composed of a metal, the band width ratio can be decreased. Consequently, in the application in which the band width ratio is small, the medium layer22is preferably composed of a metal. Eighth Preferred Embodiment In a surface acoustic wave device23according to an eighth preferred embodiment shown inFIG.24, a medium layer22and a medium layer24are disposed between a supporting substrate2and a high-acoustic-velocity film3. That is, an IDT electrode6, a piezoelectric film5, a low-acoustic-velocity film4, the high-acoustic-velocity film3, the medium layer22, the medium layer24, and the supporting substrate2are stacked in that order from the top. Other than the medium layer22and the medium layer24, the structure is the same as that in the first preferred embodiment. The medium layers22and24may be composed of any material, such as a dielectric, a piezoelectric, a semiconductor, or a metal. Even in such a case, in the eighth preferred embodiment, it is possible to obtain the same effect as that of the surface acoustic wave device of the first preferred embodiment. In this preferred embodiment, after a laminated structure including the piezoelectric film5, the low-acoustic-velocity film4, the high-acoustic-velocity film3, and the medium layer22and a laminated structure including the medium layer24and the supporting substrate2are separately fabricated, both laminated structures are bonded to each other. Then, the IDT electrode6is formed on the piezoelectric film5. As a result, it is possible to obtain a surface acoustic wave device according to this preferred embodiment without being restricted by manufacturing conditions when each laminated structure is fabricated. Consequently, freedom of selection for materials constituting the individual layers can be increased. When the two laminated structures are bonded to each other, any joining method can be used. For such a bonded structure, various methods, such as bonding by hydrophilization, activation bonding, atomic diffusion bonding, metal diffusion bonding, anodic bonding, bonding using a resin or SOG, can be used. Furthermore, the joint interface between the two laminated structures is located on the side opposite to the piezoelectric film5side of the high-acoustic-velocity film3. Consequently, the joint interface exists in the portion below the high-acoustic-velocity film3in which major energy of the surface acoustic wave used is not distributed. Therefore, surface acoustic wave propagation characteristics are not affected by the quality of the joint interface. Accordingly, it is possible to obtain stable and good resonance characteristics and filter characteristics. Ninth Preferred Embodiment In a surface acoustic wave device31shown inFIG.25, an IDT electrode6, a piezoelectric film5, a low-acoustic-velocity film4, and a high-acoustic-velocity supporting substrate33which also functions as a high-acoustic-velocity film are stacked in that order from the top. 
That is, the high-acoustic-velocity supporting substrate33serves both as the high-acoustic-velocity film3and as the supporting substrate2in the first preferred embodiment. Consequently, the acoustic velocity of a bulk wave in the high-acoustic-velocity supporting substrate33is set to be higher than the acoustic velocity of a surface acoustic wave propagating in the piezoelectric film5. Thus, the same effect as that in the first preferred embodiment can be obtained. Moreover, since the high-acoustic-velocity supporting substrate33serves both as the high-acoustic-velocity film and as the supporting substrate, the number of components can be reduced. Tenth Preferred Embodiment In a tenth preferred embodiment, the relationship between the Q factor and the frequency in a one-port-type surface acoustic wave resonator as a surface acoustic wave device was simulated by FEM. Here, as the surface acoustic wave device according to the first preferred embodiment, shown inFIGS.1A and1B, the following structure was assumed. The structure included an IDT electrode6composed of Al with a thickness of 0.1λ, a piezoelectric film composed of a 50° Y cut LiTaO3film, an SiO2film as a low-acoustic-velocity film, an aluminum nitride film with a thickness of 1.5λ as a high-acoustic-velocity film, an SiO2film with a thickness of 0.3λ, and a supporting substrate composed of alumina stacked in that order from the top. In this simulation, the thickness of the LiTaO3film as the piezoelectric film was changed to 0.15λ, 0.20λ, 0.25λ, or 0.30λ. Furthermore, the thickness of the SiO2film as the low-acoustic-velocity film was changed in the range of 0 to 2λ. The duty of the IDT electrode was 0.5, the intersecting width of electrode fingers was 20λ, and the number of electrode finger pairs was 100. For comparison, a one-port-type surface acoustic wave resonator, in which an IDT electrode composed of Al with a thickness of 0.1λ and a 38.5° Y cut LiTaO3substrate were stacked in that order from the top, was prepared. That is, in the comparative example, an electrode structure including the IDT electrode composed of Al is disposed on a 38.5° Y cut LiTaO3substrate with a thickness of 350 μm. Regarding the surface acoustic wave devices according to the tenth preferred embodiment and the comparative example, the relationship between the Q factor and the frequency was obtained by simulation by FEM. In the range from the resonant frequency at which the impedance of the one-port resonator was lowest to the antiresonant frequency at which the impedance was highest, the highest Q factor was defined as the Qmaxfactor. A higher Qmaxfactor indicates lower loss. The Qmaxfactor of the comparative example was 857.FIG.26shows the relationship between the LiTaO3film thickness, the SiO2film thickness, and the Qmaxin this preferred embodiment. As is clear fromFIG.26, in each case where the LiTaO3film thickness is 0.15λ, 0.20λ, 0.25λ, or 0.30λ, the Qmaxfactor increases when the thickness of the low-acoustic-velocity film composed of SiO2exceeds 0. It is also clear that in the tenth preferred embodiment, in any of the cases, the Qmaxfactor is effectively enhanced relative to the comparative example. Preferred Embodiment of Manufacturing Method The elastic wave device according to the first preferred embodiment includes, as described above, the high-acoustic-velocity film3, the low-acoustic-velocity film4, the piezoelectric film5, and the IDT electrode6which are disposed on the supporting substrate2. 
The method for manufacturing such an elastic wave device is not particularly limited. By using a manufacturing method using the ion implantation process described below, it is possible to easily obtain an elastic wave device1having a piezoelectric film with a small thickness. A preferred embodiment of the manufacturing method will be described with reference toFIGS.21A-21E and22A-22C. First, as shown inFIG.21A, a piezoelectric substrate5A is prepared. In this preferred embodiment, the piezoelectric substrate5A is preferably composed of LiTaO3. Hydrogen ions are implanted from a surface of the piezoelectric substrate5A. The ions to be implanted are not limited to hydrogen, and helium or the like may be used. In the ion implantation, energy is not particularly limited. In this preferred embodiment, preferably the energy is about 107 KeV, and the dose amount is about 8×1016atoms/cm2, for example. When ion implantation is performed, the ion concentration is distributed in the thickness direction in the piezoelectric substrate5A. InFIG.21A, the dashed line represents a region in which the ion concentration is highest. In a high concentration ion-implanted region5ain which the ion concentration is highest represented by the dashed line, when heating is performed as will be described later, separation easily occurs owing to stress. Such a method in which separation is performed using the high concentration ion-implanted region5ais disclosed in Japanese Unexamined Patent Application Publication No. 2002-534886. In this step, at the high concentration ion-implanted region5a, the piezoelectric substrate5A is separated to obtain a piezoelectric film5. The piezoelectric film5is a layer between the high concentration ion-implanted region5aand the surface of the piezoelectric substrate from which ion implantation is performed. In some cases, the piezoelectric film5may be subjected to machining, such as grinding. Consequently, the distance from the high concentration ion-implanted region5ato the surface of the piezoelectric substrate on the ion implantation side is set to be equal to or slightly larger than the thickness of the finally formed piezoelectric film. Next, as shown inFIG.21B, a low-acoustic-velocity film4is formed on the surface of the piezoelectric substrate5A on which the ion implantation has been performed. In addition, a low-acoustic-velocity film formed in advance may be bonded to the piezoelectric substrate5A using a transfer method or the like. Next, as shown inFIG.21C, a high-acoustic-velocity film3is formed on a surface of the low-acoustic-velocity film4, opposite to the piezoelectric substrate5A side of the low-acoustic-velocity film4. Instead of using the film formation method, the high-acoustic-velocity film3may also be bonded to the low-acoustic-velocity film4using a transfer method or the like. Furthermore, as shown inFIG.21D, an exposed surface of the high-acoustic-velocity film3, opposite to the low-acoustic-velocity film4side of the high-acoustic-velocity film3, is subjected to mirror finishing. By performing mirror finishing, it is possible to strengthen bonding between the high-acoustic-velocity film and the supporting substrate which will be described later. Then, as shown inFIG.21E, a supporting substrate2is bonded to the high-acoustic-velocity film3. As the low-acoustic-velocity film4, as in the first preferred embodiment, a silicon oxide film is used. As the high-acoustic-velocity film3, an aluminum nitride film is used. 
Next, as shown inFIG.22A, after heating, a piezoelectric substrate portion5blocated above the high concentration ion-implanted region5ain the piezoelectric substrate5A is separated. As described above, by applying stress by heating through the high concentration ion-implanted region5a, the piezoelectric substrate5A becomes easily separated. In this case, the heating temperature may be set at about 250° C. to 400° C., for example. In this preferred embodiment, by the heat-separation, a piezoelectric film5with a thickness of about 500 nm, for example, is obtained. In such a manner, as shown inFIG.22B, a structure in which the piezoelectric film5is stacked on the low-acoustic-velocity film4is obtained. Then, in order to recover piezoelectricity, heat treatment is performed in which the structure is retained at a temperature of about 400° C. to about 500° C. for about 3 hours, for example. Optionally, prior to the heat treatment, the upper surface of the piezoelectric film5after the separation may be subjected to grinding. Then, as shown inFIG.22C, an electrode including an IDT electrode6is formed. The electrode formation method is not particularly limited, and an appropriate method, such as vapor deposition, plating, or sputtering, may be used, for example. According to the manufacturing method of this preferred embodiment, by the separation, it is possible to easily form a piezoelectric film5with rotated Euler angles at a uniform thickness. Eleventh Preferred Embodiment In the first preferred embodiment, the IDT electrode6, the piezoelectric film5, the low-acoustic-velocity film4, the high-acoustic-velocity film3, and the supporting substrate2are preferably stacked in that order from the top. In a surface acoustic wave device41according to an eleventh preferred embodiment shown inFIG.27, a dielectric film42may be arranged so as to cover an IDT electrode6. By disposing such a dielectric film42, frequency temperature characteristics can be adjusted, and moisture resistance can be enhanced. Twelfth Preferred Embodiment In the preferred embodiments described above, description has been provided for surface acoustic wave devices. The present invention can also be applied to other elastic wave devices, such as boundary acoustic wave devices. In such a case, the same advantageous effects can also be obtained.FIG.28is a schematic elevational cross-sectional view of a boundary acoustic wave device43according to a twelfth preferred embodiment. In this case, a low-acoustic-velocity film4, a high-acoustic-velocity film3, and a supporting substrate2are preferably stacked in that order from the top, under a piezoelectric film5. This structure is preferably the same or substantially the same as that of the first preferred embodiment. In order to excite a boundary acoustic wave, an IDT electrode6is provided at the interface between the piezoelectric film5and a dielectric44disposed on the piezoelectric film5. Furthermore,FIG.29is a schematic elevational cross-sectional view of a boundary acoustic wave device45having a three-medium structure. In this case, with respect to a structure in which a low-acoustic-velocity film4, a high-acoustic-velocity film3, and a supporting substrate2are stacked in that order, under a piezoelectric film5, an IDT electrode6is provided at the interface between the piezoelectric film5and a dielectric film46. Furthermore, a dielectric47in which the acoustic velocity of a transversal wave propagating therein is faster than that of the dielectric46is disposed on the dielectric46. 
As a result, a boundary acoustic wave device having a three-medium structure is provided. In a boundary acoustic wave device such as the boundary acoustic wave device43or45, as in the surface acoustic wave device1according to the first preferred embodiment, by disposing a laminated structure composed of low-acoustic-velocity film4/high-acoustic-velocity film3on the lower side of the piezoelectric film5, the same effect as that in the first preferred embodiment can be obtained. While preferred embodiments of the present invention have been described above, it is to be understood that variations and modifications will be apparent to those skilled in the art without departing from the scope and spirit of the present invention. The scope of the present invention, therefore, is to be determined solely by the following claims. | 56,399
11863153 | DETAILED DESCRIPTION Non-limiting embodiments will be described by way of example with reference to the accompanying figures, which are schematic and are not intended to be drawn to scale. In the figures, each identical or nearly identical component illustrated is typically represented by a single numeral. For purposes of clarity, not every component is labeled in every figure, nor is every component of each embodiment shown where illustration is not necessary to allow understanding by those of ordinary skill in the art. In the specification, as well as in the claims, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively. Further, relative terms, such as “above,” “below,” “top,” “bottom,” “upper” and “lower” are used to describe the various elements' relationships to one another, as illustrated in the accompanying drawings. It is understood that these relative terms are intended to encompass different orientations of the device and/or elements in addition to the orientation depicted in the drawings. For example, if the device were inverted with respect to the view in the drawings, an element described as “above” another element, for example, would now be below that element. The term “compensating” is to be understood as including “substantially compensating”. The terms “oppose”, “opposes” and “opposing” are to be understood as including “substantially oppose”, “substantially opposes” and “substantially opposing” respectively. Further, as used in the specification and appended claims, and in addition to their ordinary meanings, the terms “substantial” or “substantially” mean to within acceptable limits or degree. For example, “substantially canceled” means that one skilled in the art would consider the cancellation to be acceptable. As used in the specification and the appended claims and in addition to its ordinary meaning, the term “approximately” or “about” means to within an acceptable limit or amount to one of ordinary skill in the art. For example, “approximately the same” means that one of ordinary skill in the art would consider the items being compared to be the same. As used in the specification and appended claims, the terms “a”, “an” and “the” include both singular and plural referents, unless the context clearly dictates otherwise. Thus, for example, “a device” includes one device and plural devices. As used herein, the International Telecommunication Union (ITU) defines Super High Frequency (SHF) as extending between three Gigahertz (3 GHz) and thirty Gigahertz (30 GHz). The ITU defines Extremely High Frequency (EHF) as extending between thirty Gigahertz (30 GHz) and three hundred Gigahertz (300 GHz). FIG.1shows simplified diagrams of a bulk acoustic wave resonator structure of this disclosure and its operation in a thickness extensional main resonant mode. Bulk acoustic wave resonator structures1000A,1000B,1000C may include respective multilayer metal acoustic reflector electrodes1013A,1013B,1013C arranged over respective substrates1001A,1001B,1001C (e.g., silicon substrate1001A,1001B,1001C), and respective harmonically tuned top electrode top sensor electrodes1015A,1015B,1015C acoustically coupled with respective sensing regions1016A,1016B,1016C. 
In a non-limiting example, bulk acoustic wave resonator structures1000A,1000B,1000C may operate with their respective sensing regions1016A,1016B,1016C to sense an analyte (e.g., coronavirus, e.g., SARS CoV-2 virus) in a fluid1018A,1018B,1018C, e.g., liquid1018A,1018B,1018C, e.g., comprising water. Harmonically tuned top sensor electrodes1015A,1015B,1015C may have respective thicknesses that are approximately an integral multiple of a half of an acoustic wavelength of the respective resonant frequencies of the BAW resonators coupled with respective sensing regions1016A,1016B,1016C. The harmonically tuned top sensor electrodes1015A,1015B,1015C may facilitate suppressing parasitic lateral modes. Respective stacks of piezoelectric material layers (e.g., stacks of normal axis piezoelectric layer1005A,1005B,1005C and reverse axis piezoelectric layer1007A,1007B,1007C) may be respectively sandwiched between respective multilayer metal acoustic reflector electrodes1013A,1013B,1013C and respective harmonically tuned top electrode top sensor electrodes1015A,1015B,1015C. For example, inFIG.1, respective acoustic reflectors1013A,1013B,1013C (e.g., respective acoustic reflector electrodes1013A,1013B,1013C) may be respective multi-layer acoustic reflectors1013A,1013B,1013C (e.g., may be respective multi-layer acoustic reflector electrodes1013A,1013B,1013C). For example, respective multi-layer acoustic reflectors1013A,1013B,1013C (e.g., respective multi-layer acoustic reflector electrodes1013A,1013B,1013C) may approximate respective distributed Bragg reflectors1013A,1013B,1013C. For example, respective multi-layer acoustic reflectors1013A,1013B,1013C (e.g., respective multi-layer acoustic reflector electrodes1013A,1013B,1013C) may include respective acoustic layers1013A,1013B,1013C (e.g., respective first pairs of bottom metal electrode layers1022A,1022B,1022C). For example, respective layers of respective multi-layer acoustic reflectors1013A,1013B,1013C may be respectively arranged in respective alternating arrangements of low acoustic impedance metal layers and high acoustic impedance metal layers. For example, inFIG.1, respective acoustic reflectors1013A,1015A,1013B,1015B (e.g., respective acoustic reflector electrodes1013A,1015A,1013B,1015B) may be acoustically tuned approximately for respective resonant frequencies of the respective BAW resonators1000A,1000B,1000C. For example, respective acoustic reflectors1013A,1013B,1013C (e.g., respective acoustic reflector electrodes1013A,1013B,1013C) may approximate respective distributed Bragg reflectors1013A,1013B,1013C, having respective quarter wavelength resonances which may be acoustically tuned approximately for respective resonant frequencies of the respective BAW resonators1000A,1000B,1000C. For example, respective acoustic layers (e.g., first pair of bottom acoustic layers1022A,1022B,1022C) of the respective multi-layer acoustic reflectors1013A,1013B,1013C may have respective layer thicknesses selected so that the respective multi-layer acoustic reflectors1013A,1013B,1013C, may have respective quarter wavelength resonances at respective frequencies that may be acoustically tuned approximately for the respective resonant frequencies of the respective BAW resonators1000A,1000B,1000C. 
For example, respective metal electrode layers (e.g., first pair of bottom metal electrode layers1022A,1022B,1022C) of the respective tuned multi-layer metal reflector electrodes1013A,1013B,1013C, may have respective layer thicknesses selected so that the respective tuned multi-layer acoustic reflectors1013A,1013B,1013C, may have respective quarter wavelength resonances at respective frequencies that may be acoustically tuned for approximately the respective resonant frequencies of the respective BAW resonators1000A,1000B,1000C. The stacks of piezoelectric material layers (e.g., stacks of normal axis piezoelectric layer1005A,1005B,1005C and reverse axis piezoelectric layer1007A,1007B,1007C) may have respective active regions where harmonically tuned top electrode top sensor electrodes1015A,1015B,1015C may respectively overlap respective multilayer metal acoustic reflector electrodes1013A,1013B,1013C. For example, in operation of BAW resonators1000A,1000B,1000C, an oscillating electric field may be applied via harmonically tuned top electrode top sensor electrodes1015A,1015B,1015C and respective multilayer metal acoustic reflector electrodes1013A,1013B,1013C, so as to activate responsive piezoelectric acoustic oscillations in a thickness extensional main resonant mode in the respective active regions of the stacks of piezoelectric material layers (e.g., stacks of normal axis piezoelectric layer1005A,1005B,1005C and reverse axis piezoelectric layer1007A,1007B,1007C), where harmonically tuned top electrode top sensor electrodes1015A,1015B,1015C may respectively overlap respective multilayer metal acoustic reflector electrodes1013A,1013B,1013C. For illustrative purposes, bulk acoustic resonator1000A depicts approximately equal half acoustic wavelength thicknesses of normal axis piezoelectric layer1005A and reverse axis piezoelectric layer1007A, for example, prior to activation of the thickness extensional main resonant mode by application of the oscillating electric field via harmonically tuned top electrode top sensor electrode1015A and multilayer metal acoustic reflector electrode1013A. In contrast, bulk acoustic resonators1000B,1000C depict thickness changes in normal axis piezoelectric layers1005B,1005C and reverse axis piezoelectric layers1007B,1007C from activation of the thickness extensional main resonant mode by application of the oscillating electric field via harmonically tuned top electrode top sensor electrodes1015B,1015C and multilayer metal acoustic reflector electrodes1013B,1013C. As illustrated in BAW resonator1000B, during an initial half cycle of the thickness extensional main resonant mode, normal axis piezoelectric layer1005B is in extension while reverse axis piezoelectric layer1007B is in compression. The extension is representatively illustrated by a thickened depiction of normal axis piezoelectric layer1005B (e.g., relative to unactivated normal axis piezoelectric layer1005A). The compression is representatively illustrated by a thinned depiction of reverse axis piezoelectric layer1007B (e.g., relative to unactivated reverse axis piezoelectric layer1007A). A dashed line at the interface between normal axis piezoelectric layer1005B and reverse axis piezoelectric layer1007B is used to depict motion of the thickness extensional main resonant mode.
As illustrated in BAW resonator1000C, during a subsequent half cycle of the thickness extensional main resonant mode, normal axis piezoelectric layer1005C is in compression while reverse axis piezoelectric layer1007C is in extension. The compression is representatively illustrated by a thinned depiction of normal axis piezoelectric layer1005C (e.g., relative to unactivated normal axis piezoelectric layer1005A). The extension is representatively illustrated by a thickened depiction of reverse axis piezoelectric layer1007C (e.g., relative to unactivated reverse axis piezoelectric layer1007A). A dashed line at the interface between normal axis piezoelectric layer1005C and reverse axis piezoelectric layer1007C is used to depict motion of the thickness extensional main resonant mode. For illustrative purposes in depictions of BAW resonators1000B,1000C, amounts of extension (thickening) and compression (thinning) are greatly exaggerated. The thickness extensional main resonant mode depicted inFIG.1is a longitudinal mode excited in a vertically grown piezoelectric material film by coupling a vertically applied electric field through a d33 piezoelectric coefficient. The main thickness extensional resonance mode of BAW resonators of this disclosure may offer the highest sensitivity to analytes, for example, using sensing regions1016A,1016B,1016C shown inFIG.1. For example, both the acoustic wave velocity and resonance frequency of the thickness extensional main resonant mode of the BAW resonators of this disclosure are higher than the acoustic wave velocity and resonance frequency of shear mode resonators and may offer higher sensitivity to analytes than shear mode resonators. BAW resonators of this disclosure may have sensing regions (e.g., sensing regions1016A,1016B,1016C), which may comprise respective functionalized layers (not shown in the simplified view ofFIG.1). The functionalized layers of the sensing regions (e.g., sensing regions1016A,1016B,1016C) may be used to selectively bind and detect biomolecules (e.g., coronavirus, e.g., SARS CoV-2). Such selective binding and detection may occur in real time or near real time. BAW resonators of this disclosure may use a resonance frequency shift (a decrease in resonance frequency) that may be caused by the mass of biomolecules selectively binding with the functionalized layer. This technique need not require fluorescent tags or chemical labels for detection of biomolecules. Further, mass sensitivity may increase with the square of frequency. The thickness extensional main resonant mode BAW resonators of this disclosure may operate with resonant frequencies in the Super High Frequency band (e.g., main resonant frequency of 24.25 GHz, or higher bands, e.g., higher main resonant frequencies), and so their mass sensitivity may be much higher than resonators operating below the Super High Frequency band. Thus, label-free, highly sensitive and selective, and real-time detection of biomolecules (e.g., coronavirus, e.g., SARS CoV-2) may, but need not, be achieved by BAW resonators of this disclosure.
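The frequency-squared scaling of mass sensitivity noted above can be illustrated with a simple mass-loading estimate. The following minimal Python sketch assumes an AlN-like density, a nominal longitudinal acoustic velocity, and a hypothetical bound areal mass; none of these numbers are taken from this description.

# Sketch: relative frequency shift from mass loading of a thickness-extensional BAW resonator.
# Using the simple mass-loading approximation df/f ~= -dm_areal / m_areal, where the resonator
# areal mass m_areal ~= rho * t and t ~= v / (2 * f) for a half-wave layer, the absolute shift
# df scales roughly with f**2 for a fixed bound areal mass.

rho = 3300.0        # assumed effective density of the piezoelectric stack, kg/m^3 (AlN-like)
v = 11000.0         # assumed longitudinal acoustic velocity, m/s
dm_areal = 1e-6     # hypothetical bound analyte areal mass, kg/m^2 (about 1 ng/mm^2)

for f in (2.4e9, 24e9):                     # compare a low-GHz and a roughly 24 GHz resonator
    t = v / (2.0 * f)                       # half-wavelength resonator thickness, m
    m_areal = rho * t                       # resonator areal mass, kg/m^2
    df = -f * dm_areal / m_areal            # approximate resonance frequency shift, Hz
    print(f"f = {f/1e9:5.1f} GHz  ->  df ~= {df/1e6:8.3f} MHz")

With these assumed numbers the shift grows by a factor of one hundred when the frequency grows by a factor of ten, consistent with the square-of-frequency scaling stated above.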
For example, respective harmonically tuned top electrode top sensor electrodes1015A,1015B,1015C and respective multilayer metal acoustic reflector electrodes1013A,1013B,1013C may be respectively coupled (e.g., electrically coupled, e.g., acoustically coupled) with the respective normal axis piezoelectric layers1005A,1005B,1005C and the reverse axis piezoelectric layers1007A,1007B,1007C to excite the piezoelectrically excitable resonance mode (e.g., thickness extensional main resonant mode) at respective resonant frequencies of the bulk acoustic Super High Frequency (SHF) wave resonators1000A,1000B,1000C in the Super High Frequency (SHF) wave band (e.g., 24.25 GHz main resonant frequency). For example, thicknesses of the normal axis piezoelectric layers1005A,1005B,1005C and the reverse axis piezoelectric layers1007A,1007B,1007C may be selected to determine the main resonant frequency of bulk acoustic Super High Frequency (SHF) wave resonators1000A,1000B,1000C in the Super High Frequency (SHF) wave band (e.g., twenty-four and a quarter GigaHertz, 24.25 GHz main resonant frequency). Further, quality factor (Q factor) is a figure of merit for bulk acoustic wave resonators that may be related, in part, to acoustic reflector electrode conductivity. In accordance with the teachings of this disclosure, without an offsetting compensation that increases the number of member layers, member layer thinning with increasing frequency may otherwise diminish acoustic reflector electrode conductivity, and may otherwise diminish quality factor (Q factor) of bulk acoustic wave resonators. In accordance with the teachings of this disclosure, the number of member layers of the multilayer metal acoustic reflector electrodes1013A,1013B,1013C may be increased in designs extending to higher resonant frequencies, to facilitate electrical conductivity through acoustic reflector electrodes. The acoustic reflector electrodes (e.g., Super High Frequency (SHF) bottom acoustic reflector electrode1013A,1013B,1013C) may have a sheet resistance of less than one Ohm per square at the given frequency (e.g., at the main resonant frequency of the BAW resonator in the super high frequency band or the extremely high frequency band, e.g., at the quarter wavelength resonant frequency of the acoustic reflector electrode in the super high frequency band or the extremely high frequency band). For example, a sufficient number of member layers may be employed to provide for this sheet resistance at the given frequency (e.g., at the main resonant frequency of the BAW resonator in the super high frequency band or the extremely high frequency band, e.g., at the quarter wavelength resonant frequency of the acoustic reflector electrode in the super high frequency band or the extremely high frequency band). This may, but need not, facilitate enhancing the quality factor (Q factor) to a value above a desired value of one hundred (100). Moreover, quality factor (Q factor) may, but need not, be increased by the inclusion of reverse axis piezoelectric layer1007A,1007B,1007C in acoustic coupling with normal axis piezoelectric layer1005A,1005B,1005C. In accordance with the teachings of this disclosure, without an offsetting compensation that increases the number of member piezoelectric layers in an alternating piezoelectric axis arrangement, member piezoelectric layer thinning with increasing frequency may otherwise diminish quality factor (Q factor) of bulk acoustic wave resonators.
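The sub-one-Ohm-per-square sheet resistance mentioned above follows from treating the stacked reflector metal layers as parallel conductors. A minimal sketch follows, assuming nominal bulk resistivities for Ti and W (sputtered thin films are typically more resistive) and quarter-wavelength-scale layer thicknesses; these values are illustrative assumptions rather than figures from this description.

# Sketch: effective sheet resistance of a multilayer metal acoustic reflector electrode,
# treating the stacked Ti/W layers as resistors in parallel (per-layer sheet resistance =
# resistivity / thickness).

layers = [
    # (metal, resistivity in ohm*m, thickness in m) for a roughly 24 GHz design
    ("Ti", 4.2e-7, 630e-10),
    ("W",  5.6e-8, 540e-10),
] * 4  # four Ti/W pairs

conductance_per_square = sum(t / rho for (_, rho, t) in layers)   # parallel combination
sheet_resistance = 1.0 / conductance_per_square
print(f"Effective sheet resistance ~= {sheet_resistance:.2f} ohm/sq for {len(layers)//2} pairs")

Under these assumptions a single Ti/W pair already approaches one Ohm per square, and four pairs bring the stack well below it, which is one way to read the statement above that adding member layers facilitates electrical conductivity through the reflector electrode.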
In accordance with the teachings of this disclosure, number of member piezoelectric layers in an alternating piezoelectric axis arrangement may be increased in designs extending to higher resonant frequencies. This may, but need not boost quality factor (Q factor). Furthermore, higher Q factor may, but need not increase detection sensitivity (e.g., sensitivity in detection of biomolecules, e.g., sensitivity in detection of coronavirus.) FIG.1Ais a diagram that illustrates an example bulk acoustic wave resonator structure100.FIGS.4A through4Cshow alternative example bulk acoustic wave resonators,400A through400C, to the example bulk acoustic wave resonator structure100shown inFIG.1A. The foregoing are shown in simplified cross sectional views. The resonator structures are formed over a substrate101,401A through401C (e.g., silicon substrate101,401A,401B, e.g., silicon carbide substrate401C). In some examples, the substrate may further comprise a seed layer103,403A,403B, formed of, for example, aluminum nitride (AlN), or another suitable material (e.g., silicon dioxide (SiO2), aluminum oxide (Al2O3), silicon nitride (Si3N4), amorphous silicon (a-Si), silicon carbide (SiC)), having an example thickness in a range from approximately 100 A to approximately 1 um on the silicon substrate. In some other examples, the seed layer103,403A,403B may also be at least partially formed of electrical conductivity enhancing material such as Aluminum (Al) or Gold (Au). The example resonators100,400A through400C, include a respective stack104,404A through404C, of an example four layers of piezoelectric material, for example, four layers of Aluminum Nitride (AlN) having a wurtzite structure. For example,FIG.1AandFIGS.4A through4Cshow a bottom piezoelectric layer105,405A through405C, a first middle piezoelectric layer107,407A through407C, a second middle piezoelectric layer109,409A through409C, and a top piezoelectric layer111,411A through411C. A mesa structure104,404A through404C (e.g., first mesa structure104,404A through404C) may comprise the respective stack104,404A through404C, of the example four layers of piezoelectric material. The mesa structure104,404A through404C (e.g., first mesa structure104,404A through404C) may comprise bottom piezoelectric layer105,405A through405C. The mesa structure104,404A through404C (e.g., first mesa structure104,404A through404C) may comprise first middle piezoelectric layer107,407A through407C. The mesa structure104,404A through404C (e.g., first mesa structure104,404A through404C) may comprise second middle piezoelectric layer109,409A through409C. The mesa structure104,404A through404C (e.g., first mesa structure104,404A through404C) may comprise top piezoelectric layer111,411A through411C. Although piezoelectric aluminum nitride may be used, alternative examples may comprise alternative piezoelectric materials, e.g., doped aluminum nitride, e.g., zinc oxide, e.g., lithium niobate, e.g., lithium tantalate. The four layers of piezoelectric material in the respective stack104,404A through404C ofFIG.1AandFIGS.4A through4Cmay have an alternating axis arrangement in the respective stack104,404A through404C. For example the bottom piezoelectric layer105,405A through405C may have a normal axis orientation, which is depicted in the figures using a downward directed arrow. 
Next in the alternating axis arrangement of the respective stack104,404A through404C, the first middle piezoelectric layer107,407A through407C may have a reverse axis orientation, which is depicted in the figures using an upward directed arrow. Next in the alternating axis arrangement of the respective stack104,404A through404C, the second middle piezoelectric layer109,409A through409C may have the normal axis orientation, which is depicted in the figures using the downward directed arrow. Next in the alternating axis arrangement of the respective stack104,404A through404C, the top piezoelectric layer111,411A through411C may have the reverse axis orientation, which is depicted in the figures using the upward directed arrow. For example, polycrystalline thin film AlN may be grown in a crystallographic c-axis negative polarization, or normal axis, orientation perpendicular relative to the substrate surface, using reactive magnetron sputtering of an Aluminum target in a nitrogen atmosphere. However, as will be discussed in greater detail subsequently herein, changing sputtering conditions, for example by adding oxygen, may reverse the axis to a crystallographic c-axis positive polarization, or reverse axis, orientation perpendicular relative to the substrate surface. In the example resonators100,400A through400C, ofFIG.1AandFIGS.4A through4C, the bottom piezoelectric layer105,405A through405C, may have a piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at a resonant frequency (e.g., main resonant frequency) of the example resonators. Similarly, the first middle piezoelectric layer107,407A through407C, may have its piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency) of the example resonators. Similarly, the second middle piezoelectric layer109,409A through409C, may have its piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency) of the example resonators. Similarly, the top piezoelectric layer111,411A through411C, may have its piezoelectrically excitable main resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency) of the example resonators. Accordingly, the top piezoelectric layer111,411A through411C, may have its piezoelectrically excitable main resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency) with the bottom piezoelectric layer105,405A through405C, the first middle piezoelectric layer107,407A through407C, and the second middle piezoelectric layer109,409A through409C. The bottom piezoelectric layer105,405A through405C, may be acoustically coupled with the first middle piezoelectric layer107,407A through407C, in the piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency) of the example resonators100,400A through400C.
The normal axis of bottom piezoelectric layer105,405A through405C, in opposing the reverse axis of the first middle piezoelectric layer107,407A through407C, may cooperate for the piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency) of the example resonators. The first middle piezoelectric layer107,407A through407C, may be sandwiched between the bottom piezoelectric layer105,405A through405C, and the second middle piezoelectric layer109,409A through409C, for example, in the alternating axis arrangement in the respective stack104,404A through404C. For example, the reverse axis of the first middle piezoelectric layer107,407A through407C, may oppose the normal axis of the bottom piezoelectric layer105,405A through405C, and the normal axis of the second middle piezoelectric layer109,409A-409C. In opposing the normal axis of the bottom piezoelectric layer105,405A through405C, and the normal axis of the second middle piezoelectric layer109,409A through409C, the reverse axis of the first middle piezoelectric layer107,407A through407C, may cooperate for the piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency) of the example resonators. The second middle piezoelectric layer109,409A through409C, may be sandwiched between the first middle piezoelectric layer107,407A through407C, and the top piezoelectric layer111,411A through411C, for example, in the alternating axis arrangement in the respective stack104,404A through404C. For example, the normal axis of the second middle piezoelectric layer109,409A through409C, may oppose the reverse axis of the first middle piezoelectric layer107,407A through407C, and the reverse axis of the top piezoelectric layer111,411A through411C. In opposing the reverse axis of the first middle piezoelectric layer107,407A through407C, and the reverse axis of the top piezoelectric layer111,411A through411C, the normal axis of the second middle piezoelectric layer109,409A through409C, may cooperate for the piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency) of the example resonators. Similarly, the alternating axis arrangement of the bottom piezoelectric layer105,405A through405C, and the first middle piezoelectric layer107,407A through407C, and the second middle piezoelectric layer109,409A through409C, and the top piezoelectric layer111,411A through411C, in the respective stack104,404A through404C may cooperate for the piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency) of the example resonators. Despite differing in their alternating axis arrangement in the respective stack104,404A through404C, the bottom piezoelectric layer105,405A through405C and the first middle piezoelectric layer107,407A through407C, and the second middle piezoelectric layer109,409A through409C, and the top piezoelectric layer111,411A through411C, may all be made of the same piezoelectric material, e.g., Aluminum Nitride (AlN). 
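A minimal sketch of the alternating-axis arrangement described above, representing each of the four piezoelectric layers by its c-axis orientation; the layer labels simply mirror the reference numerals and are otherwise illustrative.

# Sketch: the alternating c-axis arrangement of the four-layer piezoelectric stack.
# Even-indexed layers are normal axis (c-axis down) and odd-indexed layers are reverse axis
# (c-axis up), matching the bottom / first middle / second middle / top layers described above.

layer_names = ["bottom (105)", "first middle (107)", "second middle (109)", "top (111)"]

def axis_orientation(index):
    return "normal (down)" if index % 2 == 0 else "reverse (up)"

stack = [(name, axis_orientation(i)) for i, name in enumerate(layer_names)]
for name, axis in stack:
    print(f"{name:20s} -> {axis}")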
Respective layers of piezoelectric material in the stack104,404A through404C, ofFIG.1AandFIGS.4A through4Cmay have respective layer thicknesses of about one half wavelength (e.g., one half acoustic wavelength) of the main resonant frequency of the example resonators. For example, respective layers of piezoelectric material in the stack104,404A through404C, ofFIG.1AandFIGS.4A through4Cmay have respective layer thicknesses so that (e.g., selected so that) the respective bulk acoustic wave resonators100,400A through400C may have respective resonant frequencies that are in a Super High Frequency (SHF) band or an Extremely High Frequency (EHF) band (e.g., respective resonant frequencies that are in a Super High Frequency (SHF) band, e.g., respective resonant frequencies that are in an Extremely High Frequency (EHF) band). For example, respective layers of piezoelectric material in the stack104,404A through404C, ofFIG.1AandFIGS.4A through4Cmay have respective layer thicknesses so that (e.g., selected so that) the respective bulk acoustic wave resonators100,400A through400C may have respective resonant frequencies that are in a millimeter wave band. For example, for an approximately twenty-four gigahertz (e.g., 24 GHz) main resonant frequency of the example resonators, the bottom piezoelectric layer105,405A through405C, may have a layer thickness corresponding to about one half of a wavelength (e.g., about one half of an acoustic wavelength) of the main resonant frequency, and may be about two thousand Angstroms (2000 A). Similarly, the first middle piezoelectric layer107,407A through407C, may have a layer thickness corresponding to one half of the wavelength (e.g., one half of the acoustic wavelength) of the main resonant frequency; the second middle piezoelectric layer109,409A through409C, may have a layer thickness corresponding to one half of the wavelength (e.g., one half of the acoustic wavelength) of the main resonant frequency; and the top piezoelectric layer111,411A through411C, may have a layer thickness corresponding to one half of the wavelength (e.g., one half of the acoustic wavelength) of the main resonant frequency. Piezoelectric layer thickness may be scaled up or down to determine the main resonant frequency. The example resonators100,400A through400C, ofFIG.1AandFIGS.4A through4Cmay comprise: a bottom acoustic reflector113,413A through413C (e.g., multi-layer bottom acoustic reflector113,413A through413C, e.g., multi-layer metal bottom acoustic reflector electrode113,413A through413C), e.g., including an acoustically reflective bottom electrode stack of a plurality of bottom metal electrode layers; and a harmonically tuned top sensor electrode115,415A through415C. Accordingly, the bottom acoustic reflector113,413A through413C, may be a bottom multi-layer acoustic reflector. The piezoelectric layer stack104,404A through404C, may be sandwiched between the plurality of bottom metal electrode layers of the bottom acoustic reflector113,413A through413C, and the top metal electrode layer of the harmonically tuned top sensor electrode115,415A through415C. Harmonically tuned top sensor electrode115,415A through415C may comprise the relatively high acoustic impedance metal, for example, Tungsten, Ruthenium or Molybdenum. In other examples, harmonically tuned top sensor electrode115,415A through415C may comprise (at least partially) a relatively large electrical conductivity material, for example, Aluminum or Gold.
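The approximately half-wavelength layer thicknesses discussed above follow from t = v / (2 f). A minimal sketch, assuming a nominal AlN longitudinal acoustic velocity; the effective velocity of a loaded stack is somewhat lower, which is broadly consistent with the approximately 2000 Angstrom figure quoted above for a roughly 24 GHz design.

# Sketch: half-wavelength piezoelectric layer thickness versus main resonant frequency.
v_aln = 11000.0     # assumed AlN longitudinal acoustic velocity, m/s
for f_ghz in (24.0, 24.25, 28.0):
    f_hz = f_ghz * 1e9
    t_angstrom = v_aln / (2.0 * f_hz) * 1e10    # half-wavelength layer thickness, Angstroms
    print(f"{f_ghz:5.2f} GHz -> ~{t_angstrom:.0f} A per piezoelectric layer")

The same relation shows the scaling stated above: doubling the main resonant frequency roughly halves each piezoelectric layer thickness.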
The piezoelectric layer stack104,404A through404C, may be electrically and acoustically coupled with the plurality of bottom metal electrode layers of the bottom acoustic reflector113,413A through413C and the top metal electrode layer of the harmonically tuned top sensor electrode115,415A through415C, to excite the piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency). For example, such excitation may be done by using the plurality of bottom metal electrode layers of the bottom acoustic reflector113,413A through413C and the top metal electrode layer of the harmonically tuned top sensor electrode115,415A through415C to apply an oscillating electric field having a frequency corresponding to the resonant frequency (e.g., main resonant frequency) of the piezoelectric layer stack104,404A through404C, and of the example resonators100,400A through400C. For example, the piezoelectric layer stack104,404A through404C, may be electrically and acoustically coupled with the plurality of bottom metal electrode layers of the bottom acoustic reflector113,413A through413C and the top metal electrode layer of the harmonically tuned top sensor electrode115,415A through415C, to excite the piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency). For example, the bottom piezoelectric layer105,405A through405C, may be electrically and acoustically coupled with the plurality of bottom metal electrode layers of the bottom acoustic reflector113,413A through413C and the top metal electrode layer of the harmonically tuned top sensor electrode115,415A through415C, to excite the piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency) of the bottom piezoelectric layer105,405A through405C. Further, the bottom piezoelectric layer105,405A through405C and the first middle piezoelectric layer107,407A through407C, may be electrically and acoustically coupled with the plurality of bottom metal electrode layers of the bottom acoustic reflector113,413A through413C, and the top metal electrode layer of the harmonically tuned top sensor electrode115,415A through415C, to excite the piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency) of the bottom piezoelectric layer105,405A through405C, acoustically coupled with the first middle piezoelectric layer107,407A through407C. 
Additionally, the first middle piezoelectric layer107,407A through407C, may be sandwiched between the bottom piezoelectric layer105,405A through405C and the second middle piezoelectric layer109,409A through409C, and may be electrically and acoustically coupled with the plurality of bottom metal electrode layers of the bottom acoustic reflector113,413A through413C, and the top metal electrode layer of the harmonically tuned top sensor electrode115,415A through415C, to excite the piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency) of the first middle piezoelectric layer107,407A through407C, sandwiched between the bottom piezoelectric layer105,405A through405C, and the second middle piezoelectric layer109,409A through409C. The acoustically reflective bottom electrode stack of the plurality of bottom metal electrode layers of the bottom acoustic reflector113,413A through413C, may have an alternating arrangement of low acoustic impedance metal layer and high acoustic impedance metal layer. The acoustically reflective bottom electrode stack of the plurality of bottom metal electrode layers of the bottom acoustic reflector113,413A through413C may approximate a distributed Bragg acoustic reflector, e.g. a metal distributed Bragg acoustic reflector. The plurality of metal bottom electrode layers of the bottom acoustic reflector may be electrically coupled (e.g., electrically interconnected) with one another. The acoustically reflective bottom electrode stack of the plurality of bottom metal electrode layers may operate together as a multi-layer (e.g., bi-layer, e.g., multiple layer) bottom electrode for the bottom acoustic reflector113,413A through413C. In the alternating arrangement of low acoustic impedance metal layer and high acoustic impedance metal layer of the acoustically reflective bottom electrode stack, there may be a first pair of bottom metal electrode layers119,419A through419C and121,421A through421C. A first member119,419A through419C, of the first pair of bottom metal electrode layers may comprise a relatively low acoustic impedance metal, for example, Titanium having an acoustic impedance of about 27 MegaRayls, or for example, Aluminum having an acoustic impedance of about 18 MegaRayls. A second member121,421A through421C, of the first pair of bottom metal electrode layers may comprise the relatively high acoustic impedance metal, for example, Tungsten or Molybdenum. Accordingly, the first pair of bottom metal electrode layers119,419A through419C, and121,421A through421C, of the bottom acoustic reflector113,413A through413C, may be different metals, and may have respective acoustic impedances that are different from one another so as to provide a reflective acoustic impedance mismatch at the resonant frequency (e.g., main resonant frequency). Similarly, the first member119,419A through419C, and the second member121,421A through421C, of the first pair of bottom metal electrode layers of the bottom acoustic reflector113,413A through413C, may be different metals, and may have respective acoustic impedances that are different from one another so as to provide a reflective acoustic impedance mismatch at the resonant frequency (e.g., main resonant frequency).
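The reflective acoustic impedance mismatch described above can be quantified by the normal-incidence reflection coefficient r = (Z2 - Z1) / (Z2 + Z1). In the minimal sketch below, the Ti and Al impedances are the values quoted above, while the W and Mo impedances are assumed nominal values, not figures from this description.

# Sketch: acoustic reflection coefficient at a low/high acoustic impedance metal interface.
Z = {"Al": 18.0, "Ti": 27.0, "Mo": 63.0, "W": 100.0}  # acoustic impedance, MegaRayls

def reflection(z1, z2):
    return (z2 - z1) / (z2 + z1)

for low, high in (("Ti", "W"), ("Al", "W"), ("Ti", "Mo")):
    r = reflection(Z[low], Z[high])
    print(f"{low}/{high} interface: |r| ~= {abs(r):.2f}")

A larger impedance contrast gives a larger per-interface reflection, which is why alternating low and high acoustic impedance metal layers can approximate a distributed Bragg acoustic reflector with relatively few pairs.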
Next in the alternating arrangement of low acoustic impedance metal layer and high acoustic impedance metal layer of the acoustically reflective bottom electrode stack, a second pair of bottom metal electrode layers123,423A through423C, and125,425A through425C, may respectively comprise the relatively low acoustic impedance metal and the relatively high acoustic impedance metal. Accordingly, members of the first and second pairs of bottom metal electrode layers119,419A through419C,121,421A through421C,123,423A through423C,125,425A through425C, may have respective acoustic impedances in the alternating arrangement to provide a corresponding plurality of reflective acoustic impedance mismatches. Next in the alternating arrangement of low acoustic impedance metal layer and high acoustic impedance metal layer of the acoustically reflective bottom electrode stack, a third pair of bottom metal electrode layers127,129may respectively comprise the relatively low acoustic impedance metal and the relatively high acoustic impedance metal. Next in the alternating arrangement of low acoustic impedance metal layer and high acoustic impedance metal layer of the acoustically reflective bottom electrode stack, a fourth pair of bottom metal electrode layers131,133may respectively comprise the relatively low acoustic impedance metal and the relatively high acoustic impedance metal. Respective thicknesses of the bottom metal electrode layers may be related to wavelength (e.g., acoustic wavelength) for the main resonant frequency of the example bulk acoustic wave resonators,100,400A through400C. Further, various embodiments for resonators having relatively higher resonant frequency (higher main resonant frequency) may have relatively thinner bottom metal electrode thicknesses, e.g., scaled thinner with relatively higher resonant frequency (e.g., higher main resonant frequency). Similarly, various alternative embodiments for resonators having relatively lower resonant frequency (e.g., lower main resonant frequency) may have relatively thicker bottom metal electrode layer thicknesses, e.g., scaled thicker with relatively lower resonant frequency (e.g., lower main resonant frequency). Further, the bottom acoustic reflectors113,413A through413C may be acoustically de-tuned from respective resonant frequencies of the respective BAW resonators100,400A through400C. For example, respective multi-layer bottom acoustic reflectors113,413A through413C (e.g., respective multi-layer bottom acoustic reflector electrodes113,413A through413C, e.g., respective multi-layer metal bottom acoustic reflector electrodes113,413A through413C) may approximate respective distributed Bragg reflectors113,413A through413C, (e.g., respective metal distributed Bragg reflectors113,413A through413C), which may be acoustically de-tuned from respective resonant frequencies of the respective BAW resonators100,400A through400C. For example, respective bottom acoustic layers of the respective de-tuned multi-layer bottom acoustic reflectors113,413A through413C may have respective layer thicknesses selected so that the respective de-tuned multi-layer acoustic reflectors113,413A through413C may have respective quarter wavelength resonant frequencies that may be acoustically de-tuned from the respective resonant frequencies of the respective BAW resonators100,400A through400C. 
For example, bottom metal electrode layers (e.g., first pair of bottom metal electrode layers119,419A through419C,121,421A through421C, e.g., second pair of bottom metal electrode layers123,423A through423C,125,425A through425C, e.g., third pair of bottom metal electrode layers127,129, fourth pair of bottom metal electrode layers131,133) may have respective layer thicknesses selected so that the respective de-tuned multi-layer acoustic reflectors113,413A through413C may have respective quarter wavelength resonant frequencies that may be acoustically de-tuned to be below the respective resonant frequencies of the respective BAW resonators100,400A through400C. For example, for a 24 GHz resonator, (e.g., resonator having a main resonant frequency of about 24 GHz) bottom metal electrode layers may have respective layer thicknesses selected so that the respective de-tuned multi-layer bottom acoustic reflectors113,413A through413C may have respective quarter wavelength resonant frequencies that may be acoustically de-tuned to be below (e.g., 2 GHz below) the respective resonant frequencies of the respective BAW resonators100,400A through400C, e.g., acoustically de-tuned to about 22 GHz. As will be discussed in greater detail subsequently herein, bottom acoustic reflector de-tuning may facilitate suppressing parasitic (e.g., undesired) lateral resonances in acoustic resonators, for example, in respective BAW resonators100,400A through400C. In various differing examples, multi-layer bottom acoustic reflectors (e.g., the multi-layer bottom acoustic reflectors113,413A through413C) may be de-tuned (e.g. tuned down in frequency) by various differing amounts from the resonant frequency (e.g. main resonant frequency) of the BAW resonator. As discussed in greater detail subsequently herein, in examples having about one or two piezoelectric layers in an alternating piezoelectric axis stack arrangement, the de-tuned multi-layer bottom acoustic reflector (e.g., the multi-layer metal bottom acoustic reflector electrode) may be acoustically de-tuned (e.g. tuned down in frequency) from the resonant frequency (e.g. main resonant frequency) of the BAW resonator. For example in the figures, the first member of the first pair of bottom metal electrode layers119,419A through419C, of the bottom acoustic reflector113,413A through413C, is depicted as relatively thicker (e.g., thickness T01of the first member of the first pair of bottom metal electrode layers119,419A through419C is depicted as relatively thicker) than thickness of remainder bottom acoustic layers (e.g., than thicknesses T02through T08of remainder bottom metal electrode layers). For example, a thickness T01may be about 9% greater, e.g., substantially greater, than an odd multiple (e.g., 1×, 3×, etc.) of a quarter of a wavelength (e.g., 9% greater than one quarter of the acoustic wavelength) for the first member of the first pair of bottom metal electrode layers119,419A through419C. For example, if Titanium is used as the low acoustic impedance metal for a 24 GHz resonator (e.g., resonator having a main resonant frequency of about 24 GHz), a thickness T01may be about 690 Angstroms, 690 A, for the first member of the first pair of bottom metal electrode layers119,419A through419C, of the bottom acoustic reflector113,413A through413C, while respective layer thicknesses, T02through T08, shown in the figures for corresponding members of the pairs of bottom metal electrode layers may be substantially thinner than T01. 
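A minimal sketch of the quarter-wavelength electrode layer thickness t = v / (4 f) and of the roughly 9% thicker first member T01 discussed above, assuming nominal longitudinal acoustic velocities for Ti and W; the 22 GHz case corresponds to the de-tuned reflector described above, and all velocity values are assumptions rather than figures from this description.

# Sketch: quarter-wavelength bottom electrode layer thicknesses for a roughly 24 GHz design.
def quarter_wave_angstrom(velocity_m_per_s, frequency_hz):
    # t = v / (4 f), converted from metres to Angstroms
    return velocity_m_per_s / (4.0 * frequency_hz) * 1e10

f_main = 24e9       # main resonant frequency of the example resonator, Hz
f_detuned = 22e9    # de-tuned reflector quarter-wave resonance, Hz (from the example above)

for metal, v in (("W", 5180.0), ("Ti", 6070.0)):   # assumed longitudinal velocities, m/s
    print(f"{metal}: ~{quarter_wave_angstrom(v, f_main):.0f} A at 24 GHz, "
          f"~{quarter_wave_angstrom(v, f_detuned):.0f} A if de-tuned to 22 GHz")

t01 = 1.09 * quarter_wave_angstrom(6070.0, f_main)  # first, thicker Ti member (about 9% more)
print(f"T01 (first Ti member) ~= {t01:.0f} A")

With these assumed velocities the sketch reproduces thicknesses on the order of the approximately 690 Angstrom T01 quoted above and the quarter-wavelength values given in the following paragraph.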
Respective layer thicknesses, T02through T08, shown inFIG.1Afor corresponding members of the pairs of bottom metal electrode layers may be about an odd multiple (e.g., 1×, 3×, etc.) of a quarter of a wavelength (e.g., one quarter of the acoustic wavelength) at the main resonant frequency of the example resonator. However, the foregoing may be varied. For example, members of the pairs of bottom metal electrode layers of the bottom acoustic reflector may have respective layer thickness that are within a range from about one eighth to about one half wavelength at the resonant frequency, or an odd multiple (e.g., 1×, 3×, etc.) thereof. In an example, if Tungsten is used as the high acoustic impedance metal, and the main resonant frequency of the resonator is approximately twenty-four gigahertz (e.g., 24 GHz), then using the one quarter of the wavelength (e.g., one quarter of the acoustic wavelength) provides the layer thickness of the high impedance metal electrode layer members of the pairs as about five hundred and forty Angstroms (540 A). For example, if Titanium is used as the low acoustic impedance metal, and the main resonant frequency of the resonator is approximately twenty-four gigahertz (e.g., 24 GHz), then using the one quarter of the wavelength (e.g., one quarter of the acoustic wavelength) provides the layer thickness of the low impedance metal electrode layer members of the second, third and fourth pairs as about six hundred and thirty Angstroms (630 A). Similarly, respective layer thicknesses for members of the remainder pairs of bottom metal electrode layers shown inFIGS.4A through4C(e.g., second, third and fourth pairs) may likewise be about one quarter of the wavelength (e.g., one quarter of the acoustic wavelength) of the main resonant frequency of the example resonator, and these respective layer thicknesses may likewise be determined for members of the pairs of bottom metal electrode layers for the high and low acoustic impedance metals employed. As shown in the figures, a second member121,421A through421C of the first pair of bottom metal electrode layers may have a relatively high acoustic impedance (e.g., high acoustic impedance metal layer121,421A through421C, e.g. tungsten metal layer121,421A through421C). A first member119,419A through419C of the first pair of bottom metal electrode layers may have a relatively low acoustic impedance (e.g., low acoustic impedance metal layer119,419A through419C, e.g., titanium metal layer119,419A through419C). This relatively low acoustic impedance of the first member119,419A through419C of the first pair may be relatively lower than the acoustic impedance of the second member121,421A through421C of the first pair. The first member119,419A through419C having the relatively lower acoustic impedance may abut a layer of piezoelectric material (e.g. may abut bottom piezoelectric layer105,405A through405C, e.g. may abut piezoelectric stack104,404A through404C). This arrangement may facilitate suppressing parasitic lateral resonances in operation of the BAW resonator. The first member119,419A through419C having the relatively lower acoustic impedance may be arranged nearest to a layer of piezoelectric material (e.g. may be arranged nearest to bottom piezoelectric layer105,405A through405C, e.g. may be arranged nearest to piezoelectric stack104,404A through404C) relative to other bottom acoustic layers of the bottom acoustic reflector113,413A through413C (e.g. 
relative to the second member121,421A through421C of the first pair of bottom metal electrode layers, the second pair of bottom metal electrode layers123,423A through423C,125,425A through425C, the third pair of bottom metal electrode layers127,427A through427C,129,429A through429C, and the fourth pair of bottom metal electrodes131,431A through431C,133,433A through433C). This arrangement may facilitate suppressing parasitic lateral resonances in operation of the BAW resonator. The first member119,419A through419C having the relatively lower acoustic impedance may be arranged sufficiently proximate to a layer of piezoelectric material (e.g. may be arranged sufficiently proximate to bottom piezoelectric layer105,405A through405C, e.g. may be arranged sufficiently proximate to piezoelectric stack104,404A through404C), so that the first member119,419A through419C having the relatively lower acoustic impedance may contribute more to the multi-layer metal bottom acoustic reflector electrode113,413A through413C being acoustically de-tuned from the resonant frequency of the BAW resonator than is contributed by any other bottom metal electrode layer of the multi-layer metal bottom acoustic reflector electrode113,413A through413C (e.g., contribute more than the second member121,421A through421C of the first pair of bottom metal electrode layers, e.g., contribute more than the first member123,423A through423C of the second pair of bottom metal electrode layers, e.g., contribute more than the second member125,425A through425C of the second pair of bottom metal electrode layers, e.g., contribute more than the first member127,427A through427C of the third pair of bottom metal electrode layers, e.g., contribute more than the second member129,429A through429C of the third pair of bottom metal electrode layers, e.g., contribute more than the first member131,431A through431C of the fourth pair of bottom metal electrodes, e.g., contribute more than the second member133,433A through433C of the fourth pair of bottom metal electrodes). The first member119,419A through419C having the relatively lower acoustic impedance may be arranged sufficiently proximate to a layer of piezoelectric material (e.g. may be arranged sufficiently proximate to bottom piezoelectric layer105,405A through405C, e.g.
may be arranged sufficiently proximate to piezoelectric stack104,404A through404C), so that the first member119,419A through419C having the relatively lower acoustic impedance may contribute more to facilitate suppressing parasitic lateral resonances in operation of the BAW resonator than is contributed by any other bottom metal electrode layer of the multi-layer metal bottom acoustic reflector electrode113,413A through413C (e.g., contribute more than the second member121,421A through421C of the first pair of bottom metal electrode layers, e.g., contribute more than the first member123,423A through423C of the second pair of bottom metal electrode layers, e.g., contribute more than the second member125,425A through425C of the second pair of bottom metal electrode layers, e.g., contribute more than the first member127,427A through427C of the third pair of bottom metal electrode layers, e.g., contribute more than the second member129,429A through429C of the third pair of bottom metal electrode layers, e.g., contribute more than the first member131,431A through431C of the fourth pair of bottom metal electrodes, e.g., contribute more than the second member133,433A through433C of the fourth pair of bottom metal electrodes). For example, the bottom piezoelectric layer105,405A through405C, may be electrically and acoustically coupled with pair(s) of bottom metal electrode layers (e.g., first pair of bottom metal electrode layers119,419A through419C,121,421A through421C, e.g., second pair of bottom metal electrode layers123,423A through423C,125,425A through425C, e.g., third pair of bottom metal electrode layers127,129, fourth pair of bottom metal electrode layers131,133), to excite the piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency) of the bottom piezoelectric layer105,405A through405C. Further, the bottom piezoelectric layer105,405A through405C and the first middle piezoelectric layer107,407A through407C may be electrically and acoustically coupled with pair(s) of bottom metal electrode layers (e.g., first pair of bottom metal electrode layers119,419A through419C,121,421A through421C, e.g., second pair of bottom metal electrode layers123,423A through423C,125,425A through425C, e.g., third pair of bottom metal electrode layers127,129), to excite the piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency) of the bottom piezoelectric layer105,405A through405C acoustically coupled with the first middle piezoelectric layer107,407A through407C. 
Additionally, the first middle piezoelectric layer107,407A through407C, may be sandwiched between the bottom piezoelectric layer105,405A through405C, and the second middle piezoelectric layer109,409A through409C, and may be electrically and acoustically coupled with pair(s) of bottom metal electrode layers (e.g., first pair of bottom metal electrode layers119,419A through419C,121,421A through421C, e.g., second pair of bottom metal electrode layers123,423A through423C,125,425A through425C, e.g., third pair of bottom metal electrode layers127,129), to excite the piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) at the resonant frequency (e.g., main resonant frequency) of the first middle piezoelectric layer107,407A through407C, sandwiched between the bottom piezoelectric layer105,405A through405C, and the second middle piezoelectric layer109,409A through409C. Another mesa structure113,413A through413C, (e.g., second mesa structure113,413A through413C), may comprise the bottom acoustic reflector113,413A through413C. The another mesa structure113,413A through413C, (e.g., second mesa structure113,413A through413C), may comprise one or more pair(s) of bottom metal electrode layers (e.g., first pair of bottom metal electrode layers119,419A through419C,121,421A through421C, e.g., second pair of bottom metal electrode layers123,423A through423C,125,425A through425C, e.g., third pair of bottom metal electrode layers127,129, e.g., fourth pair of bottom metal electrode layers131,133). Further, the respective harmonically tuned top sensor electrodes115,415A through415C may be acoustically de-tuned from respective resonant frequencies of the respective BAW resonators100,400A through400C. For example, respective harmonically tuned top sensor electrode115,415A through415C (e.g., respective top acoustic electrodes115,415A through415C, e.g., respective metal top electrodes115,415A through415C) may be acoustically de-tuned from respective resonant frequencies of the respective BAW resonators100,400A through400C. For example, respective de-tuned harmonically tuned top sensor electrodes115,415A through415C may have respective metal electrode layer thicknesses selected so that the respective de-tuned harmonically tuned top sensor electrodes115,415A through415C may have respective half wavelength resonant frequencies that may be acoustically de-tuned from the respective resonant frequencies of the respective BAW resonators100,400A through400C. For example, top metal electrode layers of de-tuned harmonically tuned top sensor electrodes115,415A through415C may have respective layer thicknesses selected so that the respective de-tuned harmonically tuned top sensor electrodes115,415A through415C may have respective half wavelength resonant frequencies that may be acoustically de-tuned to be above the respective resonant frequencies of the respective BAW resonators100,400A through400C. For example, for a 24 GHz resonator, (e.g., resonator having a main resonant frequency of about 24 GHz) top metal electrode layers may have respective layer thicknesses selected so that the respective de-tuned harmonically tuned top sensor electrodes115,415A through415C may have respective half wavelength resonance frequencies that may be acoustically de-tuned to be above (e.g., 2 GHz above) the respective resonant frequencies of the respective BAW resonators100,400A through400C, e.g., acoustically de-tuned to about 26 GHz. 
As will be discussed in greater detail subsequently herein, top acoustic reflector de-tuning may facilitate suppressing parasitic (e.g., undesired) lateral resonances in acoustic resonators, for example, in respective BAW resonators100,400A through400C. In various differing examples, de-tuned harmonically tuned top sensor electrodes115,415A through415C may be de-tuned (e.g., tuned up in frequency) by various differing amounts from the resonant frequency (e.g. main resonant frequency) of the BAW resonator. As discussed in greater detail subsequently herein, in examples having about one or two piezoelectric layers in an alternating piezoelectric axis stack arrangement, the de-tuned harmonically tuned top sensor electrodes115,415A through415C may be acoustically de-tuned (e.g., tuned up in frequency) from the resonant frequency (e.g. main resonant frequency) of the BAW resonator by about up to about 5% of the resonant frequency (e.g. main resonant frequency) of the BAW resonator. It is theorized that this de-tuning by up to about 5% may facilitate suppression of parasitic lateral resonances for resonators comprising about one or two piezoelectric layers. In examples having about three piezoelectric layers to about six piezoelectric layers in an alternating piezoelectric axis stack arrangement, de-tuned harmonically tuned top sensor electrodes115,415A through415C may be acoustically de-tuned (e.g., tuned up in frequency) from the resonant frequency (e.g. main resonant frequency) of the BAW resonator by up to about 12% of the resonant frequency (e.g. main resonant frequency) of the BAW resonator. It is theorized that this de-tuning by up to about 12% may facilitate suppression of parasitic lateral resonances for resonators comprising the about three piezoelectric layers to about six piezoelectric layers. In examples having about seven piezoelectric layers to about eighteen piezoelectric layers, in an alternating piezoelectric axis stack arrangement, the de-tuned harmonically tuned top sensor electrodes115,415A through415C may be acoustically de-tuned (e.g., tuned up in frequency) from the resonant frequency (e.g. main resonant frequency) of the BAW resonator by up to about 36% of the resonant frequency (e.g. main resonant frequency) of the BAW resonator. It is theorized that this de-tuning by up to about 36% may facilitate suppression of parasitic lateral resonances for resonators comprising the about seven piezoelectric layers to about eighteen piezoelectric layers. In examples having greater than about eighteen piezoelectric layers, in an alternating piezoelectric stack arrangement, the de-tuned harmonically tuned top sensor electrodes115,415A through415C may be acoustically de-tuned (e.g., tuned up in frequency) from the resonant frequency (e.g. main resonant frequency) of the BAW resonator by greater than about 36% of the resonant frequency (e.g. main resonant frequency) of the BAW resonator. It is theorized that this de-tuning by greater than 36% may facilitate suppression of parasitic lateral resonances for resonators comprising greater than eighteen piezoelectric layers. Respective thicknesses of the de-tuned harmonically tuned top sensor electrodes115,415A through415C may be related to wavelength (e.g., acoustic wavelength) for the main resonant frequency of the example bulk acoustic wave resonators,100,400A through400C. 
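A minimal sketch combining the de-tuning ranges described above with the half-wavelength thickness of a Molybdenum top sensor electrode. The de-tuning breakpoints follow the paragraph above, while the Mo acoustic velocity and the placeholder fraction used for more than eighteen piezoelectric layers are assumptions for illustration only.

# Sketch: choosing the top sensor electrode de-tuning and its half-wavelength thickness.
def max_detune_fraction(num_piezo_layers):
    if num_piezo_layers <= 2:
        return 0.05     # up to about 5%
    if num_piezo_layers <= 6:
        return 0.12     # up to about 12%
    if num_piezo_layers <= 18:
        return 0.36     # up to about 36%
    return 0.40         # "greater than about 36%"; 0.40 is an illustrative placeholder

def half_wave_thickness_angstrom(velocity, frequency):
    return velocity / (2.0 * frequency) * 1e10

f_main = 24e9           # main resonant frequency, Hz
v_mo = 6250.0           # assumed Mo longitudinal acoustic velocity, m/s

for n_layers in (2, 4, 10):
    f_electrode = f_main * (1.0 + max_detune_fraction(n_layers))
    t = half_wave_thickness_angstrom(v_mo, f_electrode)
    print(f"{n_layers:2d} piezo layers: de-tune up to ~{f_electrode/1e9:.1f} GHz, "
          f"Mo half-wave electrode ~{t:.0f} A")
print(f"Un-de-tuned 24 GHz Mo half-wave electrode ~"
      f"{half_wave_thickness_angstrom(v_mo, f_main):.0f} A")

Tuning the electrode's half-wave resonance up in frequency thins the electrode, consistent with the approximately 1300 Angstrom half-wavelength Molybdenum thickness quoted for the 24 GHz case in the following paragraph.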
Further, various embodiments for resonators having relatively higher main resonant frequency may have relatively thinner harmonically tuned top sensor electrode thicknesses, e.g., scaled thinner with relatively higher main resonant frequency. Similarly, various alternative embodiments for resonators having relatively lower main resonant frequency may have relatively thicker harmonically tuned top sensor electrode thicknesses, e.g., scaled thicker with relatively lower main resonant frequency. In an example, if Molybdenum is used as the high acoustic impedance metal, and the main resonant frequency of the resonator is approximately twenty-four gigahertz (e.g., 24 GHz), then using the half wavelength (e.g., half acoustic wavelength) may provide the layer thickness of the high impedance metal of the harmonically tuned top sensor electrodes as about one thousand three hundred Angstroms (1300 A). The bottom acoustic reflector113,413A through413C, may have a thickness dimension T23extending along the stack of bottom electrode layers. For the example of the 24 GHz resonator, the thickness dimension T23of the bottom acoustic reflector may be about five thousand Angstroms (5,000 A). The harmonically tuned top sensor electrodes115,415A through415C, may have a thickness dimension T25extending along the harmonically tuned top sensor electrodes. For the example of the 24 GHz resonator, the thickness dimension T25of the harmonically tuned top sensor electrodes115,415A through415C may be about one thousand three hundred Angstroms (1300 A). The piezoelectric layer stack104,404A through404C, may have a thickness dimension T27extending along the piezoelectric layer stack104,404A through404C. For the example of the 24 GHz resonator, the thickness dimension T27of the piezoelectric layer stack may be about eight thousand Angstroms (8,000 A). In the example resonators100,400A through400C, ofFIG.1AandFIGS.4A through4C, a notional heavy dashed line is used in depicting an etched edge region153,453A through453C, associated with the example resonators100,400A through400C. Similarly, a laterally opposing etched edge region154,454A through454C is arranged laterally opposing or opposite from the notional heavy dashed line depicting the etched edge region153,453A through453C. The etched edge region may, but need not, assist with acoustic isolation of the resonators. The etched edge region may, but need not, help with avoiding acoustic losses for the resonators. The etched edge region153,453A through453C, (and the laterally opposing etched edge region154,454A through454C) may extend along the thickness dimension T27of the piezoelectric layer stack104,404A through404C. The etched edge region153,453A through453C, may extend through (e.g., entirely through or partially through) the piezoelectric layer stack104,404A through404C. Similarly, the laterally opposing etched edge region154,454A through454C may extend through (e.g., entirely through or partially through) the piezoelectric layer stack104,404A through404C. The etched edge region153,453A through453C, (and the laterally opposing etched edge region154,454A through454C) may extend through (e.g., entirely through or partially through) the bottom piezoelectric layer105,405A through405C. The etched edge region153,453A through453C, (and the laterally opposing etched edge region154,454A through454C) may extend through (e.g., entirely through or partially through) the first middle piezoelectric layer107,407A through407C. 
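The approximately 1300 Angstrom figure quoted above follows from the half-wavelength relation thickness = velocity / (2 × frequency); the Molybdenum longitudinal acoustic velocity used below (about 6300 m/s) is an assumed representative thin-film value rather than a number stated in this disclosure:

```python
def half_wavelength_thickness_angstroms(acoustic_velocity_m_s: float, f_hz: float) -> float:
    """Thickness of an ideal half-wavelength layer, in Angstroms."""
    thickness_m = acoustic_velocity_m_s / (2.0 * f_hz)
    return thickness_m * 1e10  # meters -> Angstroms

V_MO = 6300.0   # m/s, assumed longitudinal velocity for a Mo thin film
F_MAIN = 24e9   # 24 GHz main resonant frequency

t25 = half_wavelength_thickness_angstroms(V_MO, F_MAIN)
print(f"harmonically tuned Mo top electrode: ~{t25:.0f} A")
# ~1310 A, close to the approximately 1300 A thickness dimension T25 cited above
```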
The etched edge region153,453A through453C, (and the laterally opposing etched edge region154,454A through454C) may extend through (e.g., entirely through or partially through) the second middle piezoelectric layer109,409A through409C. The etched edge region153,453A through453C, (and the laterally opposing etched edge region154,454A through454C) may extend through (e.g., entirely through or partially through) the top piezoelectric layer111,411A through411C. The etched edge region153,453A through453C, (and the laterally opposing etched edge region154,454A through454C) may extend along the thickness dimension T23of the bottom acoustic reflector113,413A through413C. The etched edge region153,453A through453C, (and the laterally opposing etched edge region154,454A through454C) may extend through (e.g., entirely through or partially through) the bottom acoustic reflector113,413A through413C. The etched edge region153,453A through453C, (and the laterally opposing etched edge region154,454A through454C) may extend through (e.g., entirely through or partially through) the first pair of bottom metal electrode layers,119,419A through419C,121,421A through421C. The etched edge region153,453A through453C (and the laterally opposing etched edge region154,454A through454C) may extend through (e.g., entirely through or partially through) the second pair of bottom metal electrode layers,123,423A through423C,125,425A through425C. The etched edge region153,453A through453C (and the laterally opposing etched edge region154,454A through454C) may extend through (e.g., entirely through or partially through) the third pair of bottom metal electrode layers,127,129. The etched edge region153,453A through453C (and the laterally opposing etched edge region154,454A through454C) may extend through (e.g., entirely through or partially through) the fourth pair of bottom metal electrode layers,131,133. The etched edge region153,453A through453C (and the laterally opposing etched edge region154,454A through454C) may extend along the thickness dimension T25of the harmonically tuned top sensor electrode115,415A through415C. The etched edge region153,453A through453C (and the laterally opposing etched edge region154,454A through454C) may extend through (e.g., entirely through or partially through) the harmonically tuned top sensor electrode115,415A through415C. The example resonators100,400A through400C, ofFIG.1AandFIGS.4A through4Cmay include one or more (e.g., one or a plurality of) interposer layers sandwiched between piezoelectric layers of the stack104,404A through404C. For example, a first interposer layer159,459A through459C may be sandwiched between the bottom piezoelectric layer105,405A through405C, and the first middle piezoelectric layer107,407A through407C. For example, a second interposer layer161,461A through461C, may be sandwiched between the first middle piezoelectric layer107,407A through407C, and the second middle piezoelectric layer109,409A through409C. For example, a third interposer layer163,463A through463C, may be sandwiched between the second middle piezoelectric layer109,409A through409C, and the top piezoelectric layer111,411A through411C. One or more (e.g., one or a plurality of) interposer layers may be metal interposer layers. The metal interposer layers may be relatively high acoustic impedance metal interposer layers (e.g., using relatively high acoustic impedance metals such as Tungsten (W) or Molybdenum (Mo)). 
Such metal interposer layers may (but need not) flatten stress distribution across adjacent piezoelectric layers and may (but need not) raise effective electromechanical coupling coefficient (Kt2) of adjacent piezoelectric layers. Alternatively or additionally, one or more (e.g., one or a plurality of) interposer layers may be dielectric interposer layers. The dielectric of the dielectric interposer layers may be a dielectric that has a positive acoustic velocity temperature coefficient, so acoustic velocity increases with increasing temperature of the dielectric. The dielectric of the dielectric interposer layers may be, for example, silicon dioxide. Dielectric interposer layers may, but need not, facilitate compensating for frequency response shifts with increasing temperature. Most materials (e.g., metals, e.g., dielectrics) generally have a negative acoustic velocity temperature coefficient, so acoustic velocity decreases with increasing temperature of such materials. Accordingly, increasing device temperature generally causes response of resonators and filters to shift downward in frequency. Including dielectric (e.g., silicon dioxide) that instead has a positive acoustic velocity temperature coefficient may facilitate countering or compensating (e.g., temperature compensating) this downward shift in frequency with increasing temperature. Alternatively or additionally, one or more (e.g., one or a plurality of) interposer layers may comprise metal and dielectric for respective interposer layers. For example, high acoustic impedance metal layer such as Tungsten (W) or Molybdenum (Mo) may (but need not) raise effective electromechanical coupling coefficient (Kt2). Subsequently deposited amorphous dielectric layer such as Silicon Dioxide (SiO2) may (but need not) facilitate compensating for temperature dependent frequency shifts. Alternatively or additionally, one or more (e.g., one or a plurality of) interposer layers may comprise different metals for respective interposer layers. For example, high acoustic impedance metal layer such as Tungsten (W) or Molybdenum (Mo) may (but need not) raise effective electromechanical coupling coefficient (Kt2) while subsequently deposited metal layer with hexagonal symmetry such as Titanium (Ti) may (but need not) facilitate higher crystallographic quality of subsequently deposited piezoelectric layer. Alternatively or additionally, one or more (e.g., one or a plurality of) interposer layers may comprise different dielectrics for respective interposer layers. For example, high acoustic impedance dielectric layer such as Hafnium Dioxide (HfO2) may (but need not) raise effective electromechanical coupling coefficient (Kt2). Subsequently deposited amorphous dielectric layer such as Silicon Dioxide (SiO2) may (but need not) facilitate compensating for temperature dependent frequency shifts. In addition to the foregoing application of metal interposer layers to raise effective electromechanical coupling coefficient (Kt2) of adjacent piezoelectric layers, and the application of dielectric interposer layers to facilitate compensating for frequency response shifts with increasing temperature, interposer layers may, but need not, increase quality factor (Q-factor) and/or suppress irregular spectral response patterns characterized by sharp reductions in Q-factor known as "rattles". Q-factor of a resonator is a figure of merit in which increased Q-factor indicates a lower rate of energy loss per cycle relative to the stored energy of the resonator.
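As a rough, simplified illustration of the temperature-compensation idea above, the net temperature coefficient of frequency (TCF) of a stack may be approximated as a thickness-weighted average of per-layer coefficients; both this weighted-average model and the numeric coefficients below (about -25 ppm/°C for AlN, about +85 ppm/°C for SiO2) are assumptions for illustration only:

```python
def weighted_tcf(layers):
    """First-order estimate: thickness-weighted average of per-layer TCFs.

    layers: list of (thickness_angstroms, tcf_ppm_per_degC) tuples.
    """
    total = sum(t for t, _ in layers)
    return sum(t * tcf for t, tcf in layers) / total

# Assumed illustrative values: AlN piezoelectric stack with negative TCF,
# plus a thin SiO2 interposer with positive TCF.
aln_only = [(8000, -25.0)]
with_sio2 = [(8000, -25.0), (300, +85.0)]

print(f"AlN only:   {weighted_tcf(aln_only):6.1f} ppm/degC")
print(f"AlN + SiO2: {weighted_tcf(with_sio2):6.1f} ppm/degC  (downward drift reduced)")
```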
Increased Q-factor in resonators used in filters results in lower insertion loss and sharper roll-off in filters. The irregular spectral response patterns characterized by sharp reductions in Q-factor known as “rattles” may cause ripples in filter pass bands. Metal and/or dielectric interposer layer of suitable thicknesses and acoustic material properties (e.g., velocity, density) may be placed at appropriate places in the stack104,404A through404C, of piezoelectric layers, for example, proximate to the nulls of acoustic energy distribution in the stacks (e.g., between interfaces of piezoelectric layers of opposing axis orientation). Finite Element Modeling (FEM) simulations and varying parameters in fabrication prior to subsequent testing may help to optimize interposer layer designs for the stack. Thickness of interposer layers may, but need not, be adjusted to influence increased Q-factor and/or rattle suppression. It is theorized that if the interposer layer is too thin there is no substantial effect. Thus minimum thickness for the interposer layer may be about one mono-layer, or about five Angstroms (5 A). Alternatively, if the interposer layer is too thick, rattle strength may increase rather than being suppressed. Accordingly, an upper limit of interposer thickness may be about five-hundred Angstroms (500 A) for an approximately twenty-four gigahertz (24 GHz) resonator design, with limiting thickness scaling inversely with frequency for alternative resonator designs. It is theorized that below a series resonant frequency of resonators, Fs, Q-factor may not be systematically and significantly affected by including a single interposer layer. However, it is theorized that there may, but need not, be significant increases in Q-factor for inclusion of two or more interposer layers. In the example resonators100,400A through400C, ofFIG.1AandFIGS.4A through4C, a planarization layer165,465A through465C (e.g., passivation layer165,465A through465C) may be included. A suitable material may be used for planarization layer165,465A through465C, for example Silicon Dioxide (SiO2), Hafnium Dioxide (HfO2), polyimide, or BenzoCyclobutene (BCB). An isolation layer167,467A through467C, may also be included and arranged over the planarization layer165,465A-465C. For the acoustic resonator based sensor of this disclosure, a suitable dielectric material may be used for the isolation layer167,467A through467C, for example Silicon Nitride, Silicon Dioxide, or Aluminum Nitride. Thickness of isolation layer167,467A through467C may be controlled, for example, to be very thin. For example, thickness of isolation layer167,467A through467C may be within a range from approximately fifty Angstroms to approximately three hundred Angstroms (approximately 50 A to approximately 300 A) for resonators designed to operate at approximately 24 GHz. In the example resonators100,400A through400C, ofFIG.1AandFIGS.4A through4C, a bottom electrical interconnect169,469A through469C, may be included to interconnect electrically with (e.g., electrically contact with) the bottom acoustic reflector113,413A through413C, stack of the plurality of bottom metal electrode layers. A top electrical interconnect171,471A through471C, may be included to interconnect electrically with the harmonically tuned top sensor electrode115,415A through415C. A suitable material may be used for the bottom electrical interconnect169,469A through469C, and the top electrical interconnect171,471A through471C, for example, gold (Au). 
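The interposer thickness window described above (about one monolayer at the low end, an upper limit of about 500 Angstroms for an approximately 24 GHz design, and an upper limit that scales inversely with frequency) can be expressed as a small helper; this is a sketch of the stated rule of thumb, not a design formula from this disclosure:

```python
MONOLAYER_ANGSTROMS = 5.0          # about one monolayer minimum
MAX_AT_24GHZ_ANGSTROMS = 500.0     # upper limit quoted for a ~24 GHz design
REFERENCE_FREQ_HZ = 24e9

def interposer_thickness_window(f_main_hz: float):
    """(min, max) interposer thickness in Angstroms; max scales inversely with frequency."""
    upper = MAX_AT_24GHZ_ANGSTROMS * (REFERENCE_FREQ_HZ / f_main_hz)
    return MONOLAYER_ANGSTROMS, upper

for f in (24e9, 39e9, 77e9):
    lo, hi = interposer_thickness_window(f)
    print(f"{f/1e9:>4.0f} GHz design: interposer ~{lo:.0f} A to ~{hi:.0f} A")
```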
Top electrical interconnect171,471A through471C may be arranged substantially away from an active area of the stack104,404A through404C of the example four layers of piezoelectric material. This may provide for top electrical interconnect171,471A through471C being substantially acoustically isolated from the active area of the stack104,404A through404C of the example four layers of piezoelectric material. Top electrical interconnect171,471A through471C may have dimensions selected so that the top electrical interconnect171,471A through471C approximates an electrical transmission line impedance at the main resonant frequency of the bulk acoustic wave resonator100,400A through400C (e.g., in cases where the electrical impedance of the bulk acoustic wave resonator100,400A through400C may be designed to be near fifty ohms, the electrical transmission line impedance may be approximately fifty ohms). Top electrical interconnect171,471A through471C may have a thickness that is substantially thicker than a thickness the harmonically tuned top sensor electrode115,415A through415C. Top electrical interconnect171,471A through471C may have a thickness within a range from about one hundred Angstroms (100 A) to about five micrometers (5 um). For example, top electrical interconnect171,471A through471C may have a thickness of about two thousand Angstroms (2000 A). Bulk acoustic wave resonators100,400A through400C may comprise respective sensing regions (e.g., sensing regions116,416A through416C). Sensing regions116,416A through416C may comprise respective functionalized layers168,468A through468C. Respective functionalized layers168,468A through468C may be patterned, e.g., may have their lateral extents limited by patterning techniques, e.g., masking techniques, e.g., unmasked material removal techniques. Respective functionalized layers168,468A through468C may be patterned to be arranged over respective active regions of bulk acoustic wave resonators100,400A through400C (e.g., over central portions of the respective active regions). Variations of functionalized layers168,468A through468C may be employed in combination with the other structures of bulk acoustic wave resonators100,400A through400C for varied sensing purposes. For example, respective functionalized layers168,468A through468C may be different from one another. This may facilitate respective responses (e.g., sensing responses) to differing environmental variables, e.g., binding to differing analytes. Respective functionalized layers168,468A through468C may facilitate binding to respective analytes. Respective functionalized layers168,468A through468C that may be different from one another to facilitate binding to respective analytes that may be different from one another. Respective functionalized layers168,468A through468C may be selected to have affinity (e.g., selective affinity) for one or more respective analytes (e.g., targeted analytes). For example, various functionalized layers168,468A through468C of the sensing regions (e.g., sensing regions116,416A through416C) may selectively bind to various analytes, e.g., biomolecules (e.g., targeted biomolecules, e.g., coronavirus, e.g., SARS CoV-2, e.g., carriers of infectious disease, e.g., bioweapons, e.g., biomarkers, e.g., targeted antigens, e.g., targeted antibodies), for detection. 
For example, respective functionalized layers168,468A through468C associated with bulk acoustic wave resonator100,400A through400C may selectively bind the mass of one or more analytes, e.g., biomolecules (e.g., targeted biomolecules, e.g., coronavirus, e.g., SARS CoV-2, e.g., carriers of infectious disease, e.g., bioweapons, e.g., biomarkers, e.g., targeted antigens, e.g., targeted antibodies), to the functionalized layers168,468A through468C. The mass of one or more analytes, e.g., biomolecules (e.g., targeted biomolecules, e.g., coronavirus, e.g., SARS CoV-2, e.g., carriers of infectious disease, e.g., bioweapons, e.g., biomarkers, e.g., targeted antigens, e.g., targeted antibodies), binding to the respective functionalized layers168,468A through468C may cause respective detectable resonance frequency shifts (e.g., decreases in respective resonance frequencies) in operation of bulk acoustic wave resonators100,400A through400C in their respective thickness extensional main resonant modes. Respective electrical circuitry may be coupled with bulk acoustic wave resonators100,400A through400C to determine the resonance frequency shifts. This may detect the presence of analytes, for example, biomolecules (e.g., targeted biomolecules, e.g., coronavirus, e.g., SARS CoV-2, e.g., carriers of infectious disease, e.g., bioweapons, e.g., biomarkers, e.g., targeted antigens, e.g., targeted antibodies). Functionalized layers168,468A through468C may comprise respective binding layers, e.g., respective layers of antibodies, e.g., respective layers of binding antibodies, e.g., receptors, e.g., ligands. Functionalized layers168,468A through468C may comprise respective interface layers (e.g., thin interface layers, e.g., very thin interface layers) e.g., of noble metal (e.g., gold). For example, the respective interface layers, e.g., noble metal layers may be within a range from five Angstroms thick (e.g., approximately one monolayer) to approximately one thousand five hundred Angstroms thick (e.g., approximately one acoustic wavelength of thickness extensional mode at approximately 24.25 GHz). For functionalized layers168,468A through468C, respective binding layers, e.g., respective layers of antibodies may be coupled with (e.g., acoustically coupled with, e.g., may be arranged over) respective interface layers (e.g., respective noble metal layers, e.g., respective supportive noble metal layers, e.g., respective noble metal layers). Functionalized layers168,468A through468C may comprise respective immobilization layers to couple respective binding layers (e.g., respective layers of antibodies) to respective interface layers (e.g., respective gold layers). Respective immobilization layers may be arranged over respective interface layers (e.g., respective gold layers). Respective binding layers (e.g., respective layers of antibodies) may be arranged over respective immobilization layers. Respective immobilization layers of functionalized layers168,468A through468C may comprise respective “Protein A” layers, e.g., respective immunoglobulin binding protein layers, to immobilize antibodies on respective interface layers (e.g., to immobilize coronavirus antibodies on respective gold interface layers, e.g., to immobilize SARS Cov-2 antibodies on respective gold interface layers). Respective immobilization layers of functionalized layers168,468A through468C may comprise respective self assembled monolayers (e.g., respective self assembled monolayers, e.g., self assembled monolayers comprising thiol material). 
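The detection principle just described reduces to comparing a measured resonance against a baseline and flagging a sufficient downward shift; in the sketch below, the threshold value is an arbitrary illustrative assumption and no particular mass-to-frequency conversion is implied:

```python
def detect_binding(baseline_hz: float, measured_hz: float, threshold_hz: float) -> bool:
    """Flag a binding event when the downward frequency shift exceeds a threshold."""
    downshift = baseline_hz - measured_hz
    return downshift > threshold_hz

baseline = 24.000e9    # baseline main resonant frequency
measured = 23.9988e9   # frequency measured after sample exposure
THRESHOLD = 500e3      # 500 kHz, purely illustrative

shift = baseline - measured
print(f"shift: {shift/1e3:.0f} kHz, analyte detected: "
      f"{detect_binding(baseline, measured, THRESHOLD)}")
```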
The respective binding layers, e.g., respective layers of antibodies (e.g. respective layers of binding antibodies) may be arranged over respective immobilization layers, e.g., over respective self assembled monolayers. Respective self assembled monolayers may be formed by exposure of respective interface layers to molecules with chemical groups that exhibit strong affinities for the respective interface layers (e.g., strong affinities for respective noble metal layers, e.g., strong affinities for gold layers). For example, thiol-based (e.g., alkanethiol-based) assembled monolayers may be used. For example, molecules like Alkanethiols may have e.g., an alkyl chain as the backbone, may have e.g., a tail group, and may have e.g., an S—H head group. Thiols may be used on noble metal interface layers due to what may be a strong affinity for these metals. Examples of thiol-based self assembled monolayers that may be used include 1-dodecanethiol (DDT), 11-mercaptoundecanoic acid (MUA), and hydroxyl-terminated (hexaethylene glycol) undecanethiol (1-UDT). These thiols may contain the same or similar backbone, but different end groups—namely, methyl (CH3), carboxyl (COOH), and hydroxyl-terminated hexaethylene glycol (HO—(CH2CH2O)6) for DDT, MUA, and 1-UDT, respectively. Self assembled monolayers may be formed by incubating interface layers, e.g., noble metal layers, e.g., gold layers, in thiol solutions using a suitable solvent, such as anhydrous ethanol. Following formation of respective self assembled monolayers, the self assembled monolayers may be biologically functionalized, such as by receiving at least one functionalization (e.g., specific binding) material. Examples of specific binding materials may include, but are not limited to, antibodies, receptors, ligands, and the like. A specific binding material may be configured to receive a predefined target species (e.g., molecule, protein, DNA, virus, bacteria, etc.). As another example, respective binding layers of respective functionalized layers168,468A through468C may comprise aptamers, e.g., Prostate Specific Antigen (PSA) binding aptamers to bind with analyte biomarkers, e.g., Prostate Specific Antigen (PSA) biomarkers, e.g., biomarkers for prostate cancer. As another example, respective binding layers of respective functionalized layers168,468A through468C may have an affinity for glucose (e.g., binding layers of (3-acrylamidopropyl) tri-methylammonium chloride (poly(acrylamide-co-3-APB))). Glucose level may be recognized as a biomarker for management of diabetes. More broadly, interactions of respective binding layers of respective functionalized layers168,468A through468C in pairings with analytes may be viewed through the lens of biospecific interaction analysis (BIA) e.g., for antibody-antigen, e.g., for nucleic acids, e.g., for DNA-RNA, e.g., for protein-peptide, e.g. for enzymes-substrate, e.g., for receptors-various molecules. Respective binding layers of respective functionalized layers168,468A through468C may form one member of these pairings, and the analyte may comprise the other member of these pairings. For example, as just discussed, respective binding layers of respective functionalized layers168,468A through468C may comprise antibodies (e.g., coronavirus antibodies) to detect an analyte, e.g., antigens (e.g., coronavirus antigens). However, the roles may be reversed.
Respective binding layers of respective functionalized layers168,468A through468C may comprise antigens (e.g., coronavirus antigens or portions thereof) to detect an analyte, e.g., antibodies (e.g., coronavirus antibodies). The presence of antibodies (e.g., coronavirus antibodies) in a blood sample or a sputum sample collected from a patient may be indicative of an infection (e.g., COVID-19). As additional examples, respective binding layers of respective functionalized layers168,468A through468C may comprise a protein to detect an analyte, e.g., peptides. However the roles may be reversed. Respective binding layers of respective functionalized layers168,468A through468C may comprise peptides to detect an analyte, e.g., proteins. Respective members of respective functionalized layers168,468A through468C (e.g., respective antibody layers of respective functionalized layers168,468A through468C, e.g., respective noble metal layers of functionalized layers168,468A through468C) may be coupled (e.g., acoustically coupled) with other respective members of bulk acoustic wave resonators100,400A through400C (e.g., may be acoustically coupled with respective isolation layers167,467A through467C, e.g., may be acoustically coupled with respective harmonically tuned top sensor electrodes115,415A through415C, e.g., may be acoustically coupled with respective stacks104,404A through404C of alternating axis arrangements of piezoelectric layers, e.g., may be acoustically coupled with respective bottom piezoelectric layers105,405A through405C (e.g., having a normal axis orientation), e.g., may be acoustically coupled with respective first middle piezoelectric layers107,407A through407C (e.g., having a reverse axis orientation), e.g., may be acoustically coupled with respective second middle piezoelectric layers109,409A through409C (e.g., having the normal axis orientation), e.g., may be acoustically coupled with respective top piezoelectric layers111,411A through411C (e.g., having the reverse axis orientation)). Respective targeted analytes (e.g., respective targeted biomolecules), in binding with respective layers of antibodies of functionalized layers168,468A through468C, may become acoustically coupled with respective members of bulk acoustic wave resonators100,400A through400C (e.g., may become acoustically coupled with respective antibody layers of respective functionalized layers168,468A through468C, e.g., may become acoustically coupled with respective noble metal layers of functionalized layers168,468A through468C e.g., may become acoustically coupled with respective isolation layers167,467A through467C, e.g., may become acoustically coupled with respective harmonically tuned top sensor electrodes115,415A through415C, e.g., may become acoustically coupled with respective stacks104,404A through404C of alternating axis arrangements of piezoelectric layers, e.g., may become acoustically coupled with respective bottom piezoelectric layers105,405A through405C (e.g., having a normal axis orientation), e.g., may become acoustically coupled with respective first middle piezoelectric layers107,407A through407C (e.g., having a reverse axis orientation), e.g., may become acoustically coupled with respective second middle piezoelectric layers109,409A through409C (e.g., having the normal axis orientation), e.g., may become acoustically coupled with respective top piezoelectric layers111,411A through411C (e.g., having the reverse axis orientation)). 
Respective binding layers of functionalized layers168,468A through468C may comprise material (e.g., bacteria, e.g., Escherichia coli bacteria) having an affinity for absorption of toxins, e.g., having an affinity for absorption of heavy metals, e.g., having an affinity for absorption of lead (Pb). Examples of heavy metals include mercury (Hg), cadmium (Cd), arsenic (As), chromium (Cr), thallium (Tl), and lead (Pb). As another example, respective binding layers of functionalized layers168,468A through468C may comprise material (e.g., cobalt corroles) having an affinity for absorption of toxins, e.g., having an affinity for absorption of carbon monoxide. Functionalized layers168,468A through468C may comprise respective nanoporous layers. In some cases, pore size of respective nanoporous layers may be tuned for analyte selectivity. Functionalized layers168,468A through468C may comprise respective nanostructured layers. In some cases, nanostructure size of respective nanostructured layers may be tuned for analyte selectivity. Respective binding layers of functionalized layers168,468A through468C may comprise materials having an affinity for volatile organic compounds (e.g., hydrocarbons, e.g., alcohols, e.g., ammonia, e.g., acetone, e.g., ketones, e.g., aldehydes, e.g., esters, e.g., heterocycles). Example materials having affinity for volatile organic compounds include hexamethyldisiloxane (HMDSO), hexamethyldisilazane (HMDSN) and tetraethoxysilane (TEOS), polyaniline, calixarenes, chitosan, chitosan/polyaniline nanofibers, graphene, molecularly imprinted polymers, and mesoporous materials e.g., having a tunable pore structure. Respective binding layers of functionalized layers168,468A through468C may comprise molecularly imprinted polymers, which may have affinity for one or more target analytes. In addition to affinity for volatile organic compounds, as just mentioned, molecularly imprinted polymers may be configured to have affinity for other analytes, e.g., tetrahydrocannabinol (THC), e.g., biological weapons agents, e.g., anthrax. Molecularly imprinted polymers may be configured to have affinity for explosives, e.g., trinitrotoluene (TNT), e.g., 1,3,5-trinitro-1,3,5-triazacyclohexane (RDX). Molecularly imprinted polymers may be configured to have affinity for nerve agents, e.g., Sarin. Molecularly imprinted polymers may be configured to have affinity for a chemical associated with a chemical weapon, e.g., dimethyl methylphosphonate (DMMP). Dimethyl methylphosphonate (DMMP) is associated with production of the Sarin nerve agent. Respective functionalized layers168,468A through468C may comprise nanocomposites. Respective functionalized layers168,468A through468C may have an affinity for a chemical associated with a chemical weapon, e.g., dimethyl methylphosphonate (DMMP). For example, a nanocomposite of ZnO modified MnO2 nanofibers may have an affinity for dimethyl methylphosphonate (DMMP). Numerous examples just discussed are directed to examples where functionalized layers168,468A through468C may include binding layers. Binding of analytes with functionalized layers168,468A through468C may increase mass. This may lead to decrease in resonant frequency of example resonators100,400A through400C. This may provide for analyte detection. In addition to the foregoing, there are the following examples of targeted energetic phenomena impinging on functionalized layers168,468A through468C. Functionalized layers168,468A through468C may be selectively configured for targeted energetic phenomena.
This may lead to increase in temperature of functionalized layers168,468A through468C. This may lead to decrease in resonant frequency of example resonators100,400A through400C. This may provide for detection of the targeted energetic phenomena. For example, functionalized layers168,468A through468C may comprise a material having an affinity for absorbing neutrons, e.g., a material having a relatively high neutron cross section, e.g., cadmium. For example, respective functionalized layers168,468A through468C may comprise the material having the affinity for absorbing neutrons, e.g., for absorbing the targeted energetic phenomenon of a flux of thermal neutrons. This may lead to increase in temperature of functionalized layers168,468A through468C. This may lead to decrease in resonant frequency of example resonators100,400A through400C. The flux of thermal neutrons may be detected by example resonators100,400A through400C. As another example, respective functionalized layers168,468A through468C may be selectively configured for a targeted energetic phenomenon of infrared light. For example, respective functionalized layers168,468A through468C may comprise material having an affinity for absorbing infrared light, e.g., nanoplasmonic metasurfaces configured to absorb infrared light. This may lead to increase in temperature of functionalized layers168,468A through468C. This may lead to decrease in resonant frequency of example resonators100,400A through400C. Using the respective functionalized layers168,468A through468C so configured, infrared light may be detected by example resonators100,400A through400C. As another example, respective functionalized layers168,468A through468C may be selectively configured for a targeted energetic phenomenon of terahertz radiation. For example, respective functionalized layers168,468A through468C may comprise material having an affinity for absorbing terahertz radiation, e.g., nanoplasmonic metasurfaces configured to absorb terahertz radiation. This may lead to increase in temperature of functionalized layers168,468A through468C. This may lead to decrease in resonant frequency of example resonators100,400A through400C. Using the respective functionalized layers168,468A through468C so configured, terahertz radiation may be detected by example resonators100,400A through400C. As another example, respective functionalized layers168,468A through468C may be selectively configured for a targeted energetic phenomenon of solar blind ultraviolet light. For example, respective functionalized layers168,468A through468C may comprise material having an affinity for absorbing solar blind ultraviolet light, e.g., beta gallium oxide (β-Ga2O3). This may lead to increase in temperature of functionalized layers168,468A through468C. This may lead to decrease in resonant frequency of example resonators100,400A through400C. Using the respective functionalized layers168,468A through468C so configured, solar blind ultraviolet light may be detected by example resonators100,400A through400C. In addition to the foregoing examples, striction of respective functionalized layers168,468A through468C may be in response to sensed phenomena. This may cause a change in resonant frequency of example resonators100,400A through400C. This may provide for detection of the sensed phenomena. For example, respective functionalized layers168,468A through468C may be magnetostrictive, e.g., striction of respective functionalized layers168,468A through468C may be in response to sensed magnetic phenomena.
This may cause a change in resonant frequency of example resonators100,400A through400C, e.g., magnetic phenomena may be detected, e.g., changes in magnetic phenomena may be detected. For example, respective functionalized layers168,468A through468C may comprise a magnetostrictive material. Respective functionalized layers168,468A through468C may be multiferroic. Respective functionalized layers168,468A through468C may be magnetoelectric. Respective functionalized layers168,468A through468C may comprise a nanocomposite. Respective functionalized layers168,468A through468C may comprise respective heterostructures. Respective functionalized layers168,468A through468C may comprise respective perovskite layers. Respective functionalized layers168,468A through468C may comprise respective magnetostrictive exchange biased multilayers. Respective functionalized layers168,468A through468C may comprise respective antiparallel magnetostrictive exchange biased multilayers. More broadly, sensing regions116,416A through416C may comprise respective functionalized layers168,468A through468C. Sensing regions116,416A through416C may comprise at least respective portions of respective harmonically tuned top sensor electrodes115,415A through415C. Harmonically tuned top sensor electrodes115,415A through415C may be magnetostrictive. Harmonically tuned top sensor electrodes115,415A through415C may comprise a magnetostrictive material. Respective sensing regions116,416A through416C may be acoustically coupled with respective harmonically tuned top sensor electrodes115,415A through415C. This may comprise integral coupling of respective sensing regions116,416A through416C with respective harmonically tuned top sensor electrodes115,415A through415C. Respective sensing regions116,416A through416C may comprise metallic glass (e.g., respective functionalized layers168,468A through468C may comprise metallic glass, e.g., harmonically tuned top sensor electrodes115,415A through415C may comprise metallic glass.) Respective sensing regions116,416A through416C may comprise cobalt (e.g., respective functionalized layers168,468A through468C may comprise cobalt, e.g., harmonically tuned top sensor electrodes115,415A through415C may comprise cobalt.) Respective sensing regions116,416A through416C may comprise terbium (e.g., respective functionalized layers168,468A through468C may comprise terbium, e.g., harmonically tuned top sensor electrodes115,415A through415C may comprise terbium.) Respective sensing regions116,416A through416C may comprise samarium (e.g., respective functionalized layers168,468A through468C may comprise samarium, e.g., harmonically tuned top sensor electrodes115,415A through415C may comprise samarium.) Respective sensing regions116,416A through416C may comprise dysprosium (e.g., respective functionalized layers168,468A through468C may comprise dysprosium, e.g., harmonically tuned top sensor electrodes115,415A through415C may comprise dysprosium.) Respective sensing regions116,416A through416C may comprise nickel (e.g., respective functionalized layers168,468A through468C may comprise nickel, e.g., harmonically tuned top sensor electrodes115,415A through415C may comprise nickel.) Respective sensing regions116,416A through416C may comprise a metallic alloy (e.g., respective functionalized layers168,468A through468C may comprise a metallic alloy, e.g., harmonically tuned top sensor electrodes115,415A through415C may comprise a metallic alloy.) 
Respective sensing regions116,416A through416C may comprise a giant magnetostrictive alloy (e.g., respective functionalized layers168,468A through468C may comprise a giant magnetostrictive alloy, e.g., harmonically tuned top sensor electrodes115,415A through415C may comprise a giant magnetostrictive alloy.) Respective sensing regions116,416A through416C may comprise respective tunable regions (e.g., respective functionalized layers168,468A through468C may comprise respective tunable regions, e.g., harmonically tuned top sensor electrodes115,415A through415C may comprise tunable regions.) For example, striction (e.g., magnetostriction) may provide for tuning of resonant frequencies of respective resonators100,400A through400C, in response to sensed phenomena (e.g., in response to magnetic phenomena). For example, respective bulk acoustic wave (BAW) resonators100,400A through400C may comprise respective tunable bulk acoustic wave (BAW) resonators100,400A through400C. Respective bulk acoustic wave (BAW) resonators100,400A through400C may comprise at least a portion of a tunable electric filter (e.g., ladder filter of interconnected tunable bulk acoustic wave (BAW) resonators). FIG.1Bis a simplified view ofFIG.1Athat illustrates an example of acoustic stress distribution during electrical operation of the bulk acoustic wave resonator structure shown inFIG.1A. A notional curved line schematically depicts vertical (Tzz) stress distribution173through stack104of the example four piezoelectric layers. The stress173is excited by the oscillating electric field applied via the harmonically tuned top sensor electrode115, and the multilayer metal acoustic reflector electrode113comprising bottom metal electrode layers. The stress173has maximum values inside the stack104of piezoelectric layers, while exponentially tapering off within the multilayer metal acoustic reflector electrode113. Acoustic energy confined in the resonator structure100may be proportional to stress magnitude. As discussed previously herein, the example four piezoelectric layers in the stack104may have an alternating axis arrangement in the stack104. For example, the bottom piezoelectric layer may have the normal axis orientation. Next in the alternating axis arrangement of the stack104is the first middle piezoelectric layer. Next in the alternating axis arrangement of the stack104is the second middle piezoelectric layer. Next in the alternating axis arrangement of the stack104is the top piezoelectric layer. For the alternating axis arrangement of the stack104, stress173excited by the applied oscillating electric field causes normal axis piezoelectric layers to be in compression, while reverse axis piezoelectric layers are in extension. Accordingly,FIG.1Bshows peaks of stress173on the right side of the heavy dashed line to depict compression in normal axis piezoelectric layers (e.g., bottom and second middle piezoelectric layers), while peaks of stress173are shown on the left side of the heavy dashed line to depict extension in reverse axis piezoelectric layers (e.g., first middle and top piezoelectric layers). In operation of the BAW resonator shown inFIG.1B, peaks of standing wave acoustic energy may correspond to absolute value of peaks of stress173as shown inFIG.1B(e.g., peaks of standing wave acoustic energy may correspond to squares of absolute value of peaks of stress173as shown inFIG.1B).
In operation of the BAW resonator, standing wave acoustic energy may be coupled through the harmonically tuned top sensor electrode115, through isolation layer167and into functionalized layer168of sensing region116shown inFIG.1B. Standing wave acoustic energy may be coupled into the multilayer metal acoustic reflector electrode113shown inFIG.1Bin operation of the BAW resonator. A second member of the first pair of bottom metal electrode layers may have a relatively high acoustic impedance (e.g., high acoustic impedance metal layer, e.g., tungsten layer). A first member of the first pair of bottom metal electrode layers may have a relatively low acoustic impedance (e.g., low acoustic impedance metal layer, e.g., titanium layer). Accordingly, the first member of the first pair of bottom metal electrode layers may have acoustic impedance that is relatively lower than the acoustic impedance of the second member. The first member119having the relatively lower acoustic impedance may be arranged, for example as shown inFIG.1B, sufficiently proximate to a first layer of piezoelectric material (e.g. sufficiently proximate to bottom layer of piezoelectric material, e.g., sufficiently proximate to stack of piezoelectric material) so that standing wave acoustic energy in the first member is greater than respective standing wave acoustic energy in other respective layers of the multi-layer metal bottom acoustic reflector electrode113in operation of the BAW resonator (e.g. greater than standing wave acoustic energy in the second member of the first pair of bottom metal electrode layers, e.g., greater than standing wave acoustic energy in the first member of the second pair of bottom metal electrode layers, e.g., greater than standing wave acoustic energy in the second member of the second pair of bottom metal electrode layers, e.g., greater than standing wave acoustic energy in the first member of the third pair of bottom metal electrode layers, e.g., greater than standing wave acoustic energy in the second member of the third pair of bottom metal electrode layers, e.g., greater than standing wave acoustic energy in the first member of the fourth pair of bottom metal electrodes, e.g., greater than standing wave acoustic energy in the second member of the fourth pair of bottom metal electrodes). This may facilitate suppressing parasitic lateral resonances in operation of the BAW resonator shown inFIG.1B. FIG.1Cshows a simplified top plan view of a bulk acoustic wave resonator structure100A corresponding to the cross sectional view ofFIG.1A, and also shows another simplified top plan view of an alternative bulk acoustic wave resonator structure100B. The bulk acoustic wave resonator structure100A includes the stack104A of four layers of piezoelectric material e.g., having the alternating piezoelectric axis arrangement of the four layers of piezoelectric material. The stack104A of piezoelectric layers may be sandwiched between the multilayer metal acoustic reflector electrode113A and harmonically tuned top sensor electrode115A. The multilayer metal acoustic reflector electrode may comprise the stack of the plurality of bottom metal electrode layers of the multilayer metal acoustic reflector electrode113A, e.g., having the alternating arrangement of low acoustic impedance bottom metal electrode layers and high acoustic impedance bottom metal layers. Top electrical interconnect171A extends over (e.g., electrically contacts) an extremity of harmonically tuned top sensor electrode115A.
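The low/high acoustic impedance pairing described above can be made concrete using the usual definition Z = density × longitudinal acoustic velocity; the material values below are representative textbook numbers assumed for illustration, not figures from this disclosure:

```python
# Representative bulk values (assumed for illustration):
# (density in kg/m^3, longitudinal acoustic velocity in m/s)
METALS = {
    "Ti (low-Z member)":  (4506.0, 6070.0),
    "Mo (high-Z option)": (10220.0, 6300.0),
    "W  (high-Z member)": (19300.0, 5200.0),
}

for name, (rho, v) in METALS.items():
    z_mrayl = rho * v / 1e6   # acoustic impedance in MRayl
    print(f"{name}: Z ~ {z_mrayl:5.1f} MRayl")
# Ti at ~27 MRayl versus W at ~100 MRayl: the large contrast is what makes
# the alternating low/high acoustic impedance reflector pairs effective.
```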
Bottom electrical interconnect169A extends over (e.g., electrically contacts) an extremity of multilayer metal acoustic reflector electrode113A through bottom via region168A. FIG.1Calso shows another simplified top plan view of an alternative bulk acoustic wave resonator structure100B having an apodized shape. The bulk acoustic wave resonator structure100B includes the stack104B of four layers of piezoelectric material e.g., having the alternating piezoelectric axis arrangement of the four layers of piezoelectric material. The stack104B of piezoelectric layers may be sandwiched between the multilayer metal acoustic reflector electrode113B and harmonically tuned top sensor electrode115B. Harmonically tuned top sensor electrode115B may have the alternative apodized shape of alternative bulk acoustic wave resonator structure100B. The multilayer metal acoustic reflector electrode may comprise the stack of the plurality of bottom metal electrode layers of the multilayer metal acoustic reflector electrode113B, e.g., having the alternating arrangement of low acoustic impedance bottom metal electrode layers and high acoustic impedance bottom metal layers. Top electrical interconnect171B extends over (e.g., electrically contacts) an extremity of harmonically tuned top sensor electrode115B. Bottom electrical interconnect169B extends over (e.g., electrically contacts) an extremity of multilayer metal acoustic reflector electrode113B through bottom via region168B. InFIGS.1D and1E, Nitrogen (N) atoms are depicted with a hatching style, while Aluminum (Al) atoms are depicted without a hatching style.FIG.1Dis a perspective view of an illustrative model of a reverse axis crystal structure175of Aluminum Nitride, AlN, in piezoelectric material of layers inFIG.1A, e.g., having reverse axis orientation of negative polarization. For example, first middle and top piezoelectric layers107,111discussed previously herein with respect toFIGS.1A and1Bare reverse axis piezoelectric layers. By convention, when the first layer of the reverse axis crystal structure175is a Nitrogen, N, layer and second layer in an upward direction (in the depicted orientation) is an Aluminum, Al, layer, the piezoelectric material including the reverse axis crystal structure175is said to have crystallographic c-axis negative polarization, or reverse axis orientation as indicated by the upward pointing arrow177. For example, polycrystalline thin film Aluminum Nitride, AlN, may be grown in the crystallographic c-axis negative polarization, or reverse axis, orientation perpendicular relative to the substrate surface using reactive magnetron sputtering of an aluminum target in a nitrogen atmosphere, and by introducing oxygen into the gas atmosphere of the reaction chamber during fabrication at the position where the flip to the reverse axis is desired. An inert gas, for example, Argon may also be included in a sputtering gas atmosphere, along with the nitrogen and oxygen. For example, a predetermined amount of oxygen containing gas may be added to the gas atmosphere over a short predetermined period of time or for the entire time the reverse axis layer is being deposited. The oxygen containing gas may be diatomic oxygen containing gas, such as oxygen (O2). Proportionate amounts of the Nitrogen gas (N2) and the inert gas may flow, while the predetermined amount of oxygen containing gas flows into the gas atmosphere over the predetermined period of time.
For example, N2 and Ar gas may flow into the reaction chamber in approximately a 3:1 ratio of N2 to Ar, as oxygen gas also flows into the reaction chamber. For example, the predetermined amount of oxygen containing gas added to the gas atmosphere may be in a range from about a thousandth of a percent (0.001%) to about ten percent (10%), of the entire gas flow. The entire gas flow may be a sum of the gas flows of argon, nitrogen and oxygen, and the predetermined period of time during which the predetermined amount of oxygen containing gas is added to the gas atmosphere may be in a range from about a quarter (0.25) second to a length of time needed to create an entire layer, for example. For example, based on mass-flows, the oxygen composition of the gas atmosphere may be about 2 percent when the oxygen is briefly injected. This results in an aluminum oxynitride (AlON) portion of the final monolithic piezoelectric layer, integrated in the Aluminum Nitride, AlN, material, having a thickness in a range of about 5 nm to about 20 nm, which is relatively oxygen rich and very thin. Alternatively, the entire reverse axis piezoelectric layer may be aluminum oxynitride. FIG.1Eis a perspective view of an illustrative model of a normal axis crystal structure179of Aluminum Nitride, AlN, in piezoelectric material of layers inFIG.1A, e.g., having normal axis orientation of positive polarization. For example, bottom and second middle piezoelectric layers105,109discussed previously herein with respect toFIGS.1A and1Bare normal axis piezoelectric layers. By convention, when the first layer of the normal axis crystal structure179is an Al layer and second layer in an upward direction (in the depicted orientation) is an N layer, the piezoelectric material including the normal axis crystal structure179is said to have a c-axis positive polarization, or normal axis orientation as indicated by the downward pointing arrow181. For example, polycrystalline thin film AlN may be grown in the crystallographic c-axis positive polarization, or normal axis, orientation perpendicular relative to the substrate surface by using reactive magnetron sputtering of an Aluminum target in a nitrogen atmosphere. FIGS.2A through2Eshow further simplified views of bulk acoustic wave resonators similar to the bulk acoustic wave resonator structure shown inFIG.1A. In addition to further simplified views of bulk acoustic wave resonators,FIGS.2A and2Bshow corresponding impedance versus frequency response during its electrical operation, as well as alternative bulk acoustic wave resonator structures with differing numbers of alternating axis piezoelectric layers, and their respective corresponding impedance versus frequency response during electrical operation.FIG.2Cshows additional alternative bulk acoustic wave resonator structures with additional numbers of alternating axis piezoelectric layers.FIGS.2D and2Eshow more additional alternative bulk acoustic wave resonator structures. Bulk acoustic wave resonators2001A through2001K may, but need not be, bulk acoustic millimeter wave resonators2001A through2001K, operable with a main resonance mode having a main resonant frequency that is a millimeter wave frequency (e.g., twenty-four Gigahertz, 24 GHz) in a millimeter wave frequency band.
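The oxygen-injection recipe above amounts to simple flow arithmetic: nitrogen and argon in approximately a 3:1 ratio, with the oxygen-containing gas set to a chosen fraction (between about 0.001% and about 10%) of the entire gas flow; the absolute flow values below are arbitrary illustrative setpoints, not process values from this disclosure:

```python
def sputter_flows(total_sccm: float, oxygen_fraction: float, n2_to_ar_ratio: float = 3.0):
    """Split a total gas flow into N2, Ar, and O2 flows for reverse-axis AlN growth.

    oxygen_fraction is the O2 share of the *entire* gas flow (e.g., 0.02 for ~2%).
    """
    o2 = total_sccm * oxygen_fraction
    remainder = total_sccm - o2
    ar = remainder / (1.0 + n2_to_ar_ratio)   # N2:Ar kept at the given ratio
    n2 = remainder - ar
    return n2, ar, o2

# Illustrative setpoint: ~2% O2 briefly injected at the axis-flip position.
n2, ar, o2 = sputter_flows(total_sccm=40.0, oxygen_fraction=0.02)
print(f"N2 {n2:.1f} sccm, Ar {ar:.1f} sccm, O2 {o2:.1f} sccm")
```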
As defined herein, millimeter wave means a wave having a frequency within a range extending from eight Gigahertz (8 GHz) to three hundred Gigahertz (300 GHz), and millimeter wave band means a frequency band spanning this millimeter wave frequency range from eight Gigahertz (8 GHz) to three hundred Gigahertz (300 GHz). Bulk acoustic wave resonators2001A through2001K may, but need not be, bulk acoustic Super High Frequency (SHF) wave resonators2001A through2001K or bulk acoustic Extremely High Frequency (EHF) wave resonators2001A through2001K, as the terms Super High Frequency (SHF) and Extremely High Frequency (EHF) are defined by the International Telecommunications Union (ITU). For example, bulk acoustic wave resonators2001A through2001K may be bulk acoustic Super High Frequency (SHF) wave resonators2001A through2001K operable with a main resonance mode having a main resonant frequency that is a Super High Frequency (SHF) (e.g., twenty-four Gigahertz, 24 GHz) in a Super High Frequency (SHF) wave frequency band. Piezoelectric layer thicknesses may be selected to determine the main resonant frequency of bulk acoustic Super High Frequency (SHF) wave resonators2001A through2001K in the Super High Frequency (SHF) wave band (e.g., twenty-four Gigahertz, 24 GHz main resonant frequency). Similarly, layer thicknesses of Super High Frequency (SHF) reflector layers (e.g., layer thickness of multilayer metal acoustic SHF reflector electrodes2013A through2013K) may be selected to determine quarter wavelength resonant frequency of such SHF reflectors at a frequency, e.g., quarter wavelength resonant frequency, within the Super High Frequency (SHF) wave band. For example, layer thickness of de-tuned multi-layer metal acoustic SHF wave reflector bottom electrodes2013A through2013K may be acoustically de-tuned (e.g., tuned down in frequency) from the resonant frequency (e.g. main resonant frequency) of the BAW resonator (e.g., tuned down to have a quarter wavelength resonant frequency that is lower than the 24 GHz main resonant frequency of the SHF BAW resonator). Similarly, layer thicknesses of Super High Frequency (SHF) harmonically tuned top sensor electrodes (e.g., layer thickness of SHF harmonically tuned top sensor electrodes2015A through2015K) may be selected to determine half wavelength resonant frequency of such SHF harmonically tuned top sensor electrodes at a frequency, e.g., half wavelength resonant frequency, within the Super High Frequency (SHF) wave band. For example, layer thickness of de-tuned SHF harmonically tuned top sensor electrodes2015A through2015K may be acoustically de-tuned (e.g., tuned up in frequency) from the resonant frequency (e.g. main resonant frequency) of the BAW resonator (e.g., tuned up to have a half wavelength resonant frequency that is higher than a 24 GHz main resonant frequency of the SHF BAW resonator, e.g., tuned up to have a half wavelength resonant frequency that is higher than the 24 GHz main resonant frequency of the SHF BAW resonator). Alternatively, bulk acoustic wave resonators2001A through2001K may be bulk acoustic Extremely High Frequency (EHF) wave resonators2001A through2001K operable with a main resonance mode having a main resonant frequency that is an Extremely High Frequency (EHF) (e.g., thirty-nine Gigahertz, 39 GHz main resonant frequency, e.g., seventy-seven Gigahertz, 77 GHz main resonant frequency) in an Extremely High Frequency (EHF) wave frequency band.
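A small helper restating the band vocabulary used herein: the millimeter wave band as defined above (8 GHz to 300 GHz), together with the ITU Super High Frequency (3 GHz to 30 GHz) and Extremely High Frequency (30 GHz to 300 GHz) bands:

```python
def classify_band(f_hz: float) -> list[str]:
    """Return the band labels that apply to a main resonant frequency."""
    ghz = f_hz / 1e9
    bands = []
    if 8.0 <= ghz <= 300.0:
        bands.append("millimeter wave (as defined in this disclosure)")
    if 3.0 <= ghz < 30.0:
        bands.append("SHF (ITU)")
    if 30.0 <= ghz <= 300.0:
        bands.append("EHF (ITU)")
    return bands

for f in (24e9, 39e9, 77e9):
    print(f"{f/1e9:.0f} GHz -> {classify_band(f)}")
# 24 GHz falls in the millimeter wave and SHF bands; 39 GHz and 77 GHz fall
# in the millimeter wave and EHF bands.
```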
As discussed previously herein, piezoelectric layer thicknesses may be selected to determine the main resonant frequency of bulk acoustic Extremely High Frequency (EHF) wave resonators2001A through2001K in the Extremely High Frequency (EHF) wave band (e.g., thirty-nine Gigahertz, 39 GHz main resonant frequency, e.g., seventy-seven Gigahertz, 77 GHz main resonant frequency). Similarly, layer thicknesses of Extremely High Frequency (EHF) reflector layers (e.g., layer thickness of multilayer metal acoustic EHF reflector electrodes2013A through2013K) may be selected to determine quarter wavelength resonant frequency of such EHF reflectors at a frequency, e.g., quarter wavelength resonant frequency, within the Extremely High Frequency (EHF) wave band. For example, layer thickness of de-tuned multi-layer metal acoustic EHF wave reflector bottom electrodes2013A through2013K may be acoustically de-tuned (e.g., tuned down in frequency) from the resonant frequency (e.g. main resonant frequency) of the BAW resonator (e.g., tuned down to have a quarter wavelength resonant frequency that is lower than a 77 GHz main resonant frequency of the EHF BAW resonator, e.g., tuned down to have a quarter wavelength resonant frequency that is lower than the 77 GHz main resonant frequency of the EHF BAW resonator). Similarly, layer thicknesses of Extremely High Frequency (EHF) harmonically tuned top sensor electrodes (e.g., layer thickness of EHF harmonically tuned top sensor electrodes2015A through2015K) may be selected to determine half wavelength resonant frequency of such EHF harmonically tuned top sensor electrodes at a frequency, e.g., half wavelength resonant frequency, within the Extremely High Frequency (EHF) wave band. For example, layer thickness of de-tuned EHF harmonically tuned top sensor electrodes2015A through2015K may be acoustically de-tuned (e.g., tuned up in frequency) from the resonant frequency (e.g. main resonant frequency) of the BAW resonator (e.g., tuned up to have a half wavelength resonant frequency that is higher than the 77 GHz main resonant frequency of the EHF BAW resonator). The general structures of the harmonically tuned top sensor electrode and the multilayer metal acoustic reflector electrode have already been discussed previously herein with respect toFIGS.1A and1B. As already discussed, the multilayer metal acoustic reflector electrode is directed to respective pairs of metal electrode layers, in which a first member of the pair has a relatively low acoustic impedance (relative to acoustic impedance of another member of the pair), in which the other member of the pair has a relatively high acoustic impedance (relative to acoustic impedance of the first member of the pair). For example, in bottom de-tuned reflector electrodes2013A through2013I and2013K, the first member having the relatively lower acoustic impedance of the first pair may be arranged nearest, e.g. may abut, a first piezoelectric layer (e.g. bottom piezoelectric layer of the BAW resonator, e.g., piezoelectric stack of the BAW resonator). For example, in bottom de-tuned reflector electrodes2013J, the first member of the first pair of layers of bottom de-tuned reflector electrodes2013J having the relatively lower acoustic impedance of the first pair may be arranged substantially nearest, e.g. may substantially abut, the first piezoelectric layer (e.g. bottom piezoelectric layer of the BAW resonator, e.g., piezoelectric stack of the BAW resonator). This may facilitate suppressing parasitic lateral modes.
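Summarizing the two opposite de-tunings just described, the bottom reflector's quarter-wavelength resonance is placed below the main resonance while the top electrode's half-wavelength resonance is placed above it; the de-tune fractions below are arbitrary illustrative choices within the ranges discussed earlier, not prescribed values:

```python
def detuned_targets(f_main_hz: float, reflector_detune: float, electrode_detune: float):
    """Target frequencies for a tuned-down reflector and a tuned-up top electrode."""
    f_reflector = f_main_hz * (1.0 - reflector_detune)  # quarter-wave resonance below f_main
    f_electrode = f_main_hz * (1.0 + electrode_detune)  # half-wave resonance above f_main
    return f_reflector, f_electrode

for f_main in (24e9, 39e9, 77e9):  # SHF and EHF example resonators
    f_ref, f_top = detuned_targets(f_main, reflector_detune=0.05, electrode_detune=0.08)
    print(f"f_main {f_main/1e9:5.1f} GHz: reflector ~{f_ref/1e9:5.1f} GHz, "
          f"top electrode ~{f_top/1e9:5.1f} GHz")
```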
In bottom de-tuned reflector electrodes2013A through2013K, the first member having the relatively lower acoustic impedance may be arranged sufficiently proximate to the first layer of piezoelectric material (e.g. may be arranged sufficiently proximate to the bottom piezoelectric layer, e.g. may be arranged sufficiently proximate to the piezoelectric stack), so that the first member having the relatively lower acoustic impedance may contribute more to the multilayer metal acoustic reflector electrode being acoustically de-tuned from the resonant frequency of the BAW resonator than is contributed by any other bottom metal electrode layer of the multilayer metal acoustic reflector electrode. In bottom de-tuned reflector electrodes2013A through2013K, the first member having the relatively lower acoustic impedance may be arranged sufficiently proximate to the first layer of piezoelectric material (e.g. may be arranged sufficiently proximate to the bottom piezoelectric layer, e.g. may be arranged sufficiently proximate to the piezoelectric stack), so that the first member having the relatively lower acoustic impedance may contribute more, e.g., may contribute more to facilitate suppressing parasitic lateral resonances in operation of the BAW resonator than is contributed by any other bottom metal electrode layer of the multilayer metal acoustic reflector electrode. Shown inFIG.2Ais a bulk acoustic SHF or EHF wave resonator2001A including a normal axis piezoelectric layer201A sandwiched between SHF or EHF detuned harmonically tuned top sensor electrode2015A and multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013A. For the single piezoelectric layer201A of bulk acoustic SHF or EHF wave resonator2001A, simulation may predict optimal facilitation of suppressing parasitic lateral resonances by de-tuning of the resonant frequency of the bulk acoustic wave resonator2001A, for de-tuning of SHF or EHF harmonically tuned top sensor electrode2015A and the multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013A. Also shown inFIG.2Ais a bulk acoustic SHF or EHF wave resonator2001B including a normal axis piezoelectric layer201B and a reverse axis piezoelectric layer202B arranged in a two piezoelectric layer alternating stack arrangement sandwiched between SHF or EHF detuned harmonically tuned top sensor electrode2015B and detuned multi-layer metal acoustic SHF or EHF wave reflector bottom electrode2013B. For the two piezoelectric layer201B,202B of bulk acoustic SHF or EHF wave resonator2001B, simulation may predict optimal facilitation of suppressing parasitic lateral resonances by de-tuning of the resonant frequency of the bulk acoustic wave resonator2001B, for de-tuning of the SHF or EHF detuned harmonically tuned top sensor electrode2015B and the multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013B. A bulk acoustic SHF or EHF wave resonator2001C includes a normal axis piezoelectric layer201C, a reverse axis piezoelectric layer202C, and another normal axis piezoelectric layer203C arranged in a three piezoelectric layer alternating stack arrangement sandwiched between SHF or EHF detuned harmonically tuned top sensor electrode2015C and detuned multi-layer metal acoustic SHF or EHF wave reflector bottom electrode2013C. 
For the three piezoelectric layer201C,202C,203C of bulk acoustic SHF or EHF wave resonator2001C, simulation may predict optimal facilitation of suppressing parasitic lateral resonances by de-tuning of the resonant frequency of the bulk acoustic wave resonator2001C, for de-tuning of the SHF or EHF detuned harmonically tuned top sensor electrode2015C and de-tuning of the multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013C. Included inFIG.2Bis bulk acoustic SHF or EHF wave resonator2001D in a further simplified view similar to the bulk acoustic wave resonator structure shown inFIGS.1A and1Band including a normal axis piezoelectric layer201D, a reverse axis piezoelectric layer202D, and another normal axis piezoelectric layer203D, and another reverse axis piezoelectric layer204D arranged in a four piezoelectric layer alternating stack arrangement sandwiched between SHF or EHF detuned harmonically tuned top sensor electrode2015D and multi-layer metal acoustic SHF or EHF wave reflector bottom electrode2013D. For the four piezoelectric layer201D,202D,203D,204D of bulk acoustic SHF or EHF wave resonator2001D, simulation may predict optimal facilitation of suppressing parasitic lateral resonances by de-tuning of the resonant frequency of the bulk acoustic wave resonator2001D, for de-tuning of the SHF or EHF detuned harmonically tuned top sensor electrode2015D and the multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013D. A bulk acoustic SHF or EHF wave resonator2001E includes a normal axis piezoelectric layer201E, a reverse axis piezoelectric layer202E, another normal axis piezoelectric layer203E, another reverse axis piezoelectric layer204E, and yet another normal axis piezoelectric layer205E arranged in a five piezoelectric layer alternating stack arrangement sandwiched between SHF or EHF detuned harmonically tuned top sensor electrode2015E and multi-layer metal acoustic SHF or EHF wave reflector bottom electrode2013E. For the five piezoelectric layer201E,202E,203E,204E,205E of bulk acoustic SHF or EHF wave resonator2001E, simulation may predict optimal facilitation of suppressing parasitic lateral resonances by de-tuning of the resonant frequency of the bulk acoustic wave resonator2001E, for de-tuning of the SHF or EHF detuned harmonically tuned top sensor electrode2015E and the multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013E. A bulk acoustic SHF or EHF wave resonator2001F includes a normal axis piezoelectric layer201F, a reverse axis piezoelectric layer202F, another normal axis piezoelectric layer203F, another reverse axis piezoelectric layer204F, yet another normal axis piezoelectric layer205F, and yet another reverse axis piezoelectric layer206F arranged in a six piezoelectric layer alternating stack arrangement sandwiched between SHF or EHF detuned harmonically tuned top sensor electrode2015F and multi-layer metal acoustic SHF or EHF wave reflector bottom electrode2013F. For the six piezoelectric layer201F,202F,203F,204F,205F,206F of bulk acoustic SHF or EHF wave resonator2001F, simulation may predict optimal facilitation of suppressing parasitic lateral resonances by de-tuning of the resonant frequency of the bulk acoustic wave resonator2001F, for de-tuning of the SHF or EHF detuned harmonically tuned top sensor electrode2015F and the multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013F. 
InFIG.2A, shown directly to the right of the bulk acoustic SHF or EHF wave resonator2001A including the normal axis piezoelectric layer201A and half-wavelength thick harmonically tuned top sensing electrode2015A, is a corresponding diagram2019A depicting its impedance versus frequency response during its electrical operation, as predicted by simulation. The diagram2019A depicts the main resonant peak2021A of the main resonant mode (e.g., main series resonant peak2021A) of the bulk acoustic SHF or EHF wave resonator2001A at its main resonant frequency (e.g., its 24 GHz series resonant frequency, e.g., its main series resonant frequency, e.g., Fs) and main parallel resonant peak2022A of the bulk acoustic SHF or EHF wave resonator2001A at its main parallel resonant frequency, Fp. The diagram2019A also depicts the satellite resonance peaks2023A,2025A of the satellite resonant modes of the bulk acoustic SHF or EHF wave resonator2001A at satellite frequencies above and below the main resonant frequency2021A (e.g., above and below the 24 GHz series resonant frequency). Relatively speaking, the main resonant mode corresponding to the main resonance peak2021A is the strongest resonant mode because it is stronger than other resonant modes of the resonator2001A (e.g., stronger than the satellite modes corresponding to relatively lesser satellite resonance peaks2023A,2025A). Similarly, inFIGS.2A and2B, shown directly to the right of the bulk acoustic SHF or EHF wave resonators2001B through2001F are respective corresponding diagrams2019B through2019F depicting corresponding impedance versus frequency response during electrical operation, as predicted by simulation. The resonators2001B,2001D and2001F comprise one wavelength thick harmonically tuned top sensing electrodes2015B,2015D and2015F, respectively, while the resonators2001C and2001E comprise half-wavelength thick harmonically tuned top sensing electrodes2015C and2015E, respectively. The diagrams2019B through2019F depict respective example SHF main resonant peaks2021B through2021F of respective corresponding main resonant modes of bulk acoustic SHF wave resonators2001B through2001F at respective corresponding main resonant frequencies (e.g., respective 24 GHz series resonant frequencies, e.g., main series resonant frequencies, Fs) and main parallel resonant peaks2022B through2022F of the bulk acoustic SHF or EHF wave resonators2001B through2001F at their respective main parallel resonant frequencies, Fp. The diagrams2019B through2019F also depict respective example SHF satellite resonance peaks2023B through2023F,2025B through2025F of respective corresponding satellite resonant modes of the bulk acoustic SHF wave resonators2001B through2001F at respective corresponding SHF satellite frequencies above and below the respective corresponding main SHF resonant frequencies2021B through2021F (e.g., above and below the corresponding respective 24 GHz series resonant frequencies). Relatively speaking, for each of the corresponding respective main SHF resonant modes, its corresponding respective SHF main resonance peak2021B through2021F is the strongest for its bulk acoustic SHF wave resonator2001B through2001F (e.g., stronger than the corresponding respective SHF satellite modes and corresponding respective lesser SHF satellite resonance peaks2023B through2023F,2025B through2025F). As mentioned previously,FIG.2Cshows additional alternative bulk acoustic wave resonator structures with additional numbers of alternating axis piezoelectric layers.
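Before turning to the additional structures ofFIG.2C, the distinction drawn above between the main (strongest) resonance peak and the satellite peaks in diagrams2019A through2019F can be sketched numerically. The helper below is an assumption for illustration only; it simply treats the strongest local maximum of a simulated impedance magnitude curve as the main mode and the remaining local maxima as satellite modes.

```python
# Illustrative sketch (assumed helper, not the disclosure's method): classify the
# main and satellite resonance peaks in an impedance-versus-frequency response.
import numpy as np

def classify_peaks(freq_hz, impedance_ohm):
    """Return (main_peak_freq, satellite_peak_freqs) from local maxima of |Z|."""
    mag = np.abs(np.asarray(impedance_ohm))
    # interior local maxima: points higher than both neighbors
    idx = np.where((mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:]))[0] + 1
    if idx.size == 0:
        return None, []
    main = idx[np.argmax(mag[idx])]             # strongest peak -> main mode
    satellites = [i for i in idx if i != main]  # weaker modes above/below main
    return freq_hz[main], [freq_hz[i] for i in satellites]
```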
A bulk acoustic SHF or EHF wave resonator2001G includes four normal axis piezoelectric layers201G,203G,205G,207G, and four reverse axis piezoelectric layers202G,204G,206G,208G arranged in an eight piezoelectric layer alternating stack arrangement sandwiched between SHF or EHF detuned harmonically tuned top sensor electrode2015G and multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013G. For the eight piezoelectric layers201G,202G,203G,204G,205G,206G,207G,208G of bulk acoustic SHF or EHF wave resonator2001G, simulation may predict optimal facilitation of suppressing parasitic lateral resonances by de-tuning of the resonant frequency of the bulk acoustic wave resonator2001G, for de-tuning of the SHF or EHF detuned harmonically tuned top sensor electrode2015G and the multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013G. A bulk acoustic SHF or EHF wave resonator2001H includes five normal axis piezoelectric layers201H,203H,205H,207H,209H and five reverse axis piezoelectric layers202H,204H,206H,208H,210H arranged in a ten piezoelectric layer alternating stack arrangement sandwiched between SHF or EHF detuned harmonically tuned top sensor electrode2015H and multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013H. For the ten piezoelectric layers201H,202H,203H,204H,205H,206H,207H,208H,209H,210H of bulk acoustic SHF or EHF wave resonator2001H, simulation may predict optimal facilitation of suppressing parasitic lateral resonances by de-tuning of the resonant frequency of the bulk acoustic wave resonator2001H, for de-tuning of the SHF or EHF detuned harmonically tuned top sensor electrode2015H and the multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013H. A bulk acoustic SHF or EHF wave resonator2001I includes nine normal axis piezoelectric layers201I,203I,205I,207I,209I,211I,213I,215I,217I and nine reverse axis piezoelectric layers202I,204I,206I,208I,210I,212I,214I,216I,218I arranged in an eighteen piezoelectric layer alternating stack arrangement sandwiched between SHF or EHF detuned harmonically tuned top sensor electrode2015I and multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013I. For the eighteen piezoelectric layers201I,202I,203I,204I,205I,206I,207I,208I,209I,210I,211I,212I,213I,214I,215I,216I,217I,218I of bulk acoustic SHF or EHF wave resonator2001I, simulation may predict optimal facilitation of suppressing parasitic lateral resonances by de-tuning of the resonant frequency of the bulk acoustic wave resonator2001I, for de-tuning of the SHF or EHF detuned harmonically tuned top sensor electrode2015I and the multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013I. In the example resonators,2001A through2001F, ofFIGS.2A and2B, respective sensing region216A through216F is explicitly shown. For the sake of simplicity in the example resonators2001G through2001I ofFIG.2C, respective sensing regions are present but are not explicitly shown. In the example resonators,2001A through2001I, ofFIGS.2A through2C, a notional heavy dashed line is used in depicting respective etched edge region,253A through253I, associated with the example resonators,2001A through2001I. Similarly, in the example resonators,2001A through2001I, ofFIGS.2A through2C, a laterally opposed etched edge region254A through254I may be arranged laterally opposite from etched edge region,253A through253I.
The respective etched edge region may, but need not, assist with acoustic isolation of the resonators,2001A through2001I. The respective etched edge region may, but need not, help with avoiding acoustic losses for the resonators,2001A through2001I. The respective etched edge region,253A through253I, (and the laterally opposed etched edge region254A through254I) may extend along the thickness dimension of the respective piezoelectric layer stack. The respective etched edge region,253A through253I, (and the laterally opposed etched edge region254A through254I) may extend through (e.g., entirely through or partially through) the respective piezoelectric layer stack. The respective etched edge region,253A through253I, may extend through (e.g., entirely through or partially through) the respective first piezoelectric layer,201A through201I. The respective etched edge region,253B through253I, (and the laterally opposed etched edge region254B through254I) may extend through (e.g., entirely through or partially through) the respective second piezoelectric layer,202B through202I. The respective etched edge region,253C through253I, (and the laterally opposed etched edge region254C through254I) may extend through (e.g., entirely through or partially through) the respective third piezoelectric layer,203C through203I. The respective etched edge region,253D through253I, (and the laterally opposed etched edge region254D through254I) may extend through (e.g., entirely through or partially through) the respective fourth piezoelectric layer,204D through204I. The respective etched edge region,253E through253I, (and the laterally opposed etched edge region254E through254I) may extend through (e.g., entirely through or partially through) the respective additional piezoelectric layers of the resonators,2001E through2001I. The respective etched edge region,253A through253I, (and the laterally opposed etched edge region254A through254I) may extend along the thickness dimension of the respective multi-layer metal acoustic SHF or EHF wave reflector bottom electrode,2013A through2013I, of the resonators,2001A through2001I. The respective etched edge region,253A through253I, (and the laterally opposed etched edge region254A through254I) may extend through (e.g., entirely through or partially through) the respective multi-layer metal acoustic SHF or EHF wave reflector bottom electrode,2013A through2013I. The respective etched edge region,253A through253I, (and the laterally opposed etched edge region254A through254I) may extend along the thickness dimension of the respective SHF or EHF detuned harmonically tuned top sensor electrode2015A through2015I of the resonators,2001A through2001I. The etched edge region,253A through253I, (and the laterally opposed etched edge region254A through254I) may extend through (e.g., entirely through or partially through) the respective multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode,2013A through2013I. As shown inFIGS.2A through2C, first mesa structures corresponding to the respective stacks of piezoelectric material layers may extend laterally between (e.g., may be formed between) etched edge regions253A through253I and laterally opposing etched edge region254A through254I. Second mesa structures corresponding to multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013A through2013I may extend laterally between (e.g., may be formed between) etched edge regions253A through253I and laterally opposing etched edge region254A through254I.
Third mesa structures corresponding to SHF or EHF detuned harmonically tuned top sensor electrode2015A through2015I may extend laterally between (e.g., may be formed between) etched edge regions253A through253I and laterally opposing etched edge region254A through254I. In accordance with the teachings herein, various bulk acoustic SHF or EHF wave resonators may include: a seven piezoelectric layer alternating axis stack arrangement; a nine piezoelectric layer alternating axis stack arrangement; an eleven piezoelectric layer alternating axis stack arrangement; a twelve piezoelectric layer alternating axis stack arrangement; a thirteen piezoelectric layer alternating axis stack arrangement; a fourteen piezoelectric layer alternating axis stack arrangement; a fifteen piezoelectric layer alternating axis stack arrangement; a sixteen piezoelectric layer alternating axis stack arrangement; and a seventeen piezoelectric layer alternating axis stack arrangement; and that these stack arrangements may be sandwiched between respective SHF or EHF detuned harmonically tuned top sensor electrodes and respective multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrodes. As mentioned previously, in accordance with the teachings of this disclosure, number of member piezoelectric layers in an alternating piezoelectric axis arrangement may be increased in designs extending to higher resonant frequencies. This may, but need not boost quality factor (Q factor). A total quality factor of the BAW resonator including the sheet resistance of the top electrode may be within a range from approximately three hundred to approximately fifteen hundred. Further, it should be understood that interposer layers as discussed previously herein with respect toFIG.1Aare explicitly shown in the simplified diagrams of the various resonators shown inFIGS.2A,2B and2C. Such interposers may be included and interposed between adjacent piezoelectric layers in the various resonators shown inFIGS.2A,2B and2C, and further may be included and interposed between adjacent piezoelectric layers in the various resonators having the alternating axis stack arrangements of various numbers of piezoelectric layers, as described in this disclosure. In some other alternative bulk acoustic wave resonator structures, fewer interposer layers may be employed. For example,FIG.2Dshows another alternative bulk acoustic wave resonator structure2001J, similar to bulk acoustic wave resonator structure2001I shown inFIG.2C, but with differences. For example, relatively fewer interposer layers may be included in the alternative bulk acoustic wave resonator structure2001J shown inFIG.2D. For example,FIG.2Dshows a first interposer layer261J interposed between second layer of (reverse axis) piezoelectric material202J and third layer of (normal axis) piezoelectric material203J, but without an interposer layer interposed between first layer of (normal axis) piezoelectric material201J and second layer of (reverse axis) piezoelectric material202J. As shown inFIG.2Din a first detailed view220J, without an interposer layer interposed between first layer of piezoelectric material201J and second layer of piezoelectric material202J, the first and second piezoelectric layer201J,202J may be a monolithic layer222J of piezoelectric material (e.g., Aluminum Nitride (AlN)) having first and second regions224J,226J. 
A central region of monolithic layer222J of piezoelectric material (e.g., Aluminum Nitride (AlN)) between first and second regions224J,226J may be oxygen rich. The first region224J of monolithic layer222J (e.g., bottom region224J of monolithic layer222J) has a first piezoelectric axis orientation (e.g., normal axis orientation) as representatively illustrated in detailed view220J using a downward pointing arrow at first region224J, (e.g., bottom region224J). This first piezoelectric axis orientation (e.g., normal axis orientation, e.g., downward pointing arrow) at first region224J of monolithic layer222J (e.g., bottom region224J of monolithic layer222J) corresponds to the first piezoelectric axis orientation (e.g., normal axis orientation, e.g., downward pointing arrow) of first piezoelectric layer201J. The second region226J of monolithic layer222J (e.g., top region226J of monolithic layer222J) has a second piezoelectric axis orientation (e.g., reverse axis orientation) as representatively illustrated in detailed view220J using an upward pointing arrow at second region226J, (e.g., top region226J). This second piezoelectric axis orientation (e.g., reverse axis orientation, e.g., upward pointing arrow) at second region226J of monolithic layer222J (e.g., top region226J of monolithic layer222J) may be formed to oppose the first piezoelectric axis orientation (e.g., normal axis orientation, e.g., downward pointing arrow) at first region224J of monolithic layer222J (e.g., bottom region224J of monolithic layer222J) by adding gas (e.g., oxygen) to flip the axis while sputtering the second region226J of monolithic layer222J (e.g., top region226J of monolithic layer222J) onto the first region224J of monolithic layer222J (e.g., bottom region224J of monolithic layer222J). The second piezoelectric axis orientation (e.g., reverse axis orientation, e.g., upward pointing arrow) at second region226J of monolithic layer222J (e.g., top region226J of monolithic layer222J) corresponds to the second piezoelectric axis orientation (e.g., reverse axis orientation, e.g., upward pointing arrow) of second piezoelectric layer202J. Similarly, as shown inFIG.2Din a second detailed view230J, without an interposer layer interposed between third layer of piezoelectric material203J and fourth layer of piezoelectric material204J, the third and fourth piezoelectric layer203J,204J may be an additional monolithic layer232J of piezoelectric material (e.g., Aluminum Nitride (AlN)) having first and second regions234J,236J. A central region of additional monolithic layer232J of piezoelectric material (e.g., Aluminum Nitride (AlN)) between first and second regions234J,236J may be oxygen rich. The first region234J of additional monolithic layer232J (e.g., bottom region234J of additional monolithic layer232J) has the first piezoelectric axis orientation (e.g., normal axis orientation) as representatively illustrated in second detailed view230J using the downward pointing arrow at first region234J, (e.g., bottom region224J). This first piezoelectric axis orientation (e.g., normal axis orientation, e.g., downward pointing arrow) at first region234J of additional monolithic layer232J (e.g., bottom region234J of additional monolithic layer232J) corresponds to the first piezoelectric axis orientation (e.g., normal axis orientation, e.g., downward pointing arrow) of third piezoelectric layer203J. 
The second region236J of additional monolithic layer232J (e.g., top region236J of additional monolithic layer232J) has the second piezoelectric axis orientation (e.g., reverse axis orientation) as representatively illustrated in second detailed view230J using the upward pointing arrow at second region236J, (e.g., top region236J). This second piezoelectric axis orientation (e.g., reverse axis orientation, e.g., upward pointing arrow) at second region236J of additional monolithic layer232J (e.g., top region236J of additional monolithic layer232J) may be formed to oppose the first piezoelectric axis orientation (e.g., normal axis orientation, e.g., downward pointing arrow) at first region234J of additional monolithic layer232J (e.g., bottom region234J of additional monolithic layer232J) by adding gas (e.g., oxygen) to flip the axis while sputtering the second region236J of additional monolithic layer232J (e.g., top region236J of additional monolithic layer232J) onto the first region234J of additional monolithic layer232J (e.g., bottom region234J of additional monolithic layer232J). The second piezoelectric axis orientation (e.g., reverse axis orientation, e.g., upward pointing arrow) at second region236J of additional monolithic layer232J (e.g., top region236J of additional monolithic layer232J) corresponds to the second piezoelectric axis orientation (e.g., reverse axis orientation, e.g., upward pointing arrow) of fourth piezoelectric layer204J. Similar to what was just discussed, without an interposer layer interposed between fifth layer of piezoelectric material205J and sixth layer of piezoelectric material206J, the fifth and sixth piezoelectric layer205J,206J may be another additional monolithic layer of piezoelectric material (e.g., Aluminum Nitride (AlN)) having first and second regions. More generally, for example inFIG.2D, where N is an odd positive integer, without an interposer layer interposed between Nth layer of piezoelectric material and (N+1)th layer of piezoelectric material, the Nth and (N+1)th piezoelectric layer may be an (N+1)/2th monolithic layer of piezoelectric material (e.g., Aluminum Nitride (AlN)) having first and second regions. Accordingly, without an interposer layer interposed between seventeenth layer of piezoelectric material217J and eighteenth layer of piezoelectric material218J, the seventeenth and eighteenth piezoelectric layer217J,218J may be ninth monolithic layer of piezoelectric material (e.g., Aluminum Nitride (AlN)) having first and second regions. The first interposer layer261J is shown inFIG.2Das interposing between a first pair of opposing axis piezoelectric layers201J,202J, and a second pair of opposing axis piezoelectric layers203J,204J. More generally, for example, where M is a positive integer, an Mth interposer layer is shown inFIG.2Das interposing between an Mth pair of opposing axis piezoelectric layers and an (M+1)th pair of opposing axis piezoelectric layers. Accordingly, an eighth interposer layer is shown inFIG.2Das interposing between an eighth pair of opposing axis piezoelectric layers215J,216J, and a ninth pair of opposing axis piezoelectric layers217J,218J.FIG.2Dshows an eighteen piezoelectric layer alternating axis stack arrangement sandwiched between SHF or EHF detuned harmonically tuned top sensor electrodes2015J and multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013J. 
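The indexing convention just described, in which, for odd N, the Nth and (N+1)th piezoelectric layers form the ((N+1)/2)th monolithic layer and the Mth interposer layer sits between the Mth and (M+1)th pairs of opposing axis piezoelectric layers, can be captured in a minimal sketch. The helper functions below are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch of the indexing convention described above (assumed helpers).

def monolithic_layer_index(n_piezo_layer: int) -> int:
    """Monolithic layer number for the pair that starts at odd layer index n."""
    if n_piezo_layer % 2 == 0:
        raise ValueError("pairs start at an odd piezoelectric layer index")
    return (n_piezo_layer + 1) // 2

def interposer_between_pairs(m: int):
    """The Mth interposer separates the Mth and (M+1)th opposing-axis pairs."""
    return (m, m + 1)

# Example: layers 17 and 18 form the ninth monolithic layer, and the eighth
# interposer lies between the eighth pair (layers 15, 16) and the ninth pair
# (layers 17, 18), consistent with the description of FIG. 2D above.
assert monolithic_layer_index(17) == 9
assert interposer_between_pairs(8) == (8, 9)
```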
Etched edge region253J (and laterally opposing etched edge region254J) may extend through (e.g., entirely through, e.g., partially through) the eighteen piezoelectric layer alternating axis stack arrangement and its interposer layers, and may extend through (e.g., entirely through, e.g., partially through) SHF or EHF detuned harmonically tuned top sensor electrodes2015J, and may extend through (e.g., entirely through, e.g., partially through) multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013J. As shown inFIG.2D, a first mesa structure corresponding to the stack of eighteen piezoelectric material layers may extend laterally between (e.g., may be formed between) etched edge region253J and laterally opposing etched edge region254J. A second mesa structure corresponding to multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013J may extend laterally between (e.g., may be formed between) etched edge region253J and laterally opposing etched edge region254J. Third mesa structure corresponding to SHF or EHF detuned harmonically tuned top sensor electrodes2015J may extend laterally between (e.g., may be formed between) etched edge region253J and laterally opposing etched edge region254J. As mentioned previously herein, one or more (e.g., one or a plurality of) interposer layers may be metal interposer layers. Alternatively or additionally, one or more (e.g., one or a plurality of) interposer layers may be dielectric interposer layers. Interposer layers may be metal and/or dielectric interposer layers. Alternatively or additionally, one or more (e.g., one or a plurality of) interposer layers may be formed of different metal layers. For example, high acoustic impedance metal layer such as Tungsten (W), Molybdenum (Mo) may (but need not) raise effective electromechanical coupling coefficient (Kt2) while subsequently deposited metal layer with hexagonal symmetry such as Titanium (Ti) may (but need not) facilitate higher crystallographic quality of subsequently deposited piezoelectric layer. Alternatively or additionally, one or more (e.g., one or a plurality of) interposer layers may be formed of different dielectric layers. For example, high acoustic impedance dielectric layer such as Hafnium Dioxide (HfO2) may (but need not) raise effective electromechanical coupling coefficient (Kt2). Subsequently deposited amorphous dielectric layer such as Silicon Dioxide (SiO2) may (but need not) facilitate compensating for temperature dependent frequency shifts. Alternatively or additionally, one or more (e.g., one or a plurality of) interposer layers may comprise metal and dielectric for respective interposer layers. For example, high acoustic impedance metal layer such as Tungsten (W), Molybdenum (Mo) may (but need not) raise effective electromechanical coupling coefficient (Kt2) while subsequently deposited amorphous dielectric layer such as Silicon Dioxide (SiO2) may (but need not) facilitate compensating for temperature dependent frequency shifts. For example, inFIG.2Done or more of the interposer layers (e.g., interposer layer268J) may comprise metal and dielectric for respective interposer layers. For example, detailed view240J of interposer268J shows interposer268J as comprising metal sub-layer268JB over dielectric sub-layer268JA. For interposer268J, example thickness of metal sub-layer268JB may be approximately two hundred Angstroms (200 A). 
For interposer268J, example thickness of dielectric sub-layer268JA may be approximately two hundred Angstroms (200 A). The second piezoelectric axis orientation (e.g., reverse axis orientation, e.g., upward pointing arrow) at region244J (e.g., bottom region244J) corresponds to the second piezoelectric axis orientation (e.g., reverse axis orientation, e.g., upward pointing arrow) of eighth piezoelectric layer208J. The first piezoelectric axis orientation (e.g., normal axis orientation, e.g., downward pointing arrow) at region246J (e.g., top region246J) corresponds to the first piezoelectric axis orientation (e.g., normal orientation, e.g., downward pointing arrow) of ninth piezoelectric layer209J. FIG.2Dshows sensing region216J acoustically coupled with SHF or EHF detuned harmonically tuned top sensor electrode2015J. Multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013J may comprise a first pair of metal bottom electrode layers (not shown). Multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013J may also include additional similar pairs (not shown) of alternating high acoustic impedance/low acoustic impedance metal layers. The first pair of metal bottom electrode layers may comprise a first member of low acoustic impedance metal layer and a second member of high acoustic impedance metal layer (not shown). In addition to the first pair of metal bottom electrode layers (not shown), the multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode may include additional pairs (not shown) of alternating high acoustic impedance/low acoustic impedance metal layers. In multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013J, the first member of low acoustic impedance metal layer (not shown) may be arranged nearer to a piezoelectric layer (e.g., nearer to bottom piezoelectric layer201J, e.g., nearer to stack of piezoelectric layers254J) than second member of high acoustic impedance metal layer (not shown). This arrangement may facilitate suppressing parasitic lateral resonances in operation of the BAW resonator. InFIG.2D, an additional intervening high acoustic impedance layer may be present in multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode2013J but is not shown. This additional intervening high acoustic impedance layer may be very thin (e.g. thickness about one tenth or less of an acoustic wavelength of the main resonant frequency of the BAW resonator2001J). However, in alternative examples, intervening high acoustic impedance layer may be otherwise embodied, e.g., in a very thin additional intervening multi-layer structure (not shown) in which an aggregate thickness of the entire additional intervening multi-layer structure is about one tenth or less of an acoustic wavelength of the main resonant frequency of the BAW resonator2001J, e.g., various different materials comprising additional intervening multi-layer structure (not shown) in which an aggregate thickness of the entire additional intervening multi-layer structure is about one tenth or less of an acoustic wavelength of the main resonant frequency of the BAW resonator2001J. As mentioned previously, in bottom de-tuned reflector electrode2013J, the first member having the relatively lower acoustic impedance of the first pair may be arranged substantially nearest, e.g. may substantially abut, the first piezoelectric layer (e.g. bottom piezoelectric layer of the BAW resonator, e.g., piezoelectric stack of the BAW resonator).
It is theorized that because any intervening layers are so thin (e.g., in aggregate any intervening multi-layer structures are so thin), despite their presence, there is still facilitation of suppressing parasitic lateral resonances in operation of the BAW resonator. As discussed, interposer layers shown inFIG.1A, and as explicitly shown in the simplified diagrams of the various resonators shown inFIGS.2A,2B,2C and2Dmay be included and interposed between adjacent piezoelectric layers in the various resonators. Such interposer layers may laterally extend within the mesa structure of the stack of piezoelectric layers a full lateral extent of the stack, e.g., between the etched edge region of the stack and the opposing etched edge region of the stack. However, in some other alternative bulk acoustic wave resonator structures, interposer layers may be patterned during fabrication of the interposer layers (e.g., patterned using masking and selective etching techniques during fabrication of the interposer layers). Such patterned interposer layers need not extend a full lateral extent of the stack (e.g., need not laterally extend to any etched edge regions of the stack.) For example,FIG.2Eshows another alternative bulk acoustic wave resonator structure2001K, similar to bulk acoustic wave resonator structure2001J shown inFIG.2D, but with differences. For example, in the alternative bulk acoustic wave resonator structure2001K shown inFIG.2E, patterned interposer layers (e.g., first patterned interposer layer261K) may be interposed between sequential pairs of opposing axis piezoelectric layers (e.g., first patterned interposer layer261K may be interposed between a first pair of opposing axis piezoelectric layers201K,202K, and a second pair of opposing axis piezoelectric layers203K,204K). FIG.2Eshows an eighteen piezoelectric layer alternating axis stack arrangement having an active region of the bulk acoustic wave resonator structure2001K sandwiched between overlap of SHF or EHF detuned harmonically tuned top sensor electrode2015K and multi-layer metal acoustic SHF or EHF wave reflector bottom electrode2013K. InFIG.2E, patterned interposer layers (e.g., first patterned interposer layer261K) may be patterned to have extent limited to the active region of the bulk acoustic wave resonator structure2001K sandwiched between overlap of SHF or EHF detuned harmonically tuned top sensor electrode2015K and multi-layer metal acoustic SHF or EHF wave reflector bottom electrode2013K. A planarization layer265K at a limited extent of multi-layer metal acoustic SHF or EHF wave reflector bottom electrode2013K may facilitate fabrication of the eighteen piezoelectric layer alternating axis stack arrangement (e.g., stack of eighteen piezoelectric layers201K through218K). Patterning of interposer layers may be done in various combinations. For example, some interposer layers need not be patterned (e.g., may be unpatterned) within lateral extent of the stack of piezoelectric layers (e.g., some interposer layers may extend to full lateral extent of the stack of piezoelectric layers). For example, first interposer layer261J shown inFIG.2Dneed not be patterned (e.g., may be unpatterned) within lateral extent of the stack of piezoelectric layers (e.g., first interposer layer261J may extend to full lateral extent of the stack of piezoelectric layers). 
For example, inFIG.2Dinterposer layers interposed between adjacent sequential pairs of normal axis and reverse axis piezoelectric layers need not be patterned (e.g., may be unpatterned) within lateral extent of the stack of piezoelectric layers (e.g., interposer layers interposed between sequential pairs of normal axis and reverse axis piezoelectric layers may extend to full lateral extent of the stack of piezoelectric layers). For example inFIG.2D, first interposer layer261J interposed between first sequential pair of normal axis and reverse axis piezoelectric layers201J,202J and adjacent second sequential pair of normal axis and reverse axis piezoelectric layers203J,204J need not be patterned within lateral extent of the stack of piezoelectric layers (e.g., first interposer layer261J may extend to full lateral extent of the stack of piezoelectric layers). In contrast to these unpatterned interposer layers (e.g., in contrast to unpatterned interposer layer261J) as shown inFIG.2D, inFIG.2Epatterned interposer layers (e.g., first patterned interposer layer261K) may be patterned, for example, to have extent limited to the active region of the bulk acoustic wave resonator structure2001K shown inFIG.2E.FIG.2Eshows sensing region216J acoustically coupled with SHF or EHF detuned harmonically tuned top sensor electrode2015K. FIGS.3A through3Cillustrate example integrated circuit structures used to form the example bulk acoustic wave resonator structure ofFIG.1A. As shown inFIG.3A, magnetron sputtering may sequentially deposit layers on silicon substrate101. Initially, a seed layer103of suitable material (e.g., aluminum nitride (AlN), e.g., silicon dioxide (SiO2), e.g., aluminum oxide (Al2O3), e.g., silicon nitride (Si3N4), e.g., amorphous silicon (a-Si), e.g., silicon carbide (SiC)) may be deposited, for example, by sputtering from a respective target (e.g., from an aluminum, silicon, or silicon carbide target). The seed layer may have a layer thickness in a range from approximately one hundred Angstroms (100 A) to approximately one micron (1 um). In some examples, the seed layer103may also be at least partially formed of electrical conductivity enhancing material such as Aluminum (Al) or Gold (Au). Next, successive pairs of alternating layers of high acoustic impedance metal and low acoustic impedance metal may be deposited by alternating sputtering from targets of high acoustic impedance metal and low acoustic impedance metal. For example, sputtering targets of high acoustic impedance metal such as Molybdenum or Tungsten may be used for sputtering the high acoustic impedance metal layers, and sputtering targets of low acoustic impedance metal such as Aluminum or Titanium may be used for sputtering the low acoustic impedance metal layers. For example, the fourth pair of bottom metal electrode layers,133,131, may be deposited by sputtering the high acoustic impedance metal for a first bottom metal electrode layer133of the pair on the seed layer103, and then sputtering the low acoustic impedance metal for a second bottom metal electrode layer131of the pair on the first layer133of the pair. Similarly, the third pair of bottom metal electrode layers,129,127, may then be deposited by sequentially sputtering from the high acoustic impedance metal target and the low acoustic impedance metal target. Similarly, the second pair of bottom metal electrodes125,123, may then be deposited by sequentially sputtering from the high acoustic impedance metal target and the low acoustic impedance metal target. 
Similarly, the first pair of bottom metal electrodes121,119, may then be deposited by sequentially sputtering from the high acoustic impedance metal target and the low acoustic impedance metal target. Respective layer thicknesses of bottom metal electrode layers of the first, second, third and fourth pairs119,121,123,125,127,129,131,133may correspond to approximately a quarter wavelength (e.g., a quarter of an acoustic wavelength) of the resonant frequency of the resonator (e.g., respective layer thickness of about six hundred sixty Angstroms (660 A) for the example 24 GHz resonator). However, in the figures, the first member119of the first pair of bottom metal electrode layers for the bottom acoustic reflector is depicted as relatively thicker (e.g., thickness of the first member119of the first pair of bottom metal electrode layers is depicted as relatively thicker) than thickness of remainder bottom acoustic layers. For example, a thickness of the first member119of the first pair of bottom metal electrode layers may be about 60 Angstroms greater, e.g., substantially greater, than an odd multiple (e.g., 1×, 3×, etc.) of a quarter of a wavelength (e.g., 60 Angstroms greater than one quarter of the acoustic wavelength) for the first member119of the first pair of bottom metal electrode layers. For example, if Titanium is used as the low acoustic impedance metal for a 24 GHz resonator (e.g., resonator having a main resonant frequency of about 24 GHz), a thickness for the first member119of the first pair of bottom metal electrode layers of the bottom acoustic reflector may be about 690 Angstroms, while respective layer thicknesses shown in the figures for corresponding members of the other pairs of bottom metal electrode layers may be substantially thinner. Next, as shown inFIG.3B, the bottom electrode layers may be patterned (e.g., by photolithographic masking and etching) and planarized, for example using the planarization material165. A suitable planarization material may be used (e.g., Silicon Dioxide (SiO2), Hafnium Dioxide (HfO2), Polyimide, or BenzoCyclobutene (BCB)). These materials may be deposited by suitable methods, for example, chemical vapor deposition, standard or reactive magnetron sputtering (e.g., in cases of SiO2 or HfO2) or spin coating (e.g., in cases of Polyimide or BenzoCyclobutene (BCB)). FIG.3Cshows a stack of four layers of piezoelectric material, for example, four layers of Aluminum Nitride (AlN) having the wurtzite structure, deposited by sputtering. For example, bottom piezoelectric layer105, first middle piezoelectric layer107, second middle piezoelectric layer109, and top piezoelectric layer111may be deposited by sputtering. The four layers of piezoelectric material in the stack104may have the alternating axis arrangement in the respective stack104. For example, the bottom piezoelectric layer105may be sputter deposited to have the normal axis orientation, which is depicted inFIG.3Cusing the downward directed arrow. The first middle piezoelectric layer107may be sputter deposited to have the reverse axis orientation, which is depicted inFIG.3Cusing the upward directed arrow. The second middle piezoelectric layer109may have the normal axis orientation, which is depicted inFIG.3Cusing the downward directed arrow. The top piezoelectric layer111may have the reverse axis orientation, which is depicted inFIG.3Cusing the upward directed arrow.
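As a rough numeric cross-check of the electrode layer thickness figures quoted above, the sketch below recomputes the approximate quarter-wavelength thicknesses for a 24 GHz resonator. The acoustic velocities are placeholder assumptions chosen only so that the results land near the approximately 660 Angstrom and 690 Angstrom values in the text; they are not figures from this disclosure.

```python
# Illustrative sketch (assumed velocities) of the quarter-wavelength thicknesses.

ANGSTROM = 1e-10
F_MAIN_HZ = 24e9  # example 24 GHz main resonant frequency

def quarter_wave_angstrom(velocity_m_s, f_hz):
    """Quarter of an acoustic wavelength, expressed in Angstroms."""
    return velocity_m_s / (4.0 * f_hz) / ANGSTROM

v_reflector_metal = 6340.0  # assumed velocity giving roughly 660 A at 24 GHz
v_titanium = 6050.0         # assumed velocity for Ti giving roughly 630 A at 24 GHz

# Ordinary pair members: about one quarter wavelength thick.
t_pair_member = quarter_wave_angstrom(v_reflector_metal, F_MAIN_HZ)     # ~660 A

# First member 119: about 60 Angstroms greater than a quarter wavelength.
t_first_member = quarter_wave_angstrom(v_titanium, F_MAIN_HZ) + 60.0    # ~690 A

print(round(t_pair_member), round(t_first_member))  # approximately 660, 690
```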
As mentioned previously herein, polycrystalline thin film AlN may be grown in the crystallographic c-axis negative polarization, or normal axis orientation perpendicular relative to the substrate surface using reactive magnetron sputtering of the Aluminum target in the nitrogen atmosphere. As was discussed in greater detail previously herein, changing sputtering conditions, for example by adding oxygen, may reverse the axis to a crystallographic c-axis positive polarization, or reverse axis, orientation perpendicular relative to the substrate surface. Interposer layers may be sputtered between sputtering of piezoelectric layers, so as to be sandwiched between piezoelectric layers of the stack. For example, first interposer layer159may be sputtered between sputtering of bottom piezoelectric layer105and the first middle piezoelectric layer107, so as to be sandwiched between the bottom piezoelectric layer105and the first middle piezoelectric layer107. For example, second interposer layer161may be sputtered between sputtering first middle piezoelectric layer107and the second middle piezoelectric layer109so as to be sandwiched between the first middle piezoelectric layer107and the second middle piezoelectric layer109. For example, third interposer layer163may be sputtered between sputtering of second middle piezoelectric layer109and the top piezoelectric layer111so as to be sandwiched between the second middle piezoelectric layer109and the top piezoelectric layer111. As discussed previously, one or more of the interposer layers (e.g., interposer layers159,161,163) may be metal interposer layers, e.g., high acoustic impedance metal interposer layers, e.g., Molybdenum metal interposer layers. These may be deposited by sputtering from a metal target. As discussed previously, one or more of the interposer layers (e.g., interposer layers159,161,163) may be dielectric interposer layers, e.g., silicon dioxide interposer layers. These may be deposited by reactive sputtering from a Silicon target in an oxygen atmosphere. Alternatively or additionally, one or more (e.g., one or a plurality of) interposer layers may be formed of different metal layers. For example, high acoustic impedance metal layer such as Tungsten (W) or Molybdenum (Mo) may (but need not) raise effective electromechanical coupling coefficient (Kt2) while subsequently deposited metal layer with hexagonal symmetry such as Titanium (Ti) may (but need not) facilitate higher crystallographic quality of subsequently deposited piezoelectric layer. Alternatively or additionally, one or more (e.g., one or a plurality of) interposer layers may be formed of different dielectric layers. For example, high acoustic impedance dielectric layer such as Hafnium Dioxide (HfO2) may (but need not) raise effective electromechanical coupling coefficient (Kt2). Subsequently deposited amorphous dielectric layer such as Silicon Dioxide (SiO2) may (but need not) facilitate compensating for temperature dependent frequency shifts. Alternatively or additionally, one or more (e.g., one or a plurality of) interposer layers may comprise metal and dielectric for respective interposer layers. For example, high acoustic impedance metal layer such as Tungsten (W) or Molybdenum (Mo) may (but need not) raise effective electromechanical coupling coefficient (Kt2). Subsequently deposited amorphous dielectric layer such as Silicon Dioxide (SiO2) may (but need not) facilitate compensating for temperature dependent frequency shifts.
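The interposer material roles described above (high acoustic impedance metals or dielectrics that may raise the coupling coefficient Kt2, a hexagonal-symmetry metal that may improve crystallographic quality of the next piezoelectric layer, and an amorphous dielectric that may help compensate temperature dependent frequency shifts) can be summarized in a small lookup sketch. The table structure and helper below are assumptions for illustration only.

```python
# Illustrative summary (assumed structure) of interposer material roles per the text.

INTERPOSER_ROLES = {
    "W":    "high acoustic impedance metal; may raise coupling coefficient Kt2",
    "Mo":   "high acoustic impedance metal; may raise coupling coefficient Kt2",
    "Ti":   "hexagonal-symmetry metal; may improve crystal quality of next piezo layer",
    "HfO2": "high acoustic impedance dielectric; may raise Kt2",
    "SiO2": "amorphous dielectric; may help compensate temperature-dependent shifts",
}

def describe_interposer(stackup):
    """Describe each sub-layer of an interposer, e.g. ("SiO2", "Mo") for a
    dielectric-plus-metal interposer like 268J (sub-layers ~200 A each per the text)."""
    return [f"{material}: {INTERPOSER_ROLES[material]}" for material in stackup]

print(describe_interposer(("SiO2", "Mo")))
```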
Sputtering thickness of interposer layers may be as discussed previously herein. Interposer layers may facilitate sputter deposition of piezoelectric layers. For example, initial sputter deposition of second interposer layer161on reverse axis first middle piezoelectric layer107may facilitate subsequent sputter deposition of normal axis piezoelectric layer109. Similar to the previous discussion of patterning, etching and planarization in forming the bottom electrode layers of multilayer metal acoustic reflector electrode113, the stack104of four piezoelectric layers105,107,109,111and their interposers may be patterned (e.g., by photolithographic masking and etching) and planarized. The harmonically tuned top sensor electrode115may be deposited by sputtering the high acoustic impedance metal onto the stack104of four piezoelectric layers105,107,109,111. Thickness of the harmonically tuned top electrode115may be approximately an integral multiple of a half of an acoustic wavelength (e.g., one wavelength) of the resonant frequency of the BAW resonator coupled with the sensing region. The harmonically tuned top sensor electrode115may be patterned (e.g., by photolithographic masking and etching) and planarized. In aggregate, the etching of harmonically tuned top sensor electrode115, the etching of the stack104of four piezoelectric layers105,107,109,111, and the etching of multilayer metal acoustic reflector electrode113provide for etched edge region153extending therethrough as shown inFIG.3C. A notional heavy dashed line is used inFIG.3Cdepicting the etched edge region153associated with the harmonically tuned top sensor electrode115. A first portion of etched edge region153may extend along the thickness dimension T25of the harmonically tuned top sensor electrode115. The mesa structure (e.g., third mesa structure) corresponding to the harmonically tuned top sensor electrode115may extend laterally between (e.g., may be formed between) etched edge region153and laterally opposing etched edge region154. Dry etching may be used, e.g., reactive ion etching may be used to etch the materials of the harmonically tuned top sensor electrode115. Chlorine based reactive ion etch may be used to etch Aluminum in cases where Aluminum is used in the harmonically tuned top sensor electrode115. Fluorine based reactive ion etch may be used to etch Tungsten (W), Molybdenum (Mo), Titanium (Ti), Silicon Nitride (SiN), Silicon Dioxide (SiO2) and/or Silicon Carbide (SiC) in cases where these materials are used in the harmonically tuned top sensor electrode115. An isolation layer167may also be included and arranged over the planarization layer165. For the acoustic resonator based sensor of this disclosure, a suitable dielectric material may be used for the isolation layer167, for example Silicon Nitride, Silicon Dioxide, or Aluminum Nitride. Thickness of isolation layer167may be controlled, for example, to be very thin, for example, within a range from approximately fifty Angstroms to approximately three hundred Angstroms (approximately 50 A to approximately 300 A) for resonators designed to operate at approximately 24 GHz. After planarization layer165(e.g., in one or more steps) and the isolation layer167have been deposited, additional procedures of photolithographic masking, layer etching, and mask removal may be done to form a pair of etched acceptance locations183A,183B for electrical interconnections.
Reactive ion etching or inductively coupled plasma etching with a gas mixture of argon, oxygen and a fluorine containing gas such as tetrafluoromethane (CF4) or Sulfur hexafluoride (SF6) may be used to etch through the isolation layer167and the planarization layer165to form the pair of etched acceptance locations183A,183B for electrical interconnections. Photolithographic masking, sputter deposition, and mask removal may then be used to form electrical interconnects in the pair of etched acceptance locations183A,183B shown inFIG.3C, so as to provide for the bottom electrical interconnect169and top electrical interconnect171that are shown explicitly inFIG.1A. A suitable material, for example Gold (Au) or Copper (Cu), may be used for the bottom electrical interconnect169and top electrical interconnect171. FIGS.4A through4Cshow alternative example bulk acoustic wave resonators400A through400C to the example bulk acoustic wave resonator100A shown inFIG.1A. For example, the bulk acoustic wave resonator400A shown inFIG.4Amay have a cavity483A, e.g., an air cavity483A, e.g., extending into substrate401A, e.g., extending into silicon substrate401A, e.g., arranged below multilayer metal acoustic reflector electrode413A. The cavity483A may be formed using techniques known to those with ordinary skill in the art. For example, the cavity483A may be formed by initial photolithographic masking and etching of the substrate401A (e.g., silicon substrate401A), and deposition of a sacrificial material (e.g., phosphosilicate glass (PSG)). The phosphosilicate glass (PSG) may comprise 8% phosphorus and 92% silicon dioxide. The resonator400A may be formed over the sacrificial material (e.g., phosphosilicate glass (PSG)). The sacrificial material may then be selectively etched away beneath the resonator400A, leaving cavity483A beneath the resonator400A. For example, phosphosilicate glass (PSG) sacrificial material may be selectively etched away by hydrofluoric acid beneath the resonator400A leaving cavity483A beneath the resonator400A. The cavity483A may, but need not, be arranged to provide acoustic isolation of the structures, e.g., multilayer metal acoustic reflector electrode413A, e.g., stack404A of piezoelectric layers, e.g., resonator400A, from the substrate401A. Similarly, inFIGS.4B,4C, a via485B,485C (e.g., through silicon via485B, e.g., through silicon carbide via485C) may, but need not, be arranged to provide acoustic isolation of the structures, e.g., multilayer metal acoustic reflector electrode413B,413C e.g., stack404B,404C, of piezoelectric layers, e.g., resonator400B,400C from the substrate401B,401C. The via485B,485C, (e.g., through silicon via485B, e.g., through silicon carbide via485C) may be formed using techniques (e.g., using photolithographic masking and etching techniques) known to those with ordinary skill in the art. For example, inFIG.4B, backside photolithographic masking and etching techniques may be used to form the through silicon via485B, and an additional passivation layer487B may be deposited, after the resonator400B is formed. For example, inFIG.4C, backside photolithographic masking and etching techniques may be used to form the through silicon carbide via485C, after the harmonically tuned top sensor electrode415C, and stack404C, of piezoelectric layers are formed.
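The etch chemistry choices mentioned above for patterning the harmonically tuned top sensor electrode and for opening the acceptance locations can be summarized in a simple lookup. The helper below is illustrative only; the material groupings follow the text, while the function itself is an assumption.

```python
# Illustrative lookup (assumed helper) of the etch chemistries named in the text.

CHLORINE_ETCHED = {"Al"}                                    # chlorine-based RIE
FLUORINE_ETCHED = {"W", "Mo", "Ti", "SiN", "SiO2", "SiC"}   # fluorine-based RIE

def rie_chemistry(material: str) -> str:
    """Return the reactive-ion-etch chemistry associated with a material above."""
    if material in CHLORINE_ETCHED:
        return "chlorine-based RIE"
    if material in FLUORINE_ETCHED:
        return "fluorine-based RIE"
    raise ValueError(f"no etch chemistry listed for {material}")

# The isolation and planarization layers are opened with an argon/oxygen mixture
# plus a fluorine-containing gas such as CF4 or SF6, per the text.
assert rie_chemistry("Al") == "chlorine-based RIE"
assert rie_chemistry("Mo") == "fluorine-based RIE"
```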
InFIG.4C, after the through silicon carbide via485C is formed, backside photolithographic masking and deposition techniques may be used to form multilayer metal acoustic reflector electrode413C, and additional passivation layer487C. InFIGS.4A,4B,4C, multilayer metal acoustic reflector electrode413A,413B,413C, may include the acoustically reflective bottom electrode stack of the plurality of bottom metal electrode layers, in which thicknesses of the bottom metal electrode layers may be related to wavelength (e.g., acoustic wavelength) at the main resonant frequency of the example resonator400A,400B,400C. Respective layer thicknesses (e.g., T02through T04, explicitly shown inFIGS.4A,4B,4C) for members of the pairs of bottom metal electrode layers may be about one quarter of the wavelength (e.g., one quarter acoustic wavelength) at the main resonant frequency of the example resonators400A,400B,400C. Relatively speaking, various alternative designs of the example resonators400A,400B,400C for relatively lower main resonant frequencies (e.g., five Gigahertz (5 GHz)), having corresponding relatively longer wavelengths (e.g., longer acoustic wavelengths), may have relatively thicker bottom metal electrode layers in comparison to other alternative designs of the example resonators400A,400B,400C for relatively higher main resonant frequencies (e.g., twenty-four Gigahertz (24 GHz)). There may be corresponding longer etching times to form, e.g., etch through, the relatively thicker bottom metal electrode layers in designs of the example resonator400A,400B,400C, for relatively lower main resonant frequencies (e.g., five Gigahertz (5 GHz)). Accordingly, in designs of the example resonators400A,400B,400C, for relatively lower main resonant frequencies (e.g., five Gigahertz (5 GHz)) having the relatively thicker bottom metal electrode layers, there may (but need not) be an advantage in etching time in having a relatively fewer number (e.g., four (4)) of bottom metal electrode layers, shown inFIGS.4A,4B,4C, in comparison to a relatively larger number (e.g., eight (8)) of bottom metal electrode layers, shown inFIG.1A. The relatively larger number (e.g., eight (8)) of bottom metal electrode layers shown inFIG.1Amay (but need not) provide for relatively greater acoustic isolation than the relatively fewer number (e.g., four (4)) of bottom metal electrode layers. However, inFIGS.4A and4Ethe cavity483A,483E, (e.g., air cavity483A,483E) may (but need not) be arranged to provide acoustic isolation enhancement relative to some designs without the cavity483A. Similarly, inFIGS.4B,4C, the via485B,485C, (e.g., through silicon via485B, e.g., through silicon carbide via485C) may (but need not) be arranged to provide acoustic isolation enhancement relative to some designs without the via485B,485C. InFIG.4A, the cavity483A may (but need not) be arranged to compensate for relatively lesser acoustic isolation of the relatively fewer number (e.g., four (4)) of bottom metal electrode layers. InFIG.4A, the cavity483A may (but need not) be arranged to provide acoustic isolation benefits, while retaining possible electrical conductivity improvements and etching time benefits of the relatively fewer number (e.g., four (4)) of bottom metal electrode layers, e.g., particularly in designs of the example resonator400A for relatively lower main resonant frequencies (e.g., five Gigahertz (5 GHz)).
Similarly, inFIGS.4B,4C, the via485B,485C, may (but need not) be arranged to compensate for relatively lesser acoustic isolation of the relatively fewer number (e.g., four (4)) of bottom metal electrode layers. InFIGS.4B,4C, the via485B,485C, may (but need not) be arranged to provide acoustic isolation benefits, while retaining possible electrical conductivity improvement benefits and etching time benefits of the relatively fewer number (e.g., four (4)) of bottom metal electrode layers, e.g., particularly in designs of the example resonator400B,400C, for relatively lower main resonant frequencies (e.g., five Gigahertz (5 GHz), e.g., below six Gigahertz (6 GHz), e.g., below five Gigahertz (5 GHz)). Although in various example resonators100A,400A,400B, polycrystalline piezoelectric layers (e.g., polycrystalline Aluminum Nitride (AlN)) may be deposited (e.g., by sputtering), in another example resonator400C, alternative single crystal or near single crystal piezoelectric layers (e.g., single/near single crystal Aluminum Nitride (AlN)) may be deposited (e.g., by metal organic chemical vapor deposition (MOCVD)). Normal axis piezoelectric layers (e.g., normal axis Aluminum Nitride (AlN) piezoelectric layers) may be deposited by MOCVD using techniques known to those with skill in the art. As discussed previously herein, the interposer layers may be deposited by sputtering, but alternatively may be deposited by MOCVD. Reverse axis piezoelectric layers (e.g., reverse axis Aluminum Nitride (AlN) piezoelectric layers) may likewise be deposited via MOCVD. For the example resonator400C shown inFIG.4C, the alternating axis piezoelectric stack404C, comprised of piezoelectric layers405C,407C,409C,411C, as well as interposer layers459C,461C,463C, extending along stack thickness dimension T27, may be fabricated using MOCVD on a silicon carbide substrate401C. For example, aluminum nitride of piezoelectric layers405C,407C,409C,411C, may grow nearly epitaxially on silicon carbide (e.g., 4H SiC) by virtue of the small lattice mismatch between the polar axis aluminum nitride wurtzite structure and specific crystal orientations of silicon carbide. Alternative small lattice mismatch substrates may be used (e.g., sapphire, e.g., aluminum oxide). By varying the ratio of the aluminum and nitrogen in the deposition precursors, an aluminum nitride film may be produced with the desired polarity (e.g., normal axis, e.g., reverse axis). For example, normal axis aluminum nitride may be synthesized using MOCVD when a nitrogen to aluminum ratio in precursor gases is approximately 1000. For example, reverse axis aluminum nitride may be synthesized when the nitrogen to aluminum ratio is approximately 27000. In accordance with the foregoing,FIG.4Cshows MOCVD synthesized normal axis piezoelectric layer405C, MOCVD synthesized reverse axis piezoelectric layer407C, MOCVD synthesized normal axis piezoelectric layer409C, and MOCVD synthesized reverse axis piezoelectric layer411C. For example, normal axis piezoelectric layer405C may be synthesized by MOCVD in a deposition environment where the nitrogen to aluminum gas ratio is relatively low, e.g., 1000 or less. Next, an oxyaluminum nitride layer459C may be deposited by MOCVD at lower temperature; this layer may reverse the axis (e.g., the axis polarity) of the growing aluminum nitride under MOCVD growth conditions, and has also been shown to be able to be deposited by itself under MOCVD growth conditions.
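The polarity selection described above can be summarized in a small, hedged Python sketch that maps a desired aluminum nitride polarity to the approximate nitrogen-to-aluminum precursor ratios stated in the text. The helper function name and the program structure are illustrative assumptions, not a process recipe of this disclosure.

# Hedged sketch: choosing an approximate MOCVD nitrogen-to-aluminum precursor ratio
# for the desired aluminum nitride polarity, per the approximate ratios stated above.

def mocvd_n_to_al_ratio(polarity: str) -> float:
    """Return an approximate nitrogen-to-aluminum precursor gas ratio for AlN growth."""
    if polarity == "normal":
        return 1_000.0        # relatively low ratio for normal axis growth
    if polarity == "reverse":
        return 27_000.0       # much higher ratio for reverse axis growth
    raise ValueError("polarity must be 'normal' or 'reverse'")

# Alternating axis stack of FIG. 4C: normal, reverse, normal, reverse.
stack_polarity = ["normal", "reverse", "normal", "reverse"]
for layer_index, polarity in enumerate(stack_polarity, start=1):
    print(f"piezoelectric layer {layer_index}: {polarity} axis, "
          f"N:Al ratio ~{mocvd_n_to_al_ratio(polarity):,.0f}")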
Increasing the nitrogen to aluminum ratio into the several thousands during the MOCVD synthesis may enable the reverse axis piezoelectric layer407C to be synthesized. Interposer layer461C may be an oxide layer such as, but not limited to, aluminum oxide or silicon dioxide. This oxide layer may be deposited in a low temperature physical vapor deposition process such as sputtering or in a higher temperature chemical vapor deposition process. Normal axis piezoelectric layer409C may be grown by MOCVD on top of interposer layer461C using growth conditions similar to the normal axis layer405C as discussed previously, namely MOCVD in a deposition environment where the nitrogen to aluminum gas ratio is relatively low, e.g., 1000 or less. Next an aluminum oxynitride, interposer layer463C may be deposited in a low temperature MOCVD process followed by a reverse axis piezoelectric layer411C synthesized in a high temperature MOCVD process and an atmosphere of nitrogen to aluminum ratio in the several thousand range. Upon conclusion of these depositions, the piezoelectric stack404C shown inFIG.4Cmay be realized. FIG.5shows a simplified top plan view of an example fluidic system5000A of this disclosure, along with a simplified cross sectional view of the fluidic system5000B showing operation of an example bulk acoustic wave resonator structure500B and sensing region516A,516B of this disclosure. Top plan view of fluidic system5000A shows resonator electrical interconnects569A,569B extending through isolation layer567A. Fluid containment member550A (e.g., microfluidic containment550A) provides for fluid circulation there through, for example, by including fluid entrance aperture552A (e.g., microfluidic entrance aperture552A) to provide for fluid entering an inner fluid lumen of fluid containment member550A, and by including fluid exit aperture554A (e.g., microfluidic exit aperture554A) to provide for fluid exiting the inner fluid lumen of fluid containment member550A. Top plan view of fluidic system5000A shows lateral support features564A as visible through fluid entrance aperture552A and fluid exit aperture554A. Top plan view of fluidic system5000A shows a dashed line rectangle representatively illustrating sensing region516A associated with the bulk acoustic wave resonator structure disposed proximate a surface of the inner fluid lumen of fluid containment member550A. Sensing region516A may have a sensing area within a range from approximately sixteen hundred square microns to approximately twenty five thousand six hundred square microns. Sensing region516A may have a sensing area of approximately sixty four hundred square microns. Sensing region516A may have a width dimension W of approximately forty (40) microns wide. Sensing region516A may have a length dimension L of approximately one hundred sixty (160) microns long. These width and length dimensions of sensing region516A may accommodate a microfluidic channel of the inner fluid lumen of fluid containment member550A. Dimensions of the bulk acoustic wave resonator associated with sensing region516A may be sized to approximately accommodate the dimensions of sensing region516A. FIG.5shows simplified cross sectional view of the fluidic system5000B showing operation of the example bulk acoustic wave resonator structure500B and sensing region516B. Cross sectional view of fluidic system5000B shows resonator electrical interconnects569B,569B extending through isolation layer567B. 
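The sensing region dimensions quoted above can be cross-checked with a short, hedged arithmetic sketch; the variable names are illustrative only.

# Hedged arithmetic check of the sensing region areas described above.
width_um, length_um = 40.0, 160.0
print(width_um * length_um)            # 6400.0 square microns, matching the stated area

# Stated range from approximately 1,600 to approximately 25,600 square microns:
print(40.0 * 40.0, 160.0 * 160.0)      # 1600.0 25600.0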
Fluid containment member550B (e.g., microfluidic containment550B) may provide for fluid circulation there through, for example, by including fluid entrance aperture552B (e.g., microfluidic entrance aperture552B) to provide for fluid entering the inner fluid lumen of fluid containment member550B, and by including fluid exit aperture554B (e.g., microfluidic exit aperture554B) to provide for fluid exiting the inner fluid lumen of fluid containment member550B. Fluid circulation through fluid containment member550B is depicted using a downward pointing dark arrow at fluid entrance aperture552B, a horizontal dark arrow extending laterally through the inner fluid lumen (e.g., microfluidic channel) of fluid containment member550B, and an upward pointing dark arrow at fluid exit aperture554B. Target analytes, depicted using solid black triangles, may be suspended in the fluid flow (e.g., liquid flow, e.g., liquid flow comprising water), along with other particles (e.g., un-targeted particles) also depicted. Functionalized layer568B may comprise antibodies targeted for binding with the target analytes (e.g., target antigens) suspended in the circulation of the fluid flow through the inner lumen (e.g., microfluidic channel) of fluid containment member550B. Surface features of the antibodies conform to complementary surface features of target analytes (e.g., target antigens) to facilitate selectivity in binding of the antibodies with target analytes (e.g., target antigens). This is representatively illustrated in the cross sectional view of fluidic system5000B by the surface features of antibodies being depicted with angled surface features, and by the target analytes (e.g., target antigens) being depicted with complementary, e.g., triangular, surface features. For example, the antibodies depicted inFIG.5may be coronavirus antibodies, the target antigens may be coronavirus, and the fluid flow may be liquid derived from a blood sample from an infected patient. Cross sectional view of fluidic system5000B shows lateral support features564B. The cross sectional view of fluidic system5000B shows sensing region516B associated with the bulk acoustic wave resonator structure500B disposed proximate a surface of the inner fluid lumen of fluid containment550B. Details of the bulk acoustic wave resonator structure500B have already been discussed in detail with reference to the similar bulk acoustic wave resonator100A shown inFIG.1A. Accordingly, the bulk acoustic wave resonator structure500B is only briefly discussed here. As shown inFIG.5, bulk acoustic wave resonator structure500B may comprise a stack of alternating axis piezoelectric layers504B sandwiched between multilayer metal acoustic reflector electrode513B and harmonically tuned top sensor electrode515B. Top sensor electrode515B may be a harmonically tuned top sensor electrode515B, e.g., may have a thickness of approximately an integral multiple of a half acoustic wavelength of the main resonant frequency of BAW resonator500B, e.g., may have a thickness of approximately N*λ/2 where N is an integer. For example, harmonically tuned top sensor electrode515B may have a thickness of approximately a half acoustic wavelength of the main resonant frequency of BAW resonator500B. For example, harmonically tuned top sensor electrode515B may have a thickness of approximately an acoustic wavelength of the main resonant frequency of BAW resonator500B.
Top sensor electrode515B may be a non-harmonically tuned top sensor electrode515B, e.g., may have a thickness that is not approximately an integral multiple of a half acoustic wavelength of the main resonant frequency of BAW resonator500B. For example, in a case where top sensor electrode515B may be a non-harmonically tuned top sensor electrode515B, top sensor electrode515B may have a thickness that is approximately a tenth (0.1) of an acoustic wavelength of the main resonant frequency of BAW resonator500B. Top sensor electrode515B may have a thickness that may be within a range from approximately one tenth of the acoustic wavelength of the main resonant frequency of BAW resonator500B to approximately one acoustic wavelength of the main resonant frequency of BAW resonator500B. Planarization material568B may be used to electrically insulate harmonically tuned top sensor electrode515B from bottom electrical interconnect569B. Bottom electrical interconnect569B may be electrically coupled with multilayer metal acoustic reflector electrode513B. Top electrical interconnect571B may be electrically coupled with harmonically tuned top sensor electrode515B. The stack of alternating axis piezoelectric layers504B may be electrically and acoustically coupled with the multilayer metal acoustic reflector electrode513B and the harmonically tuned top sensor electrode515B to excite the piezoelectrically excitable resonance mode (e.g., main resonance mode, e.g., thickness extensional main resonance mode) of BAW resonator500B acoustically coupled with the sensing region516B at the resonant frequency (e.g., main resonant frequency). For example, such excitation may be done by using the multilayer metal acoustic reflector electrode513B and the harmonically tuned top sensor electrode515B to apply an oscillating electric field having a frequency corresponding to the resonant frequency (e.g., main resonant frequency) of the BAW resonator500B acoustically coupled with the sensing region516B. Sensing region516A,516B may comprise a functionalized layer to facilitate binding to an analyte. For example, the functionalized layer may comprise antibodies. The functionalized layer of sensing region516A,516B may comprise a self-assembled monolayer. The functionalized layer of sensing region516A,516B may comprise one or more binding biomolecules (e.g., antibodies) configured to bind with target biomolecules (e.g., antigens, e.g., coronavirus). For example, antibodies of the functionalized layer acoustically coupled with bulk acoustic wave resonator500B at the sensing region516B may selectively bind the mass of one or more analytes (e.g., antigens, e.g., coronavirus). The mass of one or more antigens (e.g., coronavirus) binding to antibodies of the functionalized layer acoustically coupled with bulk acoustic wave resonator500B may cause detectable resonance frequency shifts (e.g., decrease in resonance frequency) in operation of bulk acoustic wave resonator500B in its thickness extensional main resonant mode. Electrical circuitry may be coupled with bulk acoustic wave resonator500B to determine the resonance frequency shift. This may detect the presence of the targeted antigen (e.g., coronavirus). Further, mass sensitivity may increase with the square of frequency.
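The harmonic and non-harmonic top sensor electrode thickness options described above can be illustrated with a short, hedged Python sketch. The electrode material and its longitudinal acoustic velocity below (e.g., molybdenum at roughly 6,300 m/s) are assumptions for illustration and are not specified by this disclosure; only the thickness fractions (about 0.1, 0.5, and 1.0 of an acoustic wavelength) come from the text.

# Hedged sketch: candidate top sensor electrode thicknesses at a given main resonant
# frequency, expressed as fractions of the acoustic wavelength in the electrode.

def electrode_thickness_nm(fraction_of_wavelength, velocity_m_per_s, frequency_hz):
    """Thickness as a stated fraction of the acoustic wavelength, in nanometers."""
    wavelength_m = velocity_m_per_s / frequency_hz
    return fraction_of_wavelength * wavelength_m * 1e9

f_hz = 24.25e9          # main resonant frequency used in the examples above
v_assumed = 6300.0      # assumed electrode acoustic velocity, m/s (illustrative)

for label, fraction in (("non-harmonic, ~0.1 wavelength", 0.1),
                        ("harmonic, half wavelength (N=1)", 0.5),
                        ("harmonic, one wavelength (N=2)", 1.0)):
    print(f"{label}: ~{electrode_thickness_nm(fraction, v_assumed, f_hz):6.1f} nm")

Under these assumptions the non-harmonic electrode comes out at roughly a few tens of nanometers and the harmonic electrodes at roughly 130 nm to 260 nm, illustrating why harmonically tuned electrodes are substantially thicker and hence lower in series resistance.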
The thickness extensional main resonant mode BAW resonator500B may operate with resonant frequencies in the Super High Frequency band (e.g., resonant frequency of 24.25 GHz, or higher bands, e.g., higher resonant frequencies), and so its mass sensitivity may be much higher than that of resonators operating below the Super High Frequency band. Sensitivity is discussed in greater detail subsequently herein. Sensitivity of the fluidic system5000A,5000B when the sensing region516A,516B may be exposed to fluid may be within a range from approximately one half part per million per one hundred attograms to approximately fifty parts per million per one hundred attograms, e.g., for a sensing area of approximately sixty four hundred square microns. Sensitivity of the fluidic system5000A,5000B when the sensing region516A,516B may be exposed to fluid may be within a range from approximately one kilohertz centimeter squared per nanogram to approximately two hundred kilohertz centimeter squared per nanogram. As discussed, fluid containment member550B (e.g., microfluidic containment550B) may provide for fluid circulation there through, for example, by including fluid entrance aperture552B (e.g., microfluidic entrance aperture552B) to provide for fluid entering the inner fluid lumen of fluid containment member550B, and by including fluid exit aperture554B (e.g., microfluidic exit aperture554B) to provide for fluid exiting the inner fluid lumen of fluid containment member550B. The fluid flow may be liquid derived from a blood sample from an infected patient. The functionalized layer of sensing region516A,516B may have an affinity for a constituent of blood. For example, the functionalized layer of sensing region516A,516B may comprise antibodies that may have an affinity for a virus (e.g., antigen) constituent of blood from an infected patient. However, in other examples, the functionalized layer of sensing region516A,516B may have an affinity for a biomarker (e.g., glucose, e.g., prostate specific antigen) constituent of blood from a patient managing a disease (e.g., diabetes, e.g., prostate cancer). In another example, fluid containment member550B may be an insertable hollow microneedle having an inner lumen bore and one or more apertures to access an interstitial fluid of a patient. The functionalized layer of sensing region516A,516B may have an affinity for a biomarker (e.g., glucose) constituent of interstitial fluid of a patient managing a disease (e.g., diabetes). Microneedles may be desirable because their small size and extremely sharp tip may reduce insertion pain and tissue trauma to the patient. The length of the microneedles may be kept short enough to not penetrate to the pain receptors in the inner layers of the patient's skin. For example, microneedle length may be one (1) millimeter or less. The inner lumen bore of the hollow microneedle may, for example, have a cross-sectional dimension of greater than 25 microns. The inner lumen bore of the hollow microneedle may, for example, have a cross-sectional dimension of greater than 100 microns. Advantageously, BAW resonators of this disclosure operating at high frequencies (e.g., 24 GHz) may be made small, e.g., with dimensions small enough to accommodate being disposed in the inner lumen bore of the hollow microneedle having access to the interstitial fluid, e.g., the sensing region516A,516B of BAW resonator500B may contact the interstitial fluid of a patient via the microneedle having access to the interstitial fluid.
Broadly speaking, a fluid need not necessarily be a liquid. Broadly speaking, air, or more particularly a patient's breath, may be recognized as fluid. Accordingly, in another example, a patient's breath, or portions thereof, may circulate through fluid containment member550B. The functionalized layer of sensing region516A,516B may have an affinity for a biomarker (e.g., acetone, e.g., tetrahydrocannabinol (THC)) constituent of a person's breath, which may be associated with a person's condition (e.g., lipid oxidation, e.g., marijuana intoxication). In other examples, air circulating through fluid containment member550B may provide for detection of targeted analytes of interest. For example, in cases where infectious disease carriers may be airborne, tainted air may be circulated through fluid containment member550B. The functionalized layer of sensing region516A,516B may have an affinity for airborne infectious disease carriers. Sensing region516A,516B of BAW resonator500B may detect airborne infectious disease carriers. Similarly, coughs or sneezes of infected people may give rise to respiratory droplets that include infectious disease carrier constituents (e.g., coronavirus). The functionalized layer of sensing region516A,516B may have an affinity for infectious disease carrier constituents (e.g., coronavirus). Sensing region516A,516B of BAW resonator500B may detect infectious disease carrier constituents (e.g., coronavirus). In other examples, tainted air circulating through fluid containment member550B may provide for detection of other targeted analytes of interest. Polluted air may include particulate matter. Sensing region516A,516B of BAW resonator500B may be used to detect particulate matter. Tainted air may include a toxin (e.g., hydrocarbon gas, e.g., carbon monoxide, e.g., a nerve agent). Sensing region516A,516B of BAW resonator500B may detect the toxin. Tainted air may include volatile organic compounds (e.g., hydrocarbons, e.g., alcohols, e.g., ammonia, e.g., acetone, e.g., ketones, e.g., aldehydes, e.g., esters, e.g., heterocycles). Sensing region516A,516B of BAW resonator500B may detect volatile organic compounds. Further, tainted air may be indicative of other dangers. Sensing region516A,516B of BAW resonator500B may detect presence of explosives (e.g., trinitrotoluene (TNT), 1,3,5-trinitro-1,3,5-triazacyclohexane (RDX)). In other examples, air circulating through fluid containment member550B may provide for detection of changes in environmental variables. Sensing region516A,516B of BAW resonator500B may detect changes in environmental variables (e.g., changes in air temperature, e.g., changes in air pressure, e.g., changes in air humidity). In other examples, fluid (e.g., water) circulating through fluid containment member550B may provide for detection of changes in water quality (e.g., presence of toxins, e.g., presence of heavy metals, e.g., presence of lead). Sensing region516A,516B of BAW resonator500B may detect changes in water quality (e.g., presence of toxins, e.g., presence of heavy metals, e.g., presence of lead). FIGS.6A through6Care simplified diagrams of various example resonators of this disclosure, along with respective diagrams illustrating respective corresponding properties as predicted by simulation.
The respective top halves ofFIGS.6A through6Cdepict respective simplified depictions of six BAW resonators:6001A through6001F inFIG.6A,6001I through6001NinFIG.6B, and6001R through6001WinFIG.6C. BAW resonators6001A,6001I,60001R comprise respective normal axis piezoelectric layers601A,601I,601R sandwiched between respective multilayer metal acoustic reflector electrodes6013A,6013I,6013R and top sensor electrodes6015A,6015I,6015R. BAW resonators6001B,6001J,6001S comprise respective two layer alternating arrangements of normal axis piezoelectric layers601B,601J,601S and reverse axis piezoelectric layers602B,602J,602S sandwiched between respective multilayer metal acoustic reflector electrodes6013B,6013J,6013S and top sensor electrodes6015B,6015J,6015S. BAW resonators6001C,6001K,6001T comprise respective three layer alternating arrangements of normal axis piezoelectric layers601C,601K,601T, reverse axis piezoelectric layers602C,602K,602T and second normal axis piezoelectric layers603C,603K,603T sandwiched between respective multilayer metal acoustic reflector electrodes6013C,6013K,6013T and top sensor electrodes6015C,6015K,6015T. BAW resonators6001D,6001L,6001U comprise respective four layer alternating arrangements of normal axis piezoelectric layers601D,601L,601U, reverse axis piezoelectric layers602D,602L,602U, second normal axis piezoelectric layers603D,603L,603U and second reverse axis piezoelectric layers604D,604L,604U sandwiched between respective multilayer metal acoustic reflector electrodes6013D,6013L,6013U and top sensor electrodes6015D,6015L,6015U. BAW resonators6001E,6001M,6001V comprise respective five layer alternating arrangements of normal axis piezoelectric layers601E,601M,601V, reverse axis piezoelectric layers602E,602M,602V, second normal axis piezoelectric layers603E,603M,603V, second reverse axis piezoelectric layers604E,604M,604V, and third normal axis piezoelectric layers605E,605M,605V sandwiched between respective multilayer metal acoustic reflector electrodes6013E,6013M,6013V and top sensor electrodes6015E,6015M,6015V. BAW resonators6001F,6001N,6001W comprise respective six layer alternating arrangements of normal axis piezoelectric layers601F,601N,601W, reverse axis piezoelectric layers602F,602N,602W, second normal axis piezoelectric layers603F,603N,603W, second reverse axis piezoelectric layers604F,604N,604W, third normal axis piezoelectric layers605F,605N,605W and third reverse axis piezoelectric layers606F,606N,606W sandwiched between respective multilayer metal acoustic reflector electrodes6013F,6013N,6013W and top sensor electrodes6015F,6015N,6015W. As shown inFIG.6A, for BAW resonators6001A through6001F, thickness of top sensor electrodes6015A through6015F may vary with N times a half acoustic wavelength (λ/2) of BAW resonator resonant frequency, with N being 0.2 or N being 1 or N being 2. Harmonic top sensor electrodes having thicknesses that are approximately an integral multiple (e.g., N=1, e.g., N=2) of a half acoustic wavelength (λ/2) of BAW resonator resonant frequency may differing performance characteristics relative to non-harmonic top sensor electrodes having thicknesses that are not approximately an integral multiple (e.g., N˜0.2) of a half acoustic wavelength (λ/2) of BAW resonator resonant frequency. For purposes of simulation, BAW resonators6001A through6001F are designed to have a main resonant frequency of 24.25 GHz and have a sensing region area at the top sensor electrodes6015A through6015F of approximately 80×80 microns. 
For N being 0.2, the thicknesses of piezoelectric layers abutting the multilayer metal acoustic reflector electrodes6013A through6013F and top sensor electrodes6015A through6015F have been adjusted in such a way that the main resonance frequency of resonators6001A through6001F is substantially the same as for N being 1 or N being 2. A lower left diagram6019G inFIG.6Ashows a normalized (e.g., ratioed) top surface displacement of top sensor electrodes6015A through6015F of BAW resonators6001A through6001F versus number of half acoustic wavelength (λ/2) alternating axis piezoelectric layers as calculated from finite-element simulations. The displacement of top sensor electrodes6015A through6015F of BAW resonators6001A through6001F may create a pressure wave in a liquid placed on the top surface and therefore may lead to Quality factor (Q factor) loss for resonators6001A through6001F operating in the thickness extensional mode. For reference, simulation results of displacement of top sensor electrodes6015A through6015F of BAW resonators6001A through6001F have been normalized (e.g., ratioed) relative to displacement of top sensor electrode6015A for N=0.2 design. Trace6021G shows that for harmonic top sensor electrodes having thicknesses that are approximately an integral multiple (e.g., N=1, e.g., N=2) of a half acoustic wavelength (λ/2), as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, normalized ratio of top surface displacement of top sensor electrodes6015A through6015F ranges from about 0.8 to 0.1. Trace6023G shows that for non-harmonic top sensor electrodes having thicknesses that are not approximately an integral multiple (e.g., N˜0.2) of a half acoustic wavelength (λ/2), as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, normalized ratio of top surface displacement of top sensor electrodes6015A through6015F ranges from about 1 to 0.1. This shows that increasing number of alternating axis piezoelectric layers in BAW resonators6001A through6001F from one layer to six layers correspondingly decreases normalized ratio of top surface displacement. It is theorized that for a given oscillating voltage amplitude applied to the BAW resonators6001A through6001F, the displacement of each piezoelectric layer and therefore the displacement of top sensor electrodes6015A through6015F of BAW resonators6001A through6001F may decrease proportionately to the number of alternating axis piezoelectric layers. Decreasing normalized ratio of top surface displacement in thickness extensional resonant mode BAW resonators operable in liquid by increasing number of alternating axis piezoelectric layers may be important, since this may limit acoustic energy losses in liquid. A lower right diagram6019H inFIG.6Ashows estimated normalized (e.g., ratioed) energy loss in liquid versus number of half acoustic wavelength (λ/2) alternating axis piezoelectric layers. For reference, energy losses in liquid for BAW resonators6001A through6001F have been normalized to energy losses in liquid for BAW resonator6001A with N=0.2 non-harmonic top sensing electrode6015A. It is theorized that energy loss in liquid may be proportional to the body force of top electrode on liquid multiplied by the displacement of top sensor electrodes6015A through6015F of BAW resonators6001A through6001F.
Since it is theorized that the body force of the top electrode on the liquid may be proportional to the squared displacement of the top sensor electrode, the energy loss in liquid for the thickness extensional mode may be proportional to the third power of the displacement of top sensor electrodes6015A through6015F of BAW resonators6001A through6001F. Trace6021H shows that for harmonic top sensor electrodes having thicknesses that are approximately an integral multiple (e.g., N=1, e.g., N=2) of a half acoustic wavelength (λ/2), as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, normalized ratio of loss in liquid for BAW resonators6001A through6001F may range from about 0.8 to 0.002. Trace6023H shows that for non-harmonic top sensor electrodes having thicknesses that are not approximately an integral multiple (e.g., N˜0.2) of a half acoustic wavelength (λ/2), as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, normalized ratio of loss in liquid for BAW resonators6001A through6001F ranges from about 1 to 0.001. This may indicate that thickness extensional resonant mode BAW resonators operable in liquid with increasing number of alternating axis piezoelectric layers may limit acoustic energy losses in liquid, e.g., by a factor up to 500 for BAW resonators6001F with harmonic electrodes having N=1 or N=2, and by a factor up to 1000 for BAW resonator6001F with non-harmonic electrode having N=0.2. Notably, the normalized results presented in diagrams6019G and6019H need not necessarily depend on the specific frequency for which the BAW resonators6001A through6001F have been designed, nor on the specific sensing area sizes of BAW resonators6001A through6001F, as should be appreciated by one skilled in the art, e.g., upon learning from this disclosure. FIG.6Bshows BAW resonators6001I through6001N having non-harmonic top sensor electrodes6015I through6015N having thicknesses that are not approximately an integral multiple (e.g., N˜0.2) of a half acoustic wavelength (λ/2) of the resonant frequency of BAW resonators6001I through6001N (e.g., non-harmonic top sensor electrodes6015I through6015N may have thicknesses of 0.1 acoustic wavelength of the resonant frequency of BAW resonators6001I through6001N). For purposes of simulation, BAW resonators6001I through6001N are designed to have a sensing region area at the top sensor electrodes6015I through6015N of approximately 80×80 microns. Lower left diagram6019O and center diagram6019P ofFIG.6Bshow sensitivity of BAW resonators6001I through6001N versus number of half acoustic wavelength (λ/2) alternating axis piezoelectric layers for varied designs of BAW resonators6001I through6001N having varied main resonant frequencies of 4 GHz, 8 GHz and 24.25 GHz. Units of sensitivity for the lower left diagram6019O ofFIG.6Bare in parts per million per one hundred attograms, e.g., for 80×80 microns squared resonator sensing area. These units for sensitivity may be particularly helpful for understanding sensitivity in terms of virus detection. Electronics may measure one part per million or better in frequency shift of resonant frequency (e.g., delta Fs). A virus, e.g., coronavirus, may have a mass of 100 attograms in water. Accordingly, the change in mass (delta m) for detecting one virus, e.g., one coronavirus, binding to an antibody of the functionalized layer at the sensing region of the BAW resonator may be 100 attograms.
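The theorized cubic relationship described above (body force roughly proportional to displacement squared, loss roughly proportional to force times displacement) can be illustrated with a short, hedged Python sketch using the normalized displacement endpoints quoted for the non-harmonic trace; the function name and the use of the quoted endpoints as inputs are illustrative assumptions.

# Hedged sketch of the theorized relationship above: loss in liquid scales roughly
# with the cube of the normalized top electrode displacement.

def normalized_loss(normalized_displacement):
    # body force ~ displacement**2, loss ~ body force * displacement => displacement**3
    return normalized_displacement ** 3

# Normalized displacement endpoints quoted above for the non-harmonic trace
# (one layer versus six alternating axis piezoelectric layers):
for layers, displacement in ((1, 1.0), (6, 0.1)):
    print(f"{layers} layer(s): displacement ~{displacement}, "
          f"normalized loss ~{normalized_loss(displacement):.3f}")

Cubing the quoted displacement endpoints yields roughly 1.0 for one layer and roughly 0.001 for six layers, i.e., about the factor of up to 1000 reduction in liquid loss described above for the non-harmonic electrode case.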
A sensitivity for a limit of detection for detecting one virus, e.g., one coronavirus having a mass of 100 attograms in water, may be one part per million per one hundred attograms, e.g., for 80×80 microns squared resonator sensing area (assuming electronics measuring one part per million in frequency shift of resonant frequency). Trace6021O shows sensitivity ranging from about 2 parts per million per one hundred attograms to about 0.35 parts per million per one hundred attograms as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, for BAW resonators6001I through6001N designed to operate at a main resonant frequency of 4 GHz. Trace6023O shows sensitivity ranging from about 4 parts per million per one hundred attograms to about 0.7 parts per million per one hundred attograms as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, for BAW resonators6001I through6001N designed to operate at a main resonant frequency of 8 GHz. Trace6025O shows sensitivity ranging from about 12 parts per million per one hundred attograms to about 2.1 parts per million per one hundred attograms as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, for BAW resonators6001I through6001N designed to operate at a main resonant frequency of 24.25 GHz. This diagram6019O may show that BAW resonators operating at high frequency may demonstrate enhanced sensitivity. Moreover, BAW resonators operating at high frequency may have sufficient sensitivity to detect one virus, e.g., one coronavirus. This diagram6019O may also show that although BAW resonators may show decreasing sensitivity as number of alternating axis piezoelectric layers increase, higher frequency resonators may still retain sufficient sensitivity. Units of sensitivity for the lower center diagram6019P ofFIG.6Bare in kHz cm^2 per nanogram. These units of sensitivity may be equivalent to the sensitivity units of parts per million per one hundred attograms, e.g., for 80×80 microns squared resonator sensing area, used in the other sensitivity diagram6019O just discussed and shown in the lower left ofFIG.6B. Trace6021P shows sensitivity ranging from about 5 kHz cm^2 per nanogram to about 0.9 kHz cm^2 per nanogram as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, for BAW resonators6001I through6001N designed to operate at a main resonant frequency of 4 GHz. Trace6023P shows sensitivity ranging from about 20 kHz cm^2 per nanogram to about 3.5 kHz cm^2 per nanogram as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, for BAW resonators6001I through6001N designed to operate at a main resonant frequency of 8 GHz. Trace6025P shows sensitivity ranging from about 180 kHz cm^2 per nanogram to about 32 kHz cm^2 per nanogram as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, for BAW resonators6001I through6001N designed to operate at a main resonant frequency of 24.25 GHz.
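The single-virus limit-of-detection reasoning above can be restated as a short, hedged Python sketch: if the electronics resolve a one part per million frequency shift and the sensitivity is S parts per million per one hundred attograms, the smallest detectable bound mass is roughly 100 ag / S. The dictionary values are the approximate one-layer and six-layer sensitivities quoted above; the variable names and the comparison against a 100 attogram virus mass are illustrative.

# Hedged sketch of the limit-of-detection reasoning above.
MEASUREMENT_RESOLUTION_PPM = 1.0
VIRUS_MASS_AG = 100.0    # approximate coronavirus mass in water, per the text

# Approximate sensitivities (ppm per 100 ag) quoted above for one and six layers:
quoted_sensitivity = {"4 GHz": (2.0, 0.35), "8 GHz": (4.0, 0.7), "24.25 GHz": (12.0, 2.1)}

for design, (s_one_layer, s_six_layers) in quoted_sensitivity.items():
    for layers, s in (("1 layer", s_one_layer), ("6 layers", s_six_layers)):
        min_mass_ag = VIRUS_MASS_AG * MEASUREMENT_RESOLUTION_PPM / s
        detectable = "yes" if min_mass_ag <= VIRUS_MASS_AG else "marginal/no"
        print(f"{design}, {layers}: ~{min_mass_ag:6.1f} ag minimum detectable mass, "
              f"single 100 ag virus detectable: {detectable}")

Under these quoted values, the 24.25 GHz design retains a minimum detectable mass well below 100 attograms even with six alternating axis layers, consistent with the statement that higher frequency resonators may still retain sufficient sensitivity to detect one virus.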
A lower right diagram6019Q ofFIG.6Bshows a vertical axis of Total Qs, e.g., total quality factor at series resonance including electrical resistance of non-harmonic top sensor electrodes6015I through6015N having thicknesses of 0.1 acoustic wavelength of the resonant frequency of BAW resonators. Notably, the calculation of Total Qs may be performed in two steps. First, two-dimensional finite-element calculations of Q-factor at series resonance frequency Fs for each BAW resonator6001I through6001N having an area corresponding to a 50 ohm resonator design at the respective frequency may be performed, without initially accounting for series resistance of the top sensing electrodes6015I through6015N. Second, series resistance of the top sensing electrodes6015I through6015N may be estimated for one square geometry, and Total Qs may be calculated for a fixed 80×80 microns squared resonator sensing area. As resonator frequency may increase, there may be an electrode thinning, which may in turn increase electrical resistance and may decrease Total Qs below what may be required. The lower right diagram6019Q ofFIG.6Bshows Total Qs versus number of half acoustic wavelength (λ/2) alternating axis piezoelectric layers for varied designs of BAW resonators6001I through6001N having varied main resonant frequencies of 4 GHz, 8 GHz and 24.25 GHz. Trace6021Q shows Total Qs, e.g., total quality factor at series resonance ranging from about 300 to about 1200 as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, for BAW resonators6001I through6001N designed to operate at a main resonant frequency of 4 GHz. Trace6023Q shows Total Qs, e.g., total quality factor at series resonance ranging from about 90 to about 400 as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, for BAW resonators6001I through6001N designed to operate at a main resonant frequency of 8 GHz. Trace6025Q shows Total Qs, e.g., total quality factor at series resonance ranging from about 4 to about 45 as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, for BAW resonators6001I through6001N designed to operate at a main resonant frequency of 24.25 GHz. The lower right diagram6019Q ofFIG.6Bmay show that Total Qs, e.g., total quality factor may decrease as resonators are designed to operate at higher frequency. However, the lower right diagram6019Q ofFIG.6Bmay also show that Total Qs, e.g., total quality factor may increase as number of alternating axis piezoelectric layers increase, e.g., ranging from one piezoelectric layer to six piezoelectric layers. Further, the lower right diagram6019Q ofFIG.6Bmay show that Total Qs, e.g., total quality factor may suffer using non-harmonic electrodes (e.g., non-harmonic top sensor electrodes6015I through6015N having thicknesses of 0.1 acoustic wavelength). The lower right diagram6019Q ofFIG.6Bmay show that Total Qs, e.g., total quality factor may suffer using non-harmonic electrodes, particularly as BAW resonators are designed to operate at higher frequencies. It is theorized that as resonator frequency may increase, there may be an electrode thinning, which may in turn increase electrical resistance and may decrease Total Qs below what may be required. 
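A simple way to picture how electrode series resistance erodes Total Qs is sketched below in Python. The degradation model Q_total ≈ Q_fem × Rm / (Rm + Rs), the assumed sheet resistivity, the assumed motional resistance, and the assumed finite-element Q value are all illustrative assumptions of this sketch; they are not the two-step finite-element procedure described above and should not be read as results of this disclosure.

# Hedged sketch: folding top electrode series resistance into a finite-element quality
# factor using a simple series-resonator approximation.

def sheet_resistance_ohm_per_sq(resistivity_ohm_m, thickness_m):
    return resistivity_ohm_m / thickness_m

def total_q(q_fem, motional_resistance_ohm, series_resistance_ohm):
    # In a series-resonator approximation, Q scales inversely with total series resistance.
    return q_fem * motional_resistance_ohm / (motional_resistance_ohm + series_resistance_ohm)

# Assumed thin electrode (~26 nm, roughly 0.1 acoustic wavelength at 24.25 GHz) with an
# assumed resistivity of ~1e-7 ohm*m; one square of electrode geometry.
rs_thin = sheet_resistance_ohm_per_sq(1e-7, 26e-9)
q_fem = 200.0      # assumed FEM Q at series resonance (illustrative)
rm = 2.0           # assumed motional resistance, ohms (illustrative)
print(f"thin electrode: Rs ~{rs_thin:.1f} ohm/sq, Total Qs ~{total_q(q_fem, rm, rs_thin):.0f}")

# A ten times thicker (harmonically tuned) electrode lowers Rs and raises Total Qs:
rs_thick = sheet_resistance_ohm_per_sq(1e-7, 260e-9)
print(f"thick electrode: Rs ~{rs_thick:.2f} ohm/sq, Total Qs ~{total_q(q_fem, rm, rs_thick):.0f}")

Whatever the exact values, the sketch reproduces the qualitative trend described above: thinning the electrode at higher resonant frequencies raises its series resistance and pulls Total Qs down, while thicker harmonically tuned electrodes recover much of the quality factor.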
InFIG.6C, for BAW resonators6001R through6001W, thickness of top sensor electrodes6015R through6015W may vary with N times a half acoustic wavelength (λ/2) of BAW resonator resonant frequency, with N being 0.2 or N being 1 or N being 2. Harmonic top sensor electrodes having thicknesses that are approximately an integral multiple (e.g., N=1, e.g., N=2) of a half acoustic wavelength (λ/2) of BAW resonator resonant frequency may have differing performance characteristics relative to non-harmonic top sensor electrodes having thicknesses that are not approximately an integral multiple (e.g., N˜0.2) of a half acoustic wavelength (λ/2) of BAW resonator resonant frequency. For purposes of simulation, BAW resonators6001R through6001W are designed to have a main resonant frequency of 24.25 GHz and have a sensing region area at the top sensor electrodes6015R through6015W of approximately 80×80 square microns. Lower left diagram6019X and center diagram6019Y ofFIG.6Cshow sensitivity of BAW resonators6001R through6001W versus number of half acoustic wavelength (λ/2) alternating axis piezoelectric layers for varied designs of BAW resonators6001R through6001W having varied thickness of top sensor electrodes6015R through6015W. Units of sensitivity for the lower left diagram6019X ofFIG.6Care in parts per million per one hundred attograms, e.g., for 80×80 square microns resonator sensing area. As mentioned previously, these units for sensitivity may be particularly helpful for understanding sensitivity in terms of virus detection. Electronics may measure one part per million or better in frequency shift of resonant frequency (e.g., delta Fs). A virus, e.g., coronavirus may have a mass of 100 attograms in water. Accordingly, the change in mass (delta m) for detecting one virus, e.g., one coronavirus, binding to an antibody of the functionalized layer at the sensing region of the BAW resonator may be 100 attograms. A sensitivity for a limit of detection for detecting one virus, e.g., one coronavirus may having a mass of 100 attograms in water, may be one part per million per one hundred attograms e.g., for 80×80 square microns resonator sensing area (assuming electronics measuring one part per million, or better, in frequency shift of resonant frequency). Trace6021X shows sensitivity ranging from less than about 6 parts per million per one hundred attograms to about 2 parts per million per one hundred attograms, e.g., for 80×80 square microns resonator sensing area, as number of alternating axis piezoelectric layers range from two piezoelectric layers to six piezoelectric layers, for BAW resonators6001R through6001W designed for a non-harmonic top sensor electrode having a thickness that is not approximately an integral multiple (e.g., N˜0.2) of a half acoustic wavelength (λ/2) of the resonant frequency of BAW resonators6001R through6001W, (e.g., non-harmonic top sensor electrodes6015R through6015W may have thicknesses of 0.1 acoustic wavelength of the resonant frequency of BAW resonators6001R through6001W). 
(Total Qs, e.g., total quality factor, may be too low (e.g., Total Qs of about 4) to provide meaningful data for a one piezoelectric layer resonator, so it is omitted from Trace6021X) Trace6023X shows sensitivity ranging from about 6 parts per million per one hundred attograms to about 3 parts per million per one hundred attograms, e.g., for 80×80 square microns resonator sensing area, as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, for BAW resonators6001R through6001W designed for harmonic top sensor electrodes having thicknesses that are approximately one half acoustic wavelength (e.g., N=1 of a half acoustic wavelength (λ/2)). Trace6025X shows sensitivity ranging from about 4 parts per million per one hundred attograms to about 2 parts per million per one hundred attograms, e.g., for 80×80 square microns resonator sensing area, as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, for BAW resonators6001R through6001W designed for harmonic top sensor electrodes having thicknesses that are approximately one acoustic wavelength (e.g., N=2 of a half acoustic wavelength (λ/2)). This diagram6019X may show that BAW resonators operating at high frequency (e.g., 24.25 GHz may demonstrate relatively high sensitivity. Moreover, BAW resonators operating at high frequency may have sufficient sensitivity to detect one virus, e.g., one coronavirus. This diagram6019X may also show that although BAW resonators may show decreasing sensitivity as number of alternating axis piezoelectric layers increase, relatively high frequency resonators (e.g., 24.25 GHz) may still retain sufficient sensitivity. This diagram6019X may also show that although BAW resonators may show decreasing sensitivity as thickness of top sensor electrodes6015R through6015W may increase, relatively high frequency resonators (e.g., 24.25 GHz) may still retain sufficient sensitivity. Units of sensitivity for the lower center diagram6019Y ofFIG.6Care in kHz cm{circumflex over ( )}2 per nanogram. These units of sensitivity may be equivalent to the sensitivity units of parts per million per one hundred attograms, e.g., for 80×80 square microns resonator sensing area, used in the other sensitivity diagram6019X just discussed and shown in the lower left ofFIG.6C. Trace6021Y shows sensitivity ranging from less than about 95 kHz cm{circumflex over ( )}2 per nanogram to about 30 kHz cm{circumflex over ( )}2 per nanogram as number of alternating axis piezoelectric layers range from two piezoelectric layers to six piezoelectric layers, for BAW resonators6001R through6001W designed for a non-harmonic top sensor electrode having a thickness that is not approximately an integral multiple (e.g., N˜0.2) of a half acoustic wavelength (λ/2) of the resonant frequency of BAW resonators6001R through6001W, (e.g., non-harmonic top sensor electrodes6015R through6015W may have thicknesses of about 0.1 acoustic wavelength of the resonant frequency of BAW resonators6001R through6001W). 
(Total Qs, e.g., total quality factor, may be too low to provide meaningful data for a one piezoelectric layer resonator, so it is omitted from Trace6021Y.) Trace6023Y shows sensitivity ranging from about 100 kHz cm^2 per nanogram to about 40 kHz cm^2 per nanogram as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, for BAW resonators6001R through6001W designed for harmonic top sensor electrodes having thicknesses that are approximately one half acoustic wavelength (e.g., N=1 of a half acoustic wavelength (λ/2)). Trace6025Y shows sensitivity ranging from about 60 kHz cm^2 per nanogram to about 30 kHz cm^2 per nanogram as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, for BAW resonators6001R through6001W designed for harmonic top sensor electrodes having thicknesses that are approximately one acoustic wavelength (e.g., N=2 of a half acoustic wavelength (λ/2)). A lower right diagram6019Z ofFIG.6Cshows a vertical axis of Total Qs, e.g., total quality factor at series resonance for varied designs of BAW resonators6001R through6001W operating at a main resonant frequency of 24.25 GHz and having varied thickness of top sensor electrodes6015R through6015W. Notably, the calculation of Total Qs may be performed in two steps. First, two-dimensional finite-element calculations of Q-factor at series resonance frequency Fs for each BAW resonator6001R through6001W, e.g., having an area corresponding to a 50 ohm design at the respective frequency, may be performed without accounting for series resistance of the top sensing electrodes6015R through6015W. Second, series resistance of the top sensing electrodes6015R through6015W may be estimated for one square geometry, and Total Qs may be calculated for a fixed 80×80 microns squared resonator sensing area. At relatively high resonator frequency (e.g., 24.25 GHz), there may be an electrode thinning, which may in turn increase electrical resistance, and may decrease Total Qs below what may be required. The lower right diagram6019Z ofFIG.6Cshows Total Qs versus number of half acoustic wavelength (λ/2) alternating axis piezoelectric layers for varied designs of BAW resonators6001R through6001W having varied thickness of top sensor electrodes6015R through6015W. Trace6021Z shows Total Qs, e.g., total quality factor at series resonance ranging from about 280 to about 540 as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, for BAW resonators6001R through6001W designed for harmonic top sensor electrodes having thicknesses that are approximately one acoustic wavelength (e.g., N=2 of a half acoustic wavelength (λ/2)). Trace6023Z shows Total Qs, e.g., total quality factor at series resonance ranging from about 110 to about 270 as number of alternating axis piezoelectric layers range from one piezoelectric layer to six piezoelectric layers, for BAW resonators6001R through6001W designed for harmonic top sensor electrodes having thicknesses that are approximately one half acoustic wavelength (e.g., N=1 of a half acoustic wavelength (λ/2)).
Trace6025Z shows Total Qs, e.g., total quality factor at series resonance ranging from about 12 to about 45 as number of alternating axis piezoelectric layers range from two piezoelectric layers to six piezoelectric layers, for BAW resonators6001R through6001W having a thickness that is not approximately an integral multiple (e.g., N˜0.2) of a half acoustic wavelength (λ/2) of the resonant frequency of BAW resonators6001R through6001W, (e.g., non-harmonic top sensor electrodes6015R through6015W may have thicknesses of 0.1 acoustic wavelength of the resonant frequency of BAW resonators6001R through6001W). The lower right diagram6019Z ofFIG.6Cmay show that Total Qs, e.g., total quality factor may decrease as resonators are designed to operate at higher frequency. However, the lower right diagram6019Z ofFIG.6Cmay also show that Total Qs, e.g., total quality factor may increase as number of alternating axis piezoelectric layers increase, e.g., ranging from one piezoelectric layer to six piezoelectric layers. Further, the lower right diagram6019Z ofFIG.6Cmay show that Total Qs, e.g., total quality factor, may suffer using non-harmonic electrodes (e.g., non-harmonic top sensor electrodes6015I through6015N having thicknesses of 0.1 acoustic wavelength). The lower right diagram6019Z ofFIG.6Cmay show that Total Qs, e.g., total quality factor may suffer using non-harmonic electrodes, particularly as BAW resonators are designed to operate at higher frequencies. It is theorized that as resonator frequency may increase, there may be an electrode thinning, which may in turn increase electrical resistance and may decrease Total Qs below what may be required, unless electrodes are thickened, e.g., using harmonically tuned top sensor electrodes. FIGS.7A and7Bare simplified diagrams of various additional example resonators of this disclosure, along with respective diagrams illustrating respective corresponding properties as predicted by simulation. A top half ofFIG.7Ashows BAW resonators7001A,7001B,7001C that may comprise respective normal axis piezoelectric layers701A,701B,701C sandwiched between respective multilayer metal acoustic reflector electrodes7013A,7013B,7013C and top sensor electrodes7015A,7015B,7015C. BAW resonators7001A,7001B,7001C may have respective etched edge regions753A,753B,753C, and respective opposing etched edge regions754A,754B,754C.FIG.7Ashows BAW resonators7001A through7001C having non-harmonic top sensor electrodes7015A,7015B,7015C having thicknesses that are not approximately an integral multiple (e.g., N˜0.2) of a half acoustic wavelength (λ/2) of the resonant frequency of BAW resonators7001A through7001C (e.g., non-harmonic top sensor electrodes7015A through7015C may have thicknesses of about 0.1 acoustic wavelength of the resonant frequency of BAW resonators7001A through7001C). Area size of respective sensing regions716A through716C is varied for corresponding BAW resonators7001A,7001B,7001C. For example, sensing region716A of BAW resonator7001A may have an area size of 40×40 microns. Sensing region716B of BAW resonator7001B may have an area size of 80×80 microns. Sensing region716C of BAW resonator7001C may have an area size of 160×160 microns. 
Diagram7019D shown inFIG.7Ashows sensitivity of BAW resonators7001A through7001C versus resonant frequencies of 4 GHz, 8 GHz and 24 GHz for varied designs of BAW resonators7001A through7001C having varied sizes of 40×40 microns (corresponding to BAW resonator7001A), 80×80 microns (corresponding to resonator7001B) and 160×160 microns (corresponding to resonator7001C). Units of sensitivity for diagram7019D ofFIG.7Aare in parts per million per one hundred attograms. These units for sensitivity may be particularly helpful for understanding sensitivity in terms of virus detection. Electronics may measure one part per million or better in frequency shift of resonant frequency (e.g., delta Fs). A virus, e.g., coronavirus may have a mass of 100 attograms in water. Accordingly, the change in mass (delta m) for detecting one virus, e.g., one coronavirus, binding to an antibody of the functionalized layer at the sensing region of the BAW resonator may be 100 attograms. A sensitivity for a limit of detection for detecting one virus, e.g., one coronavirus may having a mass of 100 attograms in water, may be one part per million per one hundred attograms (assuming electronics measuring one part per million in frequency shift of resonant frequency). Trace7021D shows sensitivity ranging from about 2 parts per million per one hundred attograms to about 50 parts per million per one hundred attograms as designs for resonant frequency range through 4 GHz, 8 GHz and 24 GHz, for BAW resonator7001A having area size of 40×40 microns. Trace7023D shows sensitivity ranging from about 2 parts per million per one hundred attograms to about 12 parts per million per one hundred attograms as designs for resonant frequency range through 4 GHz, 8 GHz and 24 GHz, for BAW resonator7001B having area size of 80×80 microns. Trace7025D shows sensitivity ranging from about 0.5 parts per million per one hundred attograms to about 3 parts per million per one hundred attograms as designs for resonant frequency range through 4 GHz, 8 GHz and 24 GHz, for BAW resonator7001C having area size of 160×160 microns. This diagram7019D may show that BAW resonators operating at high frequency may demonstrate enhanced sensitivity. Moreover, BAW resonators operating at high frequency may have sufficient sensitivity to detect one virus, e.g., one coronavirus. This diagram7019D may also show that although BAW resonators may show decreasing sensitivity as area size increases, higher frequency resonators may still retain sufficient sensitivity. It may be desirable, in some ways to increase area size to some extent, for example to fill a base of a microfluidic channel. Further, increasing area size to some extent, may increase a probability of detecting a low concentration analyte quickly. However, in terms of maintaining BAW resonator sensitivity, diagram7019D shows that it may be desirable to limit increases in sensing region area size. Rather than increases in sensing region area size, it may be desirable instead to effectively increase area by employing an array of BAW resonators, e.g., having an aggregated increased area size. A top half ofFIG.7Bshows BAW resonators7001E,7001F,7001G designed for operation at a 24.25 GHz main resonant frequency that may comprise respective normal axis piezoelectric layers701E,701F,701G sandwiched between respective multilayer metal acoustic reflector electrodes7013E,7013F,7013G and top sensor electrodes7015E,7015F,7015G. 
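The area dependence suggested by the 24 GHz values quoted above can be checked with a short, hedged Python sketch: since a fixed bound mass is spread over a larger sensing area, sensitivity in parts per million per one hundred attograms falls roughly in inverse proportion to the area. The simple 1/area model below is an illustrative assumption used only to compare against the quoted trace values.

# Hedged sketch: inverse-area trend of the 24 GHz sensitivities quoted above.
quoted_24ghz = {(40, 40): 50.0, (80, 80): 12.0, (160, 160): 3.0}   # ppm per 100 ag

reference_area = 40 * 40
reference_sensitivity = quoted_24ghz[(40, 40)]
for (w, l), quoted in quoted_24ghz.items():
    predicted = reference_sensitivity * reference_area / (w * l)   # simple 1/area model
    print(f"{w}x{l} um: quoted ~{quoted:5.1f}, 1/area model ~{predicted:5.1f} ppm per 100 ag")

The simple model predicts roughly 12.5 and 3.1 ppm per 100 attograms for the 80×80 and 160×160 micron regions, close to the quoted values, which supports the suggestion above that limiting sensing region area, or using an array of smaller resonators to aggregate area, preserves per-resonator sensitivity.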
BAW resonators7001E,7001F,7001G may have respective etched edge regions753E,753F,753G, and respective opposing etched edge regions754E,754F,754G.FIG.7Bshows BAW resonators7001E through7001G may have varied thickness of top sensor electrodes7015E,7015F,7015G. For example, BAW resonator7001E may have varied thickness of its top sensor electrodes7015E, but one example of BAW resonator7001E is shown inFIG.7Bas having a non-harmonic top sensor electrode7015E having a thickness that is not approximately an integral multiple (e.g., N˜0.2) of a half acoustic wavelength (λ/2) of the resonant frequency of BAW resonator7001E (e.g., one example of BAW resonator7001E is shown inFIG.7Bas having thicknesses of 0.1 acoustic wavelength of the resonant frequency of BAW resonator7001E). The example resonator7001E shown inFIG.7Bhaving top electrode thicknesses of 0.1 acoustic wavelength may have different piezoelectric layer thickness than the other example resonators (e.g.,7001F,7001G) shown inFIG.7Bhaving harmonic top electrodes. For non-harmonic (e.g., 0.1 acoustic wavelength) electrode examples, the piezoelectric layer may be sandwiched between top and bottom 0.1 acoustic wavelength electrodes, and the entire thickness of the stack of the piezoelectric layer sandwiched between top and bottom 0.1 acoustic wavelength electrodes may be about a half acoustic wavelength. For example, the piezoelectric layer may be about nine hundred Angstroms thick (900 A thick) and the bottom and top Mo electrodes may be about two hundred seventy Angstroms thick (270 A). In contrast, example resonators having harmonic top electrodes may have full half wavelength thick piezoelectric layers, for example, having thicknesses of about 2200 A each and a bottom multilayer metal acoustic reflector electrode. BAW resonator7001F may have varied thickness of its top sensor electrodes7015F, but one example of BAW resonator7001F is shown inFIG.7Bas having a harmonic top sensor electrode7015F having a thickness that is approximately an integral multiple (e.g., N=1) of a half acoustic wavelength (λ/2) of the resonant frequency of BAW resonator7001F (e.g., one example of BAW resonator7001F is shown inFIG.7Bas having thicknesses of a half acoustic wavelength of the resonant frequency of BAW resonator7001F). BAW resonator7001G may have varied thickness of its top sensor electrodes7015G, but one example of BAW resonator7001G is shown inFIG.7Bas having a harmonic top sensor electrode7015G having a thickness that is approximately an integral multiple (e.g., N=2) of a half acoustic wavelength (λ/2) of the resonant frequency of BAW resonator7001G (e.g., one example of BAW resonator7001G is shown inFIG.7Bas having thicknesses of one acoustic wavelength of the resonant frequency of BAW resonator7001G). Area size of respective sensing regions716E through716G is varied for corresponding BAW resonators7001E,7001F,7001G. For example, sensing regions716E through716G of BAW resonators7001E through7001G may have area sizes ranging from 40×40 microns, through 80×80 microns, and through 160×160 microns. Diagram7019H shown inFIG.7Bshows sensitivity of BAW resonators (e.g., BAW resonators7001E through7001G) versus thickness of top sensor electrodes (e.g., top sensor electrodes7015E through7015G) for varied designs of BAW resonators7001A through7001C having varied sensor region area sizes of 40×40 microns, 80×80 microns and 160×160 microns. Units of sensitivity for diagram7019H ofFIG.7Bare in parts per million per one hundred attograms.
Trace7021H shows sensitivity ranging from about 50 parts per million per one hundred attograms to about 16 parts per million per one hundred attograms as designs for top sensor electrode thickness ranging through 0.1 acoustic wavelength, one half acoustic wavelength, and one acoustic wavelength of the 24.25 GHz BAW resonator, for sensing regions having area size of 40×40 microns. Trace7023H shows sensitivity ranging from about 12 parts per million per one hundred attograms to about 4 parts per million per one hundred attograms as designs for top sensor electrode thickness ranging through 0.1 acoustic wavelength, one half acoustic wavelength, and one acoustic wavelength of the 24.25 GHz BAW resonator, for sensing regions having area size of 80×80 microns. Trace7025H shows sensitivity ranging from about 3 parts per million per one hundred attograms to about 1 part per million per one hundred attograms as designs for top sensor electrode thickness ranging through 0.1 acoustic wavelength, one half acoustic wavelength, and one acoustic wavelength of the 24.25 GHz BAW resonator, for sensing regions having area size of 160×160 microns. This diagram7019H may show that BAW resonators operating at high frequency (e.g., 24.25 GHz) may demonstrate enhanced sensitivity. Moreover, BAW resonators operating at high frequency (e.g., 24.25 GHz) may have sufficient sensitivity to detect one virus, e.g., one coronavirus. This diagram7019H may also show that although BAW resonators may show decreasing sensitivity as area size increases, high frequency resonators (e.g., 24.25 GHz) may still retain sufficient sensitivity. Although increasing thickness of the top sensor electrode, e.g., to one acoustic wavelength, may decrease sensitivity somewhat, limiting area size of sensing regions of BAW resonators (e.g., to 80×80 microns or 40×40 microns) may still provide very high sensitivity.

FIGS.8A and8Bshow an example oscillator800A,800B (e.g., millimeter wave oscillator800A,800B, e.g., Super High Frequency (SHF) wave oscillator800A,800B, e.g., Extremely High Frequency (EHF) wave oscillator800A,800B) using the bulk acoustic wave resonator structure ofFIG.1A. For example,FIGS.8A and8Bshow simplified views of bulk acoustic wave resonator801A,801B and electrical coupling nodes856A,858A,856B,858B that may be electrically coupled with bulk acoustic wave resonator801A,801B. As shown inFIGS.8A and8B, electrical coupling nodes856A,858A,856B,858B may facilitate an electrical coupling of bulk acoustic wave resonator801A,801B with electrical oscillator circuitry (e.g., active oscillator circuitry802A,802B), for example, through phase compensation circuitry803A,803B (Φcomp). The example oscillator800A,800B may be a negative resistance oscillator, e.g., in accordance with a one-port model as shown inFIGS.8A and8B. The electrical oscillator circuitry, e.g., active oscillator circuitry, may include one or more suitable active devices (e.g., one or more suitably configured amplifying transistors) to generate a negative resistance commensurate with resistance of the bulk acoustic wave resonator801A,801B. In other words, energy lost in bulk acoustic wave resonator801A,801B may be replenished by the active oscillator circuitry, thus allowing steady oscillation, e.g., steady SHF or EHF wave oscillation. To ensure oscillation start-up, active gain (e.g., negative resistance) of active oscillator circuitry802A,802B may be greater than one.
As illustrated on opposing sides of a notional dashed line inFIGS.8A and8B, the active oscillator circuitry802A,802B may have a complex reflection coefficient of the active oscillator circuitry (Γamp), and the bulk acoustic wave resonator801A,801B together with the phase compensation circuitry803A,803B (Φcomp) may have a complex reflection coefficient (Γres). To provide for the steady oscillation, e.g., steady SHF or EHF wave oscillation, a magnitude may be greater than one for |Γamp Γres|, e.g., magnitude of a product of the complex reflection coefficient of the active oscillator circuitry (Γamp) and the complex reflection coefficient (Γres) of the bulk acoustic wave resonator801A,801B together with the phase compensation circuitry803A,803B (Φcomp) may be greater than one. Further, to provide for the steady oscillation, e.g., steady SHF or EHF wave oscillation, phase angle may be an integer multiple of three-hundred-sixty degrees for ∠Γamp Γres, e.g., a phase angle of the product of the complex reflection coefficient of the active oscillator circuitry (Γamp) and the complex reflection coefficient (Γres) of the bulk acoustic wave resonator801A,801B together with the phase compensation circuitry803A,803B (Φcomp) may be an integer multiple of three-hundred-sixty degrees. The foregoing may be facilitated by phase selection, e.g., electrical length selection, of the phase compensation circuitry803A,803B (Φcomp).

In the simplified view ofFIG.8A, the bulk acoustic wave resonator801A may have a sensing region816acoustically coupled with the bulk acoustic wave resonator801A via a harmonically tuned top sensor electrode815A of the bulk acoustic wave resonator801A. The bulk acoustic wave resonator801A (e.g., bulk acoustic SHF or EHF wave resonator) includes first normal axis piezoelectric layer805A, first reverse axis piezoelectric layer807A, another normal axis piezoelectric layer809A, and another reverse axis piezoelectric layer811A arranged in a four piezoelectric layer alternating axis stack arrangement sandwiched between a detuned SHF or EHF harmonically tuned top sensor electrode815A and a de-tuned multilayer metal acoustic SHF or EHF wave reflector electrode813A. General structures and applicable teaching of this disclosure for the detuned SHF or EHF harmonically tuned top sensor electrode815A and the de-tuned multilayer metal acoustic SHF or EHF wave reflector electrode813A have already been discussed in detail previously herein with respect toFIGS.1A and4A through4C, which for brevity are incorporated by reference rather than repeated fully here. As already discussed, the de-tuned multilayer metal acoustic SHF or EHF wave reflector electrode813A is directed to respective pairs of metal electrode layers, in which a first member of the pair has a relatively low acoustic impedance (relative to acoustic impedance of the other member of the pair), in which the other member of the pair has a relatively high acoustic impedance (relative to acoustic impedance of the first member of the pair), and in which the respective pairs of metal electrode layers have layer thicknesses corresponding to one quarter wavelength (e.g., one quarter acoustic wavelength) at a main resonant frequency of the resonator. Accordingly, it should be understood that the bulk acoustic SHF or EHF wave resonator801A shown inFIG.8Amay include a de-tuned multilayer metal acoustic SHF or EHF wave reflector electrode813A. Similarly, the SHF or EHF harmonically tuned top sensor electrode815A may be detuned.
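A minimal numerical sketch of the oscillation conditions described above for FIGS.8A and8B(a loop magnitude |Γamp Γres| greater than one and a loop phase that is an integer multiple of three-hundred-sixty degrees) is given below. The reflection-coefficient values are illustrative assumptions only; they do not represent a particular active circuit or resonator design.

```python
# Minimal sketch (not a circuit simulation): checking the one-port negative
# resistance oscillator conditions described for FIGS. 8A and 8B, i.e. that
# |Gamma_amp * Gamma_res| exceeds one and that the total loop phase is an
# integer multiple of 360 degrees (within a tolerance).

import cmath
import math

def oscillation_conditions(gamma_amp: complex, gamma_res: complex,
                           phase_tol_deg: float = 5.0) -> tuple[bool, bool]:
    """Return (magnitude_ok, phase_ok) for the loop product Gamma_amp * Gamma_res."""
    loop = gamma_amp * gamma_res
    magnitude_ok = abs(loop) > 1.0
    phase_deg = math.degrees(cmath.phase(loop)) % 360.0
    phase_ok = min(phase_deg, 360.0 - phase_deg) <= phase_tol_deg
    return magnitude_ok, phase_ok

# Assumed example: active circuitry reflection coefficient with |Gamma| > 1
# (negative resistance), and a resonator-plus-phase-compensation reflection
# coefficient near the unit circle, with phases arranged to cancel.
gamma_amp = cmath.rect(1.3, math.radians(+40.0))
gamma_res = cmath.rect(0.95, math.radians(-40.0))
print(oscillation_conditions(gamma_amp, gamma_res))  # (True, True)
```

In this reading, the phase compensation circuitry (Φcomp) is what is adjusted so that the loop phase lands on a multiple of three-hundred-sixty degrees at the intended oscillation frequency.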
For example, to provide for de-tuning (e.g., tuning up) of the SHF or EHF harmonically tuned top sensor electrode815A, thickness (e.g., one acoustic wavelength thickness) of the harmonically tuned top sensor electrode815A may be made somewhat thinner. For example, thickness of the SHF or EHF harmonically tuned top sensor electrode815A may be about 260 Angstroms less, e.g., about 10% thinner than an acoustic wavelength corresponding to an example BAW resonator resonant frequency of 24.25 GHz. An output816A of the oscillator800A may be coupled to the bulk acoustic wave resonator801A (e.g., coupled to harmonically tuned top sensor electrode815A). It should be understood that interposer layers as discussed previously herein with respect toFIG.1Aare not explicitly shown in the simplified view of the example resonator801A shown inFIG.8A. Such interposer layers may be included and interposed between adjacent piezoelectric layers. For example, a first interposer layer is arranged between first normal axis piezoelectric layer805A and first reverse axis piezoelectric layer807A. For example, a second interposer layer is arranged between first reverse axis piezoelectric layer807A and another normal axis piezoelectric layer809A. For example, a third interposer is arranged between the another normal axis piezoelectric layer809A and another reverse axis piezoelectric layer811A. As discussed previously herein, such interposer layers may be metal or dielectric, and may, but need not, provide various benefits. Alternatively or additionally, one or more (e.g., one or a plurality of) interposer layers may comprise metal and dielectric for respective interposer layers. For example, high acoustic impedance metal layer such as Tungsten (W) or Molybdenum (Mo) may (but need not) raise effective electromechanical coupling coefficient (Kt2). Subsequently deposited amorphous dielectric layer such as Silicon Dioxide (SiO2) may (but need not) facilitate compensating for temperature dependent frequency shifts. Alternatively or additionally, one or more (e.g., one or a plurality of) interposer layers may be formed of different metal layers. For example, high acoustic impedance metal layer such as Tungsten (W) or Molybdenum (Mo) may (but need not) raise effective electromechanical coupling coefficient (Kt2) while subsequently deposited metal layer with hexagonal symmetry such as Titanium (Ti) may (but need not) facilitate higher crystallographic quality of subsequently deposited piezoelectric layer. Alternatively or additionally, one or more (e.g., one or a plurality of) interposer layers may be formed of different dielectric layers. For example, high acoustic impedance dielectric layer such as Hafnium Dioxide (HfO2) may (but need not) raise effective electromechanical coupling coefficient (Kt2). Subsequently deposited amorphous dielectric layer such as Silicon Dioxide (SiO2) may (but need not) facilitate compensating for temperature dependent frequency shifts. A notional heavy dashed line is used in depicting an etched edge region853A associated with example resonator801A. The example resonator801A may also include a laterally opposing etched edge region854A arranged opposite from the etched edge region853A.
The etched edge region853A (and the laterally opposing etched edge region854A) may similarly extend through various members of the example resonator801A ofFIG.8A, in a similar fashion as discussed previously herein with respect to the etched edge region253D (and the laterally opposing etched edge region254D) of example resonator2001D shown inFIG.2B. As shown inFIG.8A, a first mesa structure corresponding to the stack of four piezoelectric material layers805A,807A,809A,811A may extend laterally between (e.g., may be formed between) etched edge region853A and laterally opposing etched edge region854A. A second mesa structure corresponding to multi-layer metal bottom de-tuned acoustic SHF or EHF wave reflector electrode813A may extend laterally between (e.g., may be formed between) etched edge region853A and laterally opposing etched edge region854A. A third mesa structure corresponding to harmonically tuned top sensor electrode815A may extend laterally between (e.g., may be formed between) etched edge region853A and laterally opposing etched edge region854A.

FIG.8Bshows a schematic of an example circuit implementation of the oscillator shown inFIG.8A. Active oscillator circuitry802B may include active elements, symbolically illustrated inFIG.8Bby alternating voltage source804B (Vs) coupled through negative resistance806B (Rneg), e.g., active gain element806B, to example bulk acoustic wave resonator801B (e.g., bulk acoustic SHF or EHF wave resonator) via phase compensation circuitry803B (Φcomp). The representation of example bulk acoustic wave resonator801B (e.g., bulk acoustic SHF or EHF wave resonator) may include passive elements, symbolically illustrated inFIG.8Bby electrode ohmic loss parasitic series resistance808B (Rs), motional capacitance810B (Cm), acoustic loss motional resistance812B (Rm), motional inductance814B (Lm), static or plate capacitance816B (Co), and acoustic loss parasitic resistance818B (Ro). Additionally, a variable mass inductance814BB (Lmass) is depicted in dashed line to represent the variable mass of an analyte binding to the functionalized layer of the sensing region of the BAW resonator. An output816B of the oscillator800B may be coupled to the bulk acoustic wave resonator801B (e.g., coupled to the detuned SHF or EHF harmonically tuned top sensor electrode of bulk acoustic wave resonator801B).

FIG.8Cshows an array of eighteen Smith charts showing Scattering-parameters (S-parameters, e.g., S11) at various operating frequencies corresponding to various example BAW resonators having from one to six piezoelectric layers in alternating piezoelectric axis stack arrangements, and having top electrode thickness varying from about a tenth of the acoustic wavelength of the BAW resonators, to one half of the acoustic wavelength of the BAW resonators, to one acoustic wavelength of the BAW resonators. The Smith charts have been simulated using two-dimensional finite element models of resonators, for example, having characteristic impedance of 50 ohm at respective series resonance frequencies.
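The following sketch ties the equivalent-circuit elements named above for FIG.8B(Rs, Cm, Rm, Lm, Co, Ro, and the mass-loading inductance Lmass) to an S11 reflection coefficient of the kind plotted on the Smith charts of FIG.8C. A common modified Butterworth-Van Dyke topology is assumed (the motional branch in parallel with Co in series with Ro, all in series with Rs), and every component value is an illustrative assumption rather than a value taken from the figures.

```python
# Illustrative sketch: an assumed modified Butterworth-Van Dyke (mBVD) one-port
# impedance built from the element names used for FIG. 8B, and the resulting
# S11 reflection coefficient into a 50 ohm reference, as plotted on Smith charts.

import math

def bvd_impedance(f_hz, rs, cm, rm, lm, co, ro, lmass=0.0):
    """One-port impedance: Rs in series with (motional branch || (Co in series with Ro))."""
    w = 2.0 * math.pi * f_hz
    z_motional = rm + 1j * w * (lm + lmass) + 1.0 / (1j * w * cm)
    z_static = ro + 1.0 / (1j * w * co)
    return rs + (z_motional * z_static) / (z_motional + z_static)

def s11(z, z0=50.0):
    return (z - z0) / (z + z0)

# Assumed values chosen so the motional (series) resonance sits near 24.25 GHz.
CM, CO = 2.0e-15, 50.0e-15
LM = 1.0 / ((2.0 * math.pi * 24.25e9) ** 2 * CM)
RS, RM, RO = 0.5, 1.0, 0.5

for f in (24.0e9, 24.25e9, 24.5e9):
    gamma = s11(bvd_impedance(f, RS, CM, RM, LM, CO, RO))
    print(f"{f/1e9:.2f} GHz: |S11| = {abs(gamma):.3f}")
```

Increasing the lmass term in this model lowers the motional-branch series resonance slightly, which is the equivalent-circuit picture of analyte mass binding at the functionalized sensing region.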
For example, a first row of Smith charts870A through870F includes respective traces873A through873F of Scattering-parameters (S-parameters, e.g., S11) over frequency corresponding to BAW resonators of this disclosure having top electrode thickness of about a tenth of acoustic wavelength (˜0.1λ) of the main resonant frequency of the BAW resonator and having from 1 piezoelectric layer (e.g., Npiezo=1) to an increasing number of alternating axis piezoelectric layers, up to six alternating axis piezoelectric layers (e.g., Npiezo=6). It is theorized in this disclosure that uneven artifacts apparent in respective traces873A through873F may correspond to parasitic lateral resonances. It is theorized in this disclosure that increasing the number of alternating axis piezoelectric layers in BAW resonators of this disclosure may facilitate suppressing parasitic lateral resonances. This may be indicated inFIG.8Cby fewer/less uneven artifacts being present in trace873F (corresponding to a BAW resonator having six alternating axis piezoelectric layers (e.g., Npiezo=6)) relative to more uneven artifacts being present in trace873A (corresponding to a BAW resonator having one piezoelectric layer (e.g., Npiezo=1)). For example, a second row of Smith charts870G through870L includes respective traces873G through873L of Scattering-parameters (S-parameters, e.g., S11) over frequency corresponding to BAW resonators of this disclosure having top electrode thickness of about half of an acoustic wavelength (λ/2) of the main resonant frequency of the BAW resonator and having from 1 piezoelectric layer (e.g., Npiezo=1) to an increasing number of alternating axis piezoelectric layers, up to six alternating axis piezoelectric layers (e.g., Npiezo=6). It is theorized in this disclosure that uneven artifacts apparent in respective traces873G through873L may correspond to parasitic lateral resonances. It is theorized in this disclosure that increasing the number of alternating axis piezoelectric layers in BAW resonators of this disclosure may facilitate suppressing parasitic lateral resonances. This may be indicated inFIG.8Cby fewer/less uneven artifacts being present in trace873L (corresponding to a BAW resonator having six alternating axis piezoelectric layers (e.g., Npiezo=6)) relative to more uneven artifacts being present in trace873G (corresponding to a BAW resonator having one piezoelectric layer (e.g., Npiezo=1)). Further, comparing traces873A through873F of the first row of Smith charts870A through870F to traces873G through873L of the second row of Smith charts870G through870L may show fewer/less uneven artifacts being present in traces873G through873L of the second row of Smith charts870G through870L, relative to more uneven artifacts being present in traces873A through873F of the first row of Smith charts870A through870F. Accordingly, it is theorized in this disclosure that increasing top electrode thickness, e.g., from a tenth of acoustic wavelength (˜0.1λ) to about half of an acoustic wavelength (λ/2) (e.g., increasing thickness to provide a harmonic top electrode), may facilitate suppressing parasitic lateral resonances.
For example, a third row of Smith charts870M through870R includes respective traces873M through873R of Scattering-parameters (S-parameters, e.g., S11) over frequency corresponding to BAW resonators of this disclosure having top electrode thickness of about one acoustic wavelength (1λ) of the main resonant frequency of the BAW resonator and having from 1 piezoelectric layer (e.g., Npiezo=1) to an increasing number of alternating axis piezoelectric layers, up to six alternating axis piezoelectric layers (e.g., Npiezo=6). It is theorized in this disclosure that uneven artifacts that are decreasingly apparent in respective traces873M through873R may correspond to decreasing parasitic lateral resonances. It is theorized in this disclosure that increasing the number of alternating axis piezoelectric layers in BAW resonators of this disclosure may facilitate suppressing parasitic lateral resonances. This may be indicated inFIG.8Cby fewer/less uneven artifacts being present in trace873R (corresponding to a BAW resonator having six alternating axis piezoelectric layers (e.g., Npiezo=6)) relative to more uneven artifacts being present in trace873M (corresponding to a BAW resonator having one piezoelectric layer (e.g., Npiezo=1)). Further, comparing traces873G through873L of the second row of Smith charts870G through870L to traces873M through873R of the third row of Smith charts870M through870R may show fewer/less uneven artifacts being present in traces873M through873R of the third row of Smith charts870M through870R, relative to more uneven artifacts being present in traces873G through873L of the second row of Smith charts870G through870L. Accordingly, it is theorized in this disclosure that increasing top electrode thickness further, e.g., from about a half acoustic wavelength (λ/2) to about one acoustic wavelength (1λ), may further facilitate suppressing parasitic lateral resonances.

FIGS.9A and9Bare simplified diagrams of a frequency spectrum illustrating application frequencies and application frequency bands of the example bulk acoustic wave resonators shown inFIG.1AandFIGS.4A through4Cand the example oscillators shown inFIGS.8A and8B. A widely used standard to designate frequency bands in the microwave range by letters is established by the United States Institute of Electrical and Electronic Engineers (IEEE). In accordance with standards published by the IEEE, as defined herein and as shown inFIGS.9A and9B, application bands are as follows: L Band (1 GHz-2 GHz), S Band (2 GHz-4 GHz), C Band (4 GHz-8 GHz), X Band (8 GHz-12 GHz), Ku Band (12 GHz-18 GHz), K Band (18 GHz-27 GHz), Ka Band (27 GHz-40 GHz), V Band (40 GHz-75 GHz), and W Band (75 GHz-110 GHz).FIG.9Ashows a first frequency spectrum portion9000A in a range from one Gigahertz (1 GHz) to eight Gigahertz (8 GHz), including application bands of L Band (1 GHz-2 GHz), S Band (2 GHz-4 GHz) and C Band (4 GHz-8 GHz). As described subsequently herein, the 3rd Generation Partnership Project standards organization (e.g., 3GPP) has standardized various 5G frequency bands. For example, included is a first application band9010(e.g., 3GPP 5G n77 band) (3.3 GHz-4.2 GHz) configured for fifth generation broadband cellular network (5G) applications. As described subsequently herein, the first application band9010(e.g., 5G n77 band) includes a 5G sub-band9011(3.3 GHz-3.8 GHz). The 3GPP 5G sub-band9011includes Long Term Evolution broadband cellular network (LTE) application sub-bands9012(3.4 GHz-3.6 GHz),9013(3.6 GHz-3.8 GHz), and9014(3.55 GHz-3.7 GHz).
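For reference, the IEEE letter-band boundaries listed above can be captured in a small lookup, as in the sketch below; the band edges repeat those recited in the text, and the function name is illustrative only.

```python
# Simple lookup of the IEEE letter-band designations listed above for
# FIGS. 9A and 9B. Band edges are as recited in the text.

IEEE_BANDS = [
    ("L", 1e9, 2e9), ("S", 2e9, 4e9), ("C", 4e9, 8e9), ("X", 8e9, 12e9),
    ("Ku", 12e9, 18e9), ("K", 18e9, 27e9), ("Ka", 27e9, 40e9),
    ("V", 40e9, 75e9), ("W", 75e9, 110e9),
]

def ieee_band(frequency_hz: float) -> str:
    for name, lo, hi in IEEE_BANDS:
        if lo <= frequency_hz < hi:
            return name
    return "outside L-W bands"

print(ieee_band(24.25e9))  # "K"  (e.g., the 24.25 GHz example resonators)
print(ieee_band(3.5e9))    # "S"  (e.g., within the 3GPP 5G n77 band)
print(ieee_band(39e9))     # "Ka" (e.g., within the 3GPP 5G n260 band)
```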
A second application band9020(4.4 GHz-5.0 GHz) includes a sub-band9021for China specific applications. Discussed next are Unlicensed National Information Infrastructure (UNII) bands. A third application band9030includes a UNII-1 band9031(5.15 GHz-5.25 GHz) and a UNII-2A band9032(5.25 GHz-5.33 GHz). An LTE band9033(LTE Band252) overlaps the same frequency range as the UNII-1 band9031. A fourth application band9040includes a UNII-2C band9041(5.490 GHz-5.735 GHz), a UNII-3 band9042(5.735 GHz-5.85 GHz), a UNII-4 band9043(5.85 GHz-5.925 GHz), a UNII-5 band9044(5.925 GHz-6.425 GHz), a UNII-6 band9045(6.425 GHz-6.525 GHz), a UNII-7 band9046(6.525 GHz-6.875 GHz), and a UNII-8 band9047(6.875 GHz-7.125 GHz). An LTE band9048overlaps the same frequency range (5.490 GHz-5.735 GHz) as the UNII-3 band9042. A sub-band9049A shares the same frequency range as the UNII-4 band9043. An LTE band9049B shares a subsection of the same frequency range (5.855 GHz-5.925 GHz).

FIG.9Bshows a second frequency spectrum portion9000B in a range from eight Gigahertz (8 GHz) to one-hundred and ten Gigahertz (110 GHz), including application bands of X Band (8 GHz-12 GHz), Ku Band (12 GHz-18 GHz), K Band (18 GHz-27 GHz), Ka Band (27 GHz-40 GHz), V Band (40 GHz-75 GHz), and W Band (75 GHz-110 GHz). A fifth application band9050includes 3GPP 5G bands configured for fifth generation broadband cellular network (5G) applications, e.g., 3GPP 5G n258 band9051(24.25 GHz-27.5 GHz), e.g., 3GPP 5G n261 band9052(27.5 GHz-28.35 GHz), e.g., 3GPP 5G n257 band9053(26.5 GHz-29.5 GHz).FIG.9Bshows an EESS (Earth Exploration Satellite Service) band9051A (23.6 GHz-24 GHz) adjacent to the 3GPP 5G n258 band9051(24.25 GHz-27.5 GHz). As will be discussed in greater detail subsequently herein, an example EESS notch filter of the present disclosure may facilitate protecting the EESS (Earth Exploration Satellite Service) band9051A (23.6 GHz-24 GHz) from energy leakage from the adjacent 3GPP 5G n258 band9051(24.25 GHz-27.5 GHz). For example, this may facilitate satisfying (e.g., facilitate compliance with) a specification of a standards setting organization, e.g., International Telecommunications Union (ITU) specifications, e.g., ITU-R SM.329 Category A/B levels of −20 dBW/200 MHz, e.g., 3rd Generation Partnership Project (3GPP) 5G specifications, e.g., 3GPP 5G unwanted (out-of-band & spurious) emission levels, worst case of −20 dBW/200 MHz.
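The emission limits quoted in this paragraph and the next are expressed in decibel-watts per 200 MHz; the short sketch below simply converts such limits to linear power for comparison. The limit values repeat those recited in the text.

```python
# Conversion sketch only: decibel-watts (dBW) to linear power. The limit
# values below repeat those recited in the surrounding paragraphs.

def dbw_to_watts(dbw: float) -> float:
    return 10.0 ** (dbw / 10.0)

for label, limit_dbw in [("ITU-R SM.329 / 3GPP worst case", -20.0),
                         ("European Commission base station", -42.0),
                         ("European Commission user equipment", -38.0)]:
    w = dbw_to_watts(limit_dbw)
    print(f"{label}: {limit_dbw} dBW/200 MHz = {w * 1e3:.3g} mW per 200 MHz")
```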
Alternatively or additionally, this may facilitate satisfying (e.g., facilitate compliance with) a regulatory requirement, e.g., a government regulatory requirement, e.g., a Federal Communications Commission (FCC) decision or requirement, e.g., a European Commission decision or requirement of −42 dBW/200 MHz for Base Stations (BS) and −38 dBW/200 MHz for User Equipment (UE), e.g., European Commission Decision (EU) 2019/784 of 14 May 2019 on harmonization of the 24.25-27.5 GHz frequency band for terrestrial systems capable of providing wireless broadband electronic communications services in the Union, published May 16, 2019, which is hereby incorporated by reference in its entirety, e.g., a European Organization for the Exploitation of Meteorological Satellites (EUMETSAT) decision, requirement, recommendation or study, e.g., an ESA/EUMETSAT/EUMETNET study result of −54.2 dBW/200 MHz for Base Stations (BS) and −50.4 dBW/200 MHz for User Equipment (UE), e.g., a World Meteorological Organization (WMO, a United Nations agency) decision, requirement, recommendation or study, e.g., the WMO decision of −55 dBW/200 MHz for Base Stations (BS) and −51 dBW/200 MHz for User Equipment (UE). These specifications and/or decisions and/or requirements may be directed to suppression of energy leakage from an adjacent band, e.g., energy leakage from an adjacent 3GPP 5G band, e.g., suppression of transmit energy leakage from the adjacent 3GPP 5G n258 band9051(24.250 GHz-27.500 GHz), e.g., limiting of spurious out-of-n258-band emissions. A sixth application band9060includes the 3GPP 5G n260 band9060(37 GHz-40 GHz). A seventh application band9070includes United States WiGig Band for IEEE 802.11ad and IEEE 802.11ay9071(57 GHz-71 GHz), European Union and Japan WiGig Band for IEEE 802.11ad and IEEE 802.11ay9072(57 GHz-66 GHz), South Korea WiGig Band for IEEE 802.11ad and IEEE 802.11ay9073(57 GHz-64 GHz), and China WiGig Band for IEEE 802.11ad and IEEE 802.11ay9074(59 GHz-64 GHz). An eighth application band9080includes an automobile radar band9080(76 GHz-81 GHz).

Accordingly, it should be understood from the foregoing that the acoustic wave devices (e.g., resonators, e.g., oscillators) of this disclosure may be implemented in the respective application frequency bands just discussed. For example, the layer thicknesses of the detuned harmonically tuned top sensor electrodes and the de-tuned multilayer metal acoustic reflector electrodes and piezoelectric layers in alternating axis arrangement for the example acoustic wave devices (e.g., the example 24 GHz bulk acoustic wave resonators) of this disclosure may be scaled up and down as needed to be implemented in the respective application frequency bands just discussed. This is likewise applicable to example oscillators (e.g., bulk acoustic wave resonator based oscillators) of this disclosure to be implemented in the respective application frequency bands just discussed.

The following examples pertain to further embodiments for acoustic wave devices, including but not limited to, e.g., bulk acoustic wave resonators, e.g., bulk acoustic wave resonator based oscillators, and from which numerous permutations and configurations will be apparent.
Example 1 is a bulk acoustic wave (BAW) resonator comprising a substrate, a first layer of piezoelectric material having a first piezoelectric axis orientation, and a top electrode electrically and acoustically coupled with the first layer of piezoelectric material to excite a resonance mode at a main resonant frequency of the BAW resonator in a Super High Frequency (SHF) band or Extremely High Frequency (EHF) band. Example 2, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in a 3rd Generation Partnership Project (3GPP) band. Example 3, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in a 3GPP n77 band9010as shown inFIG.9A. Example 4, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in a 3GPP n79 band9020as shown inFIG.9A. Example 5, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in a 3GPP n258 band9051as shown inFIG.9B. Example 6, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in a 3GPP n261 band9052as shown inFIG.9B. Example 7, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in a 3GPP n260 band as shown inFIG.9B. Example 8, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in an Institute of Electrical and Electronic Engineers (IEEE) S band as shown inFIG.9A. Example 9, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in an Institute of Electrical and Electronic Engineers (IEEE) C band as shown inFIG.9A. Example 10, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in an Institute of Electrical and Electronic Engineers (IEEE) X band as shown inFIG.9B. Example 11, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in an Institute of Electrical and Electronic Engineers (IEEE) Ku band as shown inFIG.9B. Example 12, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in an Institute of Electrical and Electronic Engineers (IEEE) K band as shown inFIG.9B. Example 13, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in an Institute of Electrical and Electronic Engineers (IEEE) Ka band as shown inFIG.9B. Example 14, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in an Institute of Electrical and Electronic Engineers (IEEE) V band as shown inFIG.9B. Example 15, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in an Institute of Electrical and Electronic Engineers (IEEE) W band as shown inFIG.9B. Example 16, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in UNII-1 band9031, as shown inFIG.9A. Example 17, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in UNII-2 A band9032, as shown inFIG.9A. 
Example 18, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in UNII-2C band9041, as shown inFIG.9A. Example 19, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in UNII-3 band9042, as shown inFIG.9A. Example 20, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in UNII-4 band9043, as shown inFIG.9A. Example 21, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in UNII-5 band9044, as shown inFIG.9A. Example 22, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in UNII-6 band9045, as shown inFIG.9A. Example 23, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in UNII-7 band9046, as shown inFIG.9A. Example 24, the subject matter of Example 1 optionally includes in which the main resonant frequency of the BAW resonator is in UNII-8 band9047, as shown inFIG.9A. Example 25, the subject matter of any one or more of Examples 1 through 24 optionally include a sensing region acoustically coupled with the top electrode. Example 26, the subject matter of any one or more of Examples 1 through 25 optionally include a second layer of piezoelectric material having a second piezoelectric axis orientation substantially opposing the first piezoelectric axis orientation of the first layer of piezoelectric material. Example 27, the subject matter of any one or more of Examples 1 through 26 optionally include in which a sensitivity associated with the BAW resonator is within a range from approximately one half part per million per one hundred attograms to approximately fifty parts per million per one hundred attograms. Example 28, the subject matter of any one or more of Examples 1 through 27 optionally include in which a sensitivity associated with the BAW resonator is within a range from one KiloHertz CentiMeter Squared per NanoGram to approximately two hundred KiloHertz CentiMeter Squared per NanoGram. Example 29, the subject matter of any one or more of Examples 1 through 28 optionally include in which the BAW resonator is associated with detection of an analyte. Example 30, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of a biomolecule. Example 31, the subject matter of any one or more of Examples 1 through 30 optionally include in which the BAW resonator is associated with detection of an infectious agent. Example 32, the subject matter of any one or more of Examples 1 through 31 optionally include in which the BAW resonator is associated with detection of a virus. Example 33, the subject matter of any one or more of Examples 1 through 32 optionally include in which the BAW resonator is associated with detection of a coronavirus. Example 34, the subject matter of any one or more of Examples 1 through 33 optionally include in which the BAW resonator is associated with detection of a SARS-Cov-2 virus. Example 35, the subject matter of any one or more of Examples 1 through 31 optionally include in which the BAW resonator is associated with detection of bioweapon. Example 36, the subject matter of any one or more of Examples 1 through 31 optionally include in which the BAW resonator is associated with detection of anthrax. 
Example 37, the subject matter of any one or more of Examples 1 through 30 optionally include in which the BAW resonator is associated with detection of a biomarker. Example 38, the subject matter of any one or more of Examples 1 through 30 optionally include in which the BAW resonator is associated with detection of acetone. Example 39, the subject matter of any one or more of Examples 1 through 30 optionally include in which the BAW resonator is associated with detection of a prostate specific antigen. Example 40, the subject matter of any one or more of Examples 1 through 30 optionally include in which the BAW resonator is associated with detection of a cancer biomarker. Example 41, the subject matter of any one or more of Examples 1 through 30 optionally include in which the BAW resonator is associated with detection of glucose. Example 42, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of an air pollutant. Example 43, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of particulate matter. Example 44, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of a toxin. Example 45, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of carbon monoxide. Example 46, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of a volatile organic compound. Example 47, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of a hydrocarbon gas. Example 48, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of a biological weapon. Example 49, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of a chemical weapon. Example 50, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of a nerve agent. Example 51, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of a Sarin. Example 52, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of a water pollutant. Example 53, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of a heavy metal. Example 54, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of lead. Example 55, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of a heavy metal. Example 56, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of an antigen. Example 57, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of an antibody. 
Example 58, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of a constituent of blood. Example 59, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of a constituent of interstitial fluid. Example 60, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of a constituent of breath. Example 61, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of tetrahydrocannabinol. Example 62, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of an explosive. Example 63, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of trinitrotoluene (TNT). Example 64, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of 1,3,5-trinitro-1,3,5-triazacyclohexane (RDX). Example 65, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of a chemical associated with a chemical weapon. Example 66, the subject matter of any one or more of Examples 1 through 29 optionally include in which the BAW resonator is associated with detection of dimethyl methylphosphonate. Example 67, the subject matter of any one or more of Examples 1 through 29 optionally include a functionalized layer acoustically coupled with the top electrode of the BAW resonator, the functionalized layer having a selective analyte affinity. Example 68, the subject matter of any one or more of Examples 1 through 29 optionally include a functionalized layer acoustically coupled with the top electrode of the BAW resonator, the functionalized layer having a selective analyte binding affinity. Example 69, the subject matter of any one or more of Examples 1 through 29 optionally include a molecularly imprinted polymer layer acoustically coupled with the top electrode. Example 70, the subject matter of any one or more of Examples 1 through 29 optionally include a metal-organic framework acoustically coupled with the top electrode. Example 71, the subject matter of any one or more of Examples 1 through 29 optionally include a layer of bacteria acoustically coupled with the top electrode. Example 72, the subject matter of any one or more of Examples 1 through 28 optionally include in which the BAW resonator is associated with detection of a change in an environmental variable. Example 73, the subject matter of any one or more of Examples 1 through 28 optionally include in which the BAW resonator is associated with detection of a change in pressure. Example 74, the subject matter of any one or more of Examples 1 through 28 optionally include in which the BAW resonator is associated with detection of a change in temperature. Example 75, the subject matter of any one or more of Examples 1 through 28 optionally include in which the BAW resonator is associated with detection of a change in humidity. Example 76, the subject matter of any one or more of Examples 1 through 28 optionally include in which the BAW resonator is associated with detection of a change in a flux of neutrons.
Example 77, the subject matter of any one or more of Examples 1 through 28 optionally include in which the BAW resonator is associated with detection of a change in a magnetic field. Example 78, the subject matter of any one or more of Examples 1 through 28 optionally include in which the BAW resonator is associated with detection of terahertz radiation. Example 79, the subject matter of any one or more of Examples 1 through 28 optionally include in which the BAW resonator is associated with detection of solar blind ultraviolet light. Example 80, the subject matter of any one or more of Examples 1 through 28 optionally include in which the BAW resonator is associated with detection of infrared light. Example 81, the subject matter of any one or more of Examples 1 through 28 optionally include a nanoporous layer acoustically coupled with the top electrode. Example 82, the subject matter of any one or more of Examples 1 through 28 optionally include a nanocomposite layer acoustically coupled with the top electrode. Example 83, the subject matter of any one or more of Examples 1 through 28 optionally include a nanostructured layer acoustically coupled with the top electrode. Example 84, the subject matter of any one or more of Examples 1 through 28 optionally include a magnetostrictive layer acoustically coupled with the top electrode. Example 85, the subject matter of any one or more of Examples 1 through 28 optionally include a multiferroic layer acoustically coupled with the top electrode. Example 86, the subject matter of any one or more of Examples 1 through 28 optionally include a magnetoelectric layer acoustically coupled with the top electrode. Example 87, the subject matter of any one or more of Examples 1 through 28 optionally include a heterostructure layer acoustically coupled with the top electrode. Example 88, the subject matter of any one or more of Examples 1 through 28 optionally include a perovskite layer acoustically coupled with the top electrode. Example 89, the subject matter of any one or more of Examples 1 through 28 optionally include magnetostrictive exchange biased multilayers acoustically coupled with the top electrode. Example 90, the subject matter of any one or more of Examples 1 through 28 optionally include antiparallel magnetostrictive exchange biased multilayers acoustically coupled with the top electrode. Example 91, the subject matter of any one or more of Examples 1 through 28 optionally include a metallic glass acoustically coupled with the top electrode. Example 92, the subject matter of any one or more of Examples 1 through 28 optionally include a tunable region acoustically coupled with the top electrode. Example 93, the subject matter of any one or more of Examples 1 through 28 optionally include in which the BAW resonator forms a portion of a filter. Example 94, the subject matter of any one or more of Examples 1 through 28 optionally include in which the BAW resonator forms a portion of a tunable filter. Example 95, the subject matter of any one or more of Examples 1 through 94 optionally include in which the top electrode is electrically and acoustically coupled with the first layer of piezoelectric material to excite a thickness extensional main mode of the BAW resonator. Example 96, the subject matter of any one or more of Examples 1 through 95 optionally include in which the BAW resonator comprises at least one additional piezoelectric layer.
Example 97, the subject matter of any one or more of Examples 1 through 96 optionally include in which the BAW resonator comprises two additional layers of piezoelectric material with alternating piezoelectric axis orientations. Example 98, the subject matter of any one or more of Examples 1 through 97 optionally include in which the BAW resonator comprises three additional layers of piezoelectric material with alternating piezoelectric axis orientations. Example 99, the subject matter of any one or more of Examples 1 through 98 optionally include in which the BAW resonator comprises four additional layers of piezoelectric material with alternating piezoelectric axis orientations. Example 100, the subject matter of any one or more of Examples 1 through 99 optionally include in which the BAW resonator comprises five additional layers of piezoelectric material with alternating piezoelectric axis orientations. Example 101, the subject matter of any one or more of Examples 1 through 100 optionally include in which the top electrode has sheet resistance of less than one Ohm per square. Example 102, the subject matter of any one or more of Examples 1 through 101 optionally include in which the top electrode is a harmonically tuned top electrode. Example 103, the subject matter of any one or more of Examples 1 through 102 optionally include in which the top electrode has a thickness that is approximately an integral multiple of a half of an acoustic wavelength of the main resonant frequency of the BAW resonator. Example 104, the subject matter of any one or more of Examples 1 through 102 optionally include in which the top electrode has a thickness that is approximately half an acoustic wavelength of the main resonant frequency of the BAW resonator. Example 105, the subject matter of any one or more of Examples 1 through 102 optionally include in which the top electrode has a thickness that is approximately an acoustic wavelength of the main resonant frequency of the BAW resonator. Example 106, the subject matter of any one or more of Examples 1 through 105 optionally include in which the BAW resonator is a plurality of BAW resonators. Example 107, the subject matter of example 106 optionally includes in which the plurality of BAW resonators have different respective main resonant frequencies. Example 108, the subject matter of any one or more examples 106 through 107 optionally include in which differing functionalized layers are coupled with respective members of the plurality of BAW resonators. Example 109, the subject matter of any one or more examples 106 through 108 optionally include in which differing functionalized layers are coupled with respective members of the plurality of BAW resonators to facilitate detection of differing analytes. Example 110, the subject matter of any one or more examples 106 through 109 optionally include in which a member of the plurality of BAW resonators is a reference BAW resonator that is substantially shielded. Example 111, the subject matter of any one or more examples 106 through 110 optionally include in which a plurality of heaters are respectively thermally coupled with respective top electrode regions of respective BAW resonators. Example 112, the subject matter of example 111 optionally includes a heaters controller coupled with the plurality of heaters to selectively activate and to selectively deactivate the members of the plurality of heaters.
Example 113, the subject matter of example 112 optionally includes a timing controller coupled with the heaters controller to control timing of the heaters controller selectively activating and selectively deactivating the members of the plurality of heaters. Example 114, the subject matter of example 113 optionally includes the timing controller coupled with the heaters controller to control respective temperatures of the respective top electrodes. Example 115, the subject matter of example 113 optionally includes the timing controller coupled with the heaters controller to control respective temperatures of the respective top electrodes to be different from one another. Example 116, the subject matter of example 113 optionally includes the timing controller coupled with the heaters controller to control respective temperatures of respective functionalized layers coupled with respective top electrodes, so as to control respective analyte adsorption at the respective functionalized layers. Example 117, the subject matter of example 113 optionally includes the timing controller coupled with the heaters controller to vary respective temperatures of respective top electrodes over time. Example 118, the subject matter of any one or more examples 111 through 117 optionally include a resonant signals receiver coupled with the plurality of BAW resonators. Example 119, the subject matter of any one or more examples 111 through 117 optionally include a resonant signals receiver wirelessly coupled with the plurality of BAW resonators. Example 120, the subject matter of any one or more examples 111 through 117 optionally include a frequency sweep signals transmitter coupled with the plurality of BAW resonators. Example 121, the subject matter of any one or more examples 111 through 117 optionally include a frequency sweep signals transmitter wirelessly coupled with the plurality of BAW resonators. Example 122, the subject matter of any one or more examples 111 through 121 optionally include a replaceable cartridge supportively coupled with the plurality of Bulk Acoustic Wave (BAW) resonators. Example 123, the subject matter of any one or more examples 111 through 122 optionally include a computing system coupled with the plurality of Bulk Acoustic Wave (BAW) resonators. Example 124, the subject matter of example 123 optionally includes the computing system having a wireless communication capability. Example 125, the subject matter of any one or more of examples 1 through 124 optionally include wherein the BAW resonator forms a portion of a fluidic system. Example 126, the subject matter of any one or more of examples 1 through 125 optionally include wherein the BAW resonator forms a portion of a microfluidic system. Example 127, the subject matter of any one or more of examples 1 through 124 optionally include a hollow microneedle in which the BAW resonator is disposed within the hollow microneedle.

FIG.9Cshows a simplified system9000C employing an array900C of BAW resonator structures91through9N,91M through9NM, through to91Z through9NZ, for sensing according to this disclosure. As shown inFIG.9C, system9000C may include a plurality of heaters911through91N,911M through91NM, through to911Z through91NZ that may be thermally coupled with respective BAW resonator structures91through9N,91M through9NM, through to91Z through9NZ. The heaters may be electrical heaters. The heaters may be resistive heaters.
Respective heaters may be respective resonant heaters, e.g., individually identifiable based on respective resonant frequency of respective resonant heaters, e.g., respective differing resonant heaters may have respective resonant frequencies differing from one another. Respective heaters may be individually addressable, e.g., based on respective resonant frequency of respective resonant heaters. Respective heaters may be wirelessly activatable, e.g., selectively activatable, e.g., selectively activatable based on respective differing resonant frequencies of respective differing resonant heaters. The heaters may be wirelessly deactivatable, e.g., selectively deactivatable, e.g., selectively deactivatable based on respective differing resonant frequencies of respective differing resonant heaters. The plurality of heaters (e.g., resonant heaters) may be fabricated separately from the plurality of BAW resonators. The plurality of heaters (e.g., resonant heaters) may be thermally coupled with the plurality of BAW resonators after fabrication. Alternatively, the plurality of heaters (e.g., resonant heaters) may be integrally fabricated along with the plurality of BAW resonators. The plurality of heaters (e.g., resonant heaters) may be integrally coupled with the plurality of BAW resonators. Alternatively, the heaters may be recognized as a heating function integral with prolonged operation of the plurality of BAW resonators. In other words, operational duration of the plurality of BAW resonators may result in heating of the plurality of BAW resonators. The plurality of heaters may be a plurality of heater functions integral with duration of operation of respective BAW resonators and heat produced thereby. By the system9000C controlling time duration of operation of the plurality of BAW resonators, heating of the BAW resonators (e.g., the heating function) may be controlled by the system9000C. By the system9000C controlling time duration of heating during operation of the plurality of BAW resonators, temperature of the respective sensing regions of the respective BAW resonators may be controlled by the system9000C. By the system9000C controlling frequency selective power level of operation of the plurality of BAW resonators, heating of the BAW resonators (e.g., the heating function) may be controlled by the system9000C. By the system9000C controlling frequency selective power level of operation of the plurality of BAW resonators, temperature of the respective sensing regions of the respective BAW resonators may be controlled by the system9000C. The BAW resonators of array900C may be similar to those already discussed, for example, they may be similar to the BAW resonator shown inFIG.1A. For example, the BAW resonators of array900C may include respective sensing regions and respective functionalized layers acoustically coupled with respective harmonic top sensor electrodes, which may be similar to sensing region116and functionalized layer168acoustically coupled with harmonic top sensor electrode115shown inFIG.1A. For the plurality of BAW resonators of array900C, respective sensing regions may comprise respective functionalized layers that are different from one another, e.g., to facilitate respective responses to differing environmental variables. For the plurality of BAW resonators of array900C, the respective sensing regions may have sensing areas that are different sizes from one another, e.g., to facilitate respective differing responses to an environmental variable.
The plurality of BAW resonators of array900C may be designed and fabricated having differing piezoelectric layer thickness, e.g., to have respective resonant frequencies that are different from one another. In the system9000C employing the array900C of BAW resonators as shown inFIG.9C, one or more members of the plurality of BAW resonators may be reference BAW resonators. Reference BAW resonators may be substantially shielded from one or more environmental variables. It may be assumed that other members of the plurality of BAW resonators may be unshielded BAW resonators, e.g., substantially unshielded from environmental variables, e.g., to facilitate the unshielded BAW resonators sensing changes in the environmental variables. Noise may be reduced by comparing sensory output of one or more unshielded BAW resonators to output of one or more shielded (reference) BAW resonators. The BAW resonators of array900C may be used in combination with the fluidic system shown and discussed with respect toFIG.5(e.g., the BAW resonators of array900C shown inFIG.9Cmay be used in place of BAW resonator500B shown inFIG.5). System9000C may comprise control circuitry903. The control circuitry903may be electrically coupled with the plurality of BAW resonators of array900C. For example, the control circuitry903may be wirelessly coupled with the plurality of BAW resonators of array900C. The control circuitry903may comprise a frequency sweep signals transmitter905, e.g., wirelessly coupled with the plurality of BAW resonators of array900C, e.g., to transmit a sweep of frequency signals, e.g., a sweep of frequency signals comprising the respective differing resonant frequencies of differing members of the plurality of BAW resonators of array900C, e.g., to stimulate resonant sensing at the respective differing resonant frequencies of differing members of the plurality of BAW resonators of array900C. The control circuitry903may comprise a resonant signals receiver907, e.g., wirelessly coupled with the plurality of BAW resonators of array900C, e.g., to receive resonant sensing signals from the plurality of BAW resonators of array900C in response to their sensing activation by the sweep of frequency signals from frequency sweep signals transmitter905. The resonant signals receiver907may receive respective signals differing in frequency corresponding to respective differing resonant frequencies of differing members of the plurality of BAW resonators of array900C. The resonant signals receiver907may receive responsive resonant sensing signals at the respective differing resonant frequencies of differing members of the plurality of BAW resonators of array900C. The control circuitry903may comprise processing909(e.g., a suitably programmed microprocessor). The processing909may be communicatively coupled with the frequency sweep signals transmitter905and the resonant signals receiver907. The processing909may control operation of the frequency sweep signals transmitter905. The processing909may control operation of the resonant signals receiver907. The processing909may receive from the resonant signals receiver907the resonant sensing signals of the plurality of BAW resonators of array900C. The processing909may process these resonant sensing signals. For example, processing909may use respective frequencies of the resonant sensing signals to identify respective members of the plurality of BAW resonators of array900C that generated the resonant sensing signals.
For example, processing909may identify different resonant sensor responses from different respective members of the plurality of BAW resonators of array900C, e.g., using respective differing resonant frequencies of BAW resonators of array900C. In some cases, at least a portion of system9000C may be implemented wirelessly. The processing909may use respective frequencies of the resonant sensing signals to wirelessly identify respective members of the plurality of BAW resonators of array900C that generated the resonant sensing signals. For example, processing909may wirelessly identify different resonant sensor responses from different respective members of the plurality of BAW resonators of array900C, e.g., using respective differing resonant frequencies of BAW resonators of array900C. The control circuitry903may comprise a heaters controller911. The heaters controller911may be coupled with the processing909. The heaters controller911may be coupled with the plurality of heaters to selectively activate and to selectively deactivate respective members of the plurality of heaters. The control circuitry903may comprise a timing controller913. The timing controller913may be coupled with the heaters controller to control duration of operation of the heaters. The timing controller913may be coupled with the heaters controller to control timing of the heaters controller selectively activating and selectively deactivating members of the plurality of heaters. The timing controller913may be coupled with the heaters controller to control respective temperatures of respective sensing regions associated with respective BAW resonators. The timing controller913may be coupled with the heaters controller911to control respective temperatures of respective sensing regions to be different from one another. The timing controller913may be coupled with the heaters controller to control respective temperatures of respective sensing regions to control respective analyte adsorption at the sensing regions. The timing controller913may be coupled with the heaters controller911to control respective temperatures of respective sensing regions to control respective analyte desorption at the sensing regions. The timing controller913may be coupled with the heaters controller911to vary respective temperatures of respective sensing regions over time. Respective temperatures of respective sensing regions associated with respective BAW resonators of the array900C may be controlled by the timing controller913coupled with the heaters controller911, for example, while the resonant signals receiver907coupled with the BAW resonators of the array900C may receive respective resonant signals therefrom over time. The heaters controller911may be coupled with the frequency sweep signals transmitter to control frequency selective power level of operation of respective members of the plurality of BAW resonators, e.g., heating of the BAW resonators (e.g., the heating function). The heaters controller911may be coupled with the frequency sweep signals transmitter to control frequency selective power level transmission to respective members of the plurality of BAW resonators, so as to control heating of the BAW resonators (e.g., the heating function). The heaters controller911may be coupled with the frequency sweep signals transmitter to control frequency selective power level of operation of respective members of the plurality of BAW resonators, so as to control temperature of respective members of the plurality of BAW resonators. 
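As a companion to the timing-controller description above, the sketch below shows one way a heater on-duration could be chosen to reach a target temperature rise, reusing the first-order thermal model sketched earlier. All function names, parameter names, and values are hypothetical assumptions, not taken from this disclosure.

```python
import math

def on_time_for_target(delta_t_target_k, power_w, r_th_k_per_w, tau_s):
    """Solve delta_t = P * R_th * (1 - exp(-t / tau)) for the on-time t.
    Returns None if the target exceeds the steady-state temperature rise."""
    steady_state_k = power_w * r_th_k_per_w
    if delta_t_target_k >= steady_state_k:
        return None
    return -tau_s * math.log(1.0 - delta_t_target_k / steady_state_k)

# Example schedule: mild heating during adsorption, stronger heating to drive
# desorption, per sensing region (hypothetical temperature-rise targets).
TARGETS_K = {"baw_0": 5.0, "baw_1": 20.0}

def build_schedule(power_w=1e-3, r_th_k_per_w=3e4, tau_s=1e-3):
    return {heater: on_time_for_target(dt, power_w, r_th_k_per_w, tau_s)
            for heater, dt in TARGETS_K.items()}
```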
The heaters controller911may be coupled with the frequency sweep signals transmitter to control frequency selective power level transmission to respective members of the plurality of BAW resonators, so as to control temperature of the BAW resonators. FIG.10illustrates a computing system implemented with integrated circuit structures or devices formed using the techniques disclosed herein, in accordance with an embodiment of the present disclosure. As may be seen, the computing system1000houses a motherboard1002. The motherboard1002may include a number of components, including, but not limited to, a processor1004and at least one communication chip1006A,1006B each of which may be physically and electrically coupled to the motherboard1002, or otherwise integrated therein. As will be appreciated, the motherboard1002may be, for example, any printed circuit board, whether a main board, a daughterboard mounted on a main board, or the only board of system1000, etc. The computing system1000may house acoustic resonator sensor array system1010. Acoustic resonator sensor array system1010shown inFIG.10may be similar to system9000C shown inFIG.9Cand discussed previously herein. Acoustic resonator sensor array system1010shown inFIG.10may be coupled with a cartridge, e.g., replaceable cartridge1012A. Replaceable cartridge1012A may be detachably coupled with cartridge receiver1012B. Depending on its applications, computing system1000may include one or more other components that may or may not be physically and electrically coupled to the motherboard1002. These other components may include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). Any of the components included in, or more broadly associated with, computing system1000may include one or more integrated circuit structures or devices formed using the disclosed techniques in accordance with an example embodiment. In some embodiments, multiple functions may be integrated into one or more chips (e.g., for instance, note that the communication chips1006A,1006B may be part of or otherwise integrated into the processor1004). The communication chips1006A,1006B enable wireless communications for the transfer of data to and from the computing system1000. The term “wireless” and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chips1006A,1006B may implement any of a number of wireless standards or protocols, including, but not limited to, Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. 
The computing system1000may include a plurality of communication chips1006A,1006B. For instance, a first communication chip1006A may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip1006B may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, 5G and others. In some embodiments, communication chips1006A,1006B may include one or more acoustic wave devices1008A,1008B (e.g., resonators, filters and/or oscillators1008A,1008B) as variously described herein (e.g., acoustic wave devices including one or more respective stacks of alternating axis piezoelectric material). Acoustic wave devices1008A,1008B may be included in various ways, e.g., one or more resonators, e.g., one or more filters, e.g., one or more oscillators. Further, such acoustic wave devices1008A,1008B, e.g., resonators, e.g., filters, e.g., oscillators may be configured to be Super High Frequency (SHF) acoustic wave devices1008A,1008B or Extremely High Frequency (EHF) acoustic wave devices1008A,1008B, e.g., resonators, filters, and/or oscillators (e.g., operating at greater than 3, 4, 5, 6, 7, or 8 GHz, e.g., operating at greater than 23, 24, 25, 26, 27, 28, 29, or 30 GHz, e.g., operating at greater than 36, 37, 38, 39, or 40 GHz). Further still, such Super High Frequency (SHF) acoustic wave devices or Extremely High Frequency (EHF) resonators, filters, and/or oscillators may be included in the RF front end of computing system1000and they may be used for 5G wireless standards or protocols, for example. One or more of communication chips1006A,1006B may be in wireless communication with acoustic resonator sensor array1010. The processor1004of the computing system1000includes an integrated circuit die packaged within the processor1004. In some embodiments, the integrated circuit die of the processor includes onboard circuitry that is implemented with one or more integrated circuit structures or devices formed using the disclosed techniques, as variously described herein. The term “processor” may refer to any device or portion of a device that processes, for instance, electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. Processor1004may perform some functions of control circuitry of acoustic resonator sensor array system1010. The communication chips1006A,1006B also may include an integrated circuit die packaged within the communication chips1006A,1006B. In accordance with some such example embodiments, the integrated circuit die of the communication chip includes one or more integrated circuit structures or devices formed using the disclosed techniques as variously described herein. As will be appreciated in light of this disclosure, note that multi-standard wireless capability may be integrated directly into the processor1004(e.g., where functionality of any communication chips1006A,1006B is integrated into processor1004, rather than having separate communication chips). Further note that processor1004may be a chip set having such wireless capability. In short, any number of processor1004and/or communication chips1006A,1006B may be used. Likewise, any one chip or chip set may have multiple functions integrated therein. 
In various implementations, the computing device1000may be a laptop, a netbook, a notebook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, a digital video recorder, or any other electronic device that processes data or employs one or more integrated circuit structures or devices formed using the disclosed techniques, as variously described herein. Further Example Embodiments The following examples pertain to further embodiments, from which numerous permutations and configurations will be apparent. The foregoing description of example embodiments has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed. Many modifications and variations are possible in light of this disclosure. It is intended that the scope of the present disclosure be limited not by this detailed description, but rather by the claims appended hereto. Future filed applications claiming priority to this application may claim the disclosed subject matter in a different manner, and may generally include any set of one or more limitations as variously disclosed or otherwise demonstrated herein. | 302,966 |
11863154 | Features and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The figures are not drawn to scale since the variation in size of various elements in the Figures is too great to permit depiction to scale. DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS The present disclosure provides, inter alia, a structure and method for constructing piezoelectric acoustic resonator micro-devices. Such piezoelectric micro-devices convert electrical energy provided by electrodes disposed on the micro-device into mechanical energy. The micro-device is sized and shaped to resonate at a desired frequency. Mechanical vibration at the desired resonant frequency is converted to electrical energy by the piezoelectric material to provide a filtered electrical signal. Piezoelectric acoustic resonators have been demonstrated in a variety of types, for example with surface acoustic waves in, for example, a surface acoustic wave (SAW) filter, in a bulk acoustic wave (BAW) filter, a film bulk acoustic resonator (FBAR), or a thin-film bulk acoustic resonator (TFBAR). Such resonators are fixed to a substrate and can incorporate acoustic reflectors or acoustic mirrors that inhibit the dissipation of mechanical energy into the substrate and promote resonance at the desired frequency. Other piezoelectric acoustic resonators are suspended over a cavity in a substrate by straight tethers physically connecting the resonator to the substrate in a straight line. The resonator is therefore free to vibrate independently of the substrate, except for any confounding effects from the tethers, thereby reducing mechanical energy losses and providing a greater device efficiency. Suspended piezoelectric acoustic resonators can be constructed by patterning a bottom electrode on a substrate, disposing and (e.g., patterning) piezoelectric material over the bottom electrode, and then patterning a top electrode over the piezoelectric material to form an acoustic resonator. The substrate material beneath the bottom electrode is etched, for example with a dry etch such as XeF2, to form the cavity and suspend the piezoelectric acoustic resonator over the cavity. In some processes, the cavity etch can be initiated via wet etch, for example when exposed to a hot bath of tetramethylammonium hydroxide (TMAH) or potassium hydroxide (KOH), and finalized with the dry XeF2etch. The top and bottom electrodes and piezoelectric material are patterned to form tethers that connect the main body of the piezoelectric acoustic resonator to the substrate. The present disclosure recognizes that an etch material such as XeF2can be difficult to use and problematic in and with some device substrate materials and structures. For example, XeF2can be incompatible with metals, such as gold, that are useful for electrodes used in piezo-electric devices. Furthermore, XeF2etching can cause physical stress to the devices, possibly damaging or destroying them. The etching process can form bubbles that mechanically stress the resonator or tethers. 
The etch can be a pulsed etch repeated every two seconds in which a gas is repeatedly introduced into a processing chamber, a plasma is discharged, and the gas is vented, exposing the resonator to repeated high and low vacuum pressures that can mechanically stress the resonator. Moreover, vacuum chamber valve operation can cause vibrations. These various mechanical stresses can cause the resonator and its supporting wafer to vibrate and form cracks in the piezoelectric materials, electrodes, or tethers or even detach the resonator from the substrate during etching, thereby significantly impairing final device performance or rendering the final device non-functional. In order to mitigate such undesired outcomes, the resonator and tethers can be fully encapsulated and undergo multiple chemical baths to remove any potential contaminants or any organic residues at the surface. After etching to release the resonator from the substrate (so that there is no direct physical attachment between the device and the substrate), encapsulation materials can be removed to avoid interfering with the acoustic response of the resonator. These additional operations add expense to a manufacturing process and can themselves crack or break suspended devices or tethers, for example with capillary forces. There is a need therefore for alternative methods and structures for making a suspended micro-device. The present disclosure provides, inter alia, suspended device structures having non-linear tethers and methods of their formation. By using non-linear tethers, the final devices can be released from an underlying substrate and suspended over a cavity, for example using wet etchants such as TMAH or KOH rather than a dry etchant such as XeF2, can have improved mechanical isolation, and damage thereto incurred during etching is reduced or eliminated. As described in further detail below, non-linear tethers can be, for example, right, obtuse, or acute Z-shaped tethers (e.g., tethers with right or oblique angles), X-shaped tethers, V-shaped tethers, Y-shaped tethers or double Y-shaped tethers (having orthogonal segments or non-orthogonal segments), or serpentine tethers. A non-linear tether can comprise linear tether segments with centerlines that are non-collinear (e.g., not collinear or not formed in a common line). According to illustrative embodiments of the present disclosure and as illustrated in the perspective and corresponding cross section, plan view, and detail view ofFIGS.1A-1D(collectivelyFIG.1) and micrograph ofFIG.21, a suspended device structure99comprises a substrate10, a cavity12disposed in the substrate10, and a device20suspended entirely over a bottom of cavity12at least by a tether30that physically connects device20to substrate10in a tether direction31so that tether30extends from an edge of device20. (InFIG.1, tethers30are made of continuous material, for example formed in a single photolithographic deposition.) Tether30has a centerline32that comprises non-collinear points (e.g., non-collinear portions) and is therefore a non-linear centerline32. According to some embodiments of the present disclosure and as illustrated inFIG.1D, tether30has a first tether portion (e.g., tether device portion34) separated from a second tether portion (e.g., tether substrate portion36) in a direction orthogonal to tether direction31, for example so that the first and second portions are not in direct contact. 
The separation can be a distance D that is at least a width W of any portion of tether30so that distance D is equal to or greater than width W. Width W can be a width measured at a cross section of tether30. In some embodiments, distance D is no less than an average width, a minimum width, or a maximum width of tether30. Distance D can be measured in a direction parallel to a device edge21from which tether30extends. In some embodiments, “a direction parallel to edge of device21” can refer to a direction parallel to a tangent of an edge of device21, for example when device20has a curved edge21, or a direction orthogonal to an angular bisector of a vertex formed by two edges of device21, for example when tether30extends from a vertex of a device20with a polygonal perimeter. Device20can have a length L that is a longest dimension of device20orthogonal to tether direction31(e.g., parallel to device edge21). In some embodiments, the respective centerlines32of the first and second tether portions are separated by at least a distance L3that is at least twice a width of any portion of tether30in a direction. The direction can be orthogonal to at least one of the centerline of the first portion and the centerline of the second portion, orthogonal to tether direction31, or parallel to device edge21. The width can be, for example, a maximum width, an average width, or a minimum width. Device20can be or can include any one or more of a piezoelectric device, a micro-device, an integrated circuit, an electromechanical filter, an acoustic resonator, or a power source that harvests vibrations to provide electrical power but is not limited to any of these devices. Device20can be native to substrate10, or non-native to substrate10. A piezoelectric device is a device that comprises electrodes and piezoelectric material that converts electrical signals provided by the electrodes to mechanical energy, converts mechanical energy to electrical signals provided on the electrodes, or converts electrical signals to mechanical energy and mechanical energy to electrical signals through electrodes (e.g., converts electrical signals to mechanical energy and then back to electrical signals that are possibly modified or filtered). Electrodes can be disposed on one side of device20(e.g., a top side opposite substrate10) or on opposing top and bottom sides of device20. Electrodes can be solid or interdigitated on one side or both sides of device20and can cover and be in contact with at least 10% (e.g., at least 20%, 40%, 50%, 60%, 80%) of the piezoelectric material. If electrodes cover too small of an area on the piezoelectric material, a conversion of electrical energy in the electrodes to mechanical energy in the piezoelectric material can be inefficient and inadequate. According to some embodiments of the present disclosure, the electrodes cover and are in contact with at least 10% of the piezoelectric material area. A micro-device is any device that has at least one dimension that is in the micron range, for example having a planar extent from 2 microns by 5 microns to 200 microns by 500 microns (e.g., an extent of 2 microns by 5 microns, 20 microns by 50 microns, or 200 microns by 500 microns) and a thickness of from 200 nm to 200 microns (e.g., at least or no more than 2 microns, 20 microns, or 200 microns). Device20can have any suitable aspect ratio or size in any dimension and any useful shape, for example a rectangular cross section or top or bottom surface. 
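The separation rules introduced at the start of this passage, namely that the first and second tether portions are separated by a distance D of at least one tether width W and that their centerlines are separated by a distance L3 of at least twice W, can be expressed as a simple design-rule check. The sketch below is illustrative only; the function and parameter names are hypothetical and not taken from this disclosure.

```python
# Minimal design-rule check (hypothetical names): verify the tether-portion
# separation constraints D >= W and L3 >= 2 * W discussed above.

def tether_separation_ok(d_um, l3_um, tether_width_w_um):
    return d_um >= tether_width_w_um and l3_um >= 2.0 * tether_width_w_um

# Example: a 4 um wide tether whose portions are offset 5 um edge-to-edge and
# 9 um centerline-to-centerline satisfies both constraints.
assert tether_separation_ok(5.0, 9.0, 4.0)
```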
Device20can be an electromechanical filter that filters electrical signals through mechanically resonant vibrations, for example an acoustic resonator or a power source that responds to mechanical vibrations with electrical power. As shown in the cross section ofFIGS.2A and2Btaken along cross section line A of a structure generally in accordance withFIG.1C, device20of suspended device structure99can comprise a layer of piezoelectric material54, for example, but not limited to, aluminum nitride (AlN) or potassium sodium niobate (KNN), with a top electrode50on a top side of the piezoelectric material54and a bottom electrode52on a bottom side of the piezoelectric material54opposite the top side. Top and bottom electrodes50,52are collectively electrodes. Top electrode50or bottom electrode52, or both, can extend along a surface of tether30(e.g., a top surface and a bottom surface thereof, respectively) as shown inFIG.2A. As shown inFIG.2B, bottom electrode52can pass through a via and extend over a top side of tether56. As shown in the cross section ofFIG.2Cand plan view ofFIG.2D, bottom electrode52can pass through a via and extend over a top side of the same tether56on which the top electrode50is disposed. Generally, various electrode configurations are possible, such as, for example, those typically found in SAW resonators, BAW resonators, FBARs, or TFBARs and are expressly contemplated as embodiments of the present disclosure. In some embodiments, device20is an acoustic wave filter, such as a SAW filter, a BAW filter, an FBAR filter, or a TFBAR filter. In some embodiments, device20is a piezoelectric sensor. Tethers30can comprise any suitable tether material56and can incorporate one or more layers, for example one or more layers similar to or the same as those layer(s) of device20, for example comprising electrode materials and/or piezoelectric materials54, for example as shown inFIG.2A, or comprising dielectric materials. Top and bottom electrodes50,52can extend over or be a part of tethers30to electrically connect device20to external devices or electrical connections. Electrodes can comprise a patterned layer of metal, or layer(s) of metal, for example titanium and/or gold in, for example, thicknesses from 100 nm to 1 micron. Other materials, such as dielectrics, for example silicon dioxide or silicon nitride, can be used in tethers30. Substrate10can be any useful substrate in which cavity12can be formed, for example as found in the integrated circuit or display industries. Substrate10can be chosen, for example, based on desirable growth characteristics (e.g., lattice constant, crystal structure, or crystallographic orientation) for growing one or more materials thereon. In some embodiments of the present disclosure, substrate10is anisotropically etchable. For example, substrate10can be a monocrystalline silicon substrate with a (100) or (111) orientation. An anisotropically etchable material etches at different rates in different crystallographic directions, due to reactivities of different crystallographic planes to a given etchant. In particular, silicon (100) is a readily available, relatively lower cost monocrystalline silicon material for which non-linear tethers30of the present disclosure enable etching and releasing device20from substrate10. For example, potassium hydroxide (KOH) displays an etch rate selectivity 400 times higher in silicon <100> crystal directions than in silicon <111> directions. 
In the particular case of silicon (100), use of a non-linear tether30(as opposed to a linear tether) can ensure complete release of device20and tether30when substrate10is etched, for example using KOH, in order to suspend device20with tether30. Generally, monocrystalline substrates10having other orientations (such as a (111) orientation) are less prone to incomplete release of device20and a tether when using a linear tether. Moreover, devices20made on or in a silicon (100) crystal structure can have less stress and therefore less device bowing after release. According to some embodiments of the present disclosure, tethers30have a non-linear (e.g., non-collinear) centerline32(including non-collinear points). A centerline32is a set of points that bisect tether30in a plane that is substantially parallel to a surface of substrate10. Centerline32extends along a length of tether30. A length of tether30can be longer than a width W of tether30. Centerline32can divide tether30into two halves, for example halves that are geometrically congruent or similar, that can completely overlie each other, or that are reflections or rotations of each other. Centerline32can comprise points midway between tether edges33of tether30, for example at the midpoint of a straight line segment that intersects opposite tether edges33of tether30, for example tether edges33A and33B as shown inFIG.1D. Opposite tether edges33can be the closest edge of tether30on an opposite side of centerline32. For example, as shown inFIG.1D, centerline32is a distance L1from first tether edge33A and a distance L2from second tether edge33B in a direction orthogonal to centerline32, and L1is equal to L2. Centerline32can be continuous or discontinuous. For example, a tether30that has a portion with an asymmetric cross section adjacent to a portion with a symmetric cross section can have a discontinuous centerline32. At a discontinuous point of centerline32, no orthogonal direction can be defined. A non-linear centerline32is a centerline comprising points that are not all in a common straight line substantially in a common plane (i.e., comprising non-collinear points). As used herein, centerlines32, device edges21, widths W of tether30, and separation distances D are drawn or measured in a plane parallel to a surface of substrate10(e.g., a bottom of cavity12in substrate10). Non-linear (e.g., serpentine) tethers30can have different structures or arrangements, as shown inFIGS.5-11and18-22. Tethers30with a centerline32comprising non-collinear points provide advantages in etching cavity12to release device20from substrate10, where substrate10comprises an anisotropically etchable material (such as monocrystalline silicon (100) and (111)). As illustrated inFIG.3, with such substrates, etching under a conventional straight tether on directly opposite sides of a device20properly oriented with respect to the crystal structure will not release device20from underlying substrate10if the straight tether extends in a direction orthogonal to device20, as is commonly done. Without wishing to be bound to any particular theory, an etchant applied to the crystal surface will form an inverted pyramid P in a crystalline substrate10and will stop etching when it reaches a crystal etch stop plane defined by the crystal structure. Where convex corners exist in the etched structure, the etchant can attack the material from two directions, in at least one of which the etch will proceed, or from a different plane in which etching will proceed. 
When only concave corners remain that expose crystal planes that are resistant to etching, the etch will stop when the inverted pyramid P shape is attained. Because the ends of device20have convex corners Cx, the etch can proceed to release the ends but when the etchant reaches the straight tethers only concave corners Cvexposing crystal planes resistant to etching remain, so the etch stops and the straight tether and the portion of device20in a line with the straight tethers will not be released, as shown inFIG.3. (A released device20is physically connected to substrate10only by tethers30and is not otherwise directly connected to substrate10. A released tether30is physically connected only to device20and only to substrate10at or on an edge of cavity12(e.g., at an anchor portion18). After a desired complete release, there is no physical attachment from the bottom of device20or tether30to substrate10.) In contrast and according to some embodiments of the present disclosure as illustrated inFIG.4, a non-linear tether30having a non-linear centerline32has convex corners Cxin non-linear tether30as well as device20that are accessible for etching, for example when constructed on, in, or over an anisotropically etchable monocrystalline substrate, e.g., a silicon (100) or silicon (111) substrate10. However, because the etch fronts will cease to advance once a concave corner Cvis met, it is preferred that the portions of tether30that extend in the same direction are separated in a direction orthogonal to the direction in which the portions of tether30extend, for example by a distance D greater than or equal to a width W of tether30(and the respective centerlines32can be separated by a distance L3greater than or equal to twice a width W of tether30, for example in a direction orthogonal to at least one of the portions, in a direction orthogonal to tether direction31or parallel to device edge21). Because the crystalline etch planes of the crystalline substrate10are angled (not orthogonal to a surface of substrate10, for example about 54.7 degrees), to ensure a complete release of device20from substrate10, device and substrate tether portions34,36are separated, for example by a distance D equal to or greater than a width of tether30. Tetramethylammonium hydroxide (TMAH) or potassium hydroxide (KOH) can be used to anisotropically etch monocrystalline silicon (100) or (111) and such materials are contemplated for use in structures and methods of the present disclosure. Certain embodiments of the present disclosure provide a structure, materials, and method for a suspended device structure99comprising a device20suspended over a cavity12in a substrate10by non-linear tethers30. Substrate10can be an anisotropically etchable material such as silicon (100). Device20is released from substrate10with an etchant, leaving device20suspended over cavity12in substrate10by non-linear tethers30. Such a structure has the advantage of using etching materials and processes that are less stressful to devices20and tethers30, improving manufacturing yields. Moreover, the present disclosure recognizes that a source of parasitic resonance modes in device20, when a piezo-electric device, can result specifically from straight tethers used to connect device20to substrate10over bottom of cavity12. Non-linear tethers30of the present disclosure can have improved performance by reducing the number or magnitude of parasitic resonance modes in device20, where device20comprises piezoelectric materials54. 
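The "about 54.7 degrees" slope quoted above for the crystalline etch planes is consistent with the angle between the (100) wafer surface normal and a {111} etch-stop plane normal, which can be checked directly (a standard crystallographic relation, not specific to this disclosure):

```latex
\cos\theta = \frac{\langle 1\,0\,0\rangle \cdot \langle 1\,1\,1\rangle}{\lVert\langle 1\,0\,0\rangle\rVert\,\lVert\langle 1\,1\,1\rangle\rVert} = \frac{1}{\sqrt{3}}
\quad\Longrightarrow\quad
\theta = \arccos\!\left(\tfrac{1}{\sqrt{3}}\right) \approx 54.7^{\circ}
```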
Furthermore, using anisotropically etchable substrate10material in substrate10can reduce contamination during etching, such as particles, as compared to using isotropically etchable materials such as oxides that are etched with etchants such as hydrofluoric acid or hydrochloric acid. As shown in the embodiments illustrated inFIGS.1A-1D,4-6B,8-11, and19-23, centerline32comprises non-collinear line segments that are straight. Thus, each line segment comprises collinear points, but the line segments themselves are not collinear. For example, as shown inFIG.5, tether30comprises a tether device portion34that attaches to device edge21of device20at an orthogonal angle, a tether substrate portion36that attaches to an edge of substrate10(not shown) at an orthogonal angle, and a tether connection portion38that physically connects tether device portion34to tether substrate portion36, optionally at an orthogonal angle. Centerlines32of each of tether portions34,36,38are straight line segments. Thus,FIG.5illustrates an example of a right Z-shaped tether30. Centerlines32of tether device portion34and tether substrate portion36can be offset, parallel, and orthogonal to device edge21of device20and/or an edge of cavity12(e.g., as shown inFIG.1) and centerline32of tether connection portion38is orthogonal to the centerlines32of both tether device portion34and tether substrate portion36, as is also the case inFIG.1. To facilitate device20release from substrate10and suspend device20over cavity12(e.g., as shown inFIG.1), tether device portion34and tether substrate portion36are separated in a direction orthogonal to a tether direction, for example by a distance no less than a width W of tether device portion34or a width W of tether substrate portion36, for example as shown inFIGS.1and5. As shown inFIG.5, tether device portion34is separated from tether substrate portion36by a distance D that is no less than a width W of tether device portion34or tether substrate portion36. Thus, in some embodiments wherein a device centerline is orthogonal to centerlines32of tether device portion34and tether substrate portion36, tether device portion34and tether substrate portion36are separated in a direction of the device centerline, for example by a length of tether connection portion38that is no less than a width W of tether device portion34or tether substrate portion36. A separation distance D of tether connection portion38between tether device and substrate portions34,36that is greater than or equal to a width W of tether30can be, but is not necessarily, equivalent to a centerline32of a first tether portion (e.g., tether device portion34) separated from a centerline32of a second tether portion (e.g., tether substrate portion36) by a distance L3that is at least twice a width W of tether30in a direction parallel to an edge of device20or cavity12since centerline32bisects tether30, if the first and second tether portions have a constant width. Thus, in some embodiments, a tether device portion34centerline32and a tether substrate portion36centerline32are separated by a distance that is at least twice a width of tether30, for example in a direction orthogonal to at least one of tether device portion34centerline32and tether substrate portion36centerline32. 
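The equivalence noted above between the edge-to-edge separation D and the centerline separation L3 follows directly for tether portions of constant width W, since each centerline sits half a width in from the facing edge. This is a short check of the statement above under the constant-width assumption:

```latex
L_{3} = D + \tfrac{W}{2} + \tfrac{W}{2} = D + W
\quad\Longrightarrow\quad
D \ge W \;\Leftrightarrow\; L_{3} \ge 2W
```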
A width W of tether30can be a width W of any portion of tether30, for example a minimum, average, or maximum width W, and can be a dimension of tether30that is shorter than a length of tether30in a plane substantially parallel to a surface of substrate10. A length of tether30is a length of centerline32of tether30extending from device20to substrate10. By ensuring separation by a distance D between tether device portion34and tether substrate portion36, etching beneath tether30is facilitated and can proceed more quickly and release from substrate10is assured. According to some embodiments of the present disclosure, a suspended device structure99comprises a substrate10, a cavity12disposed in a surface of substrate10, and a device20suspended entirely over a bottom of cavity12, device20comprising a device material and one or more electrodes (e.g., top and bottom electrodes50,52) disposed on one or more sides of device20. Device20is suspended at least by a tether30that physically connects device20to substrate10. Tether30has a non-linear centerline32, and (i) the one or more electrodes are in contact with at least 10% of at least one side of device20, (ii) device20comprises a device material that is a piezoelectric material54, or (iii) both (i) and (ii). According to some embodiments and as illustrated inFIGS.1and4, a suspended device structure99comprises a substrate10comprising monocrystalline silicon having a (100) orientation, a cavity12disposed in a surface of substrate10, and a device20suspended entirely over a bottom of cavity12. Device20is suspended at least by a tether30that physically connects device20to substrate10. Tether30has a non-linear centerline32and a length L of device20is oriented with respect to a crystalline structure of substrate10so that an etchant applied to substrate10will etch completely beneath device20to form cavity12, for example in a fast etch direction. Length L of device20can be a long dimension of device20, for example the extent of device20in its largest dimension. Length L of device20(e.g., a largest dimension of device20) can be aligned with a normal direction of fast etch planes for substrate10(that is, with a normal to those planes) in order to promote complete release during etching. 
As shown in the embodiments illustrated inFIG.7, tether30with centerline32is curved, for example at least partially curved (e.g., is curved).FIG.7illustrates a serpentine tether. A serpentine tether30can be, for example, S-shaped. A serpentine tether30can have a shape of a portion of a sine wave. As shown in the embodiments illustrated inFIGS.8-11and as illustrated in the micro-graphs ofFIGS.18-20, in some embodiments of the present disclosure, tether30comprises an undivided tether portion42that divides into branches40. As shown inFIGS.8and18-20, branches40are attached to substrate10at an edge18of cavity12. As shown inFIGS.9-11, branches40are attached to device20. In some embodiments, as illustrated inFIG.11, branches40attach to both device20and substrate10and form an X-shaped tether30. In some embodiments, as illustrated inFIG.10, tether30can be double Y-shaped. Branches40can be longer or shorter than undivided tether portion42of tether30. Branches40can be wider or narrower than undivided tether portion42of tether30. For branched tethers30, each branch40comprises a centerline32that bisects branch40(assuming no asymmetry in the branch40). Tethers30comprising branches40longer than undivided tether portion42can facilitate releasing device20from substrate10by etching cavity12without damaging tethers30or device20. According to some embodiments of the present disclosure and as shown inFIGS.1-11and18-23, suspended device structure99can comprise multiple tethers30that attach device20to substrate10, for example two tethers30disposed on opposing sides of device20and directly opposite each other. Tethers30can have centerlines32that intersect a center or centerline of device20in a dimension so that tethers30symmetrically suspend device20over cavity12and device20extends an equal distance in opposite directions from tethers30. Thus, suspended device structure99can comprise a first tether30that physically connects device20to substrate10and a second tether30different from the first tether30that physically connects device20to substrate10. First and second tethers30can be disposed on opposite sides of device20and attach to opposite sides of cavity12and can be disposed directly opposite each other with respect to device20or cavity12, or both. First tether30can be a mirror reflection of second tether30, for example as shown inFIGS.1,6B,7, and21. In some embodiments, first tether30is a rotation of second tether30(e.g., has a rotated orientation with respect to second tether30, and vice versa), for example as shown inFIGS.5,6A, and23. Where first and second tethers30are symmetric, they can be both a mirror reflection and a rotation, for example as shown inFIGS.8-11and18-20. According to some embodiments of the present disclosure, a size and shape of second tether30is substantially identical to a size and shape of first tether30. Thus, according to embodiments, tether30can be at least partially X-shaped (e.g., as shown inFIG.11), V-shaped (e.g., as shown inFIG.19), Y-shaped (e.g., as shown inFIGS.8,9,18), S-shaped (e.g., as shown inFIG.7), double Y-shaped (e.g., as shown inFIG.10), acute Z-shaped (e.g., as shown inFIG.6A), obtuse Z-shaped (e.g., as shown inFIG.6B), oblique Z-shaped (e.g., either acute or obtuse), or right Z-shaped (e.g., as shown inFIGS.1,5,21-23). FIGS.1A,1C,1D,4-11, and18-22show devices20connected to substrate10by two tethers30disposed on opposite sides of device20. However, in some embodiments, only one tether30disposed on one side of device20connects device20to substrate10. 
Additionally, in some embodiments, two or more (e.g., three or more) tethers30disposed on one or more sides of device20connect device20to substrate10. Top electrode50can extend along a surface of first tether30(e.g., a top side of first tether30) and bottom electrode52can extend along a surface of second tether30(e.g., a bottom side of second tether30, for example as shown inFIG.2A). As shown in some embodiments and as illustrated inFIGS.12-15, devices20can be attached to substrate10in a variety of ways and with a corresponding variety of structures. As shown inFIG.12in a cross section of suspended device structure99taken across cross section line B ofFIG.1C, tethers30are attached to a side, wall, anchor, or edge18of cavity12. As shown inFIG.13, tethers30are attached to a surface of substrate10above cavity12. As shown inFIG.14, device20can be constructed in a common layer or plane with tethers30or, as shown inFIG.15, device20can be disposed in a layer beneath tethers30. Thus, device20that is suspended entirely over a bottom portion of cavity12can be disposed, for example, completely within cavity12, at least partially in cavity12, or completely above cavity12. The various structures can be made using photolithographic methods and materials known in the integrated circuit and MEMS industry, for example, and the selection of a specific structure can complement a desired construction process for a desired device20. According to some embodiments of the present disclosure and as shown inFIGS.16and17, a wafer structure98comprises a substrate10comprising a patterned sacrificial layer14defining one or more anchor portions18separating one or more etched sacrificial portions16. Etched sacrificial portions16can correspond to cavities12. One or more devices20are each suspended entirely over an etched sacrificial portion16of the one or more etched sacrificial portions16at least by a tether30that physically connects device20to an anchor portion18of the one or more anchor portions18. Tether30has a centerline32that comprises non-collinear points. A first tether portion can be separated from a second tether portion, e.g., in a direction parallel to a device edge21from which tether30extends. In some embodiments, the tether portions are separated by a distance D that is at least a width W of tether30. In some embodiments, centerline32can have a first centerline portion separated from a second centerline portion by a distance L3that is at least twice a width W of tether30. In some embodiments, device20comprises a device material and one or more electrodes disposed on one or more sides of device20, and (i) the one or more electrodes are in contact with at least 10% of at least one side of device20, (ii) the device material is a piezoelectric material54, or (iii) both (i) and (ii). Substrate10can be a source wafer and each device20can be disposed completely over a sacrificial portion16.FIG.16is a cross section taken along cross section A ofFIGS.1A and1Cof a source wafer (substrate10) with multiple etched sacrificial portions16forming inverted pyramids P (cavity12) and devices20suspended over the cavity12.FIG.17is a cross section taken along cross section B ofFIG.1Cof a source wafer (substrate10) with multiple sacrificial portions16(cavity12) and devices20suspended over the cavity12.FIGS.16and17illustrate devices20connected to source wafer10with right Z-shaped tethers. (For clarity, the inverted pyramids P are not illustrated inFIGS.1B,2A-2C, and12-15.) 
FIGS.18-22are micrographs of a device20physically connected to substrate10and suspended over cavity12(formed by etching a portion of substrate10).FIG.18illustrates branched tethers30corresponding toFIG.8. InFIG.18, branches40are narrower than undivided tether portion42and the tether branch junctions do not have any right angles.FIG.19illustrates branches40that connect to a common point on device20(in a V shape). InFIG.19, a vertex of the tether is disposed near an edge of the device. Although not shown, a suspended device structure99could comprise branches40that connect to a common point on substrate10(e.g., an edge18of cavity12).FIG.20comprises branches40that are not angled but have a line segment portion that is parallel to an edge18of cavity12and a line segment portion that is orthogonal to the edge18of cavity12. That is, inFIG.20, the tether branch junctions have right angles.FIG.21corresponds toFIG.1andFIG.22corresponds toFIG.5. Tethers30are mirrored inFIG.21. InFIG.21(and separately inFIG.22), tethers30have substantially identical sizes and shapes and can be congruent if rotated or reflected. As shown inFIGS.18-22, embodiments of the present disclosure have been constructed and demonstrated using an AlN piezoelectric material54with top and bottom electrodes50,52to form a rectangular device20with opposing tethers30disposed at a central point of device20in the long direction. Bottom electrode52extends under one tether30and top electrode50extends over the top of device20(as shown inFIG.2A). As shown inFIG.23, embodiments of the present disclosure can be constructed by providing a source wafer10with a sacrificial layer14in step100. The source wafer10serves as substrate10as described above and can be provided as monocrystalline silicon (100). In step110, source wafer10is processed to form device20and, in step120, tethers30(such as any electrodes) that are released when source wafer10is etched. Device20and tethers30can be constructed in a common step (so that steps110and120are the same step) or in separate steps110and120, with the same, similar, or different materials, using photolithographic methods and materials known in the integrated circuit and MEMS industries. Tethers30can be formed by, for example, depositing a layer of material and patterning it, or by pattern-wise depositing material. Sacrificial portions16are etched in step130, for example with TMAH or KOH, to form cavity12beneath device20and tethers30and release device20and tethers30from source wafer10, leaving device20physically connected with tethers30to an anchor portion18at the edge18of cavity12or on a portion of source wafer10at the edge18of cavity12. Thus, according to some embodiments, a method of making a suspended device structure99comprises forming a device20on a substrate10entirely over a sacrificial portion16of substrate10, forming a tether30having a non-linear centerline32, and etching sacrificial portion16of substrate10without substantially etching device20or tether30to form a cavity12disposed in a surface of substrate10and to suspend device20entirely over a bottom of cavity12, wherein (a) a first tether portion is separated from a second tether portion by a distance that is at least a width W of tether30, (b) device20comprises a device material and one or more electrodes disposed on one or more sides of the device material, and (i) the one or more electrodes are in contact with at least 10% of at least one side of the device material, (ii) the device material is a piezoelectric material54, or (iii) both (i) and (ii), or (c) both (a) and (b). 
Forming tether30can comprise any one or more of: forming a layer on substrate10and patterning the layer, or pattern-wise depositing material. Forming device20can comprise printing an unpackaged bare die component on an intermediate substrate disposed on substrate10. According to various embodiments of the present disclosure, non-linear (e.g., non-collinear or serpentine) tethers30can comprise a variety of shapes, as illustrated. In some embodiments, device20is a MEM device that employs acoustic resonance to process, respond to, or generate electrical signals. Acoustic resonance in device20is a resonant mechanical vibration that can be affected by the structure of device20, for example piezoelectric material54, dielectric layers, protective encapsulation layers, or top and bottom electrodes50,52. Tethers30can also affect the acoustic resonance of device20. Hence, depending on the desired nature of device20acoustic resonance (e.g., magnitude, frequency, wavelength, direction), different tether30structures can be preferred. For example, sharp device20or tether30edges can induce high-frequency acoustic reflections and angled or curved edges can tend to dampen or redirect such reflections, at least in device20. Tethers30can be disposed at locations that promote desired vibrations, for example at null spots where vibrations are out of phase or extending from one null spot to another on device20. Thus, in some embodiments, tether30can be disposed at or near a midpoint of device edge21from which it extends (and/or at or near a midpoint of cavity wall18) or can be offset toward one end of device edge21from which it extends. In certain embodiments, the source wafer (substrate10) can be any structure with a surface suitable for forming patterned sacrificial layers14, sacrificial portions16(cavity12), anchors18, and patterned device20. For example, source wafers10can comprise any anisotropically etchable material. Suitable semiconductor materials can be silicon or silicon with a (100) crystal structure (e.g., orientation). A surface of source wafer10can be substantially planar and suitable for photolithographic processing, for example as found in the integrated circuit or MEMS art. In some embodiments of the present disclosure, devices20are small integrated circuits, for example chiplets, having a thin substrate with at least one of (i) a thickness of only a few microns, for example less than or equal to 25 microns, less than or equal to 15 microns, or less than or equal to 10 microns, (ii) a width of 5-1000 microns (e.g., 5-10 microns, 10-50 microns, 50-100 microns, or 100-1000 microns) and (iii) a length of 5-1000 microns (e.g., 5-10 microns, 10-50 microns, 50-100 microns, or 100-1000 microns). Such chiplets can be made in a native source semiconductor wafer (e.g., a silicon wafer) having a process side and a back side used to handle and transport the wafer using lithographic processes. The devices20can be formed using lithographic processes in an active layer on or in the process side of the source wafer10. Methods of forming such structures are described, for example, in U.S. Pat. No. 8,889,485. According to some embodiments of the present disclosure, source wafers10can be provided with components20, sacrificial layer14(a release layer), and tethers30already formed, or they can be constructed as part of the process in accordance with certain embodiments of the present disclosure. 
In some embodiments, devices20are piezoelectric devices formed on or in a semiconductor wafer, for example silicon, which can have a crystalline structure. Piezoelectric materials54can be deposited on source wafer10, for example by sputtering, evaporation, or chemical vapor deposition. Suitable piezoelectric materials54can include aluminum nitride (AlN) or potassium sodium niobate (KNN) or other piezoelectric materials54, such as lead zirconate titanate (PZT). In certain embodiments, devices20can be constructed using foundry fabrication processes used in the art. Layers of materials can be used, including materials such as metals, oxides, nitrides and other materials used in the integrated-circuit art. Devices20can have different sizes, for example, less than 1000 square microns or less than 10,000 square microns, less than 100,000 square microns, or less than 1 square mm, or larger. Devices20can have, for example, at least one of a length, a width, and a thickness of no more than 500 microns (e.g., no more than 250 microns, no more than 100 microns, no more than 50 microns, no more than 25 microns, or no more than 10 microns). Devices20can have variable aspect ratios, for example at least 1:1, at least 2:1, at least 5:1, or at least 10:1. Devices20can be rectangular or can have other shapes. As is understood by those skilled in the art, the terms “over” and “under” are relative terms and can be interchanged in reference to different orientations of the layers, elements, and substrates included in the present disclosure. For example, a first layer on a second layer, in some implementations means a first layer directly on and in contact with a second layer. In other implementations, a first layer on a second layer includes a first layer and a second layer with another layer therebetween. Having described certain implementations of embodiments, it will now become apparent to one of skill in the art that other implementations incorporating the concepts of the disclosure may be used. Therefore, the disclosure should not be limited to certain implementations, but rather should be limited only by the spirit and scope of the following claims. Throughout the description, where apparatus and systems are described as having, including, or comprising specific components, or where processes and methods are described as having, including, or comprising specific steps, it is contemplated that, additionally, there are apparatus, and systems of the disclosed technology that consist essentially of, or consist of, the recited components, and that there are processes and methods according to the disclosed technology that consist essentially of, or consist of, the recited processing steps. It should be understood that the order of steps or order for performing certain action is immaterial so long as the disclosed technology remains operable. Moreover, two or more steps or actions in some circumstances can be conducted simultaneously. The disclosure has been described in detail with particular reference to certain embodiments thereof, but it will be understood that variations and modifications can be effected within the spirit and scope of the claimed invention. 
PARTS LIST
A cross section line
B cross section line
Cx convex corner
Cv concave corner
D distance
L length
L1, L2, L3 distance
P inverted pyramid
W width
10 substrate/source wafer
12 cavity
14 sacrificial layer
16 sacrificial portion
18 anchor portion/wall/cavity edge
20 device
21 device edge
30 tether
31 tether direction
32 centerline
33 tether edge
33A first tether edge
33B tether edge
34 tether device portion
36 tether substrate portion
38 tether connection portion
40 branch
42 undivided tether portion
50 top electrode
52 bottom electrode
54 piezoelectric material
56 tether material
98 wafer structure
99 suspended device structure
100 provide source wafer with sacrificial layer step
110 form device over sacrificial portions step
120 form tether step
130 etch sacrificial portions step | 45,435 |
11863155 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, preferred embodiments of the present invention will be described in detail with reference to examples and drawings. The preferred embodiments described below are all inclusive or specific examples. The numerical values, shapes, materials, elements, and arrangement and connection configurations of the elements shown in the following preferred embodiments are merely examples, and are not intended to limit the present invention. Among the elements in the following preferred embodiments, constituent elements that are not described in the independent claims will be described as arbitrary or optional elements. Also, the sizes or size ratios of the elements illustrated in the drawings are not necessarily strict. In addition, in the drawings, the same reference characters are used for the same or substantially the same configurations, and descriptions thereof may be omitted or simplified. A surface acoustic wave element is used as an element of a surface acoustic wave filter to filter and output an input high-frequency signal, for example. FIG.1includes a plan view and a cross-sectional view schematically illustrating a surface acoustic wave element10according to a preferred embodiment of the present invention.FIG.2is a cross-sectional view illustrating details of a substrate101and an IDT electrode121of the surface acoustic wave element10. Hereinafter, a direction in which a surface acoustic wave is propagated along a principal surface101aof the substrate101is defined as a T direction, a direction perpendicular or substantially perpendicular to the principal surface101ais defined as a V direction, and a direction perpendicular or substantially perpendicular to both the T direction and the V direction is defined as an H direction. As illustrated inFIG.1, the surface acoustic wave element10includes the substrate101, a first dielectric layer122provided on the substrate101, the IDT electrode121provided on the first dielectric layer122, a second dielectric layer103, and a protective layer104. The surface acoustic wave element10of the present preferred embodiment propagates a high-frequency signal on the substrate101using a Rayleigh wave generated on the substrate101when the high-frequency signal is input to the IDT electrode121. The substrate101preferably includes, for example, a 127.5° Y-cut X-propagation LiNbO3piezoelectric single crystal, and has a structure in which a high-frequency signal is propagated using a Rayleigh wave. In order to utilize the Rayleigh wave, a Y-cut angle of the LiNbO3piezoelectric single crystal may preferably be at least a predetermined Y-cut angle within a range equal to or larger than about 100° and equal to or smaller than about 160°, for example. More preferably, the Y-cut angle of the LiNbO3piezoelectric single crystal may be a predetermined Y-cut angle within a range equal to or larger than about 120° and equal to or smaller than about 130°, for example. Further, the substrate101may be a substrate having piezoelectricity in at least a portion thereof. For example, the substrate101may include a piezoelectric thin film (piezoelectric body) on a surface, and be a multilayer body including a film having a different acoustic velocity from that of the piezoelectric thin film, a supporting substrate, and the like, for example. The substrate101may have piezoelectricity over the entire substrate. In this case, the substrate101is a piezoelectric substrate including one layer of piezoelectric layer. 
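The Y-cut ranges quoted above lend themselves to a simple numeric check. The short sketch below is not part of the patent; the function name and return strings are illustrative only, and it merely restates the stated ranges for a LiNbO3 Y-cut angle.

```python
# Minimal sketch: classify a LiNbO3 Y-cut angle against the ranges given above
# (about 100-160 deg acceptable, about 120-130 deg more preferable) for the
# Rayleigh-wave substrate 101.  Names and structure are illustrative only.

def classify_y_cut(angle_deg: float) -> str:
    if 120.0 <= angle_deg <= 130.0:
        return "more preferable range (about 120-130 deg)"
    if 100.0 <= angle_deg <= 160.0:
        return "acceptable range (about 100-160 deg)"
    return "outside the stated Rayleigh-wave range"

print(classify_y_cut(127.5))  # the example 127.5 deg Y-cut used for substrate 101
```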
As illustrated inFIG.1, the IDT electrode121includes a pair of comb-shaped electrodes121aand121bthat face each other. Each of the comb-shaped electrodes121aand121bincludes a plurality of electrode fingers that are parallel or substantially parallel to each other and a busbar electrode that connects the plurality of electrode fingers. The plurality of electrode fingers extend along the H direction orthogonal or substantially orthogonal to the T direction. In the surface acoustic wave element10, a wave length of an acoustic wave to be excited is defined by design parameters, such as a repetition period λ of the plurality of electrode fingers, a duty ratio W/(W+S), and the like. As illustrated inFIG.2, the IDT electrode121is formed by laminating a metal film211, a metal film212, a metal film213, a metal film214, and a metal film215in this order from a side of the substrate101. The metal film211is preferably an adhesion film to improve close contact to the substrate101, and is made of, for example, a NiCr material having a thickness of about 10 nm. The metal film212is preferably a main electrode to confine energy of an acoustic wave, and is made of, for example, a Pt material having a thickness of about 40 nm. The metal film213is preferably a barrier film to reduce or prevent mutual diffusion between the metal film212and the metal film214, and is made of, for example, a Ti material having a thickness of about 10 nm. The metal film214is preferably a conductive film to improve conductivity of the electrode fingers, and is made of, for example, an AlCu alloy material having a thickness of about 159 nm and a low resistance value. The metal film215is preferably an adhesion film to improve adhesion to the second dielectric layer103, and is made of, for example, a Ti material having a thickness of about 10 nm. The metal film212made of, for example, Pt is the highest-density metal film among the plurality of metal films211to215of the multilayer body. Further, the metal films211,213,214and215define metal films other than the metal film212having the highest density. The first dielectric layer122adjusts an electromechanical coupling coefficient and is disposed between the substrate101and the IDT electrode121. The first dielectric layer122is preferably, for example, a silicon oxide layer having a thickness of about 1 nm, and is formed by sputtering. The second dielectric layer103improves the temperature coefficient of frequency and protects the IDT electrode121from an external environment. The second dielectric layer103is preferably, for example, a silicon oxide layer having a thickness of about 30 nm, and is provided on the first dielectric layer122so as to cover the IDT electrode121. The first dielectric layer122preferably has a thickness less than that of the second dielectric layer103. The thickness of the first dielectric layer122is approximately 1/30 of the thickness of the second dielectric layer103. A total thickness of the IDT electrodes121is preferably the same or substantially the same as the thickness of the second dielectric layer103. The protective layer104adjusts a frequency and protects the IDT electrode121from the external environment. The protective layer104is preferably, for example, a SiN layer, and is provided on the second dielectric layer103. Note that the configurations of the surface acoustic wave element10illustrated inFIGS.1and2are merely examples, and are not limited thereto. The number and length of the electrode fingers of the IDT electrode121are not limited thereto. 
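Since the duty ratio and the layer-thickness relation above are purely arithmetic, a minimal sketch can make them concrete. The electrode width W and space S below are hypothetical values chosen only to illustrate the W/(W+S) definition; the 1 nm and 30 nm thicknesses are the example values given for the first dielectric layer122and the second dielectric layer103.

```python
# Illustrative sketch of the design parameters above.  W and S are hypothetical;
# the dielectric thicknesses are the example values quoted in the text.

def duty_ratio(w_um: float, s_um: float) -> float:
    """IDT duty ratio, defined as W / (W + S)."""
    return w_um / (w_um + s_um)

print(duty_ratio(0.5, 0.5))           # hypothetical W = S gives a duty ratio of 0.5

layer122_nm, layer103_nm = 1.0, 30.0  # first and second dielectric layers
print(layer122_nm / layer103_nm)      # about 1/30, matching the stated ratio
```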
The IDT electrode121may be a single layer of a metal film instead of a multilayer structure including metal films. Moreover, materials of each of the metal films and each of the protective layers are not limited to those described above. The IDT electrode121may be made of, for example, a metal such as Ti, Al, Cu, Pt, Au, Ag, Pd or the like or an alloy, or may include a plurality of multilayer bodies made of the metal or alloy described above. The configurations of the second dielectric layer103, the protective layer104, and the first dielectric layer122are not limited to the above-described configurations, and may be made of, for example, a dielectric or an insulator such as SiO2, SiN, AlN, polyimide, or a multilayer body thereof. The surface acoustic wave element10having the above configuration is able to reduce or prevent the variation in the band width ratio when compared to a comparative example that is a surface acoustic wave element using a Love wave. To facilitate this understanding, the configuration of the surface acoustic wave element in the comparative example will be described. FIG.3is a cross-sectional view illustrating details of a substrate501and the IDT electrode121of a surface acoustic wave element510in the comparative example. The surface acoustic wave element510in the comparative example uses a Love wave generated on the substrate501when a high-frequency signal is input to the IDT electrode121, to propagate the high-frequency signal on the substrate501. The substrate501includes a −4° Y-cut X-propagation LiNbO3piezoelectric single crystal, and has a structure in which the high-frequency signal is propagated using the Love wave. Other configurations are the same or substantially the same as those of the present preferred embodiment, and descriptions thereof will be omitted. FIGS.4A and4Binclude diagrams illustrating vibration distributions (simulation results) of cross sections of the surface acoustic wave elements;FIG.4Ais the vibration distribution when a Rayleigh wave is generated (the present preferred embodiment), andFIG.4Bis the vibration distribution when a Love wave is generated (the comparative example). A signal having power of about 1 W is input to each of the surface acoustic wave elements. Each of the vibration distributions inFIGS.4A and4Bindicates a magnitude of an amplitude at each coordinate point. The amplitude inFIG.4Acorresponds to a Rayleigh wave that includes vibration components in the V direction and the T direction, and the amplitude inFIG.4Bcorresponds to a Love wave that includes a vibration component in the H direction. InFIG.4A, a region having a large amplitude is represented by hatching with oblique lines at a narrow pitch, and a region having a small amplitude is represented by hatching with oblique lines at a wide pitch. A generation position of a maximum amplitude a1is not located inside the IDT electrode121but is located inside the second dielectric layer103. A difference between the maximum amplitude a1and a minimum amplitude a2is approximately 5 nm. InFIG.4B, a region having a large amplitude is represented by hatching with oblique lines at a narrow pitch, and a region having a small amplitude is represented by hatching with oblique lines at a wide pitch. A generation position of a maximum amplitude a3is located in the electrode finger. A difference between the maximum amplitude a3and a minimum amplitude a4is approximately 12 nm.
In the surface acoustic wave element510in the comparative example, as illustrated inFIG.4B, the region having the large amplitude is located in a vicinity of the substrate501and the first dielectric layer122. Therefore, the variation in the band width ratio is likely to occur in the plurality of surface acoustic wave elements510after manufacturing due to the variation in the thickness of the first dielectric layers122occurring in the manufacturing process of the surface acoustic wave elements510. In contrast, in the surface acoustic wave element10according to the present preferred embodiment, as illustrated inFIG.4A, the region having the large amplitude is located above the IDT electrode121, and is spaced away from the substrate101and the first dielectric layer122, when compared to the surface acoustic wave element510. Therefore, even when the variation in the thickness of the first dielectric layers122occurs during manufacturing, it is unlikely to be affected by the variation in the thickness, and the variation in the band width ratio of the plurality of surface acoustic wave elements10after manufacturing is reduced or prevented. That is, the surface acoustic wave element10according to the present preferred embodiment includes the substrate101including the LiNbO3piezoelectric single crystal, the first dielectric layer122provided on the substrate101, and the IDT electrode121provided on the first dielectric layer122, and uses the Rayleigh wave generated on the substrate101when a high-frequency signal is input to the IDT electrode121, to propagate the high-frequency signal on the substrate101. In the surface acoustic wave element10that propagates the high-frequency signal using the Rayleigh wave as described above, the region having the large amplitude in the signal propagation is spaced away from the substrate101and the first dielectric layer122. Therefore, even when the variation in the thickness of the first dielectric layers122occurs during manufacturing, the variation in the band width ratio is able to be reduced or prevented in the plurality of surface acoustic wave elements10after manufacturing. Although the surface acoustic wave element according to the preferred embodiments of the present invention has been described above, the present invention is not limited to the above preferred embodiments. Other preferred embodiments that are achieved by combining any of the elements in the above preferred embodiments and variations obtained by making various modifications to the above preferred embodiments that may be conceived by those skilled in the art without departing from the gist of the present invention with respect to the above preferred embodiments, and various filters and devices incorporating a surface acoustic wave element according to preferred embodiments of the present invention are also included in the present invention. Preferred embodiments of the present invention are widely applicable to an acoustic wave filter, a multiplexer, a high-frequency front-end circuit, a communication device, and the like as a surface acoustic wave element having a small variation in the band width ratio. While preferred embodiments of the present invention have been described above, it is to be understood that variations and modifications will be apparent to those skilled in the art without departing from the scope and spirit of the present invention. The scope of the present invention, therefore, is to be determined solely by the following claims. | 14,059 |
11863156 | DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS Exemplary Embodiment 1 FIG.1Ais a schematic top view of acoustic wave device1in accordance with Exemplary Embodiment 1 of the present invention.FIG.1Bis a schematic cross-sectional view of acoustic wave device1at line1B-1B shown inFIG.1A. Acoustic wave device1includes interdigital transducer (IDT) electrode21including plural electrode fingers3.FIG.1Bshows a cross section of device1along direction3Q perpendicular to extension direction3P in which electrode fingers3extend. Acoustic wave device1includes piezoelectric substrate2, IDT electrode21including plural electrode fingers3disposed on upper surface2A of piezoelectric substrate2, dielectric film4disposed above upper surface2A of substrate2to cover plural electrode fingers3, and dielectric film6disposed on upper surfaces3A of plural electrode fingers3and between dielectric film4and each of electrode fingers3. Electrode fingers3are configured to excite a surface wave, such as a Rayleigh wave, as a main acoustic wave. Dielectric film4is made of oxide while dielectric film6is made of non-oxide. Dielectric film4contacts upper surface2A of piezoelectric substrate2at positions between electrode fingers3adjacent to each other. Conventional acoustic wave device101shown inFIG.22includes dielectric film104made of oxide. When dielectric film104is formed on electrode fingers103, this oxide may cause corrosion of electrode fingers103. Acoustic wave device1includes dielectric film6made of non-oxide and disposed on upper surfaces3A of electrode fingers3, thereby preventing electrode fingers3from being oxidized or corroding due to dielectric film4disposed above upper surfaces3A of electrode fingers3. In acoustic wave device1in accordance with Embodiment 1, dielectric film4contacts piezoelectric substrate2at positions between electrode fingers3adjacent to each other. This structure prevents variation in frequency of the main acoustic wave caused by dielectric film6, and prevents the characteristics of acoustic wave device1from degrading due to the frequency variation. Piezoelectric substrate2allows, for instance, a Rayleigh wave to propagate through piezoelectric substrate2as the main acoustic wave; however, piezoelectric substrate2may allow other acoustic waves, such as a Shear Horizontal (SH) wave or a bulk wave as the main acoustic wave, to propagate through piezoelectric substrate2. The effect on the frequency variation obtained by dielectric film4contacting substrate2at the positions between electrode fingers3can be produced remarkably in the case that piezoelectric substrate2allows the Rayleigh wave to propagate through piezoelectric substrate2as the main acoustic wave. Characteristics of acoustic wave device1in accordance with Embodiment 1 will be demonstrated below. A sample of acoustic wave device1is prepared. In this sample, piezoelectric substrate2is made of a lithium niobate (LiNbO3)-based substrate having cut angles and a propagation direction of the main acoustic wave expressed as an Euler angle (φ,θ,ψ)=(0°, 38°, 0°). In this context, angles φ and θ represent the cut angle of piezoelectric substrate2, and angle ψ represents the propagation direction of the main acoustic wave excited by IDT electrode21. Electrode fingers3of IDT electrode21are arranged at pitches P3each of which is a half of wavelength λ of the main acoustic wave. Pitches P3of this sample are 2 μm. IDT electrode21is made of molybdenum.
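Because the electrode pitch fixes the wavelength, the sample geometry can be restated numerically; the sketch below only replays the quoted numbers (the 0.055λ and 0.3λ fractions appear in the next paragraph) and is not taken from the patent itself.

```python
# Numeric restatement of the Embodiment 1 sample: pitch P3 is half the
# main-wave wavelength, so a 2 um pitch gives lambda = 4 um; the electrode
# finger and dielectric film 4 thicknesses are then fixed fractions of lambda.

pitch_p3_um = 2.0
wavelength_um = 2.0 * pitch_p3_um             # lambda = 4.0 um

finger_thickness_um = 0.055 * wavelength_um   # 0.22 um
film4_thickness_um = 0.30 * wavelength_um     # 1.2 um

print(wavelength_um, finger_thickness_um, film4_thickness_um)
```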
A distance from a lower surface of electrode finger3to upper surface3A, namely, a film thickness of electrode finger3, is 0.055λ. Dielectric film4is made of silicon dioxide (SiO2). A height from upper surface2A of piezoelectric substrate2to an upper surface of dielectric film4, namely, film thickness H4of dielectric film4, is 0.3λ. Dielectric film6is made of silicon nitride (SiN). FIG.2is a cross-sectional view of Comparative Example 1, namely, acoustic wave device501. InFIG.2, components identical to those of acoustic wave device1shown inFIGS.1A and1Bare denoted by the same reference numerals. Acoustic wave device501further includes dielectric film5made of SiN disposed between electrode fingers3adjacent to each other and between piezoelectric substrate2and dielectric film4. Film thickness H4of dielectric film4is a height from the upper surface of dielectric film4to a boundary between dielectric films4and5. Conditions other than this are the same as acoustic wave device1shown inFIGS.1A and1B. FIG.3shows profile P101of an amount of change in frequency of the main acoustic wave of acoustic wave device1in accordance with Embodiment 1 and profile P501of that of Comparative Example 1, namely, acoustic wave device501. InFIG.3, the vertical axis represents the amount of change in frequency of the main acoustic wave, and the horizontal axis represents a film thickness of dielectric film6. To be more specific, profiles P101and P501shown inFIG.3represent the amount of change in the frequency of the main acoustic wave from a reference acoustic wave device including none of dielectric films5and6. Acoustic wave device1does not include dielectric film5between electrode fingers3adjacent to each other and between piezoelectric substrate2and dielectric film4, and has dielectric film4contact upper surface2A of substrate2. As shown inFIG.3, in the case that piezoelectric substrate2is a substrate that propagates a Rayleigh wave as a main acoustic wave, acoustic wave device1in accordance with Embodiment 1 suppresses the frequency change of the main acoustic wave compared with acoustic wave device501of Comparative Example 1. In the case that piezoelectric substrate2is a substrate that propagates a Rayleigh wave as the main acoustic wave, piezoelectric substrate2is made of a lithium niobate (LiNbO3)-based substrate having cut angles and a propagation direction of the main acoustic wave expressed as an Euler angle (φ,θ,ψ) satisfying: −10°≤φ≤10°, 33°≤θ≤43°, and −10°≤ψ≤10°. Piezoelectric substrate2may be a quartz-based substrate having cut angles and a propagation direction of the main acoustic wave expressed as Euler angle (φ,θ,ψ) satisfying: −1°≤φ≤1°, 113°≤θ≤135°, and −5°≤ψ≤5°. Piezoelectric substrate2may be a lithium tantalite (LiTaO3)-based substrate having cut angles and a propagation direction of the main acoustic wave expressed as Euler angle (φ,θ,ψ) satisfying −7.5°≤φ≤2.5°, 111°≤θ≤121°, and −2.5°≤ψ≤7.5°. Piezoelectric substrate2may be made of a piezoelectric medium other than the above substrates, such as the quartz-based substrate, the lithium niobate (LiNbO3)-based substrate, or the lithium tantalite (LiTaO3)-based substrate, or a thin film as long as the medium satisfies an Euler angle other than the above Euler angles. In this context, angles φ and θ represent the cut-angles of piezoelectric substrate2, and angle ψ represents the propagation direction of the main acoustic wave.
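The Euler-angle windows listed above can be checked mechanically; the sketch below simply encodes those ranges (the function and dictionary names are ours, not the patent's) and confirms that the Embodiment 1 sample angle falls inside the LiNbO3 window.

```python
# Minimal range check for the Euler-angle windows quoted above.
RANGES = {
    "LiNbO3": ((-10.0, 10.0), (33.0, 43.0), (-10.0, 10.0)),
    "quartz": ((-1.0, 1.0), (113.0, 135.0), (-5.0, 5.0)),
    "LiTaO3": ((-7.5, 2.5), (111.0, 121.0), (-2.5, 7.5)),
}

def in_range(substrate: str, phi: float, theta: float, psi: float) -> bool:
    (p_lo, p_hi), (t_lo, t_hi), (s_lo, s_hi) = RANGES[substrate]
    return p_lo <= phi <= p_hi and t_lo <= theta <= t_hi and s_lo <= psi <= s_hi

print(in_range("LiNbO3", 0.0, 38.0, 0.0))   # the Embodiment 1 sample angle: True
```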
For instance, piezoelectric substrate2may be a lithium niobate substrate that propagates an SH wave or a Love wave and has a rotation Y-cut of −25° to +25°, or a lithium tantalite substrate that propagates an SH wave or a Love wave and has a rotation Y-cut of 25° to 50°. As shown inFIG.1A, IDT electrode21is disposed on upper surface2A of piezoelectric substrate2and includes a pair of comb-shaped electrodes each of which includes plural electrode fingers3interdigitating with each other when viewed from above acoustic wave device1, and constitutes a resonator. Each of electrode fingers3is made of a single metal, such as aluminum, copper, silver, gold, titanium, tungsten, molybdenum, platinum, or chrome, or an alloy mainly made of at least one of these metals, or a laminated structure of these metals. In the case that electrode finger3of IDT electrode21has the laminated structure, for instance, electrode finger3includes a Mo electrode layer mainly made of molybdenum and an Al electrode layer mainly made of aluminum disposed on the Mo electrode layer. The Mo electrode layer is thus located closer to upper surface2A of piezoelectric substrate2than the Al electrode layer is. The Mo electrode layer has a higher density than the Al electrode layer, hence confining the main acoustic wave at the surface of acoustic wave device1and reducing a resistance of electrode fingers3due to the Al electrode layer. The Mo electrode layer may contain an additive, such as silicon, and the Al electrode layer can contain an additive, such as magnesium, copper, or silicon. These additives increase the electric-power withstanding properties of electrode fingers3of IDT electrode21. A total film thickness of electrode finger3, expressed with a total density Db of electrode finger3and a density Da of aluminum, is preferably not smaller than 0.05λ×Db/Da and not larger than 0.15λ×Db/Da. This condition allows the main acoustic wave to concentrate at the surface of acoustic wave device1. Dielectric film4is an inorganic insulating film made of oxide, and may be made of any medium allowing a transverse wave to propagate through the medium at a speed lower than a speed of a Rayleigh wave excited by comb-shaped electrode3. For instance, dielectric film4is made of the medium mainly containing silicon dioxide (SiO2). SiO2has a temperature coefficient of frequency (TCF) having a sign opposite to that of piezoelectric substrate2. Dielectric film4made of SiO2improves the frequency-temperature characteristics of acoustic wave device1. In the case that dielectric film4is made of SiO2, the film thickness is determined such that an absolute value of an amount of change in frequency of the main acoustic wave excited by electrode fingers3of IDT electrode21with respect to a temperature is not larger than a predetermined value (40 ppm/° C.). According to Embodiment 1, the film thickness of dielectric film4is a distance from the upper surface of dielectric film4to a boundary between upper surface2A of substrate2and dielectric film4disposed between electrode fingers3of the IDT electrode adjacent to each other. The thickness of dielectric film4satisfying the above predetermined value and made of silicon dioxide is not smaller than 0.2λ, and not larger than 0.5λ.
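The two thickness windows just stated scale simply with λ and with the density ratio Db/Da; the sketch below evaluates them as written. The aluminum density used for the all-Al case is an approximate textbook value and is not taken from the patent.

```python
# Sketch of the stated windows, working in units of lambda.
def electrode_thickness_window(db: float, da: float) -> tuple:
    """Electrode total-thickness bound as stated: 0.05*lambda*Db/Da to 0.15*lambda*Db/Da."""
    return 0.05 * db / da, 0.15 * db / da

# For an all-aluminum finger Db equals Da, so the window is simply 0.05-0.15 lambda.
print(electrode_thickness_window(db=2.70, da=2.70))

# SiO2 dielectric film 4: 0.2-0.5 lambda; the 0.3 lambda sample value fits inside it.
sio2_window = (0.2, 0.5)
print(sio2_window[0] <= 0.3 <= sio2_window[1])   # True
```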
Dielectric film6, namely, an inorganic insulating film particularly made of nitride, such as silicon nitride, or carbide, such as silicon carbide, produces this effect remarkably. Dielectric film6is made of a medium allowing a transverse wave to propagate through the medium at a speed higher than a speed of the main acoustic wave excited by electrode fingers3of IDT electrode21, or is made of a medium allowing a transverse wave to propagate through the medium at a speed higher than a speed of a transverse wave propagating through dielectric film4. The medium can be mainly made of, for instance, diamond, silicon nitride, silicon nitride oxide, aluminum nitride, or aluminum oxide. FIG.4is a schematic cross-sectional view of another acoustic wave device1C in accordance with Embodiment 1. InFIG.4, components identical to those of acoustic wave device1shown inFIG.1Bare denoted by the same reference numerals. Acoustic wave device1C includes dielectric film56instead of dielectric film6of acoustic wave device1shown inFIG.1B. Dielectric film56is made of the same material as that of dielectric film6. Each of electrode fingers3has lower surface3B facing upper surface2A of piezoelectric substrate2, upper surface3A opposite to lower surface3B, and side surface3C connected to upper surface3A and lower surface3B. Dielectric film56includes portion156disposed on upper surface3A of each electrode finger3and portion256disposed on side surface3C of each electrode finger3. In other words, dielectric film56contacts side surface3C of electrode finger3. Portion256of dielectric film56extends downward from portion156to upper surface2A of piezoelectric substrate2so as not to expose side surface3C. This structure allows dielectric film56to protect electrode finger3more effectively, thereby preventing electrode finger3from being oxidized. A film thickness of portion156of dielectric film56disposed on upper surface3A of electrode finger3is preferably larger than that of portion256disposed on side surface3C of electrode finger3. This structure allows dielectric film56to protect electrode finger3effectively, and to suppress effectively the frequency variation of the main acoustic wave. FIG.5is a schematic cross-sectional view of still another acoustic wave device1D in accordance with Embodiment 1. InFIG.5, components identical to those of acoustic wave device1C shown inFIG.4are denoted by the same reference numerals. Acoustic wave device1D includes dielectric film66instead of dielectric film56of acoustic wave device1C shown inFIG.4. Dielectric film66is made of the same material as that of dielectric film56. Dielectric film66includes portion266instead of portion256of dielectric film56. Portion266is disposed on side surface3C such that a part of side surface3C is exposed from portion266. Portion266of dielectric film66extends downward from portion156to partially cover side surface3C, but does not reach upper surface2A of piezoelectric substrate2. Portion266is thus located away from upper surface2A. This structure allows dielectric film66to suppress more effectively the frequency variation of the main acoustic wave, and to prevent electrode finger3from being oxidized. FIG.6Ais a schematic cross-sectional view of further acoustic wave device1E in accordance with Embodiment 1. InFIG.6A, components identical to those of acoustic wave device1shown inFIG.1Bare denoted by the same reference numerals.
Acoustic wave device1E further includes dielectric film7including plural portions7P disposed between upper surface2A of the piezoelectric substrate and respective ones of lower surfaces3B of electrode fingers3. Dielectric film7is disposed on upper surface2A of piezoelectric substrate2. Each of electrode fingers3is disposed on respective one of upper surfaces of portions7P of dielectric film7. Dielectric film7is made of medium allowing a transverse wave to propagate through the medium at a speed higher than a speed of the main acoustic wave propagating through piezoelectric substrate2. This structure allows an electromechanical coupling coefficient of acoustic wave device1E to be adjusted more precisely while reducing a loss caused by dielectric film7. Acoustic wave device1E thus can be used in a filter that has a pass bandwidth appropriate for a communication system, such as portable phones. In this case, if piezoelectric substrate2is made of a quartz-based substrate, a lithium niobate (LiNbO3)-based substrate, a lithium tantalite (LiTaO3)-based substrate, or a potassium niobate (KNbO3)-based substrate, dielectric film7is preferably made of dielectric material, such as aluminum oxide (Al2O3), diamond, silicon nitride, silicon nitride oxide, aluminum nitride, or titanium nitride. A dielectric constant of dielectric film7is smaller than that of piezoelectric substrate2, hence allowing an electromechanical coupling coefficient of acoustic wave device1E to be adjusted precisely. Acoustic wave device1E thus can be used in a filter that has a pass bandwidth appropriate for a communication system, such as portable phones. In this case, if piezoelectric substrate2is made of a lithium niobate (LiNbO3)-based substrate, a lithium tantalite (LiTaO3)-based substrate, or a potassium niobate (KNbO3)-based substrate, dielectric film7is preferably made of dielectric material, such as aluminum oxide (Al2O3), diamond, silicon nitride, silicon nitride oxide, or aluminum nitride. A speed of a transverse wave propagating through dielectric film7(Al2O3) is higher than the speed of an acoustic wave (e.g. main acoustic wave) propagating through piezoelectric substrate2. This arrangement allows an energy of the acoustic wave propagating through substrate2to concentrate at dielectric film4(SiO2), and improves the temperature characteristics of a resonator constituted by acoustic wave device1E. FIG.6Bis a schematic cross-sectional view of further acoustic wave device1F in accordance with Embodiment 1. InFIG.6Bcomponents identical to those of acoustic wave device1E shown inFIG.6Aare denoted by the same reference numerals. As shown inFIG.1A, electrode fingers3extend along extension direction3P and in parallel to upper surface2A of piezoelectric substrate2. In acoustic wave device1F shown inFIG.6B, portion7P of dielectric film7has a width larger that of electrode finger3in direction3Q perpendicular to extension direction3P in which electrode fingers3extend. This structure allows an electromechanical coefficient of acoustic wave device1F to be adjusted precisely. FIG.7is a circuit block diagram of antenna duplexer10including acoustic wave device1(1C to1F) in accordance with Embodiment 1. Antenna duplexer10includes filter11having a first pass-band and filter12having a second pass-band higher than the first pass-band. InFIG.7, antenna duplexer10acts as, for instance, an antenna duplexer for band8of the Universal Mobile Telecommunications System (UMTS). 
Filter11functions as a transmitting filter and has the first pass-band (from 880 MHz to 915 MHz). Filter12functions as a receiving filter and has the second pass-band (from 925 MHz to 960 MHz) having a lowest frequency higher than a highest frequency in the first pass-band. Filter11is connected between input terminal14and antenna terminal15, and receives a transmission signal at input terminal14and outputs the signal from antenna terminal15. Filter11includes plural series resonators13, plural parallel resonators17having a resonant frequency lower than anti-resonant frequency of series resonators13. Series resonators13and parallel resonators17are connected in a ladder shape. Ground20of parallel resonators17is connected to ground terminal19. Filter11further includes inductor18connected between ground terminal19and ground terminal20. Filter12includes resonator91and filter22both connected between antenna terminal15and output terminals16as balanced terminals. Filter12receives a signal at antenna terminal15and outputs the received signal from output terminals16. Filter22is a vertical-mode coupling type filter. Antenna duplexer10further includes phase shifter23connected between filters11and12. Phase shifter23has low impedance in one of the first pass-band and the second pass-band, and high impedance in another of the first pass-band and the second pass-band, thereby improving isolation between filters11and12. Acoustic wave device1(1C to1F) in accordance with Embodiment 1 used in filter11or12prevents corrosion of electrode fingers of the IDT electrode of antenna duplexer10, and also prevent deterioration of characteristics of antenna duplexer10caused by a frequency variation of main acoustic waves in filters11and12. FIG.8is a block diagram of electronic apparatus50, such as a portable phone, including acoustic wave device1(1C to1F) in accordance with Embodiment 1. Electronic apparatus50includes acoustic wave device1(1C to1F), semiconductor device30connected to acoustic wave device1, and reproducing apparatus40connected to semiconductor device30. Reproducing apparatus40includes a display, such as a liquid crystal display, and an audio reproducer, such as a loudspeaker. Acoustic wave device1(1C to1F) in accordance with Embodiment 1 used in electronic apparatus50improves communication quality of apparatus50. Exemplary Embodiment 2 FIG.9is a schematic cross-sectional view of acoustic wave device201in accordance with Exemplary Embodiment 2 of the present invention. InFIG.9, components identical to those of acoustic wave device1in accordance with Embodiment 1 shown inFIGS.1A and1Bare denoted by the same reference numerals. Acoustic wave device201further includes dielectric film207disposed between piezoelectric substrate2and each of plural electrode fingers3. To be more specific, dielectric film207is disposed on upper surface2A of piezoelectric substrate2while plural electrode fingers3are disposed on upper surface207A of dielectric film207. Dielectric film207is made of medium allowing a transverse wave to propagate through the medium at a speed higher than that of a main acoustic wave propagating through piezoelectric substrate2. This structure allows an electromechanical coefficient of acoustic wave device201to be adjusted precisely while reducing a loss caused by dielectric film207, hence providing a filter having a pass bandwidth appropriate for a communication system, such as portable phones. 
Dielectric film4adjoins upper surface207A of dielectric film207at positions between electrode fingers3adjacent to each other. In acoustic wave device201, dielectric film6made of non-oxide prevents corrosion or oxidation of electrode fingers3, similarly to acoustic wave device1according to Embodiment 1. In acoustic wave device201, dielectric film4disposed between electrode fingers3adjacent to each other contacts upper surface207A of dielectric film207. This structure suppresses the frequency variation, caused by the presence of dielectric film6, in the main acoustic wave, thus preventing deterioration of characteristics caused by this frequency variation of acoustic wave device201. Piezoelectric substrate2allows a Rayleigh wave to propagate through piezoelectric substrate2as the main acoustic wave; however, piezoelectric substrate2can be a piezoelectric substrate allowing another acoustic wave, such as a Shear Horizontal (SH) wave or a bulk wave, to propagate as the main acoustic wave. The frequency variation of the main acoustic wave can be remarkably reduced by the structure in which dielectric film4contacts dielectric film207at positions between electrode fingers3particularly in the case that piezoelectric substrate2allows a Rayleigh wave to propagate through piezoelectric substrate2as the main acoustic wave. Dielectric film207is made of medium allowing a transverse wave propagates at a speed higher than that of the main acoustic wave propagating through piezoelectric substrate2. This structure allows an electromechanical coefficient of acoustic wave device201to be adjusted precisely while reducing a loss caused by dielectric film207, thus providing a filter having a pass bandwidth appropriate for a communication system, such as portable phones. In the case that piezoelectric substrate2is made of a quartz-based substrate, a lithium niobate (LiNbO3)-based substrate, a lithium tantalite (LiTaO3)-based substrate, or a potassium niobate (KNbO3)-based substrate, dielectric film207is preferably made of dielectric material, such as aluminum oxide (Al2O3), diamond, silicon nitride, silicon nitride oxide, aluminum nitride, or titanium nitride. Dielectric film207may be made of medium having a dielectric constant smaller than that of piezoelectric substrate2, thereby allowing an electromechanical coupling coefficient of acoustic wave device201to be adjusted precisely. Acoustic wave device201thus can be used in a filter that has a pass bandwidth appropriate for a communication system, such as portable phones. In the case that piezoelectric substrate2is made of a lithium niobate (LiNbO3)-based substrate, a lithium tantalite (LiTaO3)-based substrate, or a potassium niobate (KNbO3)-based substrate, dielectric film207is preferably made of dielectric material, such as aluminum oxide (Al2O3), diamond, silicon nitride, silicon nitride oxide, or aluminum nitride. FIG.10is a schematic cross-sectional view of Comparative Example 2, acoustic wave device60. InFIG.10, components identical to those of acoustic wave device201in accordance with Embodiment 2 shown inFIG.9are denoted by the same reference numerals. Acoustic wave device60further includes dielectric film5made of non-oxide disposed on upper surface207A of dielectric film207at positions between electrode fingers3adjacent to each other. Dielectric film4covers dielectric films5and6. 
Acoustic wave device201in accordance with Embodiment 2 prevents deterioration of characteristics caused by the frequency variation compared with Comparative Example 2, acoustic wave device60. FIG.11is a schematic cross-sectional view of another acoustic wave device70in accordance with Embodiment 2. InFIG.11, components identical to those of acoustic wave device201shown inFIG.9and acoustic wave device1C in accordance with Embodiment 1 shown inFIG.4are denoted by the same reference numerals. Acoustic wave device70includes dielectric film56instead of dielectric film6, similarly to acoustic wave device1C shown inFIG.4. Dielectric film56has portions156disposed on upper surfaces3A of electrode fingers3and portions256disposed on side surfaces3C of electrode fingers3. This structure increases the protective effect of dielectric film56on electrode fingers3, thereby preventing electrode fingers3from being oxidized. A film thickness of portion156of dielectric film56disposed on upper surface3A of electrode finger3is preferably larger than that of portion256disposed on side surface3C of electrode finger3. This structure allows dielectric film56to protect electrode fingers3more effectively against oxidation while effectively suppressing the frequency variation of the main acoustic wave. FIG.12is a schematic cross-sectional view of still another acoustic wave device80in accordance with Embodiment 2. InFIG.12, components identical to those of acoustic wave device70shown inFIG.11and acoustic wave device1D in accordance with Embodiment 1 shown inFIG.5are denoted by the same reference numerals. Acoustic wave device80includes dielectric film66instead of dielectric film56, similarly to acoustic wave device1D shown inFIG.5. Dielectric film66has portions156and portions266. Portions156are disposed on upper surfaces3A of electrode fingers3. Portions266are disposed on side surfaces3C of electrode fingers3. Portion266exposes a part of side surface3C of electrode finger3. Dielectric film66reduces more effectively the frequency variation of the main acoustic wave, and prevents electrode fingers3from being oxidized. Acoustic wave devices70,80, and201in accordance with Embodiment 2 may be employed in antenna duplexer10and electronic apparatus50shown inFIGS.7and8, similarly to acoustic wave devices1and1C to1F in accordance with Embodiment 1. Acoustic wave devices70,80, and201provide effects similar to those of acoustic wave devices1and1C to1F. Exemplary Embodiment 3 FIG.13Ais a schematic plan view of acoustic wave device301in accordance with Exemplary Embodiment 3 of the present invention.FIG.13Bis a cross-sectional view of acoustic wave device301at line13B-13B shown inFIG.13A. Acoustic wave device301includes piezoelectric substrate303, interdigital transducer (IDT) electrode321disposed on upper surface303A of piezoelectric substrate303, and dielectric film304disposed above upper surface303A of piezoelectric substrate303to cover IDT electrode321. IDT electrode321includes plural electrode fingers302extending in extension direction302P. IDT electrode321includes comb-shape electrodes facing each other and disposed on upper surface303A. Dielectric film304covers upper surface303A of piezoelectric substrate303and IDT electrode321. Upper surface304A of dielectric film304has projections305located above electrode fingers302. Electrode fingers302extend in extension direction302P parallel with upper surface303A of substrate303.
A main acoustic wave propagates in propagation direction302Q perpendicular to extension direction302P. FIG.14is an enlarged cross-sectional view of acoustic wave device301for schematically illustrating electrode finger302formed by stacking dielectric films304. Each of electrode fingers302includes electrode layer308disposed on upper surface303A of substrate303and electrode layer307disposed on upper surface308A of electrode layer308. Electrode layer308has lower surface308B facing upper surface303A of piezoelectric substrate303, and has side surface308C connected to upper surface308A and lower surface308B. Electrode layer307has lower surface307B facing upper surface308A of electrode layer308, upper surface307A opposite to lower surface307B, and side surface307C connected to upper surface307A and lower surface307B. Angle θ1formed by side surface307C and plane309parallel to upper surface303A of substrate303inside electrode layer307is an acute angle. In other words, side surface307C faces upward. Angle θ1of electrode layer307is larger than angle θ2that is the largest angle among angles formed by upper surface305A of projection305and plane310parallel to upper surface303A of substrate303. Angle θ1being an acute angle decreases projection305formed on dielectric film304. Small projection305decreases angle θ2, and consequently, allows angle θ1to be larger than angle θ2. Angle θ1of electrode layer307is preferably close to 90 degrees to increase a reflectivity. Angle θ1is thus preferably larger than angle θ2. The above structure does not excessively decrease angle θ1and thus, provides a certain reflectivity of the acoustic wave. This structure decreases projections305formed on upper surface304A of dielectric film304, and prevents the reflectivity from decreasing. As a result, the reflectivity of acoustic wave device301can be greater. In acoustic wave device301having a Rayleigh wave propagating through as a main acoustic wave, projections305scatter the acoustic wave, thereby reducing a reflectivity and increasing an insertion loss. Smaller projections305improve the reflectivity. Piezoelectric substrate303may be made of lithium niobate, lithium tantalite, lithium tetraborate, or quartz. Piezoelectric substrate303of acoustic wave device301having a Rayleigh wave propagating as the main acoustic wave is preferably made of a rotation Y-cut X-propagation substrate having an Euler angle (φ,θ,ψ) satisfying −5°≤φ≤5°, −5≤θ≤5°, and −5°≤ψ≤5°. A cross section of upper surface305A of projection305perpendicular to extension direction302P is downward convex. This shape can be formed by laminating dielectric films304by a biased-sputtering method. The downward convex shape of upper surface305A of projection305reduces a volume of projection305, and prevents the reflectivity of acoustic wave device301from decreasing. In the case that the cross section of upper surface305A of projection305is downward convex, the largest angle θ2among angles formed by upper surface305A and plane310parallel with upper surface303A of piezoelectric substrate303is located at top305B. In this case, angle θ1is larger than angle θ2in acoustic wave device301. Projection305has a small volume, and angle θ1is larger than angle θ2at top305B, hence providing electrode layer307with a large volume. This structure allows electrode fingers302to provide a certain reflectivity of the acoustic wave since angle θ1is not excessively small. 
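The geometric conditions summarized above reduce to two comparisons, sketched below; the angle values passed in are placeholders, not figures from the patent.

```python
# Minimal predicate for the Embodiment 3 geometry: the side-surface angle
# theta1 of electrode layer 307 should be acute yet larger than theta2, the
# largest slope angle of projection 305 on dielectric film 304.

def electrode_angles_ok(theta1_deg: float, theta2_deg: float) -> bool:
    return theta1_deg < 90.0 and theta1_deg > theta2_deg

print(electrode_angles_ok(theta1_deg=80.0, theta2_deg=30.0))   # placeholder angles: True
```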
This structure prevents positively the reflectivity from lowering since projection305on upper surface304A of dielectric film304is small. In electrode layer308disposed between electrode layer307and piezoelectric substrate303, side surface308C of electrode layer308and plane311parallel with upper surface303A of substrate303form angle θ3larger than angle θ1. Angle θ3preferably ranges from 85° to 95°, and more preferably, is 90°. In other words, side surface308C of electrode layer308may preferably be substantially perpendicular to upper surface303A of piezoelectric substrate303. A ladder filter or a Double Mode Saw (DMS) filter that includes acoustic wave device301requires a relatively large reflectivity of the acoustic wave for each electrode finger302in order to obtain preferable filter characteristics. A small reflectivity of each electrode finger302may increase an insertion loss in the pass-band of the filter, or may produce ripples that cause the characteristics to deteriorate. A desirable reflectivity per one electrode finger302is determined by the desirable pass-band width of the filter. The relation between reflectivity γ per one electrode finger302and stop-bandwidth SBW of the filter is expressed as formula 1 with a center frequency f0of the reflection characteristics:
2γ/π ≈ SBW/f0 (Formula 1)
If stop-bandwidth SBW is narrower than the pass-bandwidth of the filter, acoustic wave device301cannot confine the acoustic wave completely therein at frequencies in the pass-band, and produces a leakage of the acoustic wave, thereby having the pass-band characteristics deteriorate. Angle θ3of electrode layer308ranging from 85° to 95° provides large reflectivity γ. Since this reflectivity γ has a linear relation as shown in formula 1 with respect to stop-bandwidth SBW, large reflectivity γ reduces the insertion loss as well as prevents characteristics from deteriorating due to ripples. The range of angle θ3from 85° to 95° is a range that does not decrease reflectivity γ of the acoustic wave in acoustic wave device301. The shape of the side surfaces (307C,308C) of electrode finger302can be formed by controlling the manufacturing process. On the cross section of electrode finger302perpendicular to extension direction302P, width L1of lower surface307B of electrode layer307in propagation direction302Q of the main acoustic wave is smaller than width L2of upper surface308A of electrode layer308in propagation direction302Q. On a cross section of electrode finger302perpendicular to extension direction302P, this structure reduces a width of upper surface307A of electrode layer307in propagation direction302Q. As a result, this structure further decreases projection305formed on upper surface304A of dielectric film304, thereby preventing more positively the reflectivity γ from decreasing. Electrode fingers302are made of a single metal, an alloy, or a laminated structure. Electrode layer308is made of material having an average density larger than that of electrode layer307. Electrode layer307is made of metal having a small resistivity to reduce a resistance loss caused by electrode layer307. Electrode layer308is preferably made mainly of molybdenum (Mo) or tungsten (W). Electrode layer307is mainly made of aluminum (Al) or copper (Cu). A preferable height range of electrode layer307or308changes depending on the metal materials used for these layers.
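Formula 1 can be turned around to estimate how much per-finger reflectivity a given stop-bandwidth requires. The numbers below (a roughly 897 MHz center frequency and a stop-bandwidth of twice the 35 MHz Band8pass-band width) come from the Band8discussion later in this description; the resulting reflectivity is only an illustrative estimate, not a value stated in the patent.

```python
import math

# Rearranging Formula 1: gamma ≈ pi * SBW / (2 * f0).
f0_mhz = 897.0          # reflection-characteristic center frequency (Band 8 Tx example)
sbw_mhz = 2.0 * 35.0    # stop-bandwidth target: twice the Tx pass-band width

gamma = math.pi * sbw_mhz / (2.0 * f0_mhz)
print(f"per-finger reflectivity gamma ≈ {gamma:.3f}")   # about 0.12
```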
For instance, in the case of a filter having a center frequency of 897 MHz corresponding to a transmission filter for Band8specified by the Universal Mobile Telecommunications System (UMTS), for wavelength λ (=4.0 μm) of an acoustic wave determined by pitches P302of electrode fingers302, the preferable height of electrode layer308ranges from 170 nm to 230 nm, and more preferably, the height is about 200 nm if electrode layer308is mainly made of Mo. If electrode layer308is mainly made of W, the preferable height of electrode layer308ranges from 60 nm to 120 nm, and more preferably, the height is about 90 nm. If electrode layer307is mainly made of Al, the preferable height of electrode layer307ranges from 250 nm to 350 nm, and more preferably, the height is about 320 nm. Electrode layer307is thus preferably thicker than electrode layer308. Electrode layer307, out of electrode layers307and308, has a density smaller than that of electrode layer308and a volume larger than that of electrode layer308. This structure reduces a resistance of electrode fingers302, and reduces a resistance loss accordingly. Dielectric film304may be made of silicon nitride (SiN), aluminum nitride (AlN), tantalum pentoxide (Ta2O5), tellurium oxide (TeO2, TeO3), or silicon dioxide (SiO2). A dielectric material, such as SiO2, having a temperature coefficient of frequency (TCF) with a sign opposite to that of piezoelectric substrate303is preferably used for dielectric film304, thereby improving the TCF of acoustic wave device301. A method for manufacturing acoustic wave device301will be detailed below. In this description, electrode layer307is made of Al, and electrode layer308is made of Mo.FIGS.15A to15Fare cross-sectional views of acoustic wave device301for illustrating processes for manufacturing acoustic wave device301. First, piezoelectric substrate303made of mono-crystal lithium niobate of rotary Y-cut and X propagation is prepared, as shown inFIG.15A. Then, as shown inFIG.15B, electrode layer314made of Mo is formed on upper surface303A of piezoelectric substrate303, and electrode layer313made of Al is formed on an upper surface of electrode layer314. Electrode layers313and314may be formed by depositing these materials by, e.g. a sputtering method. Next, as shown inFIG.15C, resist315having a predetermined pattern is formed on an upper surface of electrode layer313by a photolithography technique. Resist315has a positive pattern located at a place on electrode layers313and314to be IDT electrode321. Then, as shown inFIGS.15D and15E, electrode fingers302(IDT electrode321) are formed by dry etching. FIG.15Dshows a process of forming electrode layer307mainly made of Al. The dry-etching of Al or Al alloy mainly made of Al employs chlorine as a reactive gas. Chlorine gas for dry-etching Al produces residue made of a mixture of, e.g. the resist and Al. This residue suppresses the reaction of the dry etching and provides an extremely low etching rate, so that the residue functions as masking. During the dry-etching of electrode layer307mainly made of Al, the residue is attached onto side surface307C of electrode layer307. Therefore, on the cross section perpendicular to extension direction302P of electrode fingers302, the width of electrode layer307in propagation direction302Q of the main acoustic wave increases as approaching piezoelectric substrate303. The amount of the residue depends on a bias electric power during the dry etching.
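Expressed as fractions of the 4.0 μm wavelength, the preferred layer heights above are easy to compare; the short sketch below only normalizes the quoted values.

```python
# Normalize the preferred Band 8 layer heights by lambda = 4000 nm.
wavelength_nm = 4000.0
preferred_heights_nm = {
    "electrode layer 308 (Mo)": 200.0,
    "electrode layer 308 (W)": 90.0,
    "electrode layer 307 (Al)": 320.0,
}
for name, h_nm in preferred_heights_nm.items():
    print(f"{name}: h/lambda = {h_nm / wavelength_nm:.4f}")
# Mo: 0.0500, W: 0.0225, Al: 0.0800
```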
The larger the bias electric power is, the larger the amount of the residue is, so that the masking effect can be increased. As a result, electrode layer307has a large width in propagation direction302Q of the main acoustic wave, thus being thick. The bias electric power is controlled to adjust the amount of etching electrode layer307, thereby controlling angle θ1of electrode layer307made of Al. FIG.15Eshows a process for forming electrode layer308mainly made of Mo. A reacting gas made of, for instance, a mixture of chlorine and oxygen allows side surface308C of electrode layer308to be perpendicular to upper surface308A. The amount of the residue produced during the dry-etching of Al is adjusted to allow the residue to function as a mask for allowing a width of upper surface308A in propagation direction302Q of the main acoustic wave to be larger than a width of lower surface307B in propagation direction302Q. As shown inFIG.14, the cross section of electrode layer307flares downward, and side surface308C of electrode layer308is perpendicular to upper surface308A and lower surface308B. Thus, electrode fingers302(IDT electrode321) has the laminated structure in which width L1of lower surface307B of electrode layer307in propagation direction302Q is smaller than width L2of upper surface308A of electrode layer308in the direction302Q. Next, as shown inFIG.15F, dielectric film304covering upper surface303A of piezoelectric substrate303and IDT electrode321is formed. Dielectric film304is formed by depositing fine particles of material, such as SiO2, by a sputtering method or an evaporation method. During this process, the fine particles of the material are blocked by electrode fingers302, hence being prevented from filling spaces between electrode fingers302. The fine particles insufficiently filling the spaces produce voids or sparse regions in dielectric film304. The voids or sparse regions in dielectric film304produce propagation loss of the acoustic wave, and have the electric characteristics of acoustic wave device301deteriorate. However, the width of lower surface307B of electrode layer307in propagation direction302Q smaller than a width of upper surface308A of electrode layer308in direction302Q allows the fine particles of the material for dielectric film304to sufficiently fill the spaces between electrode fingers302. Angle θ1of electrode layer307is smaller than angle θ3of electrode layer308, in other words, the cross section of electrode layer307flares downward. This shape allows the fine particles of the material of dielectric film304to more sufficiently fill the spaces between electrode fingers302. This prevents the electric characteristics of acoustic wave device301including IDT electrode321from deteriorating due to the voids or the sparse regions in dielectric film304.FIG.16is an enlarged cross-sectional view of acoustic wave device301. As shown inFIG.16, joint section320which is connected smoothly to side surface307C and upper surface308A may be formed at a part of side surface307C of electrode layer307that is bonded to upper surface308A of electrode layer308. In this case, angle θ1is defined as an angle formed by plane309and a portion of side surface307C other than joint section320. Angle θ1of electrode layer307which is an acute angle reduces the height of projections305formed above electrode fingers302during the lamination of dielectric film304. 
Since the width of lower surface307B of electrode layer307in propagation direction302Q of the main acoustic wave is smaller than that of upper surface308A of electrode layer308in the direction302Q, the height of projections305formed above electrode fingers302during the lamination of dielectric film304can be reduced. Conventional acoustic wave device601shown inFIGS.23A to23Chas projections610on an upper surface of dielectric film603, and projections610protrude by a height of electrode finger602above electrode fingers602. Projections610are formed during the forming of dielectric film603on piezoelectric substrate604to cover electrode fingers602of IDT electrode602A. Projections610of dielectric film603decrease a reflectivity of acoustic wave device601. To improve the function of acoustic wave device601, it is necessary to grind projections610to have a smaller size after the forming of dielectric film603in the manufacturing process of acoustic wave device601. In acoustic wave device301in accordance with Embodiment 3, projections305produced when dielectric film304is formed scatter the acoustic wave. A higher projection305may reduce the reflectivity, and increase the insertion loss due to the scattering of the acoustic wave in the acoustic wave device allowing the Rayleigh wave to propagate as the main acoustic wave. In ordinary manufacturing processes, since a section having the IDT electrode therein has a height different from a height of a section having no IDT electrode therein, the dielectric film formed on the IDT electrode produces projections on the upper surface of the dielectric film above the electrode fingers. It is necessary to grind these projections to prevent the characteristics of the acoustic wave device from deteriorating. Acoustic wave device301in accordance with Embodiment 3 decreases the heights of projections305formed above electrode fingers302, thereby preventing the acoustic wave from scattering. Even if the process of grinding projections305is omitted, the reflectivity and the insertion loss of acoustic wave device301can be improved. As a result, the cost of acoustic wave device301can be reduced. Meanwhile, to further improve the characteristics of device301, projections305may be ground to have smaller sizes. FIG.17is a schematic circuit diagram of antenna duplexer323employing acoustic wave device301. Antenna duplexer323includes filter326having a first pass-band and filter327having a second pass-band that is higher than the first pass-band. The lowest frequency of the second pass-band is higher than the highest frequency of the first pass-band. Acoustic wave device301is used as filter326. A stop-bandwidth of acoustic wave device301is important for antenna duplexer323having plural pass bands. Filter326includes plural series arm resonators and plural parallel arm resonators connected together in a ladder shape. A stop-bandwidth of the series arm resonators influences the passing characteristics of filter327. Wavelength λ of a main acoustic wave of the series arm resonators of filter326is about 4.0 μm. A narrow stop-bandwidth produces ripples in the pass-band of high frequencies, hence causing the filter characteristics to deteriorate. The stop-bandwidth is required to be wide. In filter326used for lower frequencies, the stop-bandwidth of the series arm resonators is preferably twice or greater than twice as wide as the pass-band width. FIG.18shows a relation between height H305of projection305from upper surface304A of dielectric film304and stop-bandwidth SBW.
Dielectric film304is made of SiO2and provided to acoustic wave device301functioning as a filter corresponding to a transmission filter for Band8and having a center frequency of 897 MHz. Wavelength λ of the main acoustic wave is 4.0 μm. InFIG.18, the vertical axis represents the stop-bandwidth SBW (MHz), and the horizontal axis represents height H305of projection305. The height of electrode finger302is fixed at 520 nm. Height H305of projection305is changed as a parameter to find stop-bandwidth SBW. Height H305of projection305in this case is a distance measured, on the cross section perpendicular to extension direction302P of electrode finger302, from the closest point to upper surface303A of piezoelectric substrate303to top305B of projection305. Height H302, which is a film thickness of electrode finger302, is measured from the lower surface of electrode finger302to the upper surface of electrode finger302in the direction perpendicular to upper surface303A of piezoelectric substrate303. Stop-bandwidth SBW is found as 68.2 MHz, 68.6 MHz, and 71.4 MHz corresponding to heights H305of 289 nm, 277 nm, and 206 nm, respectively. Therefore, the lower height H305of projection305is, the larger stop-bandwidth SBW of acoustic wave device301is. Normalized heights H305/λ obtained by dividing heights H305of projection305by wavelength λ are 0.072, 0.069, and 0.052. Stop-bandwidths SBW/fs normalized by respective resonance frequencies fs (893 MHz, 891 MHz, and 890 MHz) are 0.0764, 0.0770, and 0.0801. Normalized height H305/λ normalized by wavelength λ and normalized stop-bandwidth SBW/fs normalized by resonance frequency fs do not depend on wavelength λ and resonance frequency fs, respectively, so that the filter is not necessarily limited to a particular filter. FIG.19shows a relation between normalized height H305/λ and normalized stop-bandwidth SBW/fs. InFIG.19, the horizontal axis represents normalized height H305/λ of projection305, and the vertical axis represents normalized stop-bandwidth SBW/fs. Normalized stop-bandwidth SBW/fs is expressed as formula 2.
SBW/fs = -0.179 × (H305/λ) + 0.0894   (Formula 2)
In acoustic wave device301used as the filter working for lower frequencies of antenna duplexer323, the normalized height H305/λ of projection305preferably satisfies formula 3 with a relative bandwidth w obtained by dividing the bandwidth by the center frequency.
0 < H305/λ ≤ (1/0.179) × (0.0894 - 2.0w)   (Formula 3)
For instance, when antenna duplexer323is used as an antenna duplexer for Band8, the pass-band frequency of filter326ranges from 880 MHz to 915 MHz, and the center frequency is 897.5 MHz, thus providing the relative bandwidth w of 0.0390. The normalized stop-bandwidth is thus preferably not smaller than 0.0780. The normalized height H305/λ of projection305preferably ranges from 0 to 0.0637. Height H302of electrode finger302is 520 nm and is normalized similarly to height H305. Normalized height H302/λ normalized by wavelength λ is 0.13. Height H305of projection305is thus preferably larger than 0% and not larger than 49.0% of height H302of electrode finger302. If an acoustic wave device that does not satisfy formula 3 is used as a filter working on the lower frequencies of the antenna duplexer, the ripples of the series arm resonators overlap the pass-band of the higher frequencies, hence causing the passing characteristics of the antenna duplexer to deteriorate.
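As a quick plausibility check of the Band8 numbers above, formula 2 and formula 3 can be worked through directly. The following Python sketch is purely illustrative and is not part of the patent text; the function name and the multiplier argument are introduced here only for clarity. The multiplier of 2.0 reflects the duplexer requirement that the stop-bandwidth be at least twice the pass-band width, and a value of 1.5 would correspond to the ladder-filter condition (formula 4) discussed next.

```python
# Illustrative check of Formula 2 / Formula 3 for the Band8 example above.

def max_normalized_projection_height(relative_bandwidth, multiplier=2.0):
    """Upper bound on H305/lambda from Formula 3, derived from Formula 2:
    SBW/fs = -0.179 * (H305/lambda) + 0.0894 must be >= multiplier * w."""
    return (0.0894 - multiplier * relative_bandwidth) / 0.179

f_low, f_high = 880.0, 915.0               # Band8 transmission pass band, MHz
center = (f_low + f_high) / 2.0            # 897.5 MHz
w = (f_high - f_low) / center              # relative bandwidth, about 0.0390

required_sbw = 2.0 * w                     # required SBW/fs, about 0.0780
h_max = max_normalized_projection_height(w)    # about 0.0637

h302_norm = 0.13                           # normalized electrode-finger height H302/lambda
print(f"w = {w:.4f}, required SBW/fs = {required_sbw:.4f}")
print(f"max H305/lambda = {h_max:.4f} ({h_max / h302_norm:.1%} of H302)")
```

Running this reproduces the figures quoted above: w ≈ 0.0390, a required normalized stop-bandwidth of about 0.0780, an upper limit on H305/λ of about 0.0637, and a ratio of about 49% of the electrode-finger height.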
Acoustic wave device301in accordance with Embodiment 3 satisfying formula 3 prevents the passing characteristics on the higher frequencies of antenna duplexer323from deteriorating. Acoustic wave device301can be used in a ladder type filter in which the stop-bandwidth of the parallel resonator influences the passing characteristics of the filter. Wavelength λ of the main acoustic wave of the parallel resonator is about 4.2 μm. In the ladder filter, a stop-bandwidth is preferably not narrower than 1.5 times the pass bandwidth. The normalized height H305/λ of acoustic wave device301preferably satisfies formula 4 with respect to relative bandwidth w.
0 < H305/λ ≤ (1/0.179) × (0.0894 - 1.5w)   (Formula 4)
If an acoustic wave device that does not satisfy formula 4 is used in the ladder filter, the ripples of the parallel resonators overlap the pass-band of the filter, hence causing the passing characteristics of the filter to deteriorate. Acoustic wave device301in accordance with Embodiment 3 satisfying formula 4 prevents the passing characteristics of the ladder filter from degrading. The height of projection305of acoustic wave device301is determined based on a particular relative bandwidth w to widen the stop-bandwidth and to improve the characteristics of acoustic wave device301. FIG.20is a schematic cross-sectional view of another electrode finger362of acoustic wave device301in accordance with Embodiment 3. InFIG.20, components identical to those of electrode finger302shown inFIG.13Bare denoted by the same reference numerals. While electrode finger302shown inFIG.13Bincludes two layers, namely, electrode layers307and308, electrode finger362shown inFIG.20includes electrode layer307disposed on upper surface303A of piezoelectric substrate303, namely, electrode finger362has a single-layer structure. FIG.21is a schematic cross-sectional view of still another electrode finger372of acoustic wave device301in accordance with Embodiment 3. InFIG.21, components identical to those of electrode finger302shown inFIG.13Bare denoted by the same reference numerals. Electrode finger372shown inFIG.21includes electrode layer398A disposed on upper surface303A of piezoelectric substrate303, electrode layer398B disposed on an upper surface of electrode layer398A, electrode layer398C disposed on an upper surface of electrode layer398B, and electrode layer307disposed on an upper surface of electrode layer398C. Electrode finger372thus can include more than two layers. In the case that electrode finger372includes plural electrode layers, namely, electrode layers307and398A to398C, electrode layer307is preferably disposed on the top of these layers. This structure allows electrode finger372to be tapered in propagation direction302Q of the main acoustic wave on an upper surface of the cross section perpendicular to extension direction302P in which electrode finger372extends while electrode finger372maintains its volume. As a result, the size of projection305formed on dielectric film304can be further reduced. This structure thus maintains the reflectivity of the acoustic wave excited by electrode finger372, and yet the size of projection305formed on the upper surface of dielectric film304can be reduced, thereby more reliably preventing the reflectivity from decreasing. Acoustic wave device301in accordance with Embodiment 3 can be employed in antenna duplexer10or electronic apparatus50shown inFIG.7or8similarly to acoustic wave devices1and1C to1F in accordance with Embodiment 1.
In Embodiments 1 to 3, terms indicating directions, such as "upper surface", "lower surface", "above", and "below", indicate relative directions defined only by the relative positional relations of structural elements of the acoustic wave devices, such as the piezoelectric substrate, the IDT electrode, and the dielectric film, and do not indicate absolute directions, such as a vertical direction. | 52,108
11863157 | DESCRIPTION OF EXEMPLARY EMBODIMENTS Hereinafter, the vibrator device according to the present application example will be described in detail based on the embodiment illustrated in the attached drawings. First Embodiment FIG.1is a cross-sectional view illustrating a vibrator device according to a first embodiment.FIG.2is a block diagram illustrating a circuit included in the vibrator device ofFIG.1.FIG.3is a plan view illustrating a vibrator element included in the vibrator device ofFIG.1.FIG.4is a plan view illustrating a semiconductor circuit substrate included in the vibrator device ofFIG.1.FIG.5is a cross-sectional view illustrating a modification example of the vibrator device ofFIG.1. For convenience of the description, the three axes orthogonal to each other are illustrated as the X-axis, the Y-axis, and the Z-axis in each drawing. The tip side of the arrow on the Z-axis is also referred to as “upper”, and the base side is also referred to as “lower”. The plan view along the thickness direction of a semiconductor substrate5, that is, the Z-axis, is also simply referred to as “plan view”.FIG.1is a cross-sectional view taken along the line IV-IV inFIG.4. A vibrator device1illustrated inFIG.1is used as an oscillator, for example, and can be incorporated in various electronic devices, moving objects, and the like. However, the vibrator device1may be used as a device other than the oscillator, for example, various sensors such as an acceleration sensor and an angular velocity sensor. As illustrated inFIG.1, the vibrator device1includes a package2having an accommodation space S inside, and a vibrator element9accommodated in the accommodation space S. The package2includes a semiconductor circuit substrate4as a substrate and a lid3bonded to the semiconductor circuit substrate4. Semiconductor Circuit Substrate4 As illustrated inFIG.1, the semiconductor circuit substrate4includes a semiconductor substrate5and a circuit6provided on the semiconductor substrate5. The semiconductor substrate5is a silicon substrate. The semiconductor substrate5is a P-type silicon substrate having P-type conductivity, and the substrate potential becomes ground. However, the semiconductor substrate5may be a semiconductor substrate other than the silicon substrate, for example, various semiconductor substrates made of germanium, gallium arsenide, gallium arsenide phosphorus, gallium nitride, silicon carbide and the like. The semiconductor substrate5may be an N-type silicon substrate having N-type conductivity. The semiconductor substrate5has a plate shape having an upper surface51as a first surface and a lower surface52as a second surface positioned on the opposite side of the upper surface51. The semiconductor substrate5has an insulating film50formed on the front surface thereof. The insulating film50is made of silicon oxide (SiO2) and is formed, for example, by thermally oxidizing the front surface of the semiconductor substrate5. The circuit6electrically coupled to the vibrator element9is provided at the lower surface52of the semiconductor substrate5. By providing the circuit6on the semiconductor substrate5, the space of the semiconductor substrate5can be effectively utilized. The lower surface52of the semiconductor substrate5is provided with a laminated body60in which an insulating layer61, a wiring layer62, an insulating layer63, a passivation film64, and a terminal layer65are laminated. 
A plurality of active elements (not illustrated) formed at the lower surface52are electrically coupled to each other via the wiring included in the wiring layer62to form the circuit6. That is, the circuit6is integrally formed with the semiconductor substrate5. A plurality of terminals651are formed on the terminal layer65, and the plurality of terminals651include a power terminal coupled to a power source, a ground terminal coupled to the ground, a terminal to which a signal is output from the circuit6, and the like. In particular, in the following, the terminal from which the signal from the circuit6is output is also referred to as an output terminal651A. The insulating layers61and63are made of silicon oxide (SiO2), and the wiring layer62and the terminal layer65are made of a conductive material, such as aluminum (Al), copper (Cu), conductive polysilicon, or tungsten (W). However, the constituent materials of each of these portions are not particularly limited. In the illustrated configuration, the laminated body60includes one wiring layer62, but the present disclosure is not limited thereto, and a plurality of wiring layers62may be laminated via the insulating layer63. That is, the wiring layer62and the insulating layer63may be alternately laminated a plurality of times between the insulating layer61and the passivation film64. As illustrated inFIG.2, the circuit6includes an oscillation circuit66that oscillates the vibrator element9to generate a frequency of a reference signal such as a clock signal, a fractional N-PLL circuit67, and an output circuit68. The oscillation circuit66is a circuit for oscillating the vibrator element9by amplifying the signal output from the vibrator element9and feeding the signal back to the vibrator element9. As the circuit configured of the vibrator element9and the oscillation circuit66, for example, a Pierce oscillation circuit, an inverter type oscillation circuit, a Colpitts oscillation circuit, a Hartley oscillation circuit, or the like can be used. The fractional N-PLL circuit67(fractional division PLL circuit) is a PLL circuit in which a division ratio of a fraction can be set by switching the division ratio of an integer to make a division ratio of a fraction (decimal) on average. Accordingly, it is possible to output a signal of any frequency. The signal output from the fractional N-PLL circuit67is output from the output terminal651A via the output circuit68. In particular, according to the fractional N-PLL circuit67, the following effects can be exhibited. In a general oscillator, after accommodating the vibrator element in a package, a part of the electrode of the vibrator element is removed by laser irradiation to adjust the frequency of the vibrator element. However, in the vibrator device1, the lid3is made of silicon, and after accommodating the vibrator element9in the package2, it is difficult to irradiate the vibrator element9with a laser, and there is a case where it is difficult to adjust the frequency of the vibrator element9. Even in such a case, when the fractional N-PLL circuit67is provided, it is possible to output a signal of any frequency from the circuit. 
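The idea that switching an integer division ratio yields a fractional ratio on average can be pictured with a short sketch. The following Python fragment is a conceptual illustration only and is not taken from the patent; a first-order accumulator is assumed here, while the disclosure does not specify how the integer ratios are switched inside the fractional N-PLL circuit67.

```python
# Conceptual sketch (not from the patent): obtaining a fractional division
# ratio on average by switching between two integer ratios, as done in a
# fractional N-PLL. A first-order accumulator is assumed for illustration.

def fractional_divider_sequence(n_int, frac, cycles):
    """Return the integer division ratio (n_int or n_int + 1) used on each
    reference cycle so that the time average approaches n_int + frac."""
    acc = 0.0
    ratios = []
    for _ in range(cycles):
        acc += frac
        if acc >= 1.0:
            acc -= 1.0
            ratios.append(n_int + 1)   # divide by N + 1 on this cycle
        else:
            ratios.append(n_int)       # divide by N on this cycle
    return ratios

seq = fractional_divider_sequence(n_int=100, frac=0.25, cycles=16)
print(seq)                   # every fourth cycle uses 101, the rest use 100
print(sum(seq) / len(seq))   # average division ratio: 100.25
```

Because the output frequency of a PLL is the reference frequency multiplied by the average division ratio, such switching lets the circuit synthesize frequencies that are not integer multiples of the reference, which is how a signal of any frequency can be output even when the frequency of vibrator element9itself cannot be trimmed.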
The fractional N-PLL circuit67includes a phase comparator671to which a reference frequency signal output from the oscillation circuit66is input, a charge pump circuit675, a low-pass filter672, a voltage-controlled oscillator673to which a DC signal from the low-pass filter672is input, and a divider674to which a frequency signal output from the voltage-controlled oscillator673is input, and the frequency signal divided by the divider674is input to the phase comparator671. The phase comparator671detects a phase difference between the reference frequency signal and the divided frequency signal, and outputs the detection result as a pulse voltage to the charge pump circuit675. The charge pump circuit675converts the pulse voltage output by the phase comparator671into a current, and outputs the current to the low-pass filter672. The low-pass filter672removes a high-frequency component from the output signal from the charge pump circuit675, converts the signal into a voltage, and outputs the voltage as a DC signal for controlling the voltage-controlled oscillator673. The divider674can realize fractional division by switching the division ratio of an integer to make a division ratio of a fraction on a time average. The voltage-controlled oscillator673uses an LC oscillation circuit including an inductor673A and a capacitor673B. As illustrated inFIG.1, the semiconductor substrate5is formed with a pair of through holes53and54that penetrate the semiconductor substrate5in the thickness direction. The through holes53and54are filled with a conductive material, and accordingly, through electrodes530and540are formed. As illustrated inFIGS.1,3and4, the upper surface51of the semiconductor substrate5is provided with a pair of wirings73and74electrically coupled to the vibrator element9. The wiring73is electrically coupled to the circuit6via the through electrode530, and the wiring74is electrically coupled to the circuit6via the through electrode540. As illustrated inFIGS.1,3and4, at the upper surface51of the semiconductor substrate5, a bonding layer75as a metal layer used for bonding to the lid3is provided. The bonding layer75includes: a frame-shaped bonding region Q1which is provided along the outer edge of the semiconductor substrate5, and is used for bonding to the lid3; and a non-bonding region Q2which is positioned on the inside of the bonding region Q1, and faces the accommodation space S. The bonding layer75is insulated from the wirings73and74, and the non-bonding region Q2is provided as wide as possible at the upper surface51as long as the non-bonding region Q2is not in contact with the wirings73and74. As illustrated inFIG.1, at the part that overlaps the bonding region Q1, the insulating film50is removed from the upper surface51, and the bonding layer75is electrically coupled to the silicon substrate part of the semiconductor substrate5. Accordingly, the bonding layer75is coupled to the ground similarly to the semiconductor substrate5. Therefore, the bonding layer75, particularly the non-bonding region Q2, can function as a shield layer that suppresses magnetic connection between the respective portions in the vibrator device1. This effect will be described later. The bonding layer75is collectively formed by the same process as that of the wirings73and74. 
Specifically, for example, a metal film is formed at the upper surface51of the semiconductor substrate5by sputtering, and the metal film is patterned by using a photolithography technique and an etching technique, and thereby the wirings73and74and the bonding layer75can be formed collectively. Accordingly, it becomes easy to form the semiconductor circuit substrate4. The configurations of the wirings73and74and the bonding layer75are not particularly limited, but can be, for example, a laminated body including an underlayer made of titanium (Ti), tungsten (W), a titanium/tungsten alloy, or the like, and a coating layer made of gold (Au). Accordingly, the wirings73and74and the bonding layer75having excellent adhesion to the semiconductor substrate5and electrical conductivity are obtained. The configuration of the bonding layer75is not particularly limited, and the non-bonding region Q2may be omitted. The bonding region Q1and the non-bonding region Q2may be formed separately. The semiconductor circuit substrate4having such a configuration is provided with the wirings73and74at the upper surface51side of the semiconductor substrate5and the output terminal651A at the lower surface52side. Therefore, the wirings73and74and the output terminal651A can be separated from each other as much as possible on the semiconductor circuit substrate4, and the electromagnetic connection therebetween can be effectively suppressed. Therefore, it is not easily affected by integer boundary spurious, and phase noise or phase jitter can be effectively suppressed in the circuit6. In particular, in the present embodiment, the semiconductor substrate5is configured of a P-type silicon substrate and is electrically coupled to the ground terminal. Accordingly, the semiconductor substrate5is coupled to the ground when the vibrator device1is driven. Therefore, the semiconductor substrate5positioned between the wirings73and74and the output terminal651A functions as a shield layer, and the electromagnetic connection between the wirings73and74and the output terminal651A can be more effectively suppressed. In a plan view, the wirings73and74and the output terminal651A are provided so as not to overlap each other. Accordingly, the wirings73and74and the output terminal651A can be separated from each other as much as possible on the semiconductor circuit substrate4, and the electromagnetic connection therebetween can be more effectively suppressed. Therefore, it is not easily affected by integer boundary spurious, and phase noise or phase jitter can be more effectively suppressed in the circuit6. In particular, in the present embodiment, since the circuit6is provided on the semiconductor substrate5, the electrical path between the vibrator element9and the circuit6can be shortened more than that when an integrated circuit (IC) separate from the substrate on which the vibrator element9is disposed is used as the circuit, and it is also possible to shorten the wiring length of the output terminal651A. Therefore, the electromagnetic connection between the wirings73and74and the output terminal651A can be suppressed more effectively. In a plan view, the inductor673A included in the fractional N-PLL circuit67and the wirings73and74are configured so as not to overlap each other. Accordingly, the wirings73and74and the inductor673A can be separated from each other as much as possible on the semiconductor circuit substrate4, and the electromagnetic connection therebetween can be effectively suppressed. 
Therefore, it is not easily affected by integer boundary spurious, and phase noise or phase jitter can be effectively suppressed in the circuit6. However, the present disclosure is not limited thereto, and the inductor673A and the wirings73and74may overlap each other in a plan view. In a plan view, the inductor673A included in the fractional N-PLL circuit67and the output terminal651A are configured so as not to overlap each other. Accordingly, the output terminal651A and the inductor673A can be separated from each other as much as possible on the semiconductor circuit substrate4, and the electromagnetic connection therebetween can be effectively suppressed. Therefore, it is not easily affected by integer boundary spurious, and phase noise or phase jitter can be effectively suppressed in the circuit6. However, the present disclosure is not limited thereto, and the inductor673A and the output terminal651A may overlap each other in a plan view. As described above, at the upper surface51of the semiconductor substrate5, the bonding layer75coupled to the ground is provided. The bonding layer75is disposed close to the wirings73and74, and further, is positioned between the output terminal651A and the vibrator element9. Therefore, the bonding layer75can effectively suppress the magnetic connection between the wirings73and74and the output terminal651A and the magnetic connection between the vibrator element9and the output terminal651A. In the present embodiment, the bonding layer75overlaps the inductor673A in a plan view along the Z-axis. Therefore, the bonding layer75and the inductor673A are likely to be disposed close to each other, and there is a concern that an eddy current is generated to reduce the inductance value or increase the loss (Q value). Therefore, in the present embodiment, the circuit6is provided at the lower surface52side of the semiconductor substrate5to maintain a sufficiently large separation distance between the bonding layer75and the inductor673A. Accordingly, the influence of the eddy current can be suppressed to be smaller than that when the circuit6is provided on the upper surface51. The inductor673A is built in the wiring layer62included in the circuit6. In the present embodiment, the wiring layer62is one layer, but as described above, when there are a plurality of wiring layers62, it is preferable that the inductor673A is formed in at least the lowest layer, that is, the wiring layer62other than the wiring layer62on the semiconductor substrate5side, preferably, in the wiring layer62positioned on a side closest to the surface layer, that is, distal to the semiconductor substrate5. Accordingly, a larger separation distance between the inductor673A and the bonding layer75can be secured, and the above-described effect becomes more remarkable. Since the semiconductor substrate5is also coupled to the ground, there is a case where an eddy current is generated similarly to the bonding layer75to reduce the inductance value or increase the loss. Therefore, by forming the semiconductor substrate5with a high-resistance silicon substrate, the influence of the eddy current can be suppressed to be small. Vibrator Element9 As illustrated inFIG.3, the vibrator element9has a vibration substrate91and an electrode disposed at the front surface of the vibration substrate91. The vibration substrate91has a thickness sliding vibration mode, and is formed of an AT cut quartz crystal substrate in the present embodiment. 
Since the AT cut quartz crystal substrate has a third-order frequency-temperature characteristic, the AT cut quartz crystal substrate becomes the vibrator element9having an excellent temperature characteristic. The electrode includes an excitation electrode921disposed at the upper surface of the vibration substrate91, an excitation electrode922disposed at the lower surface facing the excitation electrode921, one pair of terminals923and924disposed at the lower surface of the vibration substrate91, a wiring925that electrically couples the terminal923and the excitation electrode922, and a wiring926that electrically couples the terminal924and the excitation electrode921. The configuration of the vibrator element9is not limited to the above-described configuration. For example, the vibrator element9may have a mesa type in which the vibration region sandwiched between the excitation electrodes921and922protrudes from the surroundings, or conversely, the vibrator element9may have an inverted mesa type in which the vibration region is recessed from the surroundings. Bevel processing for grinding the surroundings of the vibration substrate91and convex processing for making the upper surface and the lower surface convex curved surfaces may be performed. The vibrator element9is not limited to one that vibrates in the thickness sliding vibration mode, and may be, for example, a vibrator element in which a plurality of vibrating arms perform flexural vibration in the in-plane direction. That is, the vibration substrate91is not limited to the one formed of the AT cut quartz crystal substrate, and may be formed of a quartz crystal substrate other than the AT cut quartz crystal substrate, for example, an X cut quartz crystal substrate, a Y cut quartz crystal substrate, a Z cut quartz crystal substrate, a BT cut quartz crystal substrate, an SC cut quartz crystal substrate, an ST cut quartz crystal substrate, or the like. In the present embodiment, the vibration substrate91is made of quartz crystal, but the present disclosure is not limited thereto, and may be formed of, for example, a piezoelectric single crystal such as lithium niobate, lithium tantalate, lithium tetraborate, langasite crystal, potassium niobate, or gallium phosphate, and may be formed of a piezoelectric single crystal other than these. Furthermore, the vibrator element9is not limited to the piezoelectric drive type vibrator element, and may be an electrostatic drive type vibrator element that uses electrostatic force. As illustrated inFIG.3, the vibrator element9is fixed to the pair of wirings73and74by conductive bonding members B1and B2. The bonding member B1electrically couples the wiring73and the terminal923, and the bonding member B2electrically couples the wiring74and the terminal924. Accordingly, the vibrator element9and the circuit6are electrically coupled. The bonding members B1and B2are not particularly limited as long as the bonding members have both conductivity and bondability, and for example, various metal bumps such as gold bumps, silver bumps, copper bumps, or solder bumps, and a conductive adhesive or the like in which a conductive filler such as a silver filler is dispersed in various adhesives such as polyimide-based, epoxy-based, silicone-based, or acrylic-based adhesive can be used. 
When the former metal bumps are used as the bonding members B1and B2, the generation of gas from the bonding members B1and B2can be suppressed, and the environmental change of the accommodation space S, particularly the increase in pressure, can be effectively suppressed. Meanwhile, when the latter conductive adhesive is used as the bonding members B1and B2, the bonding members B1and B2are softer than the metal bumps, and stress is less likely to be transmitted from the package2to the vibrator element9. Lid3 Similar to the semiconductor substrate5, the lid3is a silicon substrate. Accordingly, the linear expansion coefficients of the semiconductor substrate5and the lid3become equal, the generation of thermal stress due to thermal expansion is suppressed, and the vibrator device1having an excellent vibration characteristic is obtained. Since the vibrator device1can be formed by a semiconductor process, the vibrator device1can be manufactured with high accuracy and can be miniaturized. However, the lid3is not particularly limited, and may be a semiconductor substrate other than silicon, for example, a semiconductor substrate formed of germanium, gallium arsenide, gallium arsenide phosphorus, gallium nitride, silicon carbide and the like. As illustrated inFIG.1, the lid3has a bottomed recess portion31which is open to a lower surface30thereof and accommodates the vibrator element9inside. The lid3is bonded to the upper surface51of the semiconductor substrate5at the lower surface30. Accordingly, the accommodation space S for accommodating the vibrator element9is formed between the lid3and the semiconductor substrate5. The accommodation space S is airtight and is in a reduced pressure state, preferably a state closer to vacuum. Accordingly, the oscillation characteristics of the vibrator element9are improved. However, the atmosphere of the accommodation space S is not particularly limited, and may be, for example, an atmosphere in which an inert gas such as nitrogen or Ar is sealed, and may be in an atmospheric pressure state or a pressurized state instead of a reduced pressure state. As illustrated inFIG.1, a bonding layer33is provided at the lower surface30of the lid3. The lid3and the semiconductor substrate5are bonded to each other by bonding the bonding layer33and the bonding layer75provided at the upper surface51of the semiconductor substrate5. In the present embodiment, the semiconductor substrate5and the lid3are bonded to each other by using diffusion bonding that uses diffusion between metals among bonding methods. However, the method of bonding the semiconductor substrate5and the lid3is not particularly limited. The configuration of the bonding layer33is not particularly limited, but can be the same as that of the bonding layer75. The lid3is electrically coupled to the semiconductor substrate5via the bonding layers75and33. That is, the lid3is coupled to the same potential, that is, ground, as that of the semiconductor substrate5when the vibrator device1is driven. Accordingly, the lid3functions as a shield layer that shields disturbance, and noise can be suppressed from being mixed into the vibrator element9. However, the present disclosure is not limited thereto, the lid3may not have to be electrically coupled to the semiconductor substrate5. The configuration of the bonding layer33provided on the lid3is not particularly limited, and for example, as illustrated inFIG.5, the configuration may be provided at the inner surface of the recess portion31. 
Accordingly, the bonding layer33functions as a shield layer together with the lid3. The vibrator device1has been described above. As described above, the vibrator device1includes: the vibrator element9; the semiconductor substrate5having the upper surface51as a first surface on which the vibrator element9is disposed and the lower surface52as a second surface positioned on an opposite side of the upper surface51; the fractional N-PLL circuit67disposed at the lower surface52; the wirings73and74that are disposed at the upper surface51and electrically couple the vibrator element9and the fractional N-PLL circuit67; and the output terminal651A that is disposed at the lower surface52side of the semiconductor substrate5, is electrically coupled to the fractional N-PLL circuit67, and outputs a signal from the fractional N-PLL circuit67. Then, the output terminal651A does not overlap the wirings73and74in a plan view along the thickness direction of the semiconductor substrate5, that is, along the Z-axis. Accordingly, the wirings73and74and the output terminal651A can be separated from each other as much as possible on the semiconductor circuit substrate4. Therefore, as compared with a case where the wirings73and74and the output terminal651A overlap each other in a plan view, these electromagnetic connections can be suppressed more effectively. Therefore, it is not easily affected by integer boundary spurious, and phase noise or phase jitter can be more effectively suppressed in the circuit6. As described above, the vibrator device1further includes: the terminal651as a ground terminal coupled to a ground, and the semiconductor substrate5is electrically coupled to the ground terminal, and has the bonding layer75as a metal layer that is disposed at the upper surface51and electrically coupled to the semiconductor substrate5. Accordingly, the bonding layer75can effectively suppress the magnetic connection between the wirings73and74and the output terminal651A and the magnetic connection between the vibrator element9and the output terminal651A. As described above, the vibrator device1has the lid3bonded to the upper surface51of the semiconductor substrate5so as to cover the vibrator element9. The lid3is electrically coupled to the semiconductor substrate5. Accordingly, the lid3functions as a shield layer that shields disturbance, and noise can be suppressed from being mixed into the vibrator element9. As described above, the fractional N-PLL circuit67has the voltage-controlled oscillator673as an oscillator including the inductor673A. In a plan view, the output terminal651A does not overlap the inductor673A. Accordingly, the output terminal651A and the inductor673A can be separated from each other as much as possible on the semiconductor circuit substrate4, and the electromagnetic connection therebetween can be effectively suppressed. Therefore, it is not easily affected by integer boundary spurious, and phase noise or phase jitter can be effectively suppressed in the circuit6. As described above, in a plan view, the wirings73and74do not overlap the inductor673A. Accordingly, the wirings73and74and the inductor673A can be separated from each other as much as possible on the semiconductor circuit substrate4, and the electromagnetic connection therebetween can be effectively suppressed. Therefore, it is not easily affected by integer boundary spurious, and phase noise or phase jitter can be effectively suppressed in the circuit6. 
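The "does not overlap in a plan view along the Z-axis" condition used throughout this embodiment amounts to a projection test: the footprints of the wirings, the output terminal, and the inductor are projected onto the XY-plane and checked for intersection. The sketch below is purely illustrative; the rectangular footprints and the coordinate values are assumptions and are not taken from the disclosure.

```python
# Illustrative only: checking whether two rectangular footprints overlap when
# projected onto the XY-plane (a "plan view along the Z-axis"). Coordinates are
# assumed example values in micrometers, not taken from the disclosure.

def overlaps_in_plan_view(a, b):
    """a, b = (x_min, y_min, x_max, y_max). True if the projections intersect."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

wiring_73 = (0, 0, 300, 60)                   # assumed footprint of a wiring on the upper surface
output_terminal_651a = (500, 400, 580, 480)   # assumed footprint of the output terminal

print(overlaps_in_plan_view(wiring_73, output_terminal_651a))   # False: laid out apart
```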
Although the vibrator device of the present disclosure has been described above based on the illustrated embodiment, the present disclosure is not limited thereto, and the configuration of each portion can be any configuration having the same function. Any other components may be added to the present disclosure. Moreover, each embodiment may be combined with each other suitably. In the above-described embodiment, the vibrator device1is applied to the oscillator, but the present disclosure is not limited thereto. For example, by using the vibrator element9as a physical quantity sensor element capable of detecting an angular velocity or acceleration, the vibrator device1can be applied to various physical quantity sensors such as an angular velocity sensor or an acceleration sensor. In the above-described embodiment, the lid3has the recess portion31, but the lid3is not limited thereto. For example, the semiconductor substrate5of the semiconductor circuit substrate4may have a bottomed recess portion that opens at the upper surface51, and the lid3may have a flat plate shape. In this case, the vibrator element9may be fixed to the bottom surface of the recess portion of the semiconductor substrate5. | 28,441 |
11863158 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Preferred embodiments of the present invention will be described below with reference to the drawings to clarify the present invention. Note that each preferred embodiment described herein is merely illustrative and the configurations can be partly replaced or combined with each other in different preferred embodiments. FIG.1is a plan view of an acoustic wave resonator according to a first preferred embodiment of the present invention. An acoustic wave resonator1includes a piezoelectric plate2which defines and functions as a piezoelectric body. An IDT electrode3is disposed on the piezoelectric plate2. A reflector4and a reflector5are disposed on respective sides of the IDT electrode3in an acoustic wave propagation direction. The acoustic wave resonator1is a single-port acoustic wave resonator.FIG.6is a front sectional view of the acoustic wave resonator. The IDT electrode3and the reflectors4and5are disposed on the piezoelectric plate2, but may be disposed above the piezoelectric plate2with a layer, such as an insulating layer, for example, interposed between the piezoelectric plate2and the IDT electrode3and between the piezoelectric plate2and the reflectors4and5. The piezoelectric plate2is made of an appropriate piezoelectric material such as a piezoelectric single crystal, which may preferably be LiNbO3, LiTaO3, or the like, or piezoelectric ceramics, for example. In place of the piezoelectric plate2, for example, a piezoelectric substrate in which a piezoelectric film is stacked on or above a semiconductor layer or an insulating layer may be used. In the case of the piezoelectric substrate, the piezoelectric film corresponds to the piezoelectric body. Withdrawal weighting is performed on the IDT electrode3. The IDT electrode3includes a first region31to a third region33, as a plurality of regions arranged in the acoustic wave propagation direction. In the first region31to the third region33of the IDT electrode3, periodicities of withdrawal weighting are different from one another. When a portion in which, for example, one electrode finger is withdrawn per nine electrode fingers is used as an example, being periodic means that this portion is repeated two or more times, that is, for two or more periods. Having different periodicities means that this periodic withdrawal is different. For example, weighting in which one of nine electrode fingers is withdrawn and weighting in which one of six electrode fingers is withdrawn have different periodicities. FIG.2is an enlarged partial cutout plan view for describing withdrawal weighting in the first region31. A first busbar3A and a second busbar3B of the IDT electrode3extend in the acoustic wave propagation direction. One end of each of a plurality of first electrode fingers6ais linked to the first busbar3A. One end of each of a plurality of second electrode fingers6bis linked to the second busbar3B. The plurality of first electrode fingers6aand the plurality of second electrode fingers6binterdigitate with each other. Dummy electrode fingers6care separate from tips of the respective first electrode fingers6awith respective gaps therebetween. The dummy electrode fingers6care linked to the second busbar3B. Dummy electrode fingers6dare separate from tips of the respective second electrode fingers6bwith respective gaps therebetween. The dummy electrode fingers6dare linked to the first busbar3A. Note that the dummy electrode fingers6cand6dmay be omitted. 
In the first region31, the electrode fingers are withdrawn at a rate of one of nine in the acoustic wave propagation direction. Wide electrode fingers7aand7bare disposed in the respective portions in which the electrode fingers have been withdrawn. The term “width-direction dimension” of an electrode finger refers to a dimension in the acoustic wave propagation direction. As described above, the first electrode fingers6aor the second electrode fingers6bare withdrawn at the rate of one of nine. A plurality of portions31ato31ein which the electrode fingers are withdrawn at the rate of one of nine are arranged periodically in the acoustic wave propagation direction. Thus, withdrawal weighting is performed periodically in the first region31. The wide electrode fingers7alinked to the first busbar3A each have a shape in which a region between the first electrode finger6aand the first electrode finger6athat are closest to each other in the acoustic wave propagation direction is metallized. The wide electrode fingers7blinked to the second busbar3B each have a shape in which a region between the second electrode finger6band the second electrode finger6bthat are closest to each other in the acoustic wave propagation direction is metallized. Six first and second electrode fingers6aand6bin total are disposed between each of the wide electrode fingers7aand the closest wide electrode finger7b. Note that wide dummy electrode fingers8alinked to the second busbar3B and wide dummy electrode fingers8blinked to the first busbar3A respectively oppose the wide electrode fingers7aand the wide electrode fingers7bwith respective gaps therebetween. FIG.3is an enlarged partial cutout plan view for describing withdrawal weighting in the second region32. In the second region32, the electrode fingers are withdrawn at a rate of one of ten in the acoustic wave propagation direction. In this manner, withdrawal weighting is performed. The second region32is configured in the same or substantially the same manner as the first region31except for the periodicity of this withdrawal weighting. Each region between the second electrode fingers6bthat are closest to each other in the acoustic wave propagation direction is metallized. Thus, wide electrode fingers9are disposed. Seven first and second electrode fingers6aand6bin total are disposed between the wide electrode fingers9that are adjacent to each other. The wide electrode fingers9are linked to the second busbar3B. Wide dummy electrode fingers10are disposed to oppose the respective wide electrode fingers9with respective gaps therebetween. The second region32includes a plurality of portions32ato32earranged in the acoustic wave propagation direction. Each of the portions32a,32b,32c,32d, and32eis a portion in which the electrode fingers are withdrawn at the rate of one of ten. Thus, withdrawal weighting is also performed periodically in the second region32. FIG.4is a partial cutout plan view for describing withdrawal weighting in the third region33. In the third region33, weighting is performed in each of portions33ato33esuch that the electrode fingers are withdrawn at a rate of one of eleven. The remaining configuration is the same or substantially the same as those of the first region31and the second region32. As described above, the electrode fingers are withdrawn at the rate of one of eleven. Thus, wide electrode fingers11alinked to the first busbar3A and wide electrode fingers11blinked to the second busbar3B are disposed. 
Eight first and second electrode fingers6aand6bin total are disposed between each of the wide electrode fingers11aand the closest wide electrode finger11b. Wide dummy electrode fingers12aare linked to the second busbar3B. Wide dummy electrode fingers12bare linked to the first busbar3A. The wide dummy electrode fingers12aand12brespectively oppose the wide electrode fingers11aand11bwith respective gaps therebetween. As illustrated inFIGS.2to4, the electrode fingers are withdrawn at an equal or substantially equal interval in the acoustic wave propagation direction in each of the first region31to the third region33. That is, periodic withdrawal weighting is performed. When a portion in which, for example, one electrode finger is withdrawn per nine electrode fingers is used as an example, being periodic means that this portion is repeated two or more times, that is, for two or more periods. On the other hand, as described above, the periodic withdrawal weighting in the first region31, the periodic withdrawal weighting in the second region32, and the periodic withdrawal weighting in the third region33are different from one another. That is, the periodicity of the periodic withdrawal weighting in the first region31, the periodicity of the periodic withdrawal weighting in the second region32, and the periodicity of the periodic withdrawal weighting in the third region33are different from one another. Referring back toFIG.1, the reflectors4and5are ordinary grating reflectors. As illustrated inFIG.5, both ends of a plurality of electrode fingers are short-circuited in the reflector4. The IDT electrode3and the reflectors4and5are made of an appropriate metal or alloy such as AlCu alloy, for example. In addition, a multilayer metal film in which a plurality of metal films are stacked may be used. In the acoustic wave resonator1, the IDT electrode3includes the first region31to the third region33in the acoustic wave propagation direction, periodic withdrawal weighting is performed in each of the first region31to the third region33, and periodicities of the withdrawal weighting in the first region31to the third region33are different from one another. However, the present invention is not limited to the configuration in which the periodicities of withdrawal weighting in a plurality of regions are different from one another, and it is sufficient that periodic withdrawal weighting in at least one of the regions is different from periodic withdrawal weighting in at least another one of the regions. In addition, the number of regions is not limited to three, and it is sufficient that there are a plurality of regions. The IDT electrode3preferably has asymmetrical withdrawal weighting on respective sides of the center of the IDT electrode3in the acoustic wave propagation direction. In such a case, both the characteristics in the pass band and the characteristics outside the pass band can be improved more effectively. When a bandpass filter includes the acoustic wave resonator1according to the present preferred embodiment, the acoustic wave resonator achieves improved characteristics in the pass band and can also reduce or prevent ripples outside the pass band, that is, can improve the characteristics outside the pass band. This will be described with reference toFIGS.7to12. 
InFIG.7, a solid line represents impedance characteristics of an acoustic wave resonator according to a first example of a preferred embodiment of the present invention, and a broken line represents impedance characteristics of an acoustic wave resonator according to a first comparative example. InFIG.8, a solid line represents return loss characteristics of the acoustic wave resonator according to the first example, and a broken line represents return loss characteristics of the acoustic wave resonator according to the first comparative example. Design parameters of the acoustic wave resonator according to the first example are as follows. In the IDT electrode3, withdrawal weighting was performed on the electrode fingers at a rate of one of eleven in the first region31, withdrawal weighting was performed on the electrode fingers at a rate of one of twelve in the second region32, and withdrawal weighting was performed on the electrode fingers at a rate of one of thirteen in the third region33. The above-described withdrawal weighting was repeated for fifteen periods in each of the first region31to the third region33. Other design parameters of the IDT electrode3are as follows.
Piezoelectric material of the piezoelectric plate2=LiTaO3
Material of the IDT electrode3and material of the reflectors4and5=Ti and AlCu
Film thickness of the IDT electrode3and film thickness of the reflectors4and5=about 12 nm and about 145 nm
Wave length λ determined by the pitch of the electrode fingers=about 2.04 μm
Number of electrode fingers=540
Width of wide electrode fingers=about 1.53 μm
Overlap width=about 45 μm
Number of electrode fingers of reflectors=21
The first comparative example was configured to be the same or substantially the same as the acoustic wave resonator according to the first example except that withdrawal weighting was performed on the entire IDT electrode at a rate of one of twelve electrode fingers. That is, periodic withdrawal weighting was performed entirely in the acoustic wave resonator according to the first comparative example. As is apparent fromFIGS.7and8, large ripples appear at around 1600 MHz and around 2260 MHz in the first comparative example. In contrast, such ripples are successfully reduced or prevented with the acoustic wave resonator according to the first example. In addition, as in the first comparative example, resonant resistance in the first example is sufficiently low in the impedance characteristics illustrated inFIG.7. This indicates that the ripple at frequencies lower than the resonant frequency and the ripple at frequencies higher than the anti-resonant frequency can be effectively reduced or prevented while the resonance characteristics are maintained. An acoustic wave resonator according to a second comparative example was prepared. The second comparative example was configured to be the same or substantially the same as the first example except that the withdrawal weighting described above was not performed. Thus, no withdrawal weighting was performed on an IDT electrode of the acoustic wave resonator according to the second comparative example. A solid line inFIG.9represents the impedance characteristics of the acoustic wave resonator according to the second comparative example. A solid line inFIG.10represents the return loss characteristics of the acoustic wave resonator according to the second comparative example.
For comparison, a broken line inFIG.9and a broken line inFIG.10respectively represent the impedance characteristics and the return loss characteristics of the acoustic wave resonator of the first comparative example described above. As is apparent fromFIGS.9and10, neither the ripple at around 1600 MHz nor the ripple at around 2260 MHz are caused with the acoustic wave resonator according to the second comparative example that includes the IDT electrode on which withdrawal weighting is not performed. However, as is apparent fromFIG.9, a frequency difference Δf between the resonant frequency and the anti-resonant frequency of the acoustic wave resonator according to the second comparative example is larger than a frequency difference Δf for the acoustic wave resonator according to the first comparative example. Thus, when the acoustic wave resonator according to the second comparative example is used, it is difficult to increase the steepness of the filter characteristics of a ladder filter, for example. Therefore, when the acoustic wave resonator according to the second comparative example is used, it is difficult to obtain good filter characteristics. Then, an acoustic wave resonator on which withdrawal weighting is randomly performed so that withdrawal is not periodic was prepared as an acoustic wave resonator according to a third comparative example. In this case, the electrode fingers were withdrawn at a rate of one of twelve but portions in which the electrode fingers were withdrawn are randomly arranged in the acoustic wave propagation direction. That is, withdrawal weighting was performed on the IDT electrode so that withdrawal is not periodic. InFIG.11, a dot-dash line represents the impedance characteristics of the acoustic wave resonator according to the third comparative example, and a broken line represents the impedance characteristics of the acoustic wave resonator according to the first comparative example. In addition, inFIG.12, a dot-dash line represents the return loss characteristics of the acoustic wave resonator according to the third comparative example, and a broken line represents the return loss characteristics of the acoustic wave resonator according to the first comparative example. As is apparent fromFIG.11, the ripple at around 1600 MHz and the ripple at around 2260 MHz, which appear for the acoustic wave resonator according to the first comparative example, do not appear for the acoustic wave resonator according to the third comparative example. Although it is not necessarily apparent fromFIG.11, the impedance at the resonant frequency of the acoustic wave resonator according to the first comparative example is about −10 dB, whereas the impedance at the resonant frequency of the acoustic wave resonator according to the third comparative example is about −8 dB. That is, the impedance at the resonant frequency of the acoustic wave resonator according to the third comparative example deteriorates. This indicates that in the third comparative example in which withdrawal weighting was randomly performed on the entire IDT electrode, the resonant resistance deteriorates and good characteristics in the band were not obtained, compared to the acoustic wave resonator according to the first comparative example, that is, the acoustic wave resonator in which withdrawal weighting was periodically performed on the entire IDT electrode. 
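To make the weighting of the first example concrete before the summary that follows, the electrode-finger sequence can be generated programmatically. The Python sketch below is illustrative only: the placement of the withdrawn (wide) finger at the start of each period is an assumption, since the patent only specifies the withdrawal rates and the number of periods.

```python
# Illustrative sketch of the first example's withdrawal weighting: one finger
# withdrawn (replaced by a wide electrode finger, 'W') per 11, 12 and 13
# fingers in the first region 31, second region 32 and third region 33,
# each repeated for fifteen periods. 'F' marks an ordinary electrode finger.

def region_pattern(period, repeats):
    """One withdrawn/wide finger followed by (period - 1) ordinary fingers,
    repeated `repeats` times. The position of 'W' within a period is assumed."""
    return (["W"] + ["F"] * (period - 1)) * repeats

idt_pattern = (
    region_pattern(11, 15)    # first region 31: one-of-eleven withdrawal
    + region_pattern(12, 15)  # second region 32: one-of-twelve withdrawal
    + region_pattern(13, 15)  # third region 33: one-of-thirteen withdrawal
)

print(len(idt_pattern))           # 15 * (11 + 12 + 13) = 540 finger positions
print("".join(idt_pattern[:24]))  # WFFFFFFFFFFWFFFFFFFFFFWF
```

The total of 540 positions agrees with the number of electrode fingers listed for the first example, which suggests that the wide electrode fingers are counted in that total; this reading is, however, an interpretation rather than something stated in the patent.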
As described above, as is apparent fromFIGS.7to12, in the case where periodic withdrawal weighting was performed in each of the first region31to the third region33and the periodicities of withdrawal weighting in the first region31to the third region33are different from one another, deterioration in resonant resistance is less likely to occur and the frequency difference between the resonant frequency and the anti-resonant frequency is decreased. Thus, when the acoustic wave resonator is used in a bandpass filter, the characteristics in the pass band of the bandpass filter are improved. In addition, ripples that appear outside a frequency range between the resonant frequency and the anti-resonant frequency are reduced or prevented. Thus, with a bandpass filter including the acoustic wave resonator according to the first example, the characteristics outside the pass band are improved. Therefore, the characteristics in a pass band of another bandpass filter that is connected to the bandpass filter in common and that has a different pass band are improved. As described above, in preferred embodiments of the present invention, it is sufficient that the periodicity of withdrawal weighting in at least one region among a plurality of regions is different from the periodicity of withdrawal weighting in at least another one of the regions. The periods of withdrawal weighting in the first region31to the third region33, which are the plurality of regions, need not be different from one another as in the preferred embodiment described above. However, the periodicities of withdrawal weighting in the plurality of regions are preferably different from one another as in the preferred embodiment described above. In such a case, both the characteristics in the pass band and the characteristics outside the pass band are improved more effectively. In the first preferred embodiment, withdrawal weighting is performed so that the IDT electrode3includes the wide electrode fingers7a,7b,9,11a, and11b. Alternatively, withdrawal weighting may be performed by providing a floating electrode finger50in one region of the IDT electrode as illustrated inFIG.22. The floating electrode finger50is provided in at least one of the portions in which the first electrode fingers or the second electrode fingers are located, in place of the corresponding first electrode finger or the corresponding second electrode finger. That is, withdrawal weighting on the IDT electrode is not limited to withdrawal weighting with the wide electrode fingers, and may be withdrawal weighting using floating electrode fingers. Also in such a case, it is sufficient that the periodicity of withdrawal weighting in at least one region among a plurality of regions is different from the periodicity of withdrawal weighting in at least another one of the regions. Withdrawal weighting is performed on the IDT electrode3in the above-described manner in the acoustic wave resonator1. Thus, when a bandpass filter, for example, a ladder acoustic wave filter includes the acoustic wave resonator1, both the characteristics in the pass band and the characteristics outside the pass band are improved. This will be clarified through description of a preferred embodiment of a multiplexer illustrated inFIGS.13and14. FIG.13is a schematic circuit diagram of a multiplexer according to a second preferred embodiment of the present invention.FIG.14is a diagram illustrating a specific circuit configuration of the multiplexer41illustrated inFIG.13. 
The multiplexer41includes a common terminal42, which is a terminal closest to an antenna. One end of each of a first bandpass filter43to a fourth bandpass filter46, which are a plurality of bandpass filters, is connected in common to the common terminal42. The multiplexer41is a quadplexer including the first bandpass filter43, the second bandpass filter44, the third bandpass filter45, and the fourth bandpass filter46. An inductor L1is connected between the common terminal42and a ground potential. The inductor L1is provided to achieve impedance matching. As illustrated inFIG.14, each of the first bandpass filter43to the fourth bandpass filter46is a ladder acoustic wave filter including a plurality of series-arm resonators and a plurality of parallel-arm resonators. The series-arm resonators and the parallel-arm resonators are defined by acoustic wave resonators. The first bandpass filter43is a Band1 transmission filter, for example. The second bandpass filter44is a Band1 reception filter, for example. The third bandpass filter45is a Band3 transmission filter, for example. The fourth bandpass filter46is a Band3 reception filter, for example. The pass band of the Band1 transmission filter is about 1920 MHz to about 1980 MHz, for example. The pass band of the Band1 reception filter is about 2110 MHz to about 2170 MHz, for example. The pass band of the Band3 transmission filter is about 1710 MHz to about 1785 MHz, for example. The pass band of the Band3 reception filter is about 1805 MHz to about 1880 MHz, for example. Thus, the pass bands of the first bandpass filter43to the fourth bandpass filter46are different from one another. In the multiplexer41, withdrawal weighting is performed on the IDT electrodes of the acoustic wave resonators defining the first bandpass filter43to the fourth bandpass filter46as described in the preferred embodiment above. The first bandpass filter43is connected between a Band1 transmission terminal51and the common terminal42. Series-arm resonators S1to S4are connected between the transmission terminal and the common terminal42. In addition, parallel-arm resonators P1to P4are connected between the series arm and the ground potential. Note that each of the series-arm resonators S1, S2, and S3is divided into two resonators. The series-arm resonator S4is divided into three resonators. An inductor L2is connected in parallel with the series-arm resonator S1. In the second bandpass filter44, series-arm resonators S11to S15are connected between a Band1 reception terminal52and the common terminal42. Parallel-arm resonators P11to P17are connected between the series arm and the ground potential. An inductor L3is connected between the parallel-arm resonator P12and the ground potential. An inductor L4is connected between the parallel-arm resonator P14and the ground potential. An inductor L5is connected between the parallel-arm resonator P15and the ground potential. An inductor L6is connected between the parallel-arm resonator P17and the ground potential. The series-arm resonator S11is divided into three resonators. The third bandpass filter45is connected between a Band3 transmission terminal53and the common terminal42. Series-arm resonators S21, S22, S23, and S24are disposed sequentially from a side closer to the transmission terminal53. Each of the series-arm resonators S21and S24is divided into three resonators, and each of the series-arm resonators S22and S23is divided into two resonators. 
An inductor L7is connected between the transmission terminal53and the series-arm resonator S21. Parallel-arm resonators P21to P25are connected between the series arm and the ground potential. An inductor L8is connected between the parallel-arm resonator P21and the ground potential. One end of the parallel-arm resonator P22and one end of the parallel-arm resonator P24are connected in common and are connected to the ground potential with an inductor L9interposed between the ground potential and the parallel-arm resonators P22and P24. An inductor L10is connected between the parallel-arm resonator P25and the ground potential. The fourth bandpass filter46is connected between a Band3 reception terminal54and the common terminal42. Series-arm resonators S31to S35are disposed sequentially from a side closer to the common terminal42. Each of the series-arm resonators S31and S34is divided into two resonators. Parallel-arm resonators P31to P34are connected between the series arm and the ground potential. An inductor L11is connected between an end portion of the parallel-arm resonator P34closer to the ground potential and the ground potential. Design parameters of the first bandpass filter43to the fourth bandpass filter46according to a second example, which corresponds to the second preferred embodiment, are set as shown in Table 1 to Table 4 below.
TABLE 1 First bandpass filter 43 (B1Tx)
Resonator: S1 / P1 / S2 / P2 / S3 / P3, P4 / S4
IDT wave length (μm): 1.924 / 2.003 / 1.93 / 2.011 / 1.937 / 2.011 / 1.922
REF wave length (μm): 1.924 / 2.003 / 1.93 / 2.011 / 1.937 / 2.011 / 1.922
Overlap width (μm): 29 / 37 / 12 / 56 / 81 / 27 / 32
Number of pairs of electrode fingers of IDT: 140 / 200 / 200 / 118 / 120 / 82 / 140
Number of pairs of electrode fingers of REF: 10 / 10 / 10 / 10 / 10 / 10 / 10
Duty: 0.5 / 0.5 / 0.5 / 0.5 / 0.5 / 0.5 / 0.5
TABLE 2 Second bandpass filter 44 (B1Rx)
Resonator: S11 / P11, P12 / S12 / P13, P14 / S13 / P15 / S14 / P16, P17 / S15
IDT wave length (μm): 1.745 / 1.842 / 1.819 / 1.898 / 1.808 / 1.896 / 1.805 / 1.872 / 1.825
REF wave length (μm): 1.745 / 1.842 / 1.819 / 1.898 / 1.808 / 1.896 / 1.805 / 1.872 / 1.825
Overlap width (μm): 22 / 23 / 23 / 19 / 21 / 19 / 20 / 24 / 33
Number of pairs of electrode fingers of IDT: 120 / 235 / 80 / 110 / 150 / 110 / 100 / 245 / 165
Number of pairs of electrode fingers of REF: 10 / 10 / 10 / 10 / 10 / 10 / 10 / 10 / 10
Duty: 0.5 / 0.5 / 0.5 / 0.5 / 0.5 / 0.5 / 0.5 / 0.5 / 0.5
TABLE 3 Third bandpass filter 45 (B3Tx)
Resonator: S21 / P21 / S22 / P22 / S23 / P23, P24 / S24 / P25
IDT wave length (μm): 2.141 / 2.252 / 2.173 / 2.254 / 2.175 / 2.256 / 2.157 / 2.268
REF wave length (μm): 2.141 / 2.252 / 2.173 / 2.254 / 2.175 / 2.256 / 2.157 / 2.268
Overlap width (μm): 26 / 42 / 68 / 117 / 32 / 65 / 46 / 27.6
Number of pairs of electrode fingers of IDT: 200 / 332 / 93 / 60 / 230 / 130 / 230 / 200
Number of pairs of electrode fingers of REF: 10 / 10 / 10 / 10 / 10 / 10 / 10 / 10
Duty: 0.5 / 0.5 / 0.5 / 0.5 / 0.5 / 0.5 / 0.5 / 0.5
TABLE 4 Fourth bandpass filter 46 (B3Rx)
Resonator: S31 / P31 / S32 / P32 / S33 / P33 / S34 / P34 / S35
IDT wave length (μm): 2.027 / 2.121 / 2.033 / 2.126 / 2.045 / 2.123 / 2.035 / 2.125 / 2.048
REF wave length (μm): 2.027 / 2.121 / 2.033 / 2.126 / 2.045 / 2.123 / 2.035 / 2.125 / 2.048
Overlap width (μm): 26 / 31 / 40 / 57 / 18 / 45 / 19 / 43 / 36
Number of pairs of electrode fingers of IDT: 120 / 200 / 90 / 200 / 120 / 150 / 180 / 150 / 165
Number of pairs of electrode fingers of REF: 10 / 10 / 10 / 10 / 10 / 10 / 10 / 10 / 10
Duty: 0.5 / 0.5 / 0.5 / 0.5 / 0.5 / 0.5 / 0.5 / 0.5 / 0.5
In addition, for comparison, a multiplexer according to a fourth comparative example is prepared, which is configured to be the same or substantially the same as in the second example except that the IDT electrodes of the acoustic wave resonators defining the first bandpass filter43to the fourth bandpass filter46are configured as in the acoustic wave resonators according to the first comparative example described above. A solid line inFIG.15represents the bandpass characteristics of the Band3 transmission filter, that is, the third bandpass filter45, of the multiplexer41according to the second example.
A broken line inFIG.15represents the bandpass characteristics of the Band3 transmission filter of the multiplexer according to the fourth comparative example. InFIG.16, a solid line represents the return loss characteristics on a side, closer to the common terminal42, of the Band3 transmission filter, that is, the third bandpass filter45of the multiplexer41according to the second example, and a broken line represents the return loss characteristics on a side, closer to the common terminal, of the Band3 transmission filter according to the fourth comparative example. InFIG.16, ripples indicated by arrows A1and A2appear in a range from about 2110 MHz to about 2170 MHz, which is the pass band of the Band1 reception filter, in the fourth comparative example. A portion enclosed by a circle A inFIG.15is illustrated inFIG.17in an enlarged manner. Large ripples indicated by arrows A1and A2appear in the fourth comparative example. In contrast, these ripples are sufficiently reduced or prevented in the second example. According to the second example, attenuation characteristics in the portion in which these ripples appear are successfully improved by about 3 dB as a result of reducing or preventing these ripples, compared to the fourth comparative example. InFIG.18, a solid line represents the bandpass characteristics of the Band3 reception filter according to the second example, that is, the fourth bandpass filter46, and a broken line represents the bandpass characteristics of the Band3 reception filter according to the fourth comparative example. In addition, inFIG.19, a solid line represents the return loss characteristics on a side, closer to the common terminal42, of the fourth bandpass filter46according to the second example, and a broken line represents the return loss characteristics on a side, closer to the common terminal, of the Band3 reception filter according to the fourth comparative example. As is apparent fromFIGS.18to19, a ripple indicated by an arrow A3appears in the fourth comparative example in a range from about 2110 MHz to about 2170 MHz, which is the pass band of the Band1 reception filter. In contrast, this large ripple is reduced or prevented in the second example. This indicates that a ripple that is caused outside the pass band is able to be reduced or prevented while the characteristics in the pass band are maintained in the fourth bandpass filter46which is the Band3 reception filter. InFIG.20, a solid line represents the isolation characteristics from the Band3 transmission filter to the Band1 reception filter in the second example, and a broken line represents the isolation characteristics according to the fourth comparative example.FIG.21illustrates the bandpass characteristics from a side closer to the antenna to a side closer to the Band1 reception filter. A solid line represents a result for the second example, and a broken line represents a result for the fourth comparative example. As is apparent fromFIG.20, a large peak of about 12 dB appears in a range from about 2090 MHz to about 2100 MHz in the fourth comparative example. In contrast, such a peak does not appear in the second example. That is, the isolation characteristics are effectively improved. In addition, as is apparent fromFIG.21, a large ripple of about 0.2 dB appears in a band around 2130 MHz in the bandpass characteristics in the fourth comparative example. In contrast, such a ripple does not appear in the second example. 
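The frequency relationships described above can be checked with simple arithmetic. The following short Python sketch is an illustration only and is not part of the patent disclosure; it uses the approximate pass-band edges stated for the first bandpass filter43to the fourth bandpass filter46and reports which pass band, if any, contains a given frequency.

# Approximate pass bands of the four bandpass filters, in MHz (values stated above).
PASS_BANDS = {
    "Band1 Tx (first bandpass filter 43)": (1920.0, 1980.0),
    "Band1 Rx (second bandpass filter 44)": (2110.0, 2170.0),
    "Band3 Tx (third bandpass filter 45)": (1710.0, 1785.0),
    "Band3 Rx (fourth bandpass filter 46)": (1805.0, 1880.0),
}

def bands_containing(freq_mhz):
    """Return the names of the pass bands that contain the given frequency."""
    return [name for name, (low, high) in PASS_BANDS.items() if low <= freq_mhz <= high]

# Frequencies discussed above: the ripple near 2130 MHz and the isolation peak near 2090 MHz to 2100 MHz.
for frequency in (2130.0, 2095.0):
    hits = bands_containing(frequency)
    print(f"{frequency:.1f} MHz -> {hits if hits else 'outside all pass bands'}")

Run as written, the sketch confirms that the ripple of about 0.2 dB near 2130 MHz lies inside the Band1 reception pass band, whereas the isolation peak between about 2090 MHz and about 2100 MHz lies outside every pass band; as noted above, the ripple inside the Band1 reception pass band does not appear in the second example.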
This indicates that the characteristics are improved and the loss is reduced or prevented in the pass band of the Band1 reception filter. As described above, the use of the first bandpass filter43to the fourth bandpass filter46in the multiplexer according to the present preferred embodiment of the present invention makes it possible to improve the characteristics outside the pass band of each of the bandpass filters43to46while maintaining the characteristics in the pass band. Thus, the characteristics in the pass bands of the other bandpass filters that are connected in common are improved. In the preferred embodiments described above, the quadplexer including the first bandpass filter43to the fourth bandpass filter46is described. However, the multiplexer according to the present invention is not limited to the quadplexer. The multiplexer may be a duplexer, a triplexer, or a multiplexer in which five or more bandpass filters are connected in common. In addition, the pass bands of the plurality of bandpass filters need not be different from one another, and it is sufficient that the pass band of at least one of the bandpass filters is different from the pass band of at least another one of the bandpass filters. While preferred embodiments of the present invention have been described above, it is to be understood that variations and modifications will be apparent to those skilled in the art without departing from the scope and spirit of the present invention. The scope of the present invention, therefore, is to be determined solely by the following claims. | 32,722 |
11863159 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, the present invention will be described in detail with reference to preferred embodiments and the accompanying drawings. The following preferred embodiments are general or specific examples. Details, such as values, shapes, materials, components, and arrangements and connection configurations of the components in the following preferred embodiments, are provided merely as examples and should not be construed as limiting the present invention. Of the components in the following preferred embodiments, those not mentioned in an independent claim are described as optional components. The sizes and the relative proportions of the components illustrated in the drawings are not necessarily to scale. Redundant description of the same or corresponding components, which are denoted by the same reference signs in the drawings, will be omitted or described in brief. The expression “connected to” in the description of the following preferred embodiments includes not only direct connection but also electrical connection through another element or the like. Preferred Embodiment 1 1-1 Basic Configuration of Acoustic Wave Filter The following describes a basic configuration of an acoustic wave filter according to Preferred Embodiment 1 of the present invention with reference toFIG.1. FIG.1is a circuit configuration diagram of an acoustic wave filter10according to Preferred Embodiment 1. The acoustic wave filter10includes a series-arm resonator110s, series-arm resonators121sto124s, parallel-arm resonators121pto124p, and inductors121L and122L. The series-arm resonators110sand121sto124sare disposed on a path connecting a first terminal Port1and the second terminal Port2. The parallel-arm resonators121pto124pare disposed between the path and a reference terminal (i.e., ground). The series-arm resonator110sis hereinafter also referred to as a first series-arm resonator110s. The series-arm resonators different from the first series-arm resonator110sare hereinafter referred to as second series-arm resonators121s,122s,123s, and124s. The first series-arm resonator110sand the second series-arm resonators121sto124sare connected in series on the path (series arm) connecting the first terminal Port1and the second terminal Port2. The second series-arm resonator121s, the second series-arm resonator122s, the first series-arm resonator110s, the second series-arm resonator123s, and the second series-arm resonator124sare connected in series in the stated order in the direction from the first terminal Port1to the second terminal Port2. The parallel-arm resonators121pto124pare connected in parallel and disposed on respective paths (parallel arms) each of which connects the reference terminal and the corresponding one of points at which the first series-arm resonator110sand the second series-arm resonators121sto124sare connected to each other. Specifically, one end of the parallel-arm resonator121pis connected to a node between the series-arm resonators121sand122s, and the other end of the parallel-arm resonator121pis connected directly to the reference terminal. One end of the parallel-arm resonator122pis connected to a node between the series-arm resonators122sand110s, and the other end of the parallel-arm resonator122pis connected to the reference terminal with the inductor121L therebetween. 
One end of the parallel-arm resonator123pis connected to a node between the series-arm resonators110sand123s, and the other end of the parallel-arm resonator123pis connected to the reference terminal with the inductor121L therebetween. One end of the parallel-arm resonator124pis connected to a node between the series-arm resonators123sand124s, and the other end of the parallel-arm resonator124pis connected to the reference terminal with the inductor122L therebetween. The first series-arm resonator110s, the second series-arm resonators121sto124s, and the parallel-arm resonators121pto124pare connected as described above to define the acoustic wave filter10that is, for example, a ladder band-pass filter. The resonant frequency of the first series-arm resonator110sand the resonant frequencies of the second series-arm resonators121sto124sare within the pass band of the acoustic wave filter10. The second series-arm resonators121sto124smay each have a different number of electrode finger pairs, a different intersecting width, and a different resonant frequency. The anti-resonant frequency of the first series-arm resonator110sis lower than any of the anti-resonant frequencies of the second series-arm resonators121sto124s. That is, the first series-arm resonator110shas a lower anti-resonant frequency than any other series-arm resonator (i.e., the series-arm resonators121sto124s) included in the acoustic wave filter10. The position of the first series-arm resonator110sis not limited to the point between the second series-arm resonators122sand123s. The first series-arm resonator110smay be disposed between the second series-arm resonators121sand122sor may be disposed between the second series-arm resonators123sand124s, for example. Alternatively, the first series-arm resonator110smay be disposed between the first terminal Port1and the second series-arm resonator121sor may be disposed between the second series-arm resonator124sand the second terminal Port2, for example. Instead of including one first series-arm resonator (i.e., the first series-arm resonator110s), the acoustic wave filter10may include two or more first series-arm resonators. The acoustic wave filter10does not necessarily include four second series-arm resonators (i.e., the second series-arm resonators121sto124s) and four parallel-arm resonators (i.e., the parallel-arm resonators121pto124p). The acoustic wave filter10includes at least one second series-arm resonator and at least one parallel-arm resonator. 1-2 Basic Structures of Resonators The following describes basic structures of the resonators (i.e., the series-arm resonators and the parallel-arm resonators) of the acoustic wave filter10. The resonators are preferably surface acoustic wave (SAW) resonators, for example. FIG.2includes a schematic plan view and a schematic sectional view of a resonator of the acoustic wave filter10. The resonator illustrated inFIG.2represents a typical structure of the resonators described above. Details such as the number and the length of electrode fingers of each electrode may be changed. When viewed in plan as inFIG.2, the resonator includes a pair of comb teeth-shaped electrodes and a pair of reflectors. Electrodes of the pair of comb teeth-shaped electrodes face each other and are denoted by32aand32b, respectively. Reflectors of the pair of reflectors are denoted by32cand are adjacent to the comb teeth-shaped electrodes32aand32bin a propagation direction of an acoustic wave.
The pair of comb teeth-shaped electrodes, or more specifically, the comb teeth-shaped electrodes32aand32bdefine an interdigital transducer (IDT) electrode. Either of the two reflectors32cmay be omitted when constraints arise from, for example, the mounting layout. The comb teeth-shaped electrode32aincludes electrode fingers322a, offset electrode fingers323a, and a busbar electrode321a. The electrode fingers322aand the offset electrode fingers323aare disposed in parallel. The busbar electrode321aconnects first ends e1of the electrode finger322ato each other and also connects first ends e1of the offset electrode fingers323ato each other. The comb teeth-shaped electrode32bincludes electrode fingers322b, offset electrode fingers323b, and a busbar electrode321b. The electrode fingers322band the offset electrode fingers323bare parallel or substantially parallel to each other. The busbar electrode321bconnects first ends e1of the electrode fingers322bto each other and also connects first ends e1of the offset electrode fingers323bto each other. The electrode fingers322a, the electrode fingers322b, the offset electrode fingers323a, and the offset electrode fingers323bextend in a direction orthogonal or substantially orthogonal to the propagation direction of the acoustic wave (i.e., in a direction orthogonal or substantially orthogonal to the X-axis). Each of the electrode fingers322afaces the corresponding one of the offset electrode fingers323bin the direction orthogonal or substantially orthogonal to the propagation direction of the acoustic wave, and each of the electrode fingers322bfaces the corresponding one of the offset electrode fingers323ain the direction orthogonal or substantially orthogonal to the propagation direction of the acoustic wave. The direction in which second ends e2of the electrode fingers322a(i.e., end portions that are not connected to the busbar electrode321a) are aligned with each other is denoted by D and crosses the propagation direction of the acoustic wave at a predetermined angle. The direction in which second ends e2of the electrode fingers322b(i.e., end portions that are not connected to the busbar electrode321b) are aligned with each other is denoted by D and crosses the propagation direction of the acoustic wave at the predetermined angle. The direction in which second ends e2of the offset electrode fingers323a(i.e., end portions that are not connected to the busbar electrode321a) are aligned with each other is denoted by D and crosses the propagation direction of the acoustic wave at the predetermined angle. The direction in which second ends e2of the offset electrode fingers323b(i.e., end portions that are not connected to the busbar electrode321b) are aligned with each other is denoted by D and crosses the propagation direction of the acoustic wave at the predetermined angle. That is, the first series-arm resonator110s, the second series-arm resonators121sto124s, and the parallel-arm resonators121pto124peach includes an inclined IDT electrode whose electrode fingers extend in a direction crossing the propagation direction of the acoustic wave. Meanwhile, when a one-port SAW resonator including a piezoelectric layer is included in an acoustic wave filter, a transverse mode ripple may appear between the resonant frequency and the anti-resonant frequency of the resonator and can cause degradation of transmission characteristics in the pass band of the filter. 
To address this problem, the acoustic wave filter10according to the present preferred embodiment includes resonators whose IDT electrodes are inclined IDTs. Another feature of the acoustic wave filter10according to the present preferred embodiment is that the second ends e2of the electrode fingers322a, the second ends e2of the electrode fingers322b, the second ends e2of the offset electrode fingers323a, and the second ends e2of the offset electrode fingers323beach have an atypical shape, or more specifically, the second ends e2are preferably T-shaped (seeFIGS.3A and3B), for example. The atypical shape will be described in detail later. The pair of reflectors (i.e., the reflectors32c) are adjacent to the pair of comb teeth-shaped electrodes (i.e., the comb teeth-shaped electrodes32aand32b) in the direction D. Specifically, the reflectors32care disposed with the comb teeth-shaped electrodes32aand32btherebetween in the direction D. The reflectors32ceach include reflector electrode fingers parallel or substantially parallel to each other and reflector busbar electrodes connecting the reflector electrode fingers to each other. The reflector busbar electrodes of each reflector32cextend in the direction D. When viewed in a cross-section as inFIG.2, the IDT electrode including the electrode fingers322a, the electrode fingers322b, the offset electrode fingers323a, the offset electrode fingers323b, and the busbar electrodes321aand321bhas a multilayer structure including an adhesive layer324and a main electrode layer325. The structure of each reflector32cviewed in a cross-section is the same as or similar to the structure of the IDT electrode viewed in cross-section and will not be further described here. The adhesive layer324improves the adhesion between a piezoelectric layer327and the main electrode layer325and is preferably made of, for example, Ti. The main electrode layer325is preferably made mainly of Al and has a Cu content of about 1%, for example. The IDT electrode is covered with a protective layer326. The protective layer326is provided, for example, to protect the main electrode layer325from the external environment, to adjust the frequency-temperature characteristics, and to improve the moisture resistance. The protective layer326is preferably made mainly of, for example, silicon dioxide. The materials of the adhesive layer324, the main electrode layer325, and the protective layer326are not limited to the materials described above. It is not required that the IDT electrode have the multilayer structure. The IDT electrode may be made of a metal such as Ti, Al, Cu, Pt, Au, Ag, or Pd or may be made of an alloy, for example. The IDT electrode may include multilayer bodies made of these metals or alloys. The protective layer326is optional. The IDT electrode and the reflectors32care disposed on a main surface of a substrate320, which will be described below. The following describes a multilayer structure of the substrate320. As illustrated in the lower section ofFIG.2, the substrate320includes a high-acoustic-velocity support substrate329, a low-acoustic-velocity film328, and the piezoelectric layer327. The high-acoustic-velocity support substrate329, the low-acoustic-velocity film328, and the piezoelectric layer327are stacked on top of each other in the stated order. The piezoelectric layer327is, for example, a piezoelectric film. The IDT electrode and the reflectors32care disposed on a main surface of the piezoelectric layer327. 
The piezoelectric layer327is preferably made of, for example, a θ°-rotated Y cut X SAW propagation LiTaO3piezoelectric single crystal or θ°-rotated Y cut X SAW propagation LiTaO3piezoelectric ceramics obtained by cutting a lithium tantalate single crystal or ceramics along a plane whose normal line is an axis rotated from a Y-axis by θ° with an X-axis as the central axis. The surface acoustic wave propagates in the X-axis direction through a single crystal or ceramics. The piezoelectric layer327preferably has a thickness of, for example, about 3.5λ or less, where λ denotes the wavelength of the acoustic wave and is determined by the electrode-to-electrode pitch of the IDT electrode. For example, the piezoelectric layer327preferably has a thickness of about 600 nm. The high-acoustic-velocity support substrate329supports the low-acoustic-velocity film328, the piezoelectric layer327, and the IDT electrode. The acoustic velocity of a bulk wave propagating through the high-acoustic-velocity support substrate329is higher than the acoustic velocity of an acoustic wave such as a surface acoustic wave or a boundary wave propagating though the piezoelectric layer327. The high-acoustic-velocity support substrate329confines the surface acoustic wave in the portion where the piezoelectric layer327is stacked on the low-acoustic-velocity film328, and the surface acoustic wave is thus reduced or prevented from leaking to underneath the high-acoustic-velocity support substrate329. The high-acoustic-velocity support substrate329is preferably, for example, a silicon substrate having a thickness of about 125 μm. Examples of the material of the high-acoustic-velocity support substrate329include: (1) piezoelectric materials such as aluminum nitride, aluminum oxide, silicon carbide, silicon nitride, silicon, sapphire, lithium tantalate, lithium niobate, and quartz; (2) various ceramics such as alumina, zirconia, cordierite, mullite, steatite, and forsterite; (3) magnesia diamond; (4) materials containing any of the above materials as a principal component; and (5) materials containing a mixture of the above materials as a principal component. The acoustic velocity of a bulk wave propagating through the low-acoustic-velocity film328is lower than the velocity of an acoustic wave propagating through the piezoelectric layer327. The low-acoustic-velocity film328is disposed between the piezoelectric layer327and the high-acoustic-velocity support substrate329. Energy of an acoustic wave inherently concentrates in a low-acoustic-velocity medium. Together with this property, the above structure helps eliminate or reduce the possibility that energy of the surface acoustic wave will leak out of the IDT electrode. The low-acoustic-velocity film328preferably includes, for example, silicon dioxide as a principal component. The low-acoustic-velocity film328preferably has a thickness of, for example, about 2λ or less, where λ denotes the wavelength of the acoustic wave and is determined by the electrode-to-electrode pitch of the IDT electrode. For example, the low-acoustic-velocity film328preferably has a thickness of about 670 nm. The Q-factor at the resonant frequency and the Q-factor at the anti-resonant frequency of the resonator on the substrate320, that is, on the multilayer structure described above may be much higher than the corresponding Q-factors of a resonator on a known structure including a single piezoelectric substrate. 
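The thickness conditions given above reduce to simple arithmetic. The following Python sketch is an illustration only and is not part of the patent disclosure; it assumes, purely for concreteness, an IDT wavelength λ of about 2.1 μm (the value used later in Example 1) and checks the example thicknesses against the stated upper limits of about 3.5λ for the piezoelectric layer327and about 2λ for the low-acoustic-velocity film328.

# Illustrative check of the layer-thickness conditions; all lengths in micrometres.
WAVELENGTH_UM = 2.1  # assumed IDT wavelength (taken from Example 1 below)

layers = {
    # layer name: (example thickness in um, upper limit as a multiple of the wavelength)
    "piezoelectric layer 327 (LiTaO3)": (0.600, 3.5),
    "low-acoustic-velocity film 328 (SiO2)": (0.670, 2.0),
}

for name, (thickness_um, limit_in_wavelengths) in layers.items():
    limit_um = limit_in_wavelengths * WAVELENGTH_UM
    within_limit = thickness_um <= limit_um
    print(f"{name}: {thickness_um:.3f} um <= {limit_in_wavelengths} * lambda = "
          f"{limit_um:.2f} um ? {within_limit}")

Both example thicknesses fall well below their respective limits. As noted above, a resonator formed on this multilayer substrate can have much higher Q-factors at the resonant frequency and at the anti-resonant frequency than a resonator formed on a single piezoelectric substrate.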
That is, the multilayer structure may be used to obtain a SAW resonator with a high Q-factor, and the SAW resonator may be used to obtain an acoustic wave filter having a small insertion loss. The high-acoustic-velocity support substrate329may be a multilayer structure including a support substrate and a high-acoustic-velocity film stacked on the support substrate, where the acoustic velocity of a bulk wave propagating through the high-acoustic-velocity film is higher than the acoustic velocity of an acoustic wave such as a surface acoustic wave or a boundary wave propagating through the piezoelectric layer327. In this case, examples of the material of the support substrate include: piezoelectric materials such as sapphire, lithium tantalate, lithium niobate, and quartz; various ceramics such as alumina, magnesia, silicon nitride, aluminum nitride, silicon carbide, zirconia, cordierite, mullite, steatite, and forsterite; a dielectric material such as glass; semiconductors such as silicon and gallium nitride; and resin. Examples of the material of the high-acoustic-velocity film include: aluminum nitride, aluminum oxide, silicon carbide, silicon nitride, silicon oxynitride, a diamond-like carbon (DLC) film, and diamond; mediums containing the above materials as a principal component; mediums containing a mixture of the above materials as a principal component; and other various high-acoustic-velocity materials. Although the θ°-rotated Y cut X SAW propagation LiTaO3single crystal is used as the piezoelectric layer327in the present preferred embodiment, the cut-angle of the single crystal material is not limited to the specified angle. The multilayer structure, the material, and the thickness of the substrate may be changed as appropriate in accordance with, for example, the bandpass characteristics required of the acoustic wave filter device concerned. A SAW filter including a LiTaO3piezoelectric substrate or a LiNbO3piezoelectric substrate having a cut-angle different from the specified angle may produce the same or substantially the same advantageous effects described above. The following describes electrode parameters of the IDT electrode included in the SAW resonator. The wavelength of the SAW resonator is determined by the wavelength λ, which is the repetition period of the electrode fingers322aor the electrode fingers322bincluded in the IDT electrode (see the middle section ofFIG.2). The electrode-to-electrode pitch is half the wavelength λ and is expressed as (W+S), where W denotes the line width of each of the electrode fingers322aincluded in the comb teeth-shaped electrode32aor each of the electrode fingers322bincluded in the comb teeth-shaped electrode32b, and S denotes the space width, or more specifically, the distance between the electrode finger322aand the electrode finger322badjacent to each other. The intersecting width of the pair of comb teeth-shaped electrodes (i.e., the comb teeth-shaped electrodes32aand32b) is denoted by L and is the length of an overlap between each electrode finger322aand each electrode finger322bviewed in the direction D (see the upper section ofFIG.2). The electrode duty ratio of each resonator refers to the proportion of the line width of the electrode fingers322aand322b, or more specifically, the ratio of the line width to the value obtained by adding the line width to the space width of the electrode fingers322aand322band is expressed as W/(W+S). Each electrode parameter will be specifically described later.
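Because the wavelength, the electrode-to-electrode pitch, and the electrode duty ratio are related by the expressions given above, each can be computed from the others. The following Python sketch is an illustration only and is not part of the patent disclosure; the numeric line width and space width are hypothetical example values rather than values taken from the preferred embodiments.

# Relations between the IDT electrode parameters defined above:
#   electrode-to-electrode pitch = W + S = lambda / 2
#   electrode duty ratio         = W / (W + S)

def idt_parameters(line_width_um, space_width_um):
    """Return the pitch, the wavelength, and the duty ratio for the given finger geometry."""
    pitch_um = line_width_um + space_width_um
    wavelength_um = 2.0 * pitch_um
    duty_ratio = line_width_um / pitch_um
    return pitch_um, wavelength_um, duty_ratio

# Hypothetical example with equal line and space widths of 0.525 um.
pitch_um, wavelength_um, duty_ratio = idt_parameters(0.525, 0.525)
print(f"pitch = {pitch_um:.3f} um, wavelength = {wavelength_um:.3f} um, duty ratio = {duty_ratio:.2f}")

With equal line and space widths the duty ratio is 0.5, which is the value used in the examples described below.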
1-3 IDT Electrodes of Series-Arm Resonators The following describes the structure of the IDT electrode of the first series-arm resonator110sand the IDT electrode of each of the second series-arm resonators121sto124swith reference toFIGS.3A and3B, respectively. FIG.3Aillustrates the IDT electrode of the first series-arm resonator110sincluded in the acoustic wave filter10.FIG.3Billustrates the IDT electrode of the second series-arm resonator121sincluded in the acoustic wave filter10. The second series-arm resonator121sinFIG.3Bwill be described below as an example of the second series-arm resonators121sto124s. Referring toFIG.3A, the electrode fingers322band the offset electrode fingers323ain the first series-arm resonator110seach have an atypical shape, or more specifically, are preferably T-shaped, for example. Referring toFIG.3B, the electrode fingers322band the offset electrode fingers323ain the second series-arm resonator121seach have an atypical shape, or more specifically, are preferably T-shaped, for example. The structure of the second ends e2of the electrode fingers322aand the second ends e2of the offset electrode fingers323b(not illustrated in the enlarged views inFIGS.3A and3B) is the same or substantially the same as the structure of the second ends e2of the electrode fingers322band the second ends e2of the offset electrode fingers323a. That is, the electrode fingers322aand the offset electrode fingers323bin the first series-arm resonator110sand the electrode fingers322aand the offset electrode fingers323bin the second series-arm resonator121seach have an atypical shape, or more specifically, are preferably T-shaped, for example. Each electrode finger has an atypical shape. That is, the second end e2that is not connected to the busbar electrode is wider than the central portion of the electrode finger. Specifically, the electrode fingers322aand322beach include an electrode-finger central portion cp and a wide portion wp located at the second end e2and being wider than the electrode-finger central portion cp. The wide portion wp is preferably rectangular or substantially rectangular, for example. Alternatively, the wide portion wp may be substantially octagonal, cross-shaped, or convex, for example. The electrode-finger central portion cp refers to a portion of each electrode finger except for end portions opposite each other in the direction in which the electrode finger extends. Each electrode finger322aincludes the wide portion wp such that the gap in the X direction between the wide portion wp of the electrode finger322aand the electrode finger322badjacent to the wide portion wp is smaller than the gap between the electrode-finger central portions cp of the electrode fingers322aand322badjacent to each other. Each electrode finger322bincludes the wide portion wp such that the gap between the wide portion wp of the electrode finger322band the electrode finger322aadjacent to the wide portion wp is smaller than the gap between the electrode-finger central portions cp of the electrode fingers322band322aadjacent to each other. For example, when the electrode duty ratio is about 0.5, the gap between the electrode fingers322aand322badjacent to each other is preferably about 0.25λ, and the gap between the wide portion wp of the electrode finger322aand the electrode finger322badjacent to the wide portion wp is preferably not less than about 0.1λ and not more than about 0.2λ. 
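The gap values given above follow from the duty ratio by simple arithmetic. The following Python sketch is an illustration only and is not part of the patent disclosure; it assumes, purely for the example, that a wide portion widens an electrode finger symmetrically on both sides, and it expresses all lengths in units of the wavelength λ.

# Illustrative gap arithmetic; all lengths are in units of the wavelength (lambda).
DUTY_RATIO = 0.5                        # electrode duty ratio used above
NOMINAL_GAP = (1.0 - DUTY_RATIO) / 2.0  # space width S = 0.25 * lambda for a duty ratio of 0.5

def gap_at_wide_portion(extra_width_each_side):
    """Gap between a wide portion and the adjacent electrode finger, assuming the
    wide portion widens the finger symmetrically by the given amount per side."""
    return NOMINAL_GAP - extra_width_each_side

print(f"gap between adjacent electrode fingers: {NOMINAL_GAP:.2f} * lambda")
for extra in (0.05, 0.15):  # hypothetical widening values per side
    print(f"widening of {extra:.2f} * lambda per side -> gap of "
          f"{gap_at_wide_portion(extra):.2f} * lambda")

The two hypothetical widening values reproduce the end points of the preferred gap range of about 0.1λ to about 0.2λ described above.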
In the present preferred embodiment, L1is greater than L2(L2<L1), where L1denotes the length of the wide portion wp of each of the electrode fingers322aand322bin the first series-arm resonator110sin the direction in which the electrode fingers extend, and L2denotes the length of the wide portion wp of each of the electrode fingers322aand322bin the second series-arm resonators121sto124sin the direction in which the electrode fingers extend. The same holds true for the offset electrode fingers. That is, L1is greater than L2, where L1denotes the length of the wide portion wp of each of the offset electrode fingers323aand323bin the first series-arm resonator110sin the direction in which the offset electrode fingers extend, and L2denotes the length of the wide portion wp of each of the offset electrode fingers323aand323bin the second series-arm resonators121sto124sin the direction in which the offset electrode fingers extend. That is, the wide portion wp of each electrode finger in the first series-arm resonator110sis longer than the wide portion wp of each electrode finger in any of the other series-arm resonators (i.e., the series-arm resonators121sto124s). The length L1of the wide portion wp of each electrode finger in the first series-arm resonator110sis preferably, for example, not less than about 0.1λ and not more than about 0.4λ, where λ denotes the wavelength of the acoustic wave filter10. The intersecting width L of the electrode fingers322aand322bis preferably not greater than about 20λ, for example. In the present preferred embodiment, the offset electrode fingers323aand323beach include an electrode-finger central portion cp and a wide portion wp located at the second end e2and being wider than the electrode-finger central portion cp. The length L1of the wide portion wp of each of the offset electrode fingers323aand323bis equal to the length L1of the wide portion wp of each of the electrode fingers322aand322b. 1-4 Example 1 According to Preferred Embodiment 1 The following describes characteristics of the first series-arm resonator110saccording to Example 1 of Preferred Embodiment 1 with reference to Table 1 andFIGS.4to6B. Table 1 shows the fractional bandwidth (%) and the return loss (dB) of the first series-arm resonator110swith variations in the length L1of the wide portion wp of each of the electrode fingers (i.e., the electrode fingers322a, the electrode fingers322b, the offset electrode fingers323a, and the offset electrode fingers323b) in the first series-arm resonator110s. The values of the fractional bandwidth are given byFIG.5, and the values of the return loss are given byFIGS.6A and6B.FIGS.5to6Bwill be described later.
TABLE 1
Length L1 of Wide Portion / Fractional Bandwidth (%) / Return Loss (dB)
0λ (without wide portion): 3.91 / 0.96
0.1λ: 3.82 / 0.72
0.2λ: 3.74 / 0.74
0.3λ: 3.64 / 0.75
0.4λ: 3.56 / 0.80
0.5λ: 3.50 / 0.89
0.6λ: 3.47 / 0.93
0.7λ: 3.46 / 0.92
Conditions in Example 1 were as follows: the wavelength λ of the acoustic wave as determined by the electrode-to-electrode pitch of the IDT electrode of the first series-arm resonator110swas about 2.1 μm; the intersecting width L was about 12λ; the number of electrode finger pairs was 200; and the electrode duty ratio was about 0.5. Referring to Table 1, 0λ provided as the length L1of the wide portion wp indicates that none of the electrode fingers included the wide portion wp and that the width of the electrode-finger central portion cp of each electrode finger was equal or substantially equal to the width of the second end e2of each electrode finger.
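The conditions listed for Example 1 translate directly into physical electrode dimensions. The following Python sketch is a rough illustration only and is not part of the patent disclosure; in particular, the estimate of the overall IDT length assumes, as an approximation not stated in this description, that one pair of electrode fingers occupies roughly one wavelength along the propagation direction.

# Rough geometry of the Example 1 IDT electrode, based on the conditions stated above.
WAVELENGTH_UM = 2.1          # wavelength lambda determined by the electrode-to-electrode pitch
INTERSECTING_WIDTH_WL = 12   # intersecting width L, in units of lambda
NUMBER_OF_PAIRS = 200        # number of electrode finger pairs
DUTY_RATIO = 0.5             # electrode duty ratio

pitch_um = WAVELENGTH_UM / 2.0                          # W + S
line_width_um = DUTY_RATIO * pitch_um                   # W
intersecting_width_um = INTERSECTING_WIDTH_WL * WAVELENGTH_UM
approx_idt_length_um = NUMBER_OF_PAIRS * WAVELENGTH_UM  # assumes one pair per wavelength

print(f"electrode-to-electrode pitch W+S : {pitch_um:.3f} um")
print(f"line width W                     : {line_width_um:.3f} um")
print(f"intersecting width L             : {intersecting_width_um:.1f} um")
print(f"approximate IDT length           : {approx_idt_length_um:.0f} um")

Under these assumptions the intersecting width corresponds to about 25 μm and the IDT occupies roughly 0.4 mm along the propagation direction.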
Dividing the difference between the anti-resonant frequency and the resonant frequency by the resonant frequency and multiplying the quotient by 100 gives the fractional bandwidth. FIG.4is a graphical representation of the impedance characteristics of the first series-arm resonator110saccording to Example 1.FIG.4illustrates the impedance characteristics of the series-arm resonator110swith variations in the length L1of the wide portion wp within a range of 0 to about 0.7λ. FIG.4indicates that as the length L1of the wide portion wp of each electrode finger in the first series-arm resonator110sincreased gradually within a range of 0 to about 0.7λ, the anti-resonant frequency of the first series-arm resonator110swas shifted toward the lower frequency side. FIG.5is a graphical representation of the fractional bandwidth of the first series-arm resonator110saccording to Example 1.FIG.5illustrates the fractional bandwidth of the series-arm resonator110swith variations in the length L1of the wide portion wp within a range of 0 to about 0.7λ. FIG.5indicates that when the length L1of the wide portion wp was within a range of about 0.1λ to about 0.4λ, the fractional bandwidth decreased steadily as the length L1of the wide portion wp increased.FIG.5also indicates that when the length L1of the wide portion wp was not less than about 0.5λ, the fractional bandwidth decreased gradually at a rate lower than the rate of change in fractional bandwidth with the length L1within a range of about 0.1λ to about 0.4λ. As can be seen fromFIGS.4and5, increasing the length L1of the wide portion wp provides a reduction in the fractional bandwidth and causes a shift of the anti-resonant frequency toward the low frequency side. The attenuation slope in a frequency range higher than the pass band of the acoustic wave filter10becomes steeper accordingly. When the length L1of the wide portion wp is unduly large, spurious waves can be generated in the pass band of the acoustic wave filter as will be described below. FIG.6Ais a graphical representation of the return loss of the first series-arm resonator according to Example 1, where each electrode finger in the first series-arm resonator included a wide portion having a length of 0λ, about 0.5λ, about 0.6λ, or about 0.7λ.FIG.6Bis a graphical representation of the return loss of the first series-arm resonator according to Example 1, where each electrode finger in the first series-arm resonator included a wide portion having a length of about 0.1λ, about 0.2λ, about 0.3λ, or about 0.4λ. The pass band of the acoustic wave filter10including the first series-arm resonator110sinFIGS.6A and6Bwas in a range of about 1,730 MHz to about 1,850 MHz. FIG.6Aindicates that when the length L1of the wide portion wp was 0λ (i.e., when none of the electrode fingers included the wide portion wp) or when the length L1of the wide portion wp was not less than about 0.5λ, spurious waves were generated in the pass band of the acoustic wave filter10, resulting in an increase in return loss. Meanwhile,FIG.6Bindicates that when the length L1of the wide portion wp was not less than about 0.1λ and not more than about 0.4λ, spurious waves were reduced or prevented, and the return loss of the first series-arm resonator110swas reduced accordingly.
More specifically, when the length L1of the wide portion wp was not less than about 0.1λ and not more than about 0.4λ, the return loss of the first series-arm resonator110swas not more than about 0.8 dB, that is, did not exceed the level that would adversely affect the bandpass characteristics of the acoustic wave filter10. This indicates that setting the length L1of the wide portion wp to any value from about 0.1λ to about 0.4λ enables a reduction in the return loss in the pass band. Referring toFIGS.6A and6B, the return loss observed at or around a frequency of about 1,940 MHz outside the pass band was an excitation mode in a waveguide on the wide portion wp and presumably had no direct impact on the pass band of the acoustic wave filter10. The following describes the bandpass characteristics of the acoustic wave filter10according to Example 1 of Preferred Embodiment 1 with reference toFIG.7. FIG.7is a graphical representation of the bandpass characteristics of the acoustic wave filter10according to Example 1.FIG.7shows the insertion loss of the acoustic wave filter according to Example 1 and the insertion loss of an acoustic wave filter according to Comparative Example 1. The acoustic wave filter10according to Example 1 differed from the acoustic wave filter according to Comparative Example 1 in that the length L1of the wide portion wp of each electrode finger in the first series-arm resonator110swas greater than the length L2of the wide portion wp of each electrode finger in the second series-arm resonators121sto124s. More specifically, the length L1of the wide portion wp of each electrode finger in the first series-arm resonator110swas about 0.4λ, and the length L2of the wide portion wp of each electrode finger in the second series-arm resonators121sto124swas about 0.2λ. Further, electrode fingers in any of the series-arm resonators110sand121sto124sof the acoustic wave filter according to Comparative Example 1 all included wide portions wp that were of the same length. More specifically, the wide portion wp of each electrode finger in the series-arm resonators110sand121sto124shad a length of about 0.2λ. Referring toFIG.7, in a frequency range higher than the pass band, the attenuation slope of the acoustic wave filter10according to Example 1 is steeper than the attenuation slope of the acoustic wave filter according to Comparative Example 1. More specifically, the spacing (difference) between the frequency with an insertion loss of about 3 dB and the frequency with an insertion loss of about 55 dB in Comparative Example 1 was about 13.72 MHz, and the corresponding spacing (difference) in Example 1 was about 11.17 MHz. That is, a reduction of about 2.55 MHz was observed in Example 1. As to the acoustic wave filter10according to Example 1, no return loss causing potential problems was observed in the pass band of the acoustic wave filter, or more specifically, in a frequency range of about 1,710 MHz to about 1,785 MHz. The above design, in which the length L1of the wide portion wp of each electrode finger in the first series-arm resonator110sis greater than the length L2of the wide portion wp of each electrode finger in the second series-arm resonators121sto124s, enables a reduction in the return loss in the pass band while enabling the attenuation slope in a frequency range higher than the pass band to become steeper. 
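Both figures of merit discussed in Example 1, the fractional bandwidth and the steepness of the attenuation slope, reduce to simple arithmetic. The following Python sketch is an illustration only and is not part of the patent disclosure; the resonant and anti-resonant frequencies passed to the function are hypothetical values chosen merely to reproduce the fractional bandwidth of about 3.56% listed in Table 1 for a wide-portion length of about 0.4λ, while the 3 dB and 55 dB spacings are the values compared above.

def fractional_bandwidth_percent(resonant_mhz, anti_resonant_mhz):
    """Fractional bandwidth as defined above: (fa - fr) / fr * 100."""
    return (anti_resonant_mhz - resonant_mhz) / resonant_mhz * 100.0

# Hypothetical resonant/anti-resonant pair chosen only to illustrate the formula.
resonant_mhz, anti_resonant_mhz = 1800.0, 1864.1
print(f"fractional bandwidth: {fractional_bandwidth_percent(resonant_mhz, anti_resonant_mhz):.2f} %")

# Steepness of the attenuation slope above the pass band, expressed as the spacing
# between the frequencies with about 3 dB and about 55 dB of insertion loss.
spacing_comparative_example_mhz = 13.72  # Comparative Example 1
spacing_example_mhz = 11.17              # Example 1
print(f"improvement: {spacing_comparative_example_mhz - spacing_example_mhz:.2f} MHz")

The printed improvement of about 2.55 MHz matches the reduction in the 3 dB to 55 dB spacing reported above for Example 1.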
1-5 Example 2 According to Preferred Embodiment 1 The following describes characteristics of the first series-arm resonator110saccording to Example 2 of Preferred Embodiment 1 with reference toFIG.8. FIG.8is a graphical representation of the relationship between the fractional bandwidth and the intersecting width L of the IDT electrode of the first series-arm resonator110saccording to Example 2.FIG.8shows how the fractional bandwidth (%) changed when the intersecting width L was varied with the capacitance of the IDT electrode being fixed, that is, with the area determined by multiplying the intersecting width by the number of electrode finger pairs being fixed. FIG.8indicates that when the intersecting width L was not less than about 7.5λ and not more than about 20λ, the fractional bandwidth (%) decreased as the intersecting width L was reduced.FIG.8also indicates that the fractional bandwidth did not change much when the intersecting width L was more than about 20λ. This indicates that setting the intersecting width L to about 20λ or less with no or substantially no change in the area of the IDT electrode enables a reduction in the fractional bandwidth. The attenuation slope in a frequency range outside the pass band of the acoustic wave filter10becomes steeper accordingly. The following describes the bandpass characteristics of the acoustic wave filter10according to Example 2 of Preferred Embodiment 1 with reference toFIG.9. FIG.9is a graphical representation of the bandpass characteristics of the acoustic wave filter10according to Example 2.FIG.9shows the insertion loss of the acoustic wave filter according to Example 2 and the insertion loss of an acoustic wave filter according to Comparative Example 2. The acoustic wave filter10according to Example 2 differed from the acoustic wave filter according to Comparative Example 2 in that the length L1of the wide portion wp of each electrode finger in the first series-arm resonator110swas about 0.4λ, and the length L2of the wide portion wp of each electrode finger in the second series-arm resonators121sto124swas about 0.2λ. The intersecting width L of the IDT electrode of the first series-arm resonator110swas about 12λ, and the intersecting width L of each of the IDT electrodes of the second series-arm resonators121sto124swas also about 12λ. Further, none of the electrode fingers in any of the series-arm resonators110sand121sto124sof the acoustic wave filter according to Comparative Example 2 included the wide portion wp, and the electrode-finger central portions cp and the second ends e2of the electrode fingers had the same or substantially the same width. The intersecting width L of each of the IDT electrodes of the series-arm resonators110sand121sto124swas about 30λ. Referring toFIG.9, in a frequency range higher than the pass band, the attenuation slope of the acoustic wave filter10according to Example 2 is steeper than the attenuation slope of the acoustic wave filter according to Comparative Example 2. More specifically, the spacing (difference) between the frequency with an insertion loss of about 3 dB and the frequency with an insertion loss of about 55 dB in Comparative Example 2 was about 15.09 MHz, and the corresponding spacing (difference) in Example 2 was about 11.17 MHz. That is, a reduction of about 3.92 MHz was observed in Example 2. 
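The trade-off behind FIG. 8 can be illustrated with a few lines of bookkeeping. The following Python sketch is an illustration only and is not part of the patent disclosure; it assumes that the capacitance of the IDT electrode is proportional to the product of the intersecting width and the number of electrode finger pairs, and it borrows the reference design of about 12λ with 200 pairs from Example 1.

# Keep the product (intersecting width x number of pairs) fixed, as described for FIG. 8.
REFERENCE_WIDTH_WL = 12.0   # intersecting width in units of lambda (from Example 1)
REFERENCE_PAIRS = 200       # number of electrode finger pairs (from Example 1)
FIXED_AREA = REFERENCE_WIDTH_WL * REFERENCE_PAIRS  # proportional to the capacitance (assumed)

for width_wl in (30.0, 20.0, 12.0, 7.5):  # intersecting widths within the range of FIG. 8
    required_pairs = FIXED_AREA / width_wl
    print(f"intersecting width {width_wl:5.1f} * lambda -> {required_pairs:6.1f} pairs")

Reducing the intersecting width toward about 20λ or less therefore calls for proportionally more electrode finger pairs, so the fractional bandwidth reduction shown in FIG. 8 is obtained without changing the capacitance of the IDT electrode.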
As to the acoustic wave filter10according to Example 2, no return loss causing potential problems was observed in the pass band of the acoustic wave filter, or more specifically, in a frequency range of about 1,710 MHz to about 1,785 MHz. Example 2 had an advantage over Comparative Example 2. That is, the above design, in which the length L1of the wide portion wp of each electrode finger in the first series-arm resonator110sis greater than the length L2of the wide portion wp of each electrode finger in the second series-arm resonators121sto124sand the intersecting width L of each IDT electrode is not more than about 20λ, prevents the return loss in the pass band from reaching a level causing potential problems while enabling the attenuation slope in a frequency range higher than the pass band to become steeper. Preferred Embodiment 2 The following describes a basic configuration of an acoustic wave filter according to Preferred Embodiment 2 of the present invention with reference toFIGS.10and11. The acoustic wave filter according to Preferred Embodiment 2 includes a third series-arm resonator130sin place of the second series-arm resonator123sin Preferred Embodiment 1. FIG.10is a circuit configuration diagram of an acoustic wave filter10A according to Preferred Embodiment 2. The acoustic wave filter10A includes series-arm resonators121s,122s,110s,130s, and124s, parallel-arm resonators121pto124p, and inductors121L and122L. The series-arm resonators121s,122s,110s,130s, and124sare disposed on a path connecting a first terminal Port1and the second terminal Port2. The parallel-arm resonators121pto124pare disposed between the path and a reference terminal (i.e., ground). The series-arm resonators121s,122s,110s,130s, and124sare connected in series on the path (series arm) connecting the first terminal Port1and the second terminal Port2. The second series-arm resonator121s, the second series-arm resonator122s, the first series-arm resonator110s, the third series-arm resonator130s, and the second series-arm resonator124sare connected in series in the stated order in the direction from the first terminal Port1to the second terminal Port2. The parallel-arm resonators121pto124pare connected in parallel and disposed on respective paths (parallel arms) each of which connects the reference terminal and a corresponding one of the points at which the series-arm resonators121s,122s,110s,130s, and124sare connected to each other. The series-arm resonators121s,122s,110s,130s, and124sand the parallel-arm resonators121pto124pare connected as described above to define the acoustic wave filter10A that is, for example, a ladder band-pass filter. The resonant frequency of the first series-arm resonator110s, the resonant frequencies of the second series-arm resonators121s,122s, and124s, and the resonant frequency of the third series-arm resonator130sare within the pass band of the acoustic wave filter10A. The anti-resonant frequency of the first series-arm resonator110sis lower than any of the anti-resonant frequencies of the second series-arm resonators121s,122s, and124s. The anti-resonant frequency of the third series-arm resonator130sis lower than any of the anti-resonant frequencies of the second series-arm resonators121s,122s, and124sand is higher than the anti-resonant frequency of the first series-arm resonator110s. The third series-arm resonator130smay be disposed between the second series-arm resonator122sand the first series-arm resonator110s.
That is, the third series-arm resonator130sis connected directly to the first series-arm resonator110s. FIG.11illustrates an IDT electrode of the third series-arm resonator130sincluded in the acoustic wave filter10A according to Preferred Embodiment 2. The IDT electrode of the third series-arm resonator130sis the same or substantially the same as the IDT electrode of the first series-arm resonator110s. That is, L3is greater than L2, where L3denotes the length of the wide portion wp of each of the electrode fingers in the third series-arm resonator130s, and L2denotes the length of the wide portion wp of each of the electrode fingers of the second series-arm resonators121s,122s, and124s. The length L3is not more than the length L1of the wide portion wp of each of the electrode fingers in the first series-arm resonator110s(L2<L3≤L1). The acoustic wave filter10A according to Preferred Embodiment 2 includes the third series-arm resonator130sdisposed on the path. The first series-arm resonator110sand the third series-arm resonator130sare connected in series. The third series-arm resonator130sincludes an IDT electrode including a pair of comb teeth-shaped electrodes (i.e., a comb teeth-shaped electrode32aand a comb teeth-shaped electrode32b) provided on a substrate including a piezoelectric layer. The comb teeth-shaped electrode32aof the third series-arm resonator130sincludes electrode fingers322aand a busbar electrode321a, and the comb teeth-shaped electrode32bof the third series-arm resonator130sincludes electrode fingers322band a busbar electrode321b. The electrode fingers322aand322bextend in a direction orthogonal or substantially orthogonal to the propagation direction of the acoustic wave. The busbar electrode321aconnects first ends e1of the electrode fingers322ato each other, and the busbar electrode321bconnects first ends e1of the electrode fingers322bto each other. The direction in which second ends e2of the electrode fingers322aare aligned with each other and second ends e2of the electrode fingers322bare aligned with each other is denoted by D and crosses the propagation direction of the acoustic wave. The electrode fingers of the IDT electrode of the third series-arm resonator130seach include an electrode-finger central portion cp and a wide portion wp located at the second end e2and being wider than the electrode-finger central portion cp. L3is greater than L2, where L3denotes the length of the wide portion wp of each of the electrode fingers322aand322bin the third series-arm resonator130sin the direction in which the electrode fingers extend, and L2denotes the length of the wide portion wp of each of the electrode fingers322aand322bin the second series-arm resonators121s,122s, and124sin the direction in which the electrode fingers extend. As in Preferred Embodiment 1, the acoustic wave filter10A according to Preferred Embodiment 2 enables a reduction in the return loss in the pass band of the acoustic wave filter10A while enabling the attenuation slope in a frequency range higher than the pass band to become steeper. Preferred Embodiment 3 Preferred Embodiment 1 describes that the acoustic wave filter10includes only a ladder filter structure. In some preferred embodiments of the present invention, the filter includes a longitudinally coupled filter structure in addition to the ladder filter structure. A filter according to Preferred Embodiment 3 of the present invention includes these structures as will be described below.
FIG.12is a circuit configuration diagram of an acoustic wave filter10B according to Preferred Embodiment 3. As illustrated inFIG.12, the acoustic wave filter10B includes a second series-arm resonator121s, a first series-arm resonator110s, parallel-arm resonators121pand124p, and a longitudinally coupled resonator150. That is, the acoustic wave filter10B includes the longitudinally coupled resonator150in addition to the ladder filter structure. The longitudinally coupled resonator150has a longitudinally coupled filter structure disposed between a first terminal Port1and a second terminal Port2. The longitudinally coupled resonator150in the present preferred embodiment is preferably closer than the first series-arm resonator110sto the second terminal Port2and includes, for example, two reflectors and nine IDTs disposed between the reflectors. In some preferred embodiments of the present invention, the longitudinally coupled resonator150may be disposed between the second series-arm resonator121sand the first series-arm resonator110s. The longitudinally coupled resonator150does not necessarily include nine IDTs and may include three or more IDTs, for example. As in the above preferred embodiments, the acoustic wave filter10B enables a reduction in the return loss in the pass band of the acoustic wave filter10B while enabling the attenuation slope in a frequency range higher than the pass band to become steeper. As described above, the acoustic wave filter10according to Preferred Embodiment 1 includes the first series-arm resonator110sand the second series-arm resonators121sto124s. The first series-arm resonator110sand the second series-arm resonators121sto124sare disposed on the path connecting the first terminal Port1and the second terminal Port2. The first series-arm resonator110shas a lower anti-resonant frequency than any other series-arm resonator included in the acoustic wave filter10. The first series-arm resonator110sand the second series-arm resonators121sto124seach include an IDT electrode including a pair of comb teeth-shaped electrodes (i.e., the comb teeth-shaped electrodes32aand32b) provided on the substrate320including the piezoelectric layer327. Electrodes of the pair of comb teeth-shaped electrodes (i.e., the comb teeth-shaped electrodes32aand32b) of the first series-arm resonator110sand electrodes of the pair of comb teeth-shaped electrodes (i.e., the comb teeth-shaped electrodes32aand32b) of each of the second series-arm resonators121sto124seach include the electrode fingers322a, the electrode fingers322b, the busbar electrode321a, and the busbar electrode321b. The electrode fingers322aand322bextend in the direction orthogonal or substantially orthogonal to the propagation direction of the acoustic wave. The busbar electrode321aconnects the first ends e1of the electrode fingers322ato each other, and the busbar electrode321bconnects the first ends e1of the electrode fingers322bto each other. The direction D in which the second ends e2of the electrode fingers322aare aligned with each other and the second ends e2of the electrode fingers322bare aligned with each other crosses the propagation direction of the acoustic wave. The electrode fingers of the IDT electrode of the first series-arm resonator110sand the electrode fingers of the IDT electrodes of the second series-arm resonators121sto124seach include the electrode-finger central portion cp and the wide portion wp located at the second end e2and being wider than the electrode-finger central portion cp.
L1is greater than L2, where L1denotes the length of the wide portion wp of each of the electrode fingers322aand322bin the first series-arm resonator110sin the direction in which the electrode fingers extend, and L2denotes the length of the wide portion wp of each of the electrode fingers322aand322bin the second series-arm resonators121sto124sin the direction in which the electrode fingers extend. The above design, in which the electrode fingers322aand322binclude the respective wide portions wp and the length L1of the wide portion wp of each electrode finger in the first series-arm resonator110sis greater than the length L2of the wide portion wp of each electrode finger in the second series-arm resonators121sto124s, enables a reduction in the return loss in the pass band of the acoustic wave filter10while enabling the attenuation slope in a frequency range higher than the pass band to become steeper. The length L1of the wide portion wp of each electrode finger in the first series-arm resonator110smay preferably be, for example, not less than about 0.1λ and not more than about 0.4λ, where λ denotes the wavelength of the acoustic wave filter10. The return loss in the pass band of the acoustic wave filter10may thus be prevented from reaching a level that causes potential problems. The intersecting width L of the IDT electrode of the first series-arm resonator110smay preferably be not more than about 20λ, for example. The fractional bandwidth of the first series-arm resonator110smay thus be reduced while the area determined by multiplying the intersecting width of the IDT electrode by the number of electrode finger pairs is fixed. This prevents the return loss in the pass band from reaching a level causing potential problems while enabling the attenuation slope in a frequency range higher than the pass band of the acoustic wave filter10to become steeper with the area being fixed. The substrate320may include the piezoelectric layer327, the high-acoustic-velocity support substrate329, and the low-acoustic-velocity film328disposed between the high-acoustic-velocity support substrate329and the piezoelectric layer327. The piezoelectric layer327includes two main surfaces, and the IDT electrode may be disposed on one of the two main surfaces of the piezoelectric layer327. The acoustic velocity of the bulk wave propagating through the high-acoustic-velocity support substrate329is higher than the acoustic velocity of the acoustic wave propagating through the piezoelectric layer327. The acoustic velocity of the bulk wave propagating through the low-acoustic-velocity film328is lower than the acoustic velocity of the acoustic wave propagating through the piezoelectric layer327. The Q-factor at the resonant frequency and the Q-factor at the anti-resonant frequency of a resonator having the structure mentioned above may be much higher than the corresponding Q-factors of a resonator having a known structure including a single piezoelectric substrate. That is, the multilayer structure may be used to obtain a SAW resonator with a high Q-factor, and the SAW resonator may be used to obtain an acoustic wave filter having a small insertion loss. The acoustic wave filter10A according to Preferred Embodiment 2 also includes the third series-arm resonator130sdisposed on the path. The first series-arm resonator110sand the third series-arm resonator130sare connected in series. 
The anti-resonant frequency of the third series-arm resonator130sis lower than any of the anti-resonant frequencies of the second series-arm resonators121s,122s, and124s. The third series-arm resonator130sincludes an IDT electrode including a pair of comb teeth-shaped electrodes (i.e., the comb teeth-shaped electrodes32aand32b) on the substrate320including the piezoelectric layer327. The comb teeth-shaped electrode32aof the third series-arm resonator130sincludes the electrode fingers322aand the busbar electrode321a, and the comb teeth-shaped electrode32bof the third series-arm resonator130sincludes the electrode fingers322band the busbar electrode321b. The electrode fingers322aand322bextend in the direction orthogonal or substantially orthogonal to the propagation direction of the acoustic wave. The busbar electrode321aconnects the first ends e1of the electrode fingers322ato each other, and the busbar electrode321bconnects the first ends e1of the electrode fingers322bto each other. The direction D in which the second ends e2of the electrode fingers322aare aligned with each other and the second ends e2of the electrode fingers322bare aligned with each other crosses the propagation direction of the acoustic wave. The electrode fingers of the IDT electrode of the third series-arm resonator130seach include the electrode-finger central portion cp and the wide portion wp located at the second end e2and being wider than the electrode-finger central portion cp. L3is greater than L2, where L3denotes the length of the wide portion wp of each of the electrode fingers322aand322bin the third series-arm resonator130sin the direction in which the electrode fingers extend, and L2denotes the length of the wide portion wp of each of the electrode fingers322aand322bin the second series-arm resonators121s,122s, and124sin the direction in which the electrode fingers extend. The above design, in which L3is greater than L2and the third series-arm resonator130sis connected directly to the first series-arm resonator110s, enables a reduction in the return loss in the pass band of the acoustic wave filter10A while enabling the attenuation slope in a frequency range higher than the pass band to become much steeper. The acoustic wave filters according to Preferred Embodiments 1, 2, and 3 of the present invention have been described above. Although the present invention has been described with reference to preferred embodiments, the present invention also includes other preferred embodiments provided by varying combinations of components of the aforementioned preferred embodiments, other modifications achieved through various alterations to the preferred embodiments that may be conceived by those skilled in the art within a range not departing from the spirit of the present invention, and various types of apparatuses including the acoustic wave filters according to preferred embodiments of the present invention. The preferred embodiments described above each include the series-arm resonators including offset electrode fingers. In some preferred embodiments, however, none of the series-arm resonators includes the offset electrode fingers. In the preferred embodiments described above, the length L1of the wide portion wp of each electrode finger in the first series-arm resonator110sis greater than the length L2of the wide portion wp of each electrode finger in the second series-arm resonators121sto124s. 
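As a rough illustration of the dimensional relationships summarized above, the following sketch (Python, with entirely hypothetical example values expressed in units of the wavelength λ; none of the numbers below are taken from the present disclosure) checks a candidate electrode-finger design against the stated guidelines: L1 greater than L2, L3 greater than L2, L1 between about 0.1λ and about 0.4λ, and an intersecting width of not more than about 20λ.

    # Minimal design-rule check for the wide-portion lengths described above.
    # All values are hypothetical examples in units of the wavelength lambda
    # (lam); they are not dimensions disclosed for the filters 10, 10A, or 10B.

    def check_wide_portion_design(L1, L2, L3, intersecting_width, lam=1.0):
        """Return pass/fail flags for the stated design guidelines."""
        return {
            "L1 > L2 (first vs. second series-arm resonators)": L1 > L2,
            "L3 > L2 (third vs. second series-arm resonators)": L3 > L2,
            "0.1*lambda <= L1 <= 0.4*lambda": 0.1 * lam <= L1 <= 0.4 * lam,
            "intersecting width <= 20*lambda": intersecting_width <= 20.0 * lam,
        }

    if __name__ == "__main__":
        # Hypothetical candidate design, in units of lambda.
        flags = check_wide_portion_design(L1=0.25, L2=0.10, L3=0.20,
                                          intersecting_width=18.0)
        for rule, ok in flags.items():
            print(f"{rule}: {'OK' if ok else 'violated'}")

A design satisfying all four flags follows the relationships described above; the check itself is only an annotation and not part of the disclosed filters.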
In some preferred embodiments, however, the length L1of the wide portions wp of, for example, about 50% or more of the electrode fingers in the first series-arm resonator110sis greater than the length L2. The acoustic wave filter10may be used as a transmitting filter or a receiving filter. The acoustic wave filter10may be used as a transmitting filter in the following manner: a transmission wave generated by a transmitting circuit, such as a radio-frequency integrated circuit (RFIC), for example, and input to the acoustic wave filter10through the second terminal Port2is filtered in a predetermined transmission pass band, and the resultant wave is output to the first terminal Port1. The acoustic wave filter10may be used as a receiving filter in the following manner: a reception wave input to the acoustic wave filter10through the first terminal Port1is filtered in a predetermined reception pass band, and the resultant wave is output to the second terminal Port2. The first terminal Port1may be an input terminal or an output terminal. Similarly, the second terminal Port2may be an input terminal or an output terminal. When the first terminal Port1is an input terminal, the second terminal Port2may be an output terminal. When the second terminal Port2is an input terminal, the first terminal Port1may be an output terminal. Preferred embodiments of the present invention may be included, for example, in multiplexers including acoustic wave filters, front-end circuits, and communication devices and thus have wide applicability to communication apparatuses, such as mobile phones, for example. While preferred embodiments of the present invention have been described above, it is to be understood that variations and modifications will be apparent to those skilled in the art without departing from the scope and spirit of the present invention. The scope of the present invention, therefore, is to be determined solely by the following claims.
11863160 | Throughout this description, elements appearing in figures are assigned three-digit or four-digit reference designators, where the two least significant digits are specific to the element and the one or two most significant digit is the figure number where the element is first introduced. An element that is not described in conjunction with a figure may be presumed to have the same characteristics and function as a previously-described element having the same reference designator. DETAILED DESCRIPTION Description of Apparatus The Transversely-Excited Film Bulk Acoustic Resonator (XBAR) is a new resonator structure for use in acoustic filters for filtering microwave signals. The XBAR is described in U.S. Pat. No. 10,491,291, titled TRANSVERSELY EXCITED FILM BULK ACOUSTIC RESONATOR, which is incorporated herein by reference in its entirety. An XBAR resonator comprises a conductor pattern having an interdigital transducer (IDT) formed on a thin floating layer or diaphragm of a piezoelectric material. The IDT has two busbars which are each attached to a set of fingers and the two sets of fingers are interleaved on the diaphragm over a cavity formed in a substrate upon which the resonator is mounted. The diaphragm spans the cavity and may include front-side and/or back-side dielectric layers. A microwave signal applied to the IDT excites a shear primary acoustic wave in the piezoelectric diaphragm, such that the acoustic energy flows substantially normal to the surfaces of the layer, which is orthogonal or transverse to the direction of the electric field generated by the IDT. XBAR resonators provide very high electromechanical coupling and high frequency capability. Acoustic filters are typically required to match a system impedance, such as 50 ohms. The system impedance and operating frequency dictate a required equivalent capacitance C0for a filter using a conventional ladder circuit. C0is inversely proportional to frequency. XBAR resonators have low capacitance per unit area compared to other acoustic resonators. Thus, ladder filter circuits using XBAR resonators may be much larger than comparable filters using other types of acoustic resonators. The following describes a filter circuit architecture that allows low frequency filters to be implemented with small XBAR resonators. It also describes improved XBAR resonators, filters and fabrication techniques that reduce static capacitance in radio frequency filters having sub-filters connected in parallel between two ports where the sub-filters have XBARs on different substrates of different die. The sub-filter XBARs have a piezoelectric plate with a back surface attached to the different substrates and portions of the plate forming diaphragms spanning cavities in the substrates. Interleaved fingers of IDTs are on the diaphragms and the thicknesses of the piezoelectric plate portions may be different. FIG. 1 shows a simplified schematic top view, orthogonal cross-sectional views, and a detailed cross-sectional view of a transversely-excited film bulk acoustic resonator (XBAR)100. XBAR resonators such as the resonator100may be used in a variety of RF filters including band-reject filters, band-pass filters, duplexers, and multiplexers. XBARs are particularly suited for use in filters for communications bands with frequencies above 3 GHz. The matrix XBAR filters described in this patent are also suited for frequencies above 1 GHz. 
The XBAR100is made up of a thin film conductor pattern formed on a surface of a piezoelectric plate110having parallel front and back surfaces112,114, respectively. The piezoelectric plate is a thin single-crystal layer of a piezoelectric material such as lithium niobate, lithium tantalate, lanthanum gallium silicate, gallium nitride, or aluminum nitride. The piezoelectric plate is cut such that the orientation of the X, Y, and Z crystalline axes with respect to the front and back surfaces is known and consistent. The piezoelectric plate may be Z-cut (which is to say the Z axis is normal to the front and back surfaces112,114), rotated Z-cut, or rotated YX cut. XBARs may be fabricated on piezoelectric plates with other crystallographic orientations. The back surface114of the piezoelectric plate110is attached to a surface of the substrate120except for a portion of the piezoelectric plate110that forms a diaphragm115spanning a cavity140formed in the substrate. The portion of the piezoelectric plate that spans the cavity is referred to herein as the “diaphragm”115due to its physical resemblance to the diaphragm of a microphone. As shown inFIG.1, the diaphragm115is contiguous with the rest of the piezoelectric plate110around all of a perimeter145of the cavity140. In this context, “contiguous” means “continuously connected without any intervening item”. In other configurations, the diaphragm115may be contiguous with the piezoelectric plate around at least 50% of the perimeter145of the cavity140. The substrate120provides mechanical support to the piezoelectric plate110. The substrate120may be, for example, silicon, sapphire, quartz, or some other material or combination of materials. The back surface114of the piezoelectric plate110may be bonded to the substrate120using a wafer bonding process. Alternatively, the piezoelectric plate110may be grown on the substrate120or attached to the substrate in some other manner. The piezoelectric plate110may be attached directly to the substrate or may be attached to the substrate120via one or more intermediate material layers (not shown inFIG.1). “Cavity” has its conventional meaning of “an empty space within a solid body.” The cavity140may be a hole completely through the substrate120(as shown in Section A-A and Section B-B) or a recess in the substrate120under the diaphragm115. The cavity140may be formed, for example, by selective etching of the substrate120before or after the piezoelectric plate110and the substrate120are attached. The conductor pattern of the XBAR100includes an interdigital transducer (IDT)130. The IDT130includes a first plurality of parallel fingers, such as finger136, extending from a first busbar132and a second plurality of fingers extending from a second busbar134. The first and second pluralities of parallel fingers are interleaved. The interleaved fingers overlap for a distance AP, commonly referred to as the “aperture” of the IDT. The center-to-center distance L between the outermost fingers of the IDT130is the “length” of the IDT. The first and second busbars132,134serve as the terminals of the XBAR100. A radio frequency or microwave signal applied between the two busbars132,134of the IDT130excites a primary acoustic mode within the piezoelectric plate110. 
The primary acoustic mode of an XBAR is a bulk shear mode where acoustic energy propagates along a direction substantially orthogonal to the surface of the piezoelectric plate110, which is also normal, or transverse, to the direction of the electric field created by the IDT fingers. Thus, the XBAR is considered a transversely-excited film bulk wave resonator. The IDT130is positioned on the piezoelectric plate110such that at least the fingers of the IDT130are disposed on the diaphragm115of the piezoelectric plate which spans, or is suspended over, the cavity140. As shown inFIG.1, the cavity140has a rectangular shape with an extent greater than the aperture AP and length L of the IDT130. A cavity of an XBAR may have a different shape, such as a regular or irregular polygon. The cavity of an XBAR may have more or fewer than four sides, which may be straight or curved. The detailed cross-section view (Detail C) shows two IDT fingers136a,136bon the surface of the piezoelectric plate110. The dimension p is the “pitch” of the IDT and the dimension w is the width or “mark” of the IDT fingers. A dielectric layer150may be formed between and optionally over (see IDT finger136a) the IDT fingers. The dielectric layer150may be a non-piezoelectric dielectric material, such as silicon dioxide or silicon nitride. The dielectric layer150may be formed of multiple layers of two or more materials. The IDT fingers136aand136bmay be aluminum, copper, beryllium, gold, tungsten, molybdenum, alloys and combinations thereof, or some other conductive material. Thin (relative to the total thickness of the conductors) layers of other metals, such as chromium or titanium, may be formed under and/or over and/or as layers within the fingers to improve adhesion between the fingers and the piezoelectric plate110and/or to passivate or encapsulate the fingers and/or to improve power handling. The busbars of the IDT130may be made of the same or different materials as the fingers. For ease of presentation inFIG.1, the geometric pitch and width of the IDT fingers is greatly exaggerated with respect to the length (dimension L) and aperture (dimension AP) of the XBAR. A typical XBAR has more than ten parallel fingers in the IDT110. An XBAR may have hundreds of parallel fingers in the IDT110. Similarly, the thickness of the fingers in the cross-sectional views is greatly exaggerated. An XBAR based on shear acoustic wave resonances can achieve better performance than current state-of-the art surface acoustic wave (SAW), film-bulk-acoustic-resonators (FBAR), and solidly-mounted-resonator bulk-acoustic-wave (SMR BAW) devices. In particular, the piezoelectric coupling for shear wave XBAR resonances can be high (>20%) compared to other acoustic resonators. High piezoelectric coupling enables the design and implementation of microwave and millimeter-wave filters of various types with appreciable bandwidth. The basic behavior of acoustic resonators, including XBARs, is commonly described using the Butterworth Van Dyke (BVD) circuit model as shown inFIG.2A. The BVD circuit model consists of a motional arm and a static arm. The motional arm includes a motional inductance Lm, a motional capacitance Cm, and a resistance Rm. The static arm includes a static capacitance C0and a resistance R0. 
While the BVD model does not fully describe the behavior of an acoustic resonator, it does a good job of modeling the two primary resonances that are used to design band-pass filters, duplexers, and multiplexers (multiplexers are filters with more than 2 input or output ports with multiple passbands). The first primary resonance of the BVD model is the motional resonance caused by the series combination of the motional inductance Lmand the motional capacitance Cm. The second primary resonance of the BVD model is the anti-resonance caused by the combination of the motional inductance Lm, the motional capacitance Cm, and the static capacitance C0. In a lossless resonator (Rm=R0=0), the frequency Fr of the motional resonance is given by Fr = 1/(2π√(Lm·Cm)) (1). The frequency Fa of the anti-resonance is given by Fa = Fr·√(1 + 1/γ) (2), where γ=C0/Cm is dependent on the resonator structure and the type and the orientation of the crystalline axes of the piezoelectric material. FIG.2Bis a graph200of the magnitude of admittance of a theoretical lossless acoustic resonator. The acoustic resonator has a resonance212at a resonance frequency where the admittance of the resonator approaches infinity. The resonance is due to the series combination of the motional inductance Lmand the motional capacitance Cmin the BVD model ofFIG.2A. The acoustic resonator also exhibits an anti-resonance214where the admittance of the resonator approaches zero. The anti-resonance is caused by the combination of the motional inductance Lm, the motional capacitance Cm, and the static capacitance C0. In a lossless resonator (Rm=R0=0), the frequency Fr of the resonance is given by Fr = 1/(2π√(Lm·Cm)) (1). The frequency Fa of the anti-resonance is given by Fa = Fr·√(1 + 1/γ) (2). In over-simplified terms, the lossless acoustic resonator can be considered a short circuit at the resonance frequency212and an open circuit at the anti-resonance frequency214. The resonance and anti-resonance frequencies inFIG.2Bare representative, and an acoustic resonator may be designed for other frequencies. FIG.2Cshows the circuit symbol for an acoustic resonator such as an XBAR. This symbol will be used to designate each acoustic resonator in schematic diagrams of filters in subsequent figures. FIG.3Ais a schematic diagram of a matrix filter300using acoustic resonators. The matrix filter300includes an array310of n sub-filters320-1,320-2,320-nconnected in parallel between a first filter port (FP1) and a second filter port (FP2), where n is an integer greater than one. Each of the n sub-filters320-1,320-2,320-nis a bandpass filter having a bandwidth about 1/n times the bandwidth of the matrix filter300. The sub-filters320-1,320-2,320-nhave contiguous passbands such that the bandwidth of the matrix filter300is equal to the sum of the bandwidths of the constituent sub-filters. In the subsequent examples in this patent n=3. n can be less than or greater than 3 as necessary to provide the desired bandwidth for the matrix filter300. In some cases, the n sub-filters320-1,320-2,320-nmay include one or more XBARs. The filter300and/or sub-filters may be RF filters that pass frequency bands defined by the 5G NR standard. The array310of sub-filters is terminated at the FP1end by acoustic resonators XL1and XH1, which are preferably but not necessarily XBARs. The array310of sub-filters is terminated at the FP2end by acoustic resonators XL2and XH2, which are preferably but not necessarily XBARs.
The acoustic resonators XL1, XL2, XH1, and XH2create “transmission zeros” at their respective resonance frequencies. A “transmission zero” is a frequency where the input-output transfer function of the filter300is very low (and would be zero if the acoustic resonators XL1, XL2, XH1, and XH2were lossless). The zero transmission may be caused by one or more of the acoustic resonators creating a very low impedance to ground; in this configuration the sub-filters are effectively removed as filtering components because the acoustic resonators are basically short circuits to ground, so the sub-filters have no effect on the filter300at the transmission zero frequencies. Typically, but not necessarily, the resonance frequencies of XL1and XL2are equal, and the resonance frequencies of XH1and XH2are equal. The resonant frequencies of the acoustic resonators XL1, XL2are selected to provide transmission zeros adjacent to the lower edge of the filter passband. XL1and XL2may be referred to as “low-edge resonators” since their resonant frequencies are proximate the lower edge of the filter passband. The acoustic resonators XL1and XL2also act as shunt inductances to help match the impedance at the ports of the filter to a desired impedance value. In the subsequent examples in this patent, the impedance at all ports of the filters is matched to 50 ohms. The impedance may be another value if desired, such as 20, 100 or 1000 ohms. The resonant frequencies of acoustic resonators XH1, XH2are selected to provide transmission zeros at or above the higher edge of the filter passband. XH1and XH2may be referred to as “high-edge resonators” since their resonant frequencies are proximate the higher edge of the filter passband. High-edge resonators XH1and XH2may not be required in all matrix filters, such as filters where high rejection above the passband is not required. FIG.3Bis a schematic diagram of a sub-filter350suitable for each of sub-filters320-1,320-2, and320-nof filter300. The sub-filter350includes three acoustic resonators XA, XB, XC connected in series between a first sub-filter port (SP1) which can be connected to FP1and a second sub-filter port (SP2) which can be connected to FP2. The acoustic resonators XA, XB, XC are preferably but not necessarily XBARs. The sub-filter350includes two coupling capacitors CA, CB, each of which is connected between ground and a respective node between two of the acoustic resonators. The inclusion of three acoustic resonators in the sub-filter350is exemplary. A sub-filter may have m acoustic resonators, where m is an integer greater than one. A sub-filter with m acoustic resonators includes m−1 coupling capacitors. The m acoustic resonators of a sub-filter are connected in series between the two ports SP1and SP2of a sub-filter and each of the m−1 coupling capacitors is connected between ground and a node between a respective pair of acoustic resonators from the m acoustic resonators. Compared to other types of acoustic resonators, XBARs have very high electromechanical coupling (which results in a large difference between the resonance and anti-resonance frequencies), but low capacitance per unit area. The matrix filter architecture, as shown inFIG.3AandFIG.3B, takes advantage of the high electromechanical coupling of XBARs without requiring high resonator capacitance. Thus, this architecture improves high frequency bandpass filtering by passing a wider range of high frequencies without requiring the processing or the space to form high-edge resonators XH1and XH2.
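As a numerical illustration of the BVD relations (1) and (2) given earlier, the short sketch below (Python) computes the motional resonance Fr, the ratio γ, and the anti-resonance Fa for a lossless resonator. The element values are arbitrary placeholders chosen only to give frequencies in the general range discussed in this patent; they are not values disclosed for any resonator herein.

    import math

    # Illustrative Butterworth Van Dyke (BVD) calculation for a lossless
    # acoustic resonator: Fr = 1/(2*pi*sqrt(Lm*Cm)), Fa = Fr*sqrt(1 + 1/gamma),
    # where gamma = C0/Cm.  Element values below are placeholders only.
    Lm = 80e-9      # motional inductance, henries (hypothetical)
    Cm = 50e-15     # motional capacitance, farads (hypothetical)
    C0 = 500e-15    # static capacitance, farads (hypothetical)

    Fr = 1.0 / (2.0 * math.pi * math.sqrt(Lm * Cm))
    gamma = C0 / Cm
    Fa = Fr * math.sqrt(1.0 + 1.0 / gamma)

    print(f"resonance Fr      = {Fr / 1e9:.3f} GHz")
    print(f"gamma (C0/Cm)     = {gamma:.1f}")
    print(f"anti-resonance Fa = {Fa / 1e9:.3f} GHz")

With γ around 10 the spacing between Fr and Fa is a few percent of Fr; the high electromechanical coupling attributed above to XBARs corresponds to a smaller γ and therefore a wider resonance–anti-resonance spacing.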
FIG.4is a schematic circuit diagram of an exemplary matrix filter400implemented with XBARs. The matrix filter400includes three sub-filters420-1,420-2,420-3connected in parallel between a first filter port (FP1) and a second filter port (FP2). The sub-filters420-1,420-2,420-3have contiguous passbands such that the bandwidth of the matrix filter400is equal to the sum of the bandwidths of the constituent sub-filters. Each sub-filter includes three XBARs connected in series and two coupling capacitors. For example, sub-filter420-1includes series XBARs X1A, X1B, and X1C and two coupling capacitors C1A, C1B each of which is connected between ground and a respective node between two of the acoustic resonators. Components of the other sub-filters420-2and420-3are similarly identified using 2's and 3's as those using 1's in sub-filter420-1. Low-edge XBARs XL1and XL2are connected between FP1and FP2, respectively, and ground. All of the capacitors within the sub-filters are connected to ground through a common inductor L1. The inclusion of the inductor L1improves the out-of-band rejection of the matrix filter400which improves filtering. The matrix filter400does not include high-edge resonators. The exemplary matrix filter400is symmetrical in that the impedances at FP1and FP2are both equal to 50 ohms. The impedance may be another value if desired, such as 20, 100 or 1000 ohms. The internal circuitry of the filter is also symmetrical, with XBARs X_A and X_C within each sub-filter being the same and low-edge resonators XL1and XL2being the same. Other matrix filters may be designed to have significantly different impedances at FP1and FP2, in which event the internal circuitry will not be symmetrical. FIG.5is a plan view of an exemplary matrix filter500which has the same schematic circuit diagram as the matrix filter400ofFIG.4. The exemplary matrix filter is an LTE band41bandpass filter with a passband from 2496 to 2690 MHz. The matrix filter500includes three Z-cut lithium tantalate piezoelectric plate thickness portions510-3for filter410-3,510-1for filter410-1, and510-2for filter410-2. The portions510-1,2and3may be three different piezoelectric diaphragm thicknesses. Any number of portions510-1,2and3can be on any number of plates; and any number of those plates can be on any number of substrates of different die or chips. Other matrix filters may use lithium niobate piezoelectric plates and other crystal orientations including rotated Z-cut and rotated Y-cut. For example, the portions510-1,2and3could be three separate plates each having a different thickness, or three portions of the same plate having different thicknesses created using a process that produces multiple thicknesses on the same plate. Such a process may thin portions510-2and3from the thickness of portion510-1; and then further thin portion510-3from the thickness of portion510-2. The back surface of each plate is attached to one substrate. In another case it is attached to more than one substrate. The back surface of each of portions510-1,2and3is bonded to a substrate. Whether or not they are of the same or of separate plates, in a first case, the portions510-1,2and3are each bonded to a separate substrate of a different die (substrates and die not visible inFIG.5). In a second case, two of portions510-1,2and3are bonded to one substrate of one die; and the other portion is bonded to a second substrate of a separate die. In a third case, all three portions510-1,2and3are bonded to a single plate which is bonded to a single substrate of a single die.
The thickness of the piezoelectric plate portion510-3between its front and back surfaces is the thinnest of all three portions510-1,2and3. The thickness of portion510-1is the thickest and that of portion510-2is between those of the other two portions. The thickness of plate portion510-3is 730 nm. The thickness of plate portion510-2is 744 nm. The thickness of plate portion510-1is 762 nm. Each of these three thicknesses may be plus or minus 10 nm. The low-edge resonators XL1and XL2may be formed using or on the thickest plate portion, such as portion510-1. Any high-edge resonator will be formed using or on the thinnest plate portion, such as portion510-3. The matrix filter500includes eleven XBARs, such as the XBAR520. A cavity (not visible) is formed in the substrates under each XBAR. Each XBAR is shown as a rectangle with vertical hatching and is identified by the designator (XL1, X1A, . . . ) used in the schematic diagram ofFIG.4. The vertical hatching is representative of the direction of the IDT fingers of each XBAR but not to scale. Each XBAR has between 65 and 130 IDT fingers, one of which is shown as finger536. The IDT fingers are aluminum and 925 nm thick. The apertures AP (vertical direction as shown inFIG.5) of the overlap of the interleaved fingers of the XBARs range from 40 microns to 58 microns, and the lengths L (left-right direction as shown inFIG.5) range from 500 to 1000 microns. In other embodiments of XBAR matrix filters, the XBARs may be divided into sections to limit the length of the diaphragm within each XBAR. The pitch p of the IDTs of each XBAR is between 7.5 and 8.6 microns and the mark/pitch ratio of each XBAR is between 0.22 and 0.31. The XBARs are connected to each other by conductors such as conductor530that may also be formed on and connect between the substrates. Cross-hatched rectangles are metal-insulator-metal capacitors used as the sub-filter coupling capacitors in this example ofFIG.4, of which only capacitors540and542are identified. The identified capacitors540and542are C3A and C3B, respectively, in the schematic diagram ofFIG.4. The other sub-filter coupling capacitors C1A-B and C2A-B for filters410-1and410-2, respectively, are connected similarly to capacitors540and542. The sub-filter coupling capacitors C1A-B, C2A-B and C3A-B are formed on and/or from the same plate portion510-1,2or3of the corresponding filter410-1,2or3, respectively, as shown in the figure. However, the coupling capacitors may be formed on any of the plate portions510-1,2or3; or on separate portions of any of the substrates. The capacitors C1A-B, C2A-B and C3A-B may be formed on the substrates420-1,2or3, respectively. The coupling capacitors may be separate from the substrates of portions510, such as by being discrete components or formed on a circuit card used to interconnect the resonators. Connections from the filter510and circuitry external to the filter are made by means of conductive pads indicated by shaded circles, such as conductive pad550. The conductive pads for Filter Port 1 (FP1), Filter Port 2 (FP2), and ground (GND) are labeled. The three other conductive pads L11, L21and L31are connected to ground through inductor L1(inFIG.4), which is located external to the filter510. The conductor pads may be connected using solder bumps or other connections to the pads. The conductor pads may be formed on and/or connect between the substrates.
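The layout parameters recited above for the exemplary filter500can be collected into simple range checks. The sketch below (Python) is a hypothetical design-screening helper written against those stated ranges; the candidate values in it are invented for illustration and are not additional disclosure.

    # Stated ranges for the exemplary band 41 matrix filter 500 (from the text):
    #   IDT finger count: 65 to 130
    #   aperture AP:      40 to 58 micrometres
    #   IDT length L:     500 to 1000 micrometres
    #   pitch p:          7.5 to 8.6 micrometres
    #   mark/pitch ratio: 0.22 to 0.31
    RANGES = {
        "fingers":     (65, 130),
        "aperture_um": (40.0, 58.0),
        "length_um":   (500.0, 1000.0),
        "pitch_um":    (7.5, 8.6),
        "mark_pitch":  (0.22, 0.31),
    }

    def within_stated_ranges(design):
        """Check a candidate XBAR design (dict) against the stated ranges."""
        return {key: lo <= design[key] <= hi for key, (lo, hi) in RANGES.items()}

    # Hypothetical candidate resonator, for illustration only.
    candidate = {"fingers": 100, "aperture_um": 50.0, "length_um": 750.0,
                 "pitch_um": 8.0, "mark_pitch": 0.25}
    print(within_stated_ranges(candidate))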
As previously described, the sub-filters of a matrix filter have contiguous passbands that span the passband of the matrix filter. Within a matrix filter, the center frequency of the passband of each sub-filter is different from the center frequency of any other sub-filter. Consequently, the resonance frequencies of the XBARs in one sub-filter are different from the resonance frequencies of the XBARs within any other sub-filter. The resonance frequency of an XBAR is primarily determined by the thickness of the diaphragm or piezoelectric plate portion of the diaphragm within the XBAR. The resonance frequency has a smaller dependence on IDT pitch and mark or finger width. U.S. Pat. No. 10,491,291 describes the use of a dielectric layer formed between the IDT fingers to adjust the resonance frequency of an XBAR. U.S. Pat. No. 10,998,877 describes the use of the plate diaphragm portion thicknesses to adjust the resonance frequency of an XBAR. FIG.6is a schematic cross-section view of the matrix filter500at a section plane D-D defined inFIG.5. The section plane D-D passes through one XBAR (X3A, X1A, X2A) from each of the three sub-filters in the matrix filter500. Each XBAR includes interleaved IDT fingers (of which only IDT finger630-1,2or3is identified) formed on a respective diaphragm spanning a respective cavity in a substrate620-1,2or3or a die (not visible). Substrates620-1,2and3may be one and the same substrate, two different substrates, or three different substrates. Each substrate620-1,2or3may have other components of filter500. Each diaphragm includes a different piezoelectric plate portion510-1,2or3having a different thickness between its front and back surfaces. Each of the front and back surfaces may be planar or flat across the entire surface of the plate or plate portion510-1,2or3. The back surface of each plate portion may be attached to one or more substrates of one or more die. As in previous figures, the thickness of the piezoelectric plate portions510-1,2or3and the thickness, pitch, and finger width of the IDTs are greatly exaggerated for visibility. Drawn to scale, the thickness of the piezoelectric plate portions510-1,2or3and the IDT fingers could be less than one-half percent of the thickness of the substrate620-1,2or3and each IDT would have 65 to 130 IDT fingers. The three detail views illustrate the use of piezoelectric plate portion thickness to set the resonance frequencies of the XBARs within each sub-filter. Consider first the detail view of an IDT finger of XBAR X1A (the middle view of the three detail views), which shows an IDT finger630-1formed on a portion of the piezoelectric plate portion510-1. The IDT finger630-1is shown with a trapezoidal cross-section. The trapezoidal shape is exemplary and IDT fingers may have other cross-sectional shapes. The piezoelectric plate portion510-1has thickness tp1extending between its front surface611-1and its back surface612-1. Similarly, the right-hand detail shows piezoelectric plate portion510-2having thickness tp2extending between its front surface611-2and its back surface612-2. The left-hand detail shows piezoelectric plate portion510-3having thickness tp3extending between its front surface611-3and its back surface612-3. Each of thicknesses tp1, tp2and tp3is different from any of the others. In this example, XBAR X1A is an element of the sub-filter with the lowest passband frequency and XBAR X3A is an element of the sub-filter with the highest passband frequency. In this case tp1>tp2>tp3≥0.
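A compact way to express the relationship just stated, in which the sub-filter with the higher passband frequency uses the thinner piezoelectric plate portion, is sketched below (Python). The thickness values reuse those given above for portions510-1,2and3; the check itself is only an illustration, not part of the disclosed design.

    # Sub-filters listed with their plate-portion thicknesses (nm) from the
    # text, ordered from lowest to highest passband frequency.  The rule
    # described above requires the thickness to decrease as the passband
    # frequency increases: tp1 > tp2 > tp3.
    thickness_nm = {"sub-filter 1 (lowest passband)":  762,
                    "sub-filter 2 (middle passband)":  744,
                    "sub-filter 3 (highest passband)": 730}

    values = list(thickness_nm.values())
    assert all(a > b for a, b in zip(values, values[1:])), \
        "plate thickness must decrease with increasing sub-filter passband frequency"
    print("thickness ordering tp1 > tp2 > tp3 holds:", values)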
In other cases, two of the thicknesses tp1, tp2and tp3are the same but the third thickness is greater than or less than those two thicknesses. In some cases, there may be only two tp thicknesses, and in other cases there may be more than three tp thicknesses. Further in this example, XBARs X1B and X1C are also formed on portion510-1with XBAR X1A as elements of the sub-filter410-1with the lowest passband frequency. XBARs X3B and X3C are also formed on portion510-3with XBAR X3A as elements of the sub-filter410-3with the highest passband frequency. Finally, XBARs X2B and X2C are also formed on portion510-2with XBAR X2A as elements of the sub-filter410-2with the passband frequency between that of filters410-1and410-3. In a more general case where a matrix filter has n sub-filters, which are numbered in order of increasing passband frequency, tp1>tp2> . . . >tpn, where tpi is the thickness of the piezoelectric plate portion extending between its front and back surfaces of sub-filter i. The low-edge resonators XL1and XL2may be formed using the thickest of the plate portions510-1,2or3. The low-edge resonators XL1and XL2may have their resonance frequencies set by the thickness of the plate portion they are formed using. In addition, or independently, the low-edge resonators XL1and XL2may have their resonance frequencies set by a thickness of a top layer dielectric. In this case, the space between IDT fingers of XL1and XL2and adjacent IDT fingers (and optionally the IDT finger) would be covered by a dielectric layer having a thickness td1and td2to set the XL1and XL2resonance frequencies. The dielectric layers may be silicon dioxide, silicon nitride, aluminum oxide or some other dielectric material or combination of materials. The dielectric layers may be the same or different materials. An XBAR filter device typically includes a passivation dielectric layer applied over the entire surface of the device, other than contact pads, to seal and passivate the conductor patterns and other elements of the device. FIG.7is a graph700of an estimated performance of a matrix filter similar to the matrix filter500. The curve710is a plot of S21, the input-output transfer function, of the filter determined by estimation of a physical model of the filter. The broken lines720mark the band edges of 5G NR communication band n41. The matrix filter architecture extends the application of XBARs to lower frequency communications bands that are impractical using a conventional ladder filter architecture. The concepts described above forFIGS.5-7may be applied to form filters such as filter500but with different ranges and/or center points of the pass band. For example, a filter similar to filter500may be built with broken lines720that mark the band edges of 5G NR communication band n40 or n46. Description of Methods FIG.8is a simplified flow chart showing a process800for making an XBAR or a filter incorporating XBARs. The process800starts at805with a substrate and a plate of piezoelectric material and ends at895with a completed XBAR or filter. It may be an example for forming any of the XBARs herein with any of the piezoelectric plate portions610-1,2or3of one or more piezoelectric plates on any one or more of substrates620-1,2or3. The flow chart ofFIG.8includes only major process steps. Various conventional process steps (e.g. surface preparation, cleaning, inspection, baking, annealing, monitoring, testing, etc.) may be performed before, between, after, and during the steps shown inFIG.8.
The flow chart ofFIG.8captures three variations of the process800for making an XBAR which differ in when and how cavities are formed in the substrate. The cavities may be formed at steps810A,810B, or810C. Only one of these steps is performed in each of the three variations of the process800. The piezoelectric plate may be, for example, Z-cut lithium niobate or lithium tantalate with Euler angles 0, 0, 90°. The piezoelectric plate may be rotated Z-cut lithium niobate with Euler angles 0, β, 90°, where β is in the range from −15° to +5°. The piezoelectric plate may be rotated Y-cut lithium niobate or lithium tantalate with Euler angles 0, β, 0, where β is in the range from 0 to 60°. The piezoelectric plate may be some other material or crystallographic orientation. The substrate may preferably be silicon. The substrate may be some other material that allows formation of deep cavities by etching or other processing. In one variation of the process800, one or more cavities are formed in the substrate at810A, before the piezoelectric plate is bonded to the substrate at820. A separate cavity may be formed for each resonator in a filter device. The one or more cavities may be formed using conventional photolithographic and etching techniques. Typically, the cavities formed at810A will not penetrate through the substrate. At820, the piezoelectric plate is bonded to the substrate. Bonding at820may be bonding any of the piezoelectric plates610-1,2or3to substrate620-1,2or3. The piezoelectric plate and the substrate may be bonded by a wafer bonding process. Typically, the mating surfaces of the substrate and the piezoelectric plate are highly polished. One or more layers of intermediate materials, such as an oxide or metal, may be formed or deposited on the mating surface of one or both of the piezoelectric plate and the substrate. One or both mating surfaces may be activated using, for example, a plasma process. The mating surfaces may then be pressed together with considerable force to establish molecular bonds between the piezoelectric plate and the substrate or intermediate material layers. A conductor pattern, including IDTs of each XBAR, is formed at830by depositing and patterning two or more conductor levels on the front side of the piezoelectric plate. The conductor levels typically include a first conductor level that includes the IDT fingers, and a second conductor level formed over the IDT busbars and other conductors except the IDT fingers. In some devices, a third conductor level may be formed on the contact pads. Each conductor level may be one or more layers of, for example, aluminum, an aluminum alloy, copper, a copper alloy, or some other conductive metal. Optionally, one or more layers of other materials may be disposed below (i.e. between each conductor layer and the piezoelectric plate) and/or on top of each conductor layer. For example, a thin film of titanium, chrome, or other metal may be used to improve the adhesion between the first conductor level and the piezoelectric plate. The second conductor level may be a conduction enhancement layer of gold, aluminum, copper or other higher conductivity metal formed over portions of the first conductor level (for example the IDT bus bars and interconnections between the IDTs). Each conductor level may be formed at830by depositing the appropriate conductor layers in sequence over the surface of the piezoelectric plate. The excess metal may then be removed by etching through patterned photoresist.
The conductor level can be etched, for example, by plasma etching, reactive ion etching, wet chemical etching, and other etching techniques. Alternatively, each conductor level may be formed at830using a lift-off process. Photoresist may be deposited over the piezoelectric plate and patterned to define the conductor level. The appropriate conductor layers may be deposited in sequence over the surface of the piezoelectric plate. The photoresist may then be removed, which removes the excess material, leaving the conductor level. When a conductor level has multiple layers, the layers may be deposited and patterned separately. In particular, different patterning processes (i.e. etching or lift-off) may be used on different layers and/or levels and different masks are required where two or more layers of the same conductor level have different widths or shapes. At840, dielectric layers may be formed by depositing one or more layers of dielectric material on the front side of the piezoelectric plate. As previously described, the dielectric layers may include a different dielectric thickness over the IDT fingers of the XBARs within each sub-filter. Each dielectric layer may be deposited using a conventional deposition technique such as sputtering, evaporation, or chemical vapor deposition. Each dielectric layer may be deposited over the entire surface of the piezoelectric plate, including on top of the conductor pattern. Alternatively, one or more lithography processes (using photomasks) may be used to limit the deposition of the dielectric layers to selected areas of the piezoelectric plate, such as only between the interleaved fingers of the IDTs. Masks may also be used to allow deposition of different thicknesses of dielectric materials on different portions of the piezoelectric plate. The matrix filter shown inFIG.5andFIG.6includes metal-insulator-metal (MIM) capacitors. A MIM capacitor consists of a first metal level and a second metal level separated by a dielectric layer. When a matrix filter includes MIM capacitors, the steps of forming the conductor patterns at830and forming the dielectric layers at840must overlap. At least one dielectric layer has to be formed at840after a first metal level is formed at830and before a final metal level is formed at830. The MIM capacitors may be formed beside or on any of the piezoelectric plate portions610-1,2or3; or substrates620-1,2or3. In a second variation of the process800, one or more cavities are formed in the back side of the substrate at810B. A separate cavity may be formed for each resonator in a filter device. The one or more cavities may be formed using an anisotropic or orientation-dependent dry or wet etch to open holes through the back side of the substrate to the piezoelectric plate. In this case, the resulting resonator devices will have a cross-section as shown inFIG.1. In the second variation of the process800, a back-side dielectric layer may be formed at850. In the case where the cavities are formed at810B as holes through the substrate, the back-side dielectric layer may be deposited through the cavities using a conventional deposition technique such as sputtering, evaporation, or chemical vapor deposition. In a third variation of the process800, one or more cavities in the form of recesses in the substrate may be formed at810C by etching the substrate using an etchant introduced through openings in the piezoelectric plate. A separate cavity may be formed for each resonator in a filter device.
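The three variations of process800differ only in where cavity formation occurs relative to the other steps. A schematic rendering of that ordering is sketched below (Python); the step labels follow the reference numerals used above, the placement of steps810B and810C after the front-side steps follows the usual reading of the flow chart and is an assumption here, and the representation is illustrative rather than part of the disclosed method.

    # Illustrative ordering of the major steps of process 800 for each of the
    # three variations described above.  Cavity formation occurs at exactly
    # one of 810A (before bonding), 810B (back-side etch), or 810C (front-side
    # etch through openings in the plate).  The exact position of 810B/810C
    # relative to steps 830/840 is an assumption made for this sketch.
    COMMON_FRONT = ["820 bond plate to substrate",
                    "830 form conductor pattern",
                    "840 form dielectric layers"]

    def process_800(variation):
        if variation == "A":
            steps = ["810A form cavities in substrate"] + COMMON_FRONT
        elif variation == "B":
            steps = COMMON_FRONT + ["810B etch cavities from back side",
                                    "850 form back-side dielectric"]
        elif variation == "C":
            steps = COMMON_FRONT + ["810C etch cavities through plate openings"]
        else:
            raise ValueError("variation must be 'A', 'B', or 'C'")
        return ["805 start"] + steps + ["860 complete device", "895 end"]

    for v in "ABC":
        print(v, "->", " | ".join(process_800(v)))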
In all variations of the process800, the filter device is completed at860. Actions that may occur at860include depositing an encapsulation/passivation layer such as SiO2or Si3N4over all or a portion of the device; forming bonding pads or solder bumps or other means for making connection between the device and external circuitry; excising individual devices from a wafer containing multiple devices; other packaging steps; and testing. Another action that may occur at860is to tune the resonant frequencies of the resonators within the device by adding or removing metal or dielectric material from the front side of the device. After the filter device is completed, the process ends at895. The descriptions herein such as forFIGS.4-8provide improved XBAR radio frequency filter configurations by using XBAR sub-filters with different thicknesses of the piezoelectric plate portions connected in parallel between two ports. The sub-filters have a piezoelectric plate with a back surface attached to the different substrates and portions of the plate forming diaphragms spanning cavities in the substrates. Interleaved fingers of IDTs are on the diaphragms. The different thickness portions of the piezoelectric plates may be on different substrates of different die connected in parallel between two ports. These configurations form a distributed (matrix) XBAR filter that allows for the reduction of required resonator static capacitance C0, and therefore a reduction in required die area. These configurations are also scalable to arbitrary order and can readily be made reconfigurable with the use of RF switches. By incorporating the configurations' multi-die approach, similar to a ‘split ladder’ topology, additional freedom in the design of the distributed filter is achieved. Without these configurations, constraining all resonators of multiple sub-filters to a single die requires frequency separation of resonators to be achieved by varying top layer oxide and/or electrode dimensions. Instead, the multi-die configurations introduce the membrane thickness as an additional degree of freedom that may be applied by sub-filter resonator groups. CLOSING COMMENTS Throughout this description, the embodiments and examples shown should be considered as exemplars, rather than limitations on the apparatus and procedures disclosed or claimed. Although many of the examples presented herein involve specific combinations of method acts or system elements, it should be understood that those acts and those elements may be combined in other ways to accomplish the same objectives. With regard to flowcharts, additional and fewer steps may be taken, and the steps as shown may be combined or further refined to achieve the methods described herein. Acts, elements and features discussed only in connection with one embodiment are not intended to be excluded from a similar role in other embodiments. As used herein, “plurality” means two or more. As used herein, a “set” of items may include one or more of such items. As used herein, whether in the written description or the claims, the terms “comprising”, “including”, “carrying”, “having”, “containing”, “involving”, and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of”, respectively, are closed or semi-closed transitional phrases with respect to claims.
Use of ordinal terms such as “first”, “second”, “third”, etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed, but are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term) to distinguish the claim elements. As used herein, “and/or” means that the listed items are alternatives, but the alternatives also include any combination of the listed items.
11863161 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the drawings. It should be noted that the preferred embodiments described below are each specific or comprehensive examples. The numerical values, the shapes, the materials, the elements, the arrangements of the elements, the connection configurations, and the like described in the following preferred embodiments are mere examples and are not intended to limit the present invention. Among the elements in the following preferred embodiments, elements not recited in any of the independent claims are described as arbitrary or optional elements. Furthermore, the size or the size ratio of the elements illustrated in the drawings is not necessarily presented in an exact manner. Preferred Embodiments 1. Fundamental Configuration of Acoustic Wave Filter FIG.1is a circuit configuration diagram of an acoustic wave filter10according to a preferred embodiment of the present invention. The acoustic wave filter10includes a plurality of series arm resonators101,102and103provided on a path connecting a first terminal91and a second terminal92and a plurality of parallel arm resonators202and203provided between the path and the ground (reference terminal). The series arm resonator102is provided between the first terminal91and the series arm resonator101. The series arm resonator103is provided between the series arm resonator101and the second terminal92. In the following description, among the plurality of series arm resonators101to103, the series arm resonator101is also referred to as the first series arm resonator101, and the series arm resonators102and103different from the first series arm resonator101are also referred to as the second series arm resonators102and103. As illustrated inFIG.1, a first capacitive element C1is coupled in parallel with the first series arm resonator101. Specifically, one terminal of the first capacitive element C1is coupled to a node n1between the first series arm resonator101and the second series arm resonator102, and the other terminal of the first capacitive element C1is coupled to a node n2between the first series arm resonator101and the second series arm resonator103. As such, the acoustic wave filter10includes a first resonance circuit RC1including the first series arm resonator101and the first capacitive element C1coupled in parallel with the first series arm resonator101. The first series arm resonator101includes two divided resonators. Here, a series divided resonator is defined as each of the two resonators coupled in series with each other without a parallel arm resonator coupled between a node between the two resonators and the ground. Specifically, the first series arm resonator101includes a first divided resonator D1and a second divided resonator D2coupled in series with each other. The first divided resonator D1is coupled in series with the second series arm resonator102. The second divided resonator D2is coupled in series with the second series arm resonator103. The first divided resonator D1has a resonance frequency and an anti-resonant frequency higher than those of the second divided resonator D2. The resonant frequency or anti-resonant frequency denotes a resonant frequency or anti-resonant frequency inherent in the first divided resonator D1not coupled to any capacitive element. A second capacitive element C2is coupled in parallel with the first divided resonator D1. 
Specifically, one terminal of the second capacitive element C2is coupled to a node n3between the first divided resonator D1and the second series arm resonator102; and the other terminal of the second capacitive element C2is coupled to a node n4between the first divided resonator D1and the second divided resonator D2. The second capacitive element C2has a different capacitance than that of the first capacitive element C1. Specifically, the capacitance of the second capacitive element C2is lower than the capacitance of the first capacitive element C1, and more specifically, the capacitance of the second capacitive element C2is preferably, for example, in the range of about 0.1 to about 0.3 times the capacitance of the first capacitive element C1. No capacitive element is coupled in parallel with the second divided resonator D2. Furthermore, no parallel arm resonator is coupled between the ground and any node between the first divided resonator D1and the second divided resonator D2. As described above, the first resonance circuit RC1includes a second resonance circuit RC2including the first divided resonator D1and the second capacitive element C2and also includes the second divided resonator D2coupled in series with the second resonance circuit RC2. The anti-resonant frequency of the second resonance circuit RC2is the same or substantially the same as the anti-resonant frequency of the second divided resonator D2. In other words, the capacitance of the second capacitive element C2is configured such that the anti-resonant frequency of the second resonance circuit RC2is the same or substantially the same as the anti-resonant frequency of the second divided resonator D2. Here, “substantially the same” means, for example, that the difference between the anti-resonant frequency of the second resonance circuit RC2and the anti-resonant frequency of the second divided resonator D2is in the range of about 0.2% to about 5%. FIGS.2A and2Bprovide graphs illustrating the admittance characteristic of the first divided resonator D1, the admittance characteristic of the second divided resonator D2, and the admittance characteristic of the second resonance circuit RC2. FIG.2Aillustrates the admittance characteristic of the first divided resonator D1and the admittance characteristic of the second divided resonator D2. As illustrated in the graph, the resonant frequency of the first divided resonator D1is higher than the resonant frequency of the second divided resonator D2, and the anti-resonant frequency of the first divided resonator D1is higher than the anti-resonant frequency of the second divided resonator D2. FIG.2Billustrates the admittance characteristic of the second resonance circuit RC2and the admittance characteristic of the second divided resonator D2. Since the second capacitive element C2is coupled in parallel with the first divided resonator D1, the anti-resonant frequency of the second resonance circuit RC2indicated inFIG.2Bis lower than the anti-resonant frequency of the first divided resonator D1indicated inFIG.2A. In the present preferred embodiment, the anti-resonant frequency of the second resonance circuit RC2is close to, and more precisely, the same or substantially the same as the anti-resonant frequency of the second divided resonator D2. Furthermore, as indicated inFIG.2B, the resonant frequency of the second resonance circuit RC2is higher than the resonant frequency of the second divided resonator D2. 
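The behavior shown inFIG.2Bcan be reproduced qualitatively with a BVD-style estimate: adding a capacitance Cp in parallel with a lossless resonator leaves its resonant frequency unchanged but lowers its anti-resonant frequency from fr·√(1 + Cm/C0) to fr·√(1 + Cm/(C0 + Cp)). The sketch below (Python) uses invented element values solely to illustrate this trend; it is not a model of the specific resonators D1and D2or of the capacitance C2.

    import math

    def antiresonance(fr_hz, Cm, C0, Cp=0.0):
        """Anti-resonant frequency of a lossless BVD resonator with an
        additional parallel capacitance Cp (capacitances in farads)."""
        return fr_hz * math.sqrt(1.0 + Cm / (C0 + Cp))

    # Invented values for illustration only.
    fr = 2.00e9      # resonant frequency of the bare resonator, Hz
    Cm = 60e-15      # motional capacitance, F
    C0 = 600e-15     # static capacitance, F
    Cp = 1200e-15    # capacitance coupled in parallel (the role played by C2), F

    print(f"fa without parallel capacitor: {antiresonance(fr, Cm, C0) / 1e9:.4f} GHz")
    print(f"fa with parallel capacitor:    {antiresonance(fr, Cm, C0, Cp) / 1e9:.4f} GHz")
    # The resonant frequency stays at fr; only the anti-resonance moves down,
    # narrowing the spacing between the two, as described for the circuit RC2.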
As a result, the resonance bandwidth, which is a range between a resonant frequency and an anti-resonant frequency, of the second resonance circuit RC2is narrower than the resonance bandwidth of the second divided resonator D2. The difference between the anti-resonant frequency of the second resonance circuit RC2and the anti-resonant frequency of the second divided resonator D2is smaller than the difference between the resonant frequency of the second resonance circuit RC2and the resonant frequency of the second divided resonator D2. As described above, by using the anti-resonant frequency of the first resonance circuit RC1including the second resonance circuit RC2defined by coupling the second capacitive element C2in parallel with the first divided resonator D1, it is possible to define an attenuation pole on the higher frequency side than the pass band. Furthermore, the increase of the resonant frequency of the second resonance circuit RC2increases the resonant frequency of the first resonance circuit RC1, and as a result, it is possible to reduce the insertion loss on the higher frequency side of the pass band of the acoustic wave filter10. Here, the connection configuration of the resonance circuit and the resonators included in the acoustic wave filter10is described again. The first resonance circuit RC1described above is provided on the path connecting the first terminal91and the second terminal92. The second series arm resonator102is coupled between the first terminal91and the first resonance circuit RC1in series with the first resonance circuit RC1. The second series arm resonator103is coupled between the first resonance circuit RC1and the second terminal92in series with the first resonance circuit RC1. The parallel arm resonator202is provided between the ground and a node between the second series arm resonator102and the first resonance circuit RC1. The parallel arm resonator203is provided between the ground and a node between the first resonance circuit RC1and the second series arm resonator103. With the connection configuration described above, the acoustic wave filter10forms a ladder band pass filter. The resonant frequency of the first resonance circuit RC1, the resonant frequency of the second series arm resonator102, and the resonant frequency of the second series arm resonator103are all within the pass band of the acoustic wave filter10. The anti-resonant frequency of the first resonance circuit RC1is higher than the pass band and lower than the anti-resonant frequency of the second series arm resonator102and the anti-resonant frequency of the second series arm resonator103. As described above, by configuring the first resonance circuit RC1to have an anti-resonant frequency lower than the anti-resonant frequency of the series arm resonator102and the anti-resonant frequency of the series arm resonator103, it is possible to define an attenuation pole on the higher frequency side than the pass band. As such, it is possible to define the attenuation pole of the acoustic wave filter10and also reduce the insertion loss on the higher frequency side of the pass band. 2. Structure of Acoustic Wave Filter The following description is for a structure of the acoustic wave filter10that has the configuration described above. The acoustic wave filter10is a surface acoustic wave filter including a plurality of acoustic wave resonators, such as the series arm resonators101to103and the parallel arm resonators202and203, for example. 
FIGS.3A to3Cschematically illustrate examples of an acoustic wave resonator included in the acoustic wave filter10, in whichFIG.3Ais a plan view andFIGS.3B and3Care sectional views taken along the dot-dash line illustrated inFIG.3A. The acoustic wave resonator illustrated inFIGS.3A to3Cis used only to explain a typical structure of the plurality of acoustic wave resonators, and thus, the number of electrode fingers of an electrode, the length of electrode fingers, and the like are not limited to this example. The acoustic wave resonator includes a substrate5with piezoelectricity and comb-shaped electrodes101aand101b. As illustrated inFIG.3A, the comb-shaped electrodes101aand101bfacing each other in a pair are provided on the substrate5. The comb-shaped electrode101aincludes a plurality of electrode fingers121aparallel or substantially parallel to each other and a busbar electrode111aconnecting the electrode fingers121ato each other. The comb-shaped electrode101bincludes a plurality of electrode fingers121bparallel or substantially parallel to each other and a busbar electrode111bconnecting the electrode fingers121bto each other. The plurality of electrode fingers121aand121bextend in a direction perpendicular or substantially perpendicular to a propagation direction of acoustic waves (the X-axis direction). An interdigital transducer (IDT) electrode54including the electrode fingers121aand121band the busbar electrodes111aand111bhas a layered structure including a fixing layer541and a main electrode layer542as illustrated inFIG.3B. The fixing layer541improves adhesion between the substrate5and the main electrode layer542, and, for example, Ti is preferably used as a material of the fixing layer541. The thickness of the fixing layer541is preferably, for example, about 12 nm. As a material of the main electrode layer542, for example, Al including about 1% Cu is preferably used. The thickness of the main electrode layer542is preferably, for example, about 162 nm. A protective layer55covers the comb-shaped electrodes101aand101b. The protective layer55is provided to protect the main electrode layer542from the external environment, control the frequency temperature characteristic, increase moisture resistance, and the like. The protective layer55is preferably, for example, a dielectric film mainly including silicon dioxide. The thickness of the protective layer55is preferably, for example, about 25 nm. Materials included in the fixing layer541, the main electrode layer542, and the protective layer55are not limited to the materials described above. Furthermore, the IDT electrode54does not necessarily have the layered structure described above. The IDT electrode54may be made of, for example, a metal, such as Ti, Al, Cu, Pt, Au, Ag, or Pd, or an alloy thereof, or may be defined by a plurality of multilayer bodies made of the metal or the alloy. Additionally, the protective layer55is not necessarily provided. Next, a layered structure of the substrate5is described. As illustrated inFIG.3C, the substrate5includes a high-acoustic-velocity support substrate51, a low-acoustic-velocity film52, and a piezoelectric film53, and has a structure including the high-acoustic-velocity support substrate51, the low-acoustic-velocity film52, and the piezoelectric film53layered in this order.
The piezoelectric film53is preferably made of, for example, a 50° Y-cut X-propagation LiTaO3 piezoelectric single crystal or piezoelectric ceramic (a lithium tantalate single crystal or ceramic that is cut at a plane perpendicular or substantially perpendicular to a normal line obtained by rotating an axis about an X-axis as a central axis by about 50° from a Y-axis and in which surface acoustic waves propagate in the X-axis direction). The thickness of the piezoelectric film53is preferably, for example, about 600 nm. The material and the cut-angle for a piezoelectric single crystal used for the piezoelectric film53are selected as appropriate in accordance with required specifications of individual filters. The high-acoustic-velocity support substrate51supports the low-acoustic-velocity film52, the piezoelectric film53, and the IDT electrode54. The high-acoustic-velocity support substrate51is also configured such that bulk waves in the high-acoustic-velocity support substrate51are faster in velocity than acoustic waves such as surface acoustic waves and boundary waves propagating along the piezoelectric film53. The high-acoustic-velocity support substrate51confines surface acoustic waves in a portion provided by layering the piezoelectric film53and the low-acoustic-velocity film52so that the surface acoustic waves do not leak down below the high-acoustic-velocity support substrate51. The high-acoustic-velocity support substrate51is preferably, for example, a silicon substrate. The thickness of the high-acoustic-velocity support substrate51is preferably, for example, about 200 μm. The low-acoustic-velocity film52is configured such that bulk waves in the low-acoustic-velocity film52are slower in velocity than bulk waves propagating in the piezoelectric film53. The low-acoustic-velocity film52is disposed between the piezoelectric film53and the high-acoustic-velocity support substrate51. This structure and a property of acoustic wave in which energy is naturally concentrated in low-acoustic-velocity media reduces or prevents leakage of surface acoustic wave energy outside the IDT electrode54. The low-acoustic-velocity film52is preferably mainly made of, for example, silicon dioxide. The thickness of the low-acoustic-velocity film52is preferably, for example, about 670 nm. Here, an example of electrode parameters for the IDT electrode54of the acoustic wave resonator is described. The wave length of acoustic wave resonator is defined as a wave length λ that is the repetition cycle of the electrode fingers121aor121bconstituting the IDT electrode54illustrated inFIG.3B. The electrode pitch is ½ of the wave length λ and defined as (W+S), where the line width of the electrode fingers121aand121bconstituting the comb-shaped electrodes101aand101bis W and the space width between an electrode finger121aand an electrode finger121bis S. An overlap width L of the comb-shaped electrodes101aand101bin a pair is, as illustrated inFIG.3A, the overlap length of the overlapping electrode fingers121aand121bas viewed in the propagation direction of acoustic waves (the X-axis direction). The electrode duty of each acoustic wave resonator is the line width occupancy rate of the electrode fingers121aand121b, that is, the rate of the line width to the sum of the line width and the space width of the electrode fingers121aand121b, which is defined as W/(W+S). The height of the comb-shaped electrodes101aand101bis h. 
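As a quick numerical illustration of the electrode-parameter definitions above (the line width, space width, and acoustic velocity below are arbitrary assumed values, not parameters of the acoustic wave filter10):

```python
# Hypothetical IDT geometry (values assumed for illustration only).
W = 1.50e-6   # line width of the electrode fingers [m]
S = 1.50e-6   # space width between adjacent electrode fingers [m]

pitch = W + S               # electrode pitch, defined as half the wave length
wavelength = 2.0 * pitch    # wave length lambda of the acoustic wave resonator
duty = W / (W + S)          # electrode duty (line width occupancy rate)

# With an assumed surface acoustic wave velocity, the pitch sets the operating frequency scale.
v_saw = 3900.0              # assumed acoustic velocity [m/s]
f_approx = v_saw / wavelength

print(f"pitch = {pitch*1e6:.2f} um, lambda = {wavelength*1e6:.2f} um, duty = {duty:.2f}")
print(f"approximate operating frequency ~ {f_approx/1e6:.0f} MHz")
# -> pitch = 3.00 um, lambda = 6.00 um, duty = 0.50, roughly 650 MHz
```

A shorter wave length (finer electrode pitch) therefore corresponds to a higher resonant frequency, a relation used below when the divided resonators D1and D2are given different wave lengths.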
Parameters defining the shape and size of the acoustic wave resonator such as the wave length λ, the overlap width L, the electrode duty, and the height h of the IDT electrode54are referred to as resonator parameters. Next, a structure of the first resonance circuit RC1of the acoustic wave filter10will be described. FIG.4is a plan view schematically illustrating the first resonance circuit RC1of the acoustic wave filter10. As illustrated inFIG.4, the first resonance circuit RC1includes the first series arm resonator101and the first capacitive element C1coupled in parallel with the first series arm resonator101. The first series arm resonator101includes the first divided resonator D1and the second divided resonator D2coupled in series with each other. The second capacitive element C2is coupled in parallel with the first divided resonator D1. The first resonance circuit RC1is defined by the second resonance circuit RC2including the first divided resonator D1and the second capacitive element C2and the second divided resonator D2coupled in series with the second resonance circuit RC2. Reflectors142are provided at both ends of the first divided resonator D1and both ends of the second divided resonator D2. The reflector142includes a plurality of electrode fingers parallel to each other and busbar electrodes connecting the electrode fingers to each other. The first series arm resonator101is provided on the substrate5such that the plurality of electrode fingers121aand121bof the first series arm resonator101are perpendicular or substantially perpendicular to the propagation direction of acoustic waves. The first divided resonator D1and the second divided resonator D2are arranged in this order in the direction perpendicular or substantially perpendicular to the propagation direction of acoustic waves. The first divided resonator D1has a resonance frequency and an anti-resonant frequency higher than those of the second divided resonator D2. Specifically, a wave length λ1is shorter than a wave length λ2, where the wave length λ1is one of the resonator parameters of the first divided resonator D1and the wave length λ2is one of the resonator parameters of the second divided resonator D2. The first capacitive element C1and the second capacitive element C2each include comb-shaped electrodes. Both of the first capacitive element C1and the second capacitive element C2are provided on the substrate5. Each pair of the comb-shaped electrodes of the first capacitive element C1and the second capacitive element C2includes a plurality of electrode fingers301aand301band a pair of busbar electrodes311aand311b. The electrode fingers301aand301bare parallel or substantially parallel to each other and interleaved with each other. The busbar electrodes311aand311bface each other between which the electrode fingers301aand301bare interposed. The electrode fingers301aare connected to the busbar electrode311a, and the electrode fingers301bare connected to the busbar electrode311b. The electrode fingers301aand301bare elongated in the propagation direction of surface acoustic waves of the IDT electrode54of the series arm resonator101and regularly disposed in the direction perpendicular or substantially perpendicular to the propagation direction. The second capacitive element C2has a lower capacitance than that of the first capacitive element C1. More specifically, the number of electrode fingers301aand301bof the second capacitive element C2is less than the number of electrode fingers301aand301bof the first capacitive element C1. 
The interval between the electrode finger301aand the electrode finger301bof the second capacitive element C2may be wider than that of the first capacitive element C1. When viewed in the direction perpendicular or substantially perpendicular to the propagation direction of acoustic waves, the overlap length of the overlapping electrode fingers301aand301bof the second capacitive element C2may be shorter than that of the first capacitive element C1. 3. Frequency Characteristic of Acoustic Wave Filter Next, the frequency characteristic of the acoustic wave filter10will be described in comparison to an acoustic wave filter of a comparative example. FIG.5is a circuit configuration diagram of an acoustic wave filter510of a comparative example. Unlike the acoustic wave filter10of the present preferred embodiment, the acoustic wave filter510of the comparative example does not include the second capacitive element C2. This means that a first resonance circuit511of the comparative example includes only the first series arm resonator101and the first capacitive element C1coupled in parallel with the first series arm resonator101. The first divided resonator D1and the second divided resonator D2are the same or substantially the same as each other with respect to the resonator parameters, the resonant frequency, and the anti-resonant frequency. FIG.6illustrates the bandpass characteristic of the acoustic wave filter10and the admittance characteristic of the first resonance circuit RC1.FIG.7illustrates the bandpass characteristic of the acoustic wave filter10of the preferred embodiment and the bandpass characteristic of the acoustic wave filter510of the comparative example. As illustrated inFIG.6, the anti-resonant frequency of the first resonance circuit RC1of the present preferred embodiment is the same or substantially the same as the anti-resonant frequency of the first resonance circuit511of the comparative example, whereas the resonant frequency of the first resonance circuit RC1of the present preferred embodiment is higher than the resonant frequency of the first resonance circuit511of the comparative example. Additionally, as illustrated inFIG.7, the insertion loss on the higher frequency side of the pass band is reduced in the acoustic wave filter10of the present preferred embodiment more than in the acoustic wave filter510of the comparative example. Specifically, the insertion loss in the vicinity of 652 MHz frequency is reduced in the acoustic wave filter10more than in the acoustic wave filter510. This is because, as described in the present preferred embodiment, by coupling the second capacitive element C2in parallel with the first divided resonator D1, it is possible to define an attenuation pole on the higher frequency side than the pass band with the use of the anti-resonant frequency of the first resonance circuit RC1involving the second resonance circuit RC2. Additionally, by increasing the resonant frequency of the first resonance circuit RC1as a result of increase of the resonant frequency of the second resonance circuit RC2, it is possible to reduce the insertion loss on the higher frequency side of the pass band of the acoustic wave filter10. 4. Modified Example of Preferred Embodiment Next, an acoustic wave filter according to a modified example of a preferred embodiment of the present invention will be described. FIG.8is a circuit configuration diagram of an acoustic wave filter10A according to a modified example of a preferred embodiment of the present invention. 
As illustrated in the drawing, the acoustic wave filter10A includes the series arm resonator102, the first resonance circuit RC1, parallel arm resonators202and204, and a longitudinally coupled resonator150. In other words, the acoustic wave filter10A is a filter provided by adding the longitudinally coupled resonator150to a ladder filter configuration. The longitudinally coupled resonator150is configured as a longitudinally coupled filter provided between the first terminal91and the second terminal92. The longitudinally coupled resonator150of the modified example is positioned on the second terminal92side with respect to the first resonance circuit RC1. The longitudinally coupled resonator150of the modified example includes nine IDTs and reflectors provided at both ends of the IDTs. The position of the longitudinally coupled resonator150is not limited to this example, and the longitudinally coupled resonator150may be positioned, for example, between the series arm resonator102and the first resonance circuit RC1. Similarly to the above-described preferred embodiment, the acoustic wave filter10A configured as described above can also reduce the insertion loss on the higher frequency side of the pass band of the acoustic wave filter10A. The acoustic wave filter10of the above-described present preferred embodiment includes the first resonance circuit RC1including the first series arm resonator101and the first capacitive element C1. The first series arm resonator101is provided on the path connecting the first terminal91and the second terminal92. The first capacitive element C1is coupled in parallel with the first series arm resonator101. The first series arm resonator101includes the first divided resonator D1and the second divided resonator D2coupled in series with each other. The first resonance circuit RC1includes the second resonance circuit RC2including the first divided resonator D1and the second capacitive element C2coupled in parallel with the first divided resonator D1. As described above, by using the anti-resonant frequency of the first resonance circuit RC1including the second resonance circuit RC2defined by coupling the second capacitive element C2in parallel with the first divided resonator D1, it is possible to define an attenuation pole on the higher frequency side than the pass band. With this configuration, for example, the increase of the resonant frequency of the second resonance circuit RC2increases the resonant frequency of the first resonance circuit RC1, and as a result, it is possible to obtain a steep attenuation slope on the higher frequency side than the pass band of the acoustic wave filter10and also reduce the insertion loss on the higher frequency side of the pass band of the acoustic wave filter10. Furthermore, the anti-resonant frequency of the second resonance circuit RC2may be the same or substantially the same as the anti-resonant frequency of the second divided resonator D2. Since the anti-resonant frequency of the second resonance circuit RC2is the same or substantially the same as the anti-resonant frequency of the second divided resonator D2, the anti-resonant frequency of the first resonance circuit RC1is set at a suitable level, and as a result, it is possible to define an attenuation pole on the higher frequency side than the pass band. 
With this configuration, for example, the increase of the resonant frequency of the second resonance circuit RC2increases the resonant frequency of the first resonance circuit RC1, and as a result, it is possible to obtain a steep attenuation slope on the higher frequency side than the pass band of the acoustic wave filter10and also reduce the insertion loss on the higher frequency side of the pass band of the acoustic wave filter10. Moreover, the resonant frequency of the second resonance circuit RC2may be higher than the resonant frequency of the second divided resonator D2. As such, it is possible to increase the resonant frequency of the first resonance circuit RC1. As a result, it is possible to reduce the insertion loss on the higher frequency side of the pass band of the acoustic wave filter10. Further, the resonance bandwidth, which is a range between a resonant frequency and an anti-resonant frequency, of the second resonance circuit RC2may be narrower than the resonance bandwidth of the second divided resonator D2. As such, it is possible to increase the resonant frequency of the first resonance circuit RC1. As a result, it is possible to reduce the insertion loss on the higher frequency side of the pass band of the acoustic wave filter10. Furthermore, the second capacitive element C2may have a different capacitance than that of the first capacitive element C1. As such, it is possible to adjust the anti-resonant frequency of the first resonance circuit RC1including the second resonance circuit RC2. As a result, it is possible to define an attenuation pole at an appropriate position on the higher frequency side than the pass band. Furthermore, the second capacitive element C2may have a smaller capacitance than the first capacitive element C1. As such, it is possible to make fine adjustments to the anti-resonant frequency of the first resonance circuit RC1including the second resonance circuit RC2. As a result, it is possible to define an attenuation pole at an appropriate position on the higher frequency side than the pass band. Further, the acoustic wave filter10further includes the second series arm resonator102coupled in series with the first resonance circuit RC1and also includes the parallel arm resonator202provided between the ground and the node between the first resonance circuit RC1and the second series arm resonator102. The resonant frequency of the first resonance circuit RC1and the resonant frequency of the second series arm resonator102may be within the pass band of the acoustic wave filter10. The anti-resonant frequency of the first resonance circuit RC1may be lower than the anti-resonant frequency of the second series arm resonator102. As described above, by configuring the first resonance circuit RC1to have an anti-resonant frequency lower than the anti-resonant frequency of the series arm resonator102, it is possible to define an attenuation pole on the higher frequency side than the pass band. As such, it is possible to define the attenuation pole of the acoustic wave filter10and also reduce the insertion loss on the higher frequency side of the pass band. It should be noted that the same advantageous effects are achieved with the configuration in which the second series arm resonator102is replaced with the second series arm resonator103while the parallel arm resonator202is replaced with the parallel arm resonator203. 
While the acoustic wave filters according to the above-described preferred embodiments have been described, the present invention is not limited to those preferred embodiments. For example, the present invention can include the following modifications to the above-described preferred embodiment. While the above-described preferred embodiments are examples in which the second resonance circuit RC2is coupled in series with the second series arm resonator102and the second divided resonator D2is coupled in series with the second series arm resonator103, the same may apply in reverse. Specifically, the second divided resonator D2is coupled in series with the second series arm resonator102and the second resonance circuit RC2is coupled in series with the second series arm resonator103. For example, the acoustic wave filter10may be used as a transmission or reception filter. For example, in the case in which the acoustic wave filter10is a transmission filter, the acoustic wave filter10may receive a transmitting wave generated by a transmission circuit (RFIC or the like, for example) and inputted via the second terminal92, filter the transmitting wave in accordance with a particular transmission pass band, and output the filtered transmitting wave to the first terminal91. In the case in which the acoustic wave filter10is a reception filter, the acoustic wave filter10may receive a received wave inputted from the first terminal91, filter the received wave in accordance with a particular reception pass band, and output the filtered received wave to the second terminal92. Moreover, the first terminal91and the second terminal92may be input and output terminals. For example, when the first terminal91is an input terminal, the second terminal92is an output terminal, and when the second terminal92is an input terminal, the first terminal91is an output terminal. Preferred embodiments of the present invention can be used as an acoustic wave filter with reduced insertion loss in the pass band for a multiplexer, a radio-frequency front-end circuit, a communication device, or the like, for example, and widely applied to communication devices such as mobile phones, for example. While preferred embodiments of the present invention have been described above, it is to be understood that variations and modifications will be apparent to those skilled in the art without departing from the scope and spirit of the present invention. The scope of the present invention, therefore, is to be determined solely by the following claims. | 33,705 |
11863162 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, preferred embodiments of the present invention will be described in detail using working examples and the drawings. Note that the preferred embodiments, which will be described below, each illustrate a comprehensive or specific example. Numeric values, shapes, materials, elements, arrangements and connection configurations of the elements, and the like illustrated in the following preferred embodiments are mere examples, and not intended to limit the present invention. Of elements in the following preferred embodiments, the elements that are not described in an independent claim will be described as optional elements. Further, sizes or ratios of the sizes of elements illustrated in the drawings are not necessarily exact. Further, in the drawings, the same reference characters denote the same or substantially the same elements, and in some cases an overlapping description is omitted or simplified. Further, in the following preferred embodiments, the term “connect” means not only the case of direct connection, but also the case where an electrical connection is established with another element or the like interposed therebetween. First Preferred Embodiment In the description of the present preferred embodiment, a quadplexer is used as an example of a multiplexer. 1. Basic Configuration of Multiplexer FIG.1is a configuration diagram of a quadplexer1according to the present preferred embodiment. Note thatFIG.1also illustrates an antenna2that is connected to a common terminal Port1of the quadplexer1. The quadplexer1is a multiplexer (demultiplexer) that includes a plurality of filters each having different pass band (here, for example, four filters11,12,21, and22) and in which the plurality of filters are bundled together at the common terminal Port1. Specifically, as illustrated inFIG.1, the quadplexer1includes the common terminal Port1, four individual terminals Port11, Port12, Port21, and Port22, and the four filters11,12,21, and22. The common terminal Port1is provided in common for the four filters11,12,21, and22and is connected to these filters11,12,21, and22in the inside of the quadplexer1. Further, the common terminal Port1is connected to the antenna2at the outside of the quadplexer1. That is, the common terminal Port1is also an antenna terminal of the quadplexer1. The individual terminals Port11, Port12, Port21, and Port22are respectively provided for the four filters11,12,21, and22in this order, and are connected to the corresponding filters in the inside of the quadplexer1. Further, on the outside of the quadplexer1, the individual terminals Port11, Port12, Port21, and Port22are connected to a RF signal processing circuit (for example, an RFIC: Radio Frequency Integrated circuit, not illustrated) via amplifier circuits or the like (not illustrated). The filter11is provided on a path connecting the common terminal Port1and the individual terminal Port11, and in the present preferred embodiment, is a reception filter whose pass band is, for example, a downlink frequency band (reception band) in Band 3 of LTE (Long Term Evolution). In the present preferred embodiment, the filter11corresponds to a second filter on a second path connecting the common terminal Port1and a second terminal (here, the individual terminal Port11). 
The filter12is provided on a path connecting the common terminal Port1and the individual terminal Port12, and in the present preferred embodiment, is a transmission filter whose pass band is, for example, an uplink frequency band (transmission band) in Band 3 of LTE. In the present preferred embodiment, the filter12corresponds to a first filter on a first path connecting the common terminal Port1and a first terminal (here, the individual terminal Port12). The filter21is provided on a path connecting the common terminal Port1and the individual terminal Port21, and in the present preferred embodiment, is, for example, a reception filter whose pass band is a downlink frequency band (reception band) in Band 1 of LTE. The filter22is provided on a path connecting the common terminal Port1and the individual terminal Port22, and in the present preferred embodiment, is, for example, a transmission filter whose pass band is an uplink frequency band (transmission band) in Band 1 of LTE. The filter11and the filter12define an unbalanced duplexer10whose pass band is, for example, Band 3 of LTE. Further, the filter21and the filter22define an unbalanced duplexer20whose pass band is, for example, Band 1 of LTE. That is, the quadplexer1according to the present preferred embodiment has a configuration such that the common terminal Port1is used as a common terminal (antenna terminal) of the duplexer10whose pass band is Band 3 of LTE and as a common terminal (antenna terminal) of the duplexer20whose pass band is Band 1 of LTE. In the present preferred embodiment, a signal path that passes the duplexer10and a signal path that passes the duplexer20are connected at a node N. That is, the node N is a point that bundles these two signal paths. The frequency bands assigned to Band 1 and Band 3 of LTE, which are pass bands of the quadplexer1according to the present preferred embodiment, are now described. Note that hereinbelow, with regard to the range of a frequency band, the numerical range greater than or equal to A and less than or equal to B is expressed in a simplified term, such as A to B. FIG.2is a diagram illustrating frequency bands assigned to Band 1 and Band 3. Note that hereinafter, in some cases, “Band of LTE” may be simply described as “Band”, and the reception band (Rx) and the transmission band (Tx) may be described in a simplified term such as, for example, “Band 1Rx band” for the reception band (Rx) of Band 1, by using the name of Band and letters indicating the reception band or the transmission band attached to the end of the name of Band. As illustrated inFIG.2, for example, in Band 1, about 1920 MHz to about 1980 MHz is assigned to the transmission band, and about 2110 MHz to about 2170 MHz is assigned to the reception band. In Band 3, about 1710 MHz to about 1785 MHz is assigned to the transmission band, and about 1805 MHz to about 1880 MHz is assigned to the reception band. Accordingly, as filter characteristics of the filters11,12,21, and22, characteristics indicated by the solid lines ofFIG.2, which allow the transmission band or the reception band of the corresponding Band to pass and attenuate the other bands, are desirable. As described above, the quadplexer1includes the filter12on the low frequency side (first filter) and the filter11on the high frequency side (second filter) whose pass band is higher in frequency than the filter12. 
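The Band 1 and Band 3 assignments quoted above can be kept in a small lookup table; the sketch below is only an illustrative bookkeeping aid (the dictionary keys are descriptive labels introduced here, not identifiers used elsewhere in the description).

```python
# Band edges in MHz as quoted above, mapped to the filter that should pass them.
PASS_BANDS_MHZ = {
    "Band 3 Tx (filter 12)": (1710.0, 1785.0),
    "Band 3 Rx (filter 11)": (1805.0, 1880.0),
    "Band 1 Tx (filter 22)": (1920.0, 1980.0),
    "Band 1 Rx (filter 21)": (2110.0, 2170.0),
}

def filters_passing(freq_mhz):
    """Return the labels of the pass bands that contain freq_mhz."""
    return [label for label, (lo, hi) in PASS_BANDS_MHZ.items() if lo <= freq_mhz <= hi]

print(filters_passing(1842.5))   # -> ['Band 3 Rx (filter 11)']
print(filters_passing(1950.0))   # -> ['Band 1 Tx (filter 22)']
print(filters_passing(1800.0))   # -> [] (falls in the gap between Band 3 Tx and Rx)
```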
Further, the quadplexer1includes the duplexer10including two filters (in the present preferred embodiment, the filters11and12) which includes the filter12and the duplexer20including two filters (in the present preferred embodiment, the filters21and22) which includes the filter22. Note that the pass bands of these two duplexers10and20are not limited to the combination of Band 3 and Band 1 and may be, for example, a combination of Band 25 and Band 66, a combination of Band 3 and Band 7, or the like. Further, in the quadplexer1, an impedance element, such as an impedance matching inductor or the like, for example, may be connected in or to one of paths connecting the respective filters11,12,21, and22to the node N or a path connecting the node N to the common terminal Port1or the like. 2. Basic Configuration of Filter Next, the basic configuration of each of the filters11,12,21, and22is described using the basic configuration of the filter12(first filter) whose pass band is Band 3Tx as an example. FIG.3is a circuit configuration diagram of the filter12. As illustrated inFIG.3, the filter12includes series resonators S1to S4, parallel resonators P1to P4, and inductors L1to L3. The series resonators S1to S4are connected in series from the individual terminal Port12side in this order on a signal path (series arm) connecting the common terminal Port1and the individual terminal Port12. Further, the parallel resonators P1to P4are connected in parallel to each other on paths (parallel arms) connecting respective connecting points between the individual terminal Port12and the series resonators S1to S4to a reference terminal (ground). Specifically, the parallel resonator P1is connected to the reference terminal via the inductor L1, the parallel resonators P2and P3are connected to the reference terminal via the inductor L2, and the parallel resonator P4is connected to the reference terminal via the inductor L3. Because of the foregoing connection configuration of the series resonators S1to S4and the parallel resonators P1to P4, the filter12defines a ladder band pass filter. As described above, the filter12(first filter) has a ladder filter structure that includes two or more series resonators (in the present preferred embodiment, for example, four series resonators S1to S4) on a signal path and one or more parallel resonators (in the present preferred embodiment, for example, four parallel resonators P1to P4) each on a path connecting the signal path and the reference terminal (ground). Note that each of the numbers of the series resonators and the parallel resonators of the filter12is not limited to four. The number of the series resonators may be any number greater than or equal to two, and the number of the parallel resonators may be any number greater than or equal to one. Further, the parallel resonators P1to P4may be directly connected to the reference terminal without the inductors L1to L3interposed therebetween. Further, impedance elements such as, for example, an inductor, a capacitor, and the like may be inserted in or connected to the series arm or the parallel arm. Further, inFIG.3, a common terminal is used for the reference terminal (ground) to which the parallel resonators P2and P3are both connected, and individual terminals are used for the reference terminals to which the parallel resonators P1and P4are connected, respectively.
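As an illustrative aside on the ladder configuration just described, the band pass behavior of alternating series and parallel resonators can be sketched with ABCD (chain) matrices, treating each resonator as a lossy Butterworth-Van Dyke one-port. All element values below are hypothetical assumptions; the actual SAW resonator parameters and the inductors L1to L3of the filter12are not modelled.

```python
import math

Z0 = 50.0  # port reference impedance [ohm]

def bvd_impedance(f, f_r, c_0, r=0.8, c_ratio=12.0):
    """Lossy Butterworth-Van Dyke resonator: motional branch in parallel with the static capacitance c_0.
    c_ratio = c_0/c_m sets the spacing between the resonant and anti-resonant frequencies."""
    c_m = c_0 / c_ratio
    l_m = 1.0 / ((2 * math.pi * f_r) ** 2 * c_m)
    w = 2 * math.pi * f
    z_mot = r + 1j * w * l_m + 1.0 / (1j * w * c_m)
    z_cap = 1.0 / (1j * w * c_0)
    return z_mot * z_cap / (z_mot + z_cap)

def series_abcd(z):
    return ((1.0, z), (0.0, 1.0))

def shunt_abcd(z):
    return ((1.0, 0.0), (1.0 / z, 1.0))

def cascade(stages):
    a, b, c, d = 1.0, 0.0, 0.0, 1.0
    for (a2, b2), (c2, d2) in stages:
        a, b, c, d = a * a2 + b * c2, a * b2 + b * d2, c * a2 + d * c2, c * b2 + d * d2
    return a, b, c, d

def ladder_s21_db(f, series_fr, shunt_fr):
    """|S21| in dB of alternating series/shunt BVD resonators between matched Z0 ports."""
    stages = []
    for fs, fp in zip(series_fr, shunt_fr):
        stages.append(series_abcd(bvd_impedance(f, fs, c_0=1.0e-12)))
        stages.append(shunt_abcd(bvd_impedance(f, fp, c_0=2.0e-12)))
    a, b, c, d = cascade(stages)
    return 20 * math.log10(abs(2.0 / (a + b / Z0 + c * Z0 + d)))

# Hypothetical resonant frequencies near Band 3 Tx: series resonators inside the pass band,
# shunt (parallel) resonators slightly below it, as in a ladder band pass filter.
series_fr = [1.745e9, 1.750e9, 1.755e9, 1.748e9]   # stand-ins for S1 to S4
shunt_fr  = [1.700e9, 1.695e9, 1.705e9, 1.698e9]   # stand-ins for P1 to P4

for f in (1.65e9, 1.75e9, 1.85e9):
    print(f"{f/1e9:.3f} GHz  |S21| = {ladder_s21_db(f, series_fr, shunt_fr):7.2f} dB")
```

In such a ladder, the series resonant frequencies fall inside the pass band, the parallel resonant frequencies sit just below it, and the series anti-resonances provide the attenuation just above it; the actual filter12additionally uses the inductors L1to L3for matching, which this sketch ignores.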
However, the reference terminal defining and functioning as the common terminal and the reference terminal defining and functioning as the individual terminal are not limited to the above and may be arbitrary selected depending on constraints of the mounting layout of the filter12. Further, a parallel resonator may be connected to a common terminal Port1side node of the series resonator S4, which is the resonator closest to the common terminal Port1in the series resonators S1to S4defining a ladder filter structure. Further, the parallel resonator P1connected to an individual terminal Port12side node of the series resonator S1, which is the closest to the individual terminal Port12, may be omitted. 3. Basic Structure of Resonator Next, the basic structure of each resonator (series resonator and parallel resonator) of the filter12(first filter) is described. In the present preferred embodiment, the resonator is a surface acoustic wave (SAW) resonator, for example. Note that the configurations of the other filters11,21, and22are not limited to the configuration described above and may be arbitrary designed depending on desired filter characteristics and the like. Specifically, the filters11,21, and22may not have a ladder filter structure and may have, for example, a longitudinally coupled filter structure. Further, each of the resonators of the filters11,21, and22is not limited to a SAW resonator and may alternatively be a bulk acoustic wave (BAW) resonator, for example. Furthermore, the filters11,21, and22may be configured without using any resonator and may alternatively be, for example, a LC resonant filter or a dielectric filter. FIG.4is a plan view and a cross-sectional view schematically illustrating a resonator of the filter12according to the present preferred embodiment. InFIG.4, as an exemplification of a plurality of resonators of the filter12, a schematic plan view and schematic cross-sectional views illustrating the structure of the series resonator S1are shown. Note that the series resonator S1illustrated inFIG.4is provided for illustrating a typical structure of the plurality of resonators, and the number, the length, and the like of the electrode fingers of the electrode are not limited to the ones illustrated inFIG.4. Further, although it is not illustrated inFIG.4, the electrode finger may alternatively be a variant finger including a variant portion on a top portion thereof. As illustrated in the plan view ofFIG.4, the series resonator S1includes a pair of comb-shaped electrodes32aand32bfacing each other and reflectors32calong an acoustic wave propagation direction for the pair of comb-shaped electrodes32aand32b. The pair of comb-shaped electrodes32aand32bdefine an IDT (InterDigital Transducer) electrode. Note that depending on constraints of the mounting layout or the like, one of the pair of the reflectors32cmay not need to be provided. The comb-shaped electrode32aincludes a plurality of electrode fingers322aand a plurality of offset electrode fingers323a, which are parallel to each other and arranged in a comb shape, and a busbar electrode321athat connects one-side end portions of respective ones of the plurality of electrode fingers322aand one-side end portions of respective ones of the plurality of offset electrode fingers323a. 
Further, the comb-shaped electrode32bis formed from a plurality of electrode fingers322band a plurality of offset electrode fingers323b, which are parallel or substantially parallel to each other and have a comb shape, and a busbar electrode321bthat connects one-side end portions of respective ones of the plurality of electrode fingers322band one-side end portions of respective ones of the plurality of offset electrode fingers323b. The pluralities of electrode fingers322aand322band the pluralities of offset electrode fingers323aand323bextend in an orthogonal or substantially orthogonal direction to the acoustic wave propagation direction (X-axis direction). Further, the electrode finger322aand the offset electrode finger323bface each other in the orthogonal or substantially orthogonal direction, and the electrode finger322band the offset electrode finger323aface each other in the orthogonal or substantially orthogonal direction. Here, a direction D connecting the other-side end portions of respective ones of the plurality of electrode fingers322a(end portions of respective ones of the plurality of electrode fingers322athat are not connected to the busbar electrode321a) crosses the acoustic wave propagation direction (X-axis direction) at a predetermined angle. Further, the direction D connecting the other-side end portions of respective ones of the plurality of electrode fingers322b(end portions of the plurality of electrode fingers322bthat are not connected to the busbar electrode321b) crosses the acoustic wave propagation direction (X-axis direction) at the predetermined angle. According to this shape, each of the IDT electrodes of the series resonators S1to S4and the parallel resonators P1to P4is a slanted IDT in which the acoustic wave propagation direction crosses a direction along which the plurality of electrode fingers are lined up. In a one-port resonator that utilizes surface acoustic waves and includes a piezoelectric layer, there may be a case where transverse mode ripples are produced between a resonant frequency and an anti-resonant frequency and a transmission characteristic in a pass band degrades. In the filter12according to the present preferred embodiment, to counteract such an issue, the slanted IDT is provided as an IDT electrode of each resonator. The pair of the reflectors32care provided along the direction D with respect to the pair of the comb-shaped electrodes32aand32b. Specifically, the pair of the reflectors32csandwich the pair of the comb-shaped electrodes32aand32bin the direction D. Each reflector32cincludes a plurality of reflector electrode fingers in parallel or substantially in parallel to each other and reflector busbar electrodes that connect the plurality of reflector electrode fingers. The pair of the reflectors32care configured such that the reflector busbar electrodes are aligned in the direction D. The pair of the reflectors32cconfine a standing wave of an acoustic wave to be propagated without leaking to the outside of the resonator (here, the series resonator S1). This enables the resonator to propagate a radio frequency signal in a pass band, which is defined by the electrode pitch, the number of pairs, the intersecting width, and the like of the pair of the comb-shaped electrodes32aand32b, with low loss and to highly attenuate a radio frequency signal in the outside of the pass band.
Further, the IDT electrode including the plurality of the electrode fingers322aand322b, the plurality of the offset electrode fingers323aand323b, and the busbar electrodes321aand321bhas a multilayer structure including an adhesive layer324and a primary electrode layer325as illustrated in the cross-sectional view ofFIG.4. Further, the cross-sectional structure of the reflector32cis the same or substantially the same as the cross-sectional structure of the IDT electrode, and thus the description thereof is omitted in the following description. The adhesive layer324improves adhesiveness between a piezoelectric layer327and the primary electrode layer325, and as a material therefor, for example, Ti may be used. The film thickness of the adhesive layer324is, for example, about 12 nm. For the primary electrode layer325, as a material, for example, Al including 1% of Cu may be used. The film thickness of the primary electrode layer325is, for example, about 162 nm. A protective layer326covers the IDT electrode. The protective layer326protects the primary electrode layer325from an external environment, adjusts a frequency-temperature characteristic, improves moisture resistance performance, and the like, and is a film whose main component is, for example, silicon dioxide. The film thickness of the protective layer326is, for example, about 25 nm. Note that the materials for the adhesive layer324, the primary electrode layer325, and the protective layer326are not limited to the materials described above. Further, the IDT electrode does not need to have the foregoing multilayer structure. The IDT electrode may be made of, for example, a metal such as Ti, Al, Cu, Pt, Au, Ag, Pd, or the like or an alloy thereof, or may include a plurality of multilayer bodies made of the metal or the alloy described above. Further, the protective layer326may not need to be provided. Such IDT electrode and the reflectors32care provided on a principal surface of a substrate320, which will be described in the next section. The multilayer structure of the substrate320is described below. As shown in the lower portion ofFIG.4, the substrate320includes a high acoustic velocity support substrate329, a low acoustic velocity film328, and the piezoelectric layer327, and has a structure in which the high acoustic velocity support substrate329, the low acoustic velocity film328, and the piezoelectric layer327are stacked on top of each other in this order. The piezoelectric layer327is a piezoelectric film on a principal surface of which the IDT electrode and the reflectors32care provided. For example, the piezoelectric layer327is made of 50° Y-cut X-propagation LiTaO3piezoelectric single crystal or piezoelectric ceramics (lithium tantalate single crystal or ceramics that is cut at a surface whose normal line is an axis obtained by rotating from the Y axis by about 50° about the X-axis serving as the center axis, wherein a surface acoustic wave propagates in the X-axis direction on this single crystal or ceramics). The thickness of the piezoelectric layer327is, for example, less than or equal to about 3.5λ where λ is the wavelength of an acoustic wave defined by the electrode pitch of the IDT electrode, and is about 600 nm, for example. The high acoustic velocity support substrate329is a substrate that supports the low acoustic velocity film328, the piezoelectric layer327, and the IDT electrode. 
Further, the high acoustic velocity support substrate329is a substrate such that the acoustic velocity of a bulk wave in the high acoustic velocity support substrate329is higher than the acoustic velocities of an acoustic wave such as a surface wave or a boundary wave propagating through the piezoelectric layer327, and confines a surface acoustic wave in a portion where the piezoelectric layer327and the low acoustic velocity film328are stacked on top of each other and prevents the surface acoustic wave from leaking downward below the high acoustic velocity support substrate329. The high acoustic velocity support substrate329is, for example, a silicon substrate and has a thickness of, for example, about 125 μm. Note that the high acoustic velocity support substrate329may be made of, for example, any one of (1) a piezoelectric body such as silicon carbide, silicon, lithium tantalate, lithium niobate, crystal, or the like, (2) various ceramics such as alumina, sapphire, zirconia, cordierite, mullite, steatite, forsterite, or the like, (3) magnesia or diamond, (4) a material whose main component is one of the foregoing materials, and (5) a material whose main component is a mixture of one of the foregoing materials. The low acoustic velocity film328is a film such that the acoustic velocity of a bulk wave in the low acoustic velocity film328is lower than the acoustic velocity of a bulk wave propagating through the piezoelectric layer327and is between the piezoelectric layer327and the high acoustic velocity support substrate329. According to this structure and the property that energy of an acoustic wave is focused in a medium where the acoustic velocity is inherently low, the leakage of surface acoustic wave energy to the outside of the IDT electrode is reduced or prevented. The low acoustic velocity film328is, for example, a film whose main component is silicon dioxide. The thickness of the low acoustic velocity film328is, for example, less than or equal to about 2λ where λ is the wavelength of an acoustic wave defined by the electrode pitch of the IDT electrode, and is about 670 nm, for example. According to the foregoing multilayer structure of the substrate320, it becomes possible to substantially increase the Q factor at a resonant frequency and an anti-resonant frequency compared with a known structure in which a single layer of a piezoelectric substrate is used. That is, because a high Q factor surface acoustic wave resonator may be obtained, it becomes possible to provide a filter having a low insertion loss using this surface acoustic wave resonator. Note that the high acoustic velocity support substrate329may alternatively have a structure in which a support substrate and a high acoustic velocity film are stacked on top of each other, the high acoustic velocity film being such that the acoustic velocity of a bulk wave propagating therethrough is higher than the acoustic velocities of acoustic waves such as a surface wave and a boundary wave propagating through the piezoelectric layer327. In this case, as the support substrate, a piezoelectric body such as, for example, lithium tantalate, lithium niobate, crystal, and the like, various ceramics such as alumina, magnesia, silicon nitride, aluminum nitride, silicon carbide, zirconia, cordierite, mullite, steatite, forsterite, and the like, a dielectric body such as glass, sapphire, and the like, a semiconductor such as silicon, gallium nitride, and the like, and a resin substrate and the like may be used.
Further, for the high acoustic velocity film, various high acoustic velocity materials such as, for example, aluminum nitride, aluminum oxide, silicon carbide, silicon nitride, silicon oxynitride, a DLC film, diamond, a medium whose main component is one of the foregoing materials, a medium whose main component is a mixture of one of the foregoing materials, or the like may be used. Note that in the present preferred embodiment, an example is described using the case where the IDT electrode of the filter12is provided on the substrate320including the piezoelectric layer327. However, the substrate on which the IDT electrode is to be provided may alternatively be a piezoelectric substrate including a single layer of the piezoelectric layer327. The piezoelectric substrate in this case is made of, for example, a piezoelectric single crystal of LiTaO3or another piezoelectric single crystal such as LiNbO3or the like. Further, for the substrate on which the IDT electrode of the filter12is provided, besides one that is made entirely of a piezoelectric layer, any structure in which a piezoelectric layer is stacked on a support substrate may also be used as long as the substrate includes a piezoelectric layer. Further, the piezoelectric layer327according to the foregoing present preferred embodiment uses 50° Y-cut X-propagation LiTaO3single crystal. However, the cut angle of a single crystal material is not limited thereto. That is, depending on desired band pass characteristics of an acoustic wave filter device, the multilayer structure, the material, and the thickness may be changed as needed, and even with a surface acoustic wave filter that uses a LiTaO3piezoelectric substrate having a cut angle other than the above, a LiNbO3piezoelectric substrate, or the like, the same or substantially the same advantageous effects may be produced. Here, electrode parameters of the IDT electrode of a surface acoustic wave resonator will be described. The wavelength of a surface acoustic wave resonator is defined by a wavelength λ, which is a repetition period of the plurality of electrode fingers322aor the plurality of electrode fingers322bthat forms the IDT electrode illustrated in the middle part ofFIG.4. Further, the electrode pitch is ½ of the wavelength λ and is defined as (W+S), where W is the line width of the electrode fingers322aand322bof the comb-shaped electrodes32aand32b, and S is the space width between the electrode finger322aand the electrode finger322b, which are adjacent to each other. Further, as illustrated in the top portion ofFIG.4, the intersecting width L of a pair of the comb-shaped electrodes32aand32bis the overlapping length of the electrode fingers when viewed from the direction D of the electrode finger322aand the electrode finger322b. Further, the electrode duty of each resonator is a line width share of the pluralities of electrode fingers322aand322band is defined as W/(W+S), which is a ratio of the line width of the pluralities of electrode fingers322aand322bto the sum of this line width and the space width of the pluralities of electrode fingers322aand322b. Note that in the above, an example is described using the case where the series resonator S1includes a slanted IDT. However, the present preferred embodiment is not limited thereto, and all of the series resonators and the parallel resonators may include a slanted IDT, or only the series resonators may include a slanted IDT.
Further, in the above, an example is described using the case where the series resonator S1includes the offset electrode fingers. However, the present preferred embodiment is not limited thereto, and all of the series resonators and the parallel resonators may include one or more offset electrode fingers, or one or more of the resonators may include no offset electrode finger. 4. Resonator Structure in Filter According to Reference Example As described above, in a resonator including a slanted IDT electrode portion, ripples may be produced near a resonant frequency. Such ripples near a resonant frequency may be reduced or prevented by using a resonator in which variant fingers are used as electrode fingers in the slanted IDT electrode portion. However, it is likely to have ripples near an anti-resonant frequency. In view of the above, first, for the series resonators S1to S4of the filter12ofFIG.3, reference examples 1 to 4 are set to compare characteristics. The reference examples 1 to 4 are each configured such that all of the electrode fingers including the offset electrode fingers are the variant finger or all of the electrode fingers including the offset electrode fingers are not the variant finger. FIGS.5A and5Bare plan views of the IDT electrode of the series resonators S1to S4of the filter12according to the reference examples 1 to 4. In the filters12according to the reference examples 1 to 4, in each of the series resonators S1to S4, all of the electrode fingers322aand322band the offset electrode fingers323aand323bare either not the variant finger (FIG.5A) or the variant finger (FIG.5B). Here, the variant finger is, of a plurality of electrode fingers, an electrode finger with a wider electrode finger width at an end portion that is not connected to a busbar electrode than the electrode finger width at an electrode finger center portion (that is, having a variant portion). As illustrated inFIG.5A, in the resonator in which all of the electrode fingers are not the variant finger, all of the offset electrode fingers323a1and the electrode fingers322b1are a second electrode finger such that the electrode finger width at one end portion is less than or equal to the electrode finger width at a center portion. In this resonator, all of the electrode fingers322aand the offset electrode fingers323bare also the second electrode finger (not illustrated). On the other hand, as illustrated inFIG.5B, in the resonator in which all of the electrode fingers are the variant finger, all of the offset electrode fingers323a2and the electrode fingers322b2include variant portions323dand322d, respectively, and are a first electrode finger such that the electrode finger width at one end portion is wider than the electrode finger width at a center portion. In this resonator, all of the electrode fingers322aand the offset electrode fingers323bare also the first electrode finger (not illustrated). Table 1 indicates the arrangement of the resonators in which all of the electrode fingers (including the offset electrode fingers) are the variant finger in the reference examples 1 to 4. In the following description, the shape of an electrode finger (including an offset electrode finger) that does not include the variant portion is expressed by using the phrase “the variant portion is removed”. This phrase is only used to make a distinction between the shape of an electrode finger that does not include the variant portion and the shape of the variant finger, and does not limit the procedure of fabrication. 
That is to say, the electrode finger in which a variant portion is removed may be an electrode finger formed by patterning a shape that does not originally include the variant portion. Table 1 represents the removal ratio of variant portion for each resonator of the reference examples 1 to 4. In Table 1, that the removal ratio of variant portion is 0% means that all of the electrode fingers (including the offset electrode fingers) of a resonator are the variant finger, and that the removal ratio of variant portion is 100% means that all of the electrode fingers (including the offset electrode fingers) of a resonator do not include the variant portion.

TABLE 1
Removal ratio of variant portion
                       Resonator S4   Resonator S3   Resonator S2   Resonator S1
Reference example 1          0%             0%             0%             0%
Reference example 2          0%           100%             0%             0%
Reference example 3        100%           100%           100%           100%
Reference example 4          0%           100%             0%           100%

5. Characteristic Comparison of Quadplexer Using Filters According to Reference Example The band pass characteristics and the isolation characteristics of the quadplexers1(hereinafter, simply referred to as reference examples 1 to 4) are described, in which the filters of the respective reference examples 1 to 4 are used as the filter12. First, the reference example 1 is described. FIG.6is a graph illustrating examples of the band pass characteristic between the individual terminal Port12and the common terminal Port1and the isolation characteristic between the individual terminal Port12and the individual terminal Port11in the reference example 1. Specifically,FIG.6illustrates the band pass characteristic of a path that goes through the filter12(filter for Band 3Tx) and the isolation characteristic between paths that go through the filter12and the filter11(filter for Band 3Rx). More specifically,FIG.6illustrates the insertion loss which is the ratio of the intensity of a signal output from the common terminal Port1to the intensity of a signal input to the individual terminal Port12and the isolation which is the ratio of the intensity of a signal output from the individual terminal Port11to the intensity of a signal input to the individual terminal Port12. In both of the band pass characteristic and the isolation characteristic illustrated inFIG.6, ripples are observed in a high frequency end region of the reception band (Band 3Rx) of Band 3. These ripples coincide in frequency with ripples near an anti-resonant frequency (not illustrated) in the characteristic of the filter12of the reference example 1 alone. Therefore, it is clear that these ripples are caused by the filter12. As described above, the use of the filter12in which all of the electrode fingers (including the offset electrode fingers) of all of the resonators are the variant finger in the quadplexer1may cause degradation in characteristics of a pass band in another filter (for example, the filter11). FIG.7Ais an enlarged graph illustrating an example of the isolation characteristic between the individual terminal Port12and the individual terminal Port11in the reference examples 1 to 4. FIG.7Bis an enlarged graph illustrating an example of the energy loss between the individual terminal Port12and the common terminal Port1in the reference examples 1, 3, and 4. Here, the energy loss means power consumption in a path, which is obtained by removing matching loss from passage loss.
Contrary to the reference example 1, in the reference example 3, the variant portion is removed from all of the electrode fingers (including the offset electrode fingers) of all of the resonators in the filter12. In the reference example 3, although ripples in the isolation characteristic in a high frequency end region of Band 3Rx are small, the energy loss in Band 3Tx is large. Further, in the reference example 2, in the filter12, the variant portion is removed from all of the electrode fingers (including the offset electrode fingers) of the series resonator S2, and all of the electrode fingers (including the offset electrode fingers) of the series resonators S1, S3, and S4include the variant portion. In the reference example 2, in the isolation characteristic, ripples of the same or substantially the same level as in the reference example 1 are produced in a high frequency end region of Band 3Rx. Further, in the reference example 4, in the filter12, the variant portion is removed from all of the electrode fingers (including the offset electrode fingers) of the series resonators S2and S4, and all of the electrode fingers (including the offset electrode fingers) of the series resonators S1and S3include the variant portion. In the reference example 4, although ripples in the isolation characteristic in a high frequency end region of Band 3Rx are similarly small as in the reference example 3, the energy loss in Band 3Tx is larger compared with the reference example 1. As described above, the issue of the reference example 1 may not be resolved by the reference example 2 in which the removal ratio of variant portion is set to 100% only in one of the series resonators (here, the series resonator S2), and the same issue as in the reference example 3 arises in the reference example 4 in which the removal ratio of variant portion is set to 100% in two of the series resonators (here, the series resonators S2and S4). That is to say, an excellent characteristic in both of the loss in a pass band and the ripples near an anti-resonant frequency may not be achieved by setting the removal ratio of variant portion to 0% (no removal) or 100% (complete removal) in each resonator. 6. Configuration of Filter According to Working Example In view of the above, the inventors of preferred embodiments of the present invention studied a configuration in which the removal ratio of variant portion is set to an intermediate value which is greater than 0% and less than 100% (in other words, thinning the variant portions) in the series resonators S1and S3and to 0% (no removal) in the series resonators S2and S4. Specifically, filters in which the removal ratio of variant portion in both of the series resonators S1and S3is about 30%, about 50%, and about 75% are defined as working examples 1, 2, and 3, respectively. In the working examples 1, 2, and 3, the series resonators S2and S4are examples of the second series resonator including the first electrode fingers each including the variant portion, and the series resonators S1and S3are examples of the first series resonator including the first electrode fingers each including the variant portion and the second electrode fingers including no variant portion. Note that in the working examples 1 and 2, for ease of explanation, it is assumed that the second series resonator (series resonators S2and S4) include only the first electrode fingers (the removal ratio is 0%). 
However, the present preferred embodiment is not limited thereto, and the second series resonator may alternatively include, for example, several second electrode fingers. FIGS.8A to8Care plan views illustrating examples of the IDT electrode of the series resonators S1and S3in the filters12according to the working examples 1, 2 and 3 of a preferred embodiment of the present invention, respectively, and each illustrate the configuration illustrated inFIGS.5A and5Bin a simpler form for the entirety of the comb-shaped electrodes32aand32b.FIGS.8A to8Cillustrate examples of the arrangement of the variant portions322dand323d, in which the removal ratios of variant portion are about 30%, about 50%, and about 75%, respectively. Here, the removal ratio of variant portion is a ratio of the electrode fingers and the offset electrode fingers that do not include the variant portion to all of the electrode fingers and the offset electrode fingers of the IDT electrode. In all of the examples ofFIGS.8A to8C, the variant portion is not removed in a first portion A1that is centrally located in the IDT electrode in the acoustic wave propagation direction, and the variant portion is removed in a second portion A2and a third portion A3that are located on the two sides of the first portion A1in the acoustic wave propagation direction. That is, the first portion A1includes only the first electrode fingers (variant fingers), and the second portion A2and the third portion A3include only the second electrode fingers (fingers having no variant portion). The second portion A2and the third portion A3are each a portion of the IDT electrode sandwiched between the first portion A1and the reflector32c. In the example ofFIG.8A, the variant portion is removed from five electrode fingers (about 15% of a total of 32 electrode fingers) in each of the second portion A2and the third portion A3. Thus, the removal ratio of variant portion in the IDT electrode as a whole is about 30%. InFIG.8B, the variant portion is removed from eight electrode fingers (about 25%) in each of the second portion A2and the third portion A3. Thus, the removal ratio of variant portion in the IDT electrode as a whole is about 50%. InFIG.8C, the variant portion is removed from twelve electrode fingers (about 37.5%) in each of the second portion A2and the third portion A3. Thus, the removal ratio of variant portion in the IDT electrode as a whole is about 75%. 7. Characteristic Comparison of Quadplexer Using Filters According to Working Example Next, the band pass characteristics and the isolation characteristics of quadplexers1(hereinafter, simply referred to as working examples 1, 2, and 3) that include the respective filters according to the working examples 1, 2, and 3 as the filter12are described. FIG.9Ais a graph illustrating examples of the band pass characteristic between the individual terminal Port12and the common terminal Port1in the working examples 1, 2, and 3 comparing with the reference examples 1 and 4. Specifically,FIG.9Aillustrates the band pass characteristic of a path that goes through the filter12(filter for Band 3Tx). More specifically,FIG.9Aillustrates the insertion loss which is the ratio of the intensity of a signal output from the common terminal Port1to the intensity of a signal input to the individual terminal Port12. 
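The removal-ratio arithmetic used in the working examples above (five, eight, or twelve fingers thinned in each of the second portion A2 and the third portion A3 out of 32 fingers in total) can be checked with a short script. The sketch below is our own illustration and not part of the patent; only the finger counts are taken from the description.

```python
# Minimal sketch (our own illustration): overall removal ratio of variant
# portions when fingers are thinned only in the two side portions (A2, A3)
# of the IDT electrode and kept in the central portion (A1).

def removal_ratio(total_fingers, removed_per_side):
    """Fraction of fingers (incl. offset fingers) whose variant portion is removed."""
    return 2.0 * removed_per_side / total_fingers

def finger_layout(total_fingers, removed_per_side):
    """'P' = plain finger (no variant portion), 'V' = variant finger."""
    center = total_fingers - 2 * removed_per_side
    return ["P"] * removed_per_side + ["V"] * center + ["P"] * removed_per_side

if __name__ == "__main__":
    for removed, label in [(5, "working example 1"),
                           (8, "working example 2"),
                           (12, "working example 3")]:
        ratio = removal_ratio(32, removed)
        layout = finger_layout(32, removed)
        print(f"{label}: {removed} removed per side -> {ratio:.1%} overall, "
              f"{layout.count('V')} variant fingers kept in the central portion")
    # working example 1: 5 removed per side -> 31.2% overall, 22 variant fingers kept
    # working example 2: 8 removed per side -> 50.0% overall, 16 variant fingers kept
    # working example 3: 12 removed per side -> 75.0% overall, 8 variant fingers kept
```

With these counts the overall removal ratios come out at roughly 31%, 50%, and 75%, matching the about 30%, about 50%, and about 75% of the working examples.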
FIG.9Bis a graph illustrating examples of the isolation characteristic between the individual terminal Port12and the individual terminal Port11in the working examples 1, 2, and 3 comparing with the reference examples 1 and 4. Specifically,FIG.9Billustrates the isolation characteristic between paths that go through the filter12and the filter11(filter for Band 3Rx). More specifically,FIG.9Billustrates the isolation which is the ratio of the intensity of a signal output from the individual terminal Port11to the intensity of a signal input to the individual terminal Port12. FIG.9Cis a graph illustrating examples of the energy loss between the individual terminal Port12and the common terminal Port1in the working examples 1, 2, and 3 comparing with the reference examples 1 and 4. Specifically,FIG.9Cillustrates the band pass characteristic of a path that goes through the filter12(filter for Band 3Tx). More specifically,FIG.9Cillustrates the power consumption in the path, which is obtained by removing matching loss from the insertion loss which is the ratio of the intensity of a signal output from the common terminal Port1to the intensity of a signal input to the individual terminal Port12. As shown inFIGS.9A to9C, large ripples are produced in the isolation characteristic in a high frequency end region of a band of Band 3Rx in the reference example 1, and the insertion loss increases in a band of Band 3Tx in the reference example 4. The increase in the insertion loss in the band of Band 3Tx in the reference example 4 is caused by the series resonators S1and S3. The ripples produced in the isolation characteristic in a high frequency end region of the band of Band 3Rx are the largest (worst) in the reference example 1 and become gradually smaller (better) in the working example 1, the working example 2, the working example 3, and the reference example 4 in this order. Further, the insertion loss and the energy loss in the band of Band 3Tx are both the largest (worst) in the reference example 4 and become gradually smaller (better) in the working example 3, the working example 2, the working example 1, and the reference example 1 in this order. This result is summarized in Table 2 using the ripples produced in the isolation characteristic of the reference example and the insertion loss of the reference example 1 as the references of the ripples and the insertion loss. As summarized in Table 2, the ripples produced in the isolation characteristic are large in the reference example 1, and the insertion loss is large in the reference example 4. Thus, these reference examples may not provide an excellent characteristic in both the isolation characteristic and the insertion loss. In contrast, having smaller ripples compared with the reference example 1 and smaller insertion loss compared with the reference example 4, the working examples 1, 2, and 3 provide an excellent characteristic in both the isolation characteristic and the insertion loss. 
TABLE 2
Removal ratio of variant portion | Resonator S4 | Resonator S3 | Resonator S2 | Resonator S1 | Ripples | Insertion loss
Reference example 1 | 0% | 0% | 0% | 0% | X | ⊚
Reference example 4 | 0% | 100% | 0% | 100% | ⊚ | X
Working example 1 | 0% | 30% | 0% | 30% | ◯ | ◯
Working example 2 | 0% | 50% | 0% | 50% | ◯ | ◯
Working example 3 | 0% | 75% | 0% | 75% | ◯ | ◯

From this result, it is possible to provide a filter with both smaller ripples and smaller insertion loss by configuring one or more series resonators of a plurality of series resonators of the filter in such a manner to provide the first portion of the IDT electrode centrally located in the acoustic wave propagation direction only using the first electrode fingers and provide the second portion and the third portion arranged on the two sides of the first portion only using the second electrode fingers. The series resonator in which the first portion of the IDT electrode includes only the first electrode fingers and the second portion and the third portion include only the second electrode fingers may be used as, of a plurality of series resonators forming a filter, a series resonator that is not the series resonator having the lowest anti-resonant frequency (that is, the resonator that steepens end regions of a pass band of the filter). This provides a filter having an excellent feature for both the ripples near an anti-resonant frequency and the insertion loss without losing steepness in the band pass characteristic of the filter. 8. Configuration of Filter According to Modified Example In the first preferred embodiment, an example is described using the configuration in which the first filter (the filter12in the first preferred embodiment) includes only a ladder filter structure. However, the first filter may include, in addition to a ladder filter structure, a longitudinally coupled filter structure. In view of this, in the present modified example of the first preferred embodiment, a quadplexer including a first filter having such a filter structure is described. Note that of a plurality of filters included in the quadplexer, filters other than the first filter have the same or substantially the same configurations as those in the first preferred embodiment, and thus the description thereof is omitted. FIG.10is a circuit configuration diagram of a filter12A (first filter) according to a modified example of the first preferred embodiment. As illustrated inFIG.10, the filter12A includes series resonators S6and S7, parallel resonators P5and P6, and a longitudinally coupled resonator S5. That is to say, the filter12A is a filter in which the longitudinally coupled resonator S5is added to a ladder filter structure. The longitudinally coupled resonator S5has a longitudinally coupled filter structure between the common terminal Port1and the individual terminal Port12. In the present preferred embodiment, the longitudinally coupled resonator S5is on the individual terminal Port12side of the series resonator S6and includes, for example, nine IDTs and reflectors on both sides thereof. Note that the location where the longitudinally coupled resonator S5is to be provided is not limited to the above, and may be, for example, between the series resonator S7and the series resonator S6or on the common terminal Port1side of the series resonator S7.
Even with the quadplexer including the first filter (in the present modified example, the filter12A) configured as described above, as is the case with the first preferred embodiment, it is possible to provide a filter having an excellent feature for both the ripples near an anti-resonant frequency and the insertion loss by providing the first electrode fingers and the second electrode fingers in the same or substantially the same sequence in the first portion and the second portion of the IDT electrode in at least one of the series resonators S6and S7. The series resonator in which the first electrode fingers and the second electrode fingers are arranged in the same or substantially the same sequence in the first portion and the second portion of the IDT electrode may be used as, of the series resonators S6and S7, a series resonator that is not the series resonator having the lowest anti-resonant frequency (that is, the resonator that defines an end region of a pass band of the filter). This enables a filter having an excellent feature for both the ripples near an anti-resonant frequency and the insertion loss without losing steepness in the band pass characteristic of the filter. Further, the filter12A according to the present preferred embodiment enables adjustment of the filter characteristic to a preferable characteristic such as improved attenuation and the like by providing the longitudinally coupled filter structure. Second Preferred Embodiment The quadplexers according to the first preferred embodiment and the modified example described above are applicable to a radio frequency front-end circuit, and further to a communication device including this radio frequency front-end circuit. Therefore, in the present preferred embodiment, such a radio frequency front-end circuit and a communication device are described. FIG.11is a configuration diagram of a radio frequency front-end circuit30according to a preferred embodiment 2. Note thatFIG.11also illustrates elements (e.g., an antenna2, a RF signal processing circuit (RFIC)3, and a base band signal processing circuit (BBIC)4) connected to the radio frequency front-end circuit30. The radio frequency front-end circuit30, the RF signal processing circuit3, and the base band signal processing circuit4define a communication device40. The radio frequency front-end circuit30includes a quadplexer1according to the first preferred embodiment, a reception side switch13and a transmission side switch23, a low noise amplifier circuit14, and a power amplifier circuit24. The reception side switch13is a switch circuit including two selection terminals respectively connected to the individual terminals Port11and Port21, which are reception terminals of the quadplexer1, and a common terminal connected to the low noise amplifier circuit14. The transmission side switch23is a switch circuit including two selection terminals respectively connected to the individual terminals Port12and Port22, which are transmission terminals of the quadplexer1, and a common terminal connected to the power amplifier circuit24. Each of the reception side switch13and the transmission side switch23connects the common terminal to a signal path corresponding to a predetermined band in response to a control signal from a controller (not illustrated) and is, for example, a SPDT (single pole double throw) switch. 
Note that the selection terminal to be connected to the common terminal is not limited to one terminal, and a plurality of selection terminals may alternatively be connected to the common terminal. That is, the radio frequency front-end circuit30may be compatible with carrier aggregation. The low noise amplifier circuit14is a reception amplifier circuit that amplifies a radio frequency signal (here, a received radio frequency signal) that goes through the antenna2, the quadplexer1, and the reception side switch13and outputs to the RF signal processing circuit3. The power amplifier circuit24is a transmission amplifier circuit that amplifies a radio frequency signal (here, a transmitting radio frequency signal) output from the RF signal processing circuit3and outputs to the antenna2via the transmission side switch23and the quadplexer1. The RF signal processing circuit3performs signal processing on the received radio frequency signal input from the antenna2via a reception signal path using down-converting and the like, for example, and outputs a reception signal generated by this signal processing to the base band signal processing circuit4. Further, the RF signal processing circuit3performs signal processing on a transmission signal input from the base band signal processing circuit4using up-converting and the like, for example, and outputs a transmitting radio frequency signal generated by this signal processing to the power amplifier circuit24. The RF signal processing circuit3is, for example, a RFIC. The signal processed in the base band signal processing circuit4is used, for example, as an image signal for image display or as an audio signal for call. Note that the radio frequency front-end circuit30may include other circuit elements between the elements described above. According to the radio frequency front-end circuit30and the communication device40configured as described above, it becomes possible to provide an excellent characteristic for both the ripples in the isolation characteristic and the passage loss by including the quadplexer1according to the first preferred embodiment. Note that instead of a quadplexer according to the first preferred embodiment, the radio frequency front-end circuit30may include the quadplexer1according to the modified example of the first preferred embodiment. Further, depending on the processing system of a radio frequency signal, the communication device40may not need to include the base band signal processing circuit (BBIC)4. Other Preferred Embodiments The filters, the multiplexers, the radio frequency front-end circuits, and the communication devices according to the preferred embodiments of the present invention have been described using the preferred embodiments and the modified example thereof. However, other preferred embodiments obtained by combining optional elements of the foregoing preferred embodiments and the modified example described above, modified examples obtained by applying various modifications apparent to those skilled in the art to the foregoing preferred embodiments without departing the scope of the present invention, and various devices including a radio frequency front-end circuit or a communication device according to preferred embodiments of the present invention may also be included in the present invention. For example, in the foregoing description, the quadplexer is used as an example of the multiplexer. 
However, the present invention is also applicable to, for example, a triplexer in which antenna terminals of three filters are connected to a common terminal, or a hexaplexer in which antenna terminals of six filters are connected to a common terminal. That is, the multiplexer may only need to include two or more filters. Further, the configuration of the multiplexer is not limited to the configuration that includes both the transmission filter and the reception filter and may alternatively have a configuration that includes only the transmission filter or only the reception filter. Further, in the first preferred embodiment, it is described that the filter12corresponds to the first filter and the filter11is the second filter. That is, in the first preferred embodiment, the first filter and the second filter are the transmission filter and the reception filter, respectively. However, the present invention may be applied to any multiplexers without being limited by the usage and the like of the first filter and the second filter, as long as stop band ripples of the first filter are located in a pass band of the second filter. Accordingly, the first filter and the second filter may both be a transmission filter. As described above, a filter according to a preferred embodiment of the present invention includes a pair of input/output terminals, and one or more series resonators on a signal path connecting the pair of input/output terminals, wherein each of the one or more series resonators includes an IDT electrode including a pair of comb-shaped electrodes on a substrate including a piezoelectric layer, each of the pair of comb-shaped electrodes included in each of the one or more series resonators includes a plurality of electrode fingers extending in a direction orthogonal or substantially orthogonal to an acoustic wave propagation direction, and a busbar electrode connecting one-side end portions of respective ones of the plurality of electrode fingers, the IDT electrode of each of the one or more series resonators is defined by first electrode fingers, second electrode fingers, or both the first electrode fingers and the second electrode fingers, the first electrode finger being one of the plurality of electrode fingers and having a wider electrode finger width at an another-side end portion thereof than an electrode finger width at a center portion thereof, the second electrode finger being one of the plurality of electrode fingers and having a narrower or equal electrode finger width at an another-side end portion thereof than an electrode finger width at a center portion thereof, the one or more series resonators includes one or more first series resonator, in the IDT electrode of each of the one or more first series resonators, a direction connecting the another-side end portions of respective ones of the plurality of electrode fingers crosses the acoustic wave propagation direction, and a first portion of the IDT electrode of each of the one or more first series resonators includes only the first electrode fingers, and a second portion and a third portion include only the second electrode fingers, the first portion being centrally located in the acoustic wave propagation direction, the second portion and the third portion being located on two sides of the first portion in the acoustic wave propagation direction. 
According to this, the first electrode fingers (variant fingers) and the second electrode fingers (electrode fingers including no variant portion) are provided in a mixed manner in the IDT electrode of the first series resonator of the filter. Because of this, the ripples near an anti-resonant frequency that are likely to increase in the case where the first electrode finger is used for all of the electrode fingers and the ripples near a resonant frequency that are likely to increase in the case where the second electrode finger is used for all of the electrode fingers are both reduced or prevented. As a result, it becomes possible to provide a filter that reduces or prevents both the ripples near a resonant frequency and the ripples near an anti-resonant frequency. Further, the one or more series resonator may further include one or more second series resonators on the signal path connecting the pair of input/output terminals, and the IDT electrode that defines each of the one or more second series resonators may include the first electrode fingers. Further, each of the one or more first series resonators may be used as a series resonator that is not the series resonator having a lowest anti-resonant frequency. Because of this, the first electrode fingers and the second electrode fingers are mixed in a series resonator that is not the series resonator having the lowest anti-resonant frequency, that is, the series resonator that provides steepness in a pass band end region of the filter. As a result, it becomes possible to provide a filter having an excellent feature for both the ripples near an anti-resonant frequency and the insertion loss without losing steepness in the band pass characteristic of the filter. Further, the filter may further include one or more parallel resonators on one or more paths that connect the signal path to ground and may have a ladder filter structure. This enables adjustment of the filter characteristic to a preferable characteristic, such as less loss property and the like. Further, a longitudinally coupled filter structure on the signal path may be included. This enables adjustment of the filter characteristic to a preferable characteristic such as enhanced attenuation and the like. Further, the substrate may include a piezoelectric layer in which the IDT electrode is provided on one of principal surfaces of the piezoelectric layer, a high acoustic velocity support substrate in which acoustic velocity of a bulk wave propagating through the high acoustic velocity support substrate is higher than acoustic velocity of an acoustic wave propagating through the piezoelectric layer, and a low acoustic velocity film in which acoustic velocity of a bulk wave propagating through the low acoustic velocity film is lower than acoustic velocity of a bulk wave propagating through the piezoelectric layer, the low acoustic velocity film being provided between the high acoustic velocity support substrate and the piezoelectric layer. This enables the Q factor of each resonator including the IDT electrode provided on the substrate including the piezoelectric layer to be maintained at a high value. 
Further, a multiplexer according to a preferred embodiment of the present invention includes a common terminal, a first terminal, and a second terminal, a first filter on a first path connecting the common terminal and the first terminal, and a second filter on a second path connecting the common terminal and the second terminal, a pass band of the second filter being higher in frequency than a pass band of the first filter, wherein the first filter is the filter described above. This enables the multiplexer having an excellent feature in both the insertion loss in the second path and the isolation between the first terminal and the second terminal to be provided. Further, a pass band of the first filter may be an uplink frequency band in Band 3 of LTE (Long Term Evolution), and a pass band of the second filter may be an uplink frequency band in Band 1 of LTE. In the case where a pass band of the first filter is the uplink frequency band in Band 3 of LTE and a pass band of the second filter is an uplink frequency band in Band 1 of LTE, ripples in the pass band of the second filter are likely to increase. This enables reduction or prevention of an increase of the ripples effectively by configuring the series resonator closest to the common terminal of the first filter so as to satisfy the condition described above. Further, a radio frequency front-end circuit according to a preferred embodiment of the present invention includes any one of the multiplexers described above, and an amplifier circuit connected to the multiplexer. This enables the radio frequency front-end circuit that enables reduction or prevention of ripples in a pass band to be provided. Further, a communication device according to a preferred embodiment of the present invention includes an RF signal processing circuit that performs processing on a radio frequency signal being transmitted or received by an antenna, and the foregoing radio frequency front-end circuit that transmits a radio frequency signal between the antenna and the RF signal processing circuit. This enables to provide the communication device that enables reduction or prevention of ripples in a pass band. Preferred embodiments of the present invention may be widely used in communication equipment such as cellular phones and the like, for example, as a filter, a multiplexer, a front-end circuit, and a communication device, which are applicable to multiband systems. While preferred embodiments of the present invention have been described above, it is to be understood that variations and modifications will be apparent to those skilled in the art without departing from the scope and spirit of the present invention. The scope of the present invention, therefore, is to be determined solely by the following claims. | 63,424 |
11863163 | DETAILED DESCRIPTION OF THE INVENTION The digital hybrid (active+passive) load pull tuner (DHLPT) comprises two tuning sections, a passive section and an active section, and a common slabline (FIG.7); the particularity of the apparatus is that the proper choice of the components allows for both sections to share the same slabline (73). The slabline (73) is used to extract signal power “from” the DUT, travelling on the center conductor (37) using coupler (36,71), amplitude- and phase-modulate (modify), amplify and re-inject the modified signal power “to” the DUT, using coupler (701), towards the test port (70); it does so twofold: a) by retrieving a small portion of the DUT-produced signal power into the active section, suppressing the spurious portion (32) into the isolated port and re-injecting the amplified signal into the DUT via the signal couplers (71) and (701), and b) by reflecting back signal power through the passive tuner probe (74). The couplers and the passive tuner can be integrated in the slabline and, at least one coupler, preferably the coupler (71) closest to the test port, should be adjustable; the extension of the same slabline (73) is also used for simultaneous passive tuning (physical signal reflection back into the DUT) using metallic reflective probe(s) (74) as follows: The signal exiting from the DUT enters the tuner into the test port (70); then it is sampled by signal coupler (71), of which the coupling factor (31) C1=Real(C1)+j*Imag(C1)=|C1|*exp(jΦ1) between the input port (30) and the coupled port is amplitude- and phase-adjustable (33,34); amplitude is controlled by adjusting the penetration of the coupling loop (35) (FIG.3) into the slabline (73) via the vertical axis (76), which is controlled by the motor (72); the phase is controlled by moving the carriage, which holds the coupling loop (35,71) along the slabline using ACME (702); the sampled signal power is injected through the coupled port (77) and an optional low pass filter into port1of the circulator (78). The signal travels with negligible loss to port2of the circulator and is reflected back by the digital electronic tuner (79), at its test port DT.A of which said tuner the second (idle) port DT.B is terminated with matched load (Zo=50Ω). The reflected signal at port2of the circulator, having modified amplitude and phase, continues with negligible loss to port3and from there to the amplifier (75). The amplified signal is injected through signal coupler (701) back into the slabline (73) towards the test port (70) and the DUT. The remaining (non-sampled) portion of the signal from the test port (70) travels through the slabline to the passive tuner, is reflected back by the tuning probe (74) towards the test port and vector-overlaps with the active injection signal. Depending on amplitude and relative phase of those returning signal waves a controllable reflection factor (S-TUN) is generated at the test port (70), which, due to the amplification of the active part of the signal, can reach |S-TUN|=1 at any reference plane, even beyond the test port, despite intervening insertion loss of the test fixture or wafer probe providing access to the DUT. An impedance tuner (load pull tuner), whether passive or active, should be calibrated before being used. Otherwise impedance synthesis (tuning) degenerates into lengthy in-situ real-time trial and error search operations without a specific direction. 
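As a rough numerical picture of the vector overlap described above, the sketch below adds the wave reflected by the passive probe and the wave re-injected by the active loop (adjustable coupler, digital tuner, amplifier, re-injection coupler) as complex numbers. All coupling factors, gains and phases are placeholder values chosen by us to make the arithmetic visible; they are not calibration data or design values from this disclosure.

```python
# Rough sketch (placeholder numbers): the reflection factor presented to the
# DUT is treated as the vector sum of the wave reflected by the passive probe
# and the wave re-injected by the active loop (coupler C1 -> digital tuner ->
# amplifier G -> coupler C2), both referred to the test port.
import cmath

def total_reflection(gamma_probe, c1, gamma_dt, gain, c2):
    active = c1 * gamma_dt * gain * c2      # sampled, reflected, amplified, re-injected wave
    return gamma_probe + active             # vector overlap at the test port

if __name__ == "__main__":
    gamma_probe = 0.6 * cmath.exp(1j * cmath.pi * 0.8)        # passive pre-tuning (placeholder)
    c1 = 0.1 * cmath.exp(1j * 0.3)                             # adjustable sampling coupler (placeholder)
    c2 = 0.1                                                   # fixed re-injection coupler (placeholder)
    gain = 45.0                                                # loop amplifier gain (placeholder)
    gamma_dt = 0.9 * cmath.exp(1j * (0.8 * cmath.pi - 0.3))    # digital tuner state (placeholder)
    s_tun = total_reflection(gamma_probe, c1, gamma_dt, gain, c2)
    print(abs(s_tun))   # ~1.0: the loop gain makes up for what the passive probe cannot reach
```

When the loop product is phased to line up with the passive reflection, the combined magnitude approaches (or exceeds) 1, which is the purpose of the active injection.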
Calibration allows accelerating the testing because it uses saved data and computer speed in identifying the searched impedance, before either the lengthy (second- or minute-long) mechanical or even the faster electronic tuning occurs. Memory operations (lasting micro- or nano-seconds) are still orders of magnitude faster. Calibration means measuring the four tuner scattering parameters between arbitrary ports1and2(s-parameters, S11, S12, S21 and S22, all items are complex numbers Sik=Sik.real+j*Sik.imag) for a number of tuner settings and saving them in calibration data files (FIG.9); calibration data are recalled at measurement time and associated with the measured DUT data, allowing the search for an optimum and/or the creation of "data versus impedance maps", in laboratory jargon called "load-pull contours" (see ref. 1). As long as the tuner is a passive mechanical slide screw tuner, it can be assumed that its RF behavior will not change with the power injected (within reasonable limits) into it by the DUT; in this case the tuner is linear and calibration is straightforward s-parameter measurement and saving (FIG.10, ref. 7). For this, s-parameters between ports A and B (FIG.8) are measured for a multitude of passive tuning probe (74) horizontal and vertical positions XT and YT, with the active tuner turned OFF (coupling loop71,35withdrawn and, if necessary, digital tuner79, set to THRU, or all diodes turned OFF to open circuit,FIG.14), and saved in a passive tuner calibration data file. In the case where the passive tuner comprises more than one tuning probe (used for high reflection or harmonic tuning), s-parameters are measured individually for each probe, all other probes being withdrawn from the slabline, and the s-parameters of all except the first probe, closest to the test port, are de-embedded (cascaded with the inverse matrix of the initialized (all probes withdrawn) tuner s-parameter matrix [SAB0]^-1). In this case the overall passive tuner calibration matrix is created in computer memory as the cascade of all permutations of s-parameters of all tuning probes for all (harmonic or not) test frequencies and saved. Active systems include amplifiers, and amplifiers are often used to their limit and become notoriously nonlinear, because they are used at the maximum of their capability, mainly for cost or feasibility reasons; i.e. if there is need for 10 W power, rarely will somebody either buy a 100 W amplifier at a multiple of the cost, just to be on the safe side regarding linearity, or such high power amplifiers do not even exist for a specific frequency. An additional handicap is the fact that the amplifier may, without warning, become nonlinear during the same operation, as the power generated by the DUT changes. Since, normally, s-parameters are defined and measured in the linear range (also called "small signal"), it follows that such data will become invalid at the moment the DUT produces enough power to cause the amplifier to shift into the nonlinear range, in which case its gain G and transmission phase Φ will change; since in the active system the gain and phase of the amplifier are part of and determine the validity of the calibration, we are facing an impasse. The same is valid, in the present case, also with the digital electronic tuner, which uses PIN diodes as control devices (see ref.
4); these diodes may also become nonlinear at a certain power, even though the relatively low coupling factor of the adjustable coupler (71) will reduce the DUT power by (typically) 20-30 dB (FIG.4). It is assumed, therefore, that, for these reasons, calibrations of active systems cannot be used entirely independently to operate under arbitrary input power conditions. In this invention we, therefore, use (i) a small signal pre-calibration of the active tuning section and proceed (ii) to the final tuning using in-situ real-time measurement, in two steps. The active tuning section calibration comprises two steps (FIG.11): in a first step one has to characterize separately the digital tuner. This happens by connecting the input (DT.A) and output (DT.B) ports of the digital tuner to a pre-calibrated VNA (FIGS.7and14) and measuring s-parameters. We use a de-embedding technique by which one element (diode) at a time is switched ON (=short), all other diodes remaining OFF (=open). This means that for a set of N (typically N=12) diodes we measure N sets of s-parameters. Then we cascade each set, except the set associated with the first element D1 (FIG.14), with the inverse matrix of the digital tuner [SDT0]^-1, measured with all elements turned OFF. Subsequently all 2^N permutations of all [SDTi] (i=1 to i=2^N) matrices are generated numerically and only the test port reflection factor SDTi.11 is saved; in case N=12 this results in a data-base of 2^12=4096 complex numbers. As a next step the active loop is calibrated as a function of the coupling factor and phase of the adjustable coupler. For this the tuning probe(s) of the passive tuner are withdrawn and an OPEN is connected to port2of the circulator to create a maximum transmission between ports1and3. Then s-parameters are measured between ports A and B for a multitude of NX (typical NX=10) horizontal (XC) and NY (typical NY=3) vertical (YC) positions of the coupling loop, yielding the matrices [SAB(XC, YC)], typically NX*NY=30 items. Since the OPEN at port2corresponds to SDT.11=1 (SDT.11=Test port Reflection factor of Digital Tuner), we multiply [SAB(XC, YC)] with SDTi.11 to get the overall active tuner transmission gain SAT.21 and reflection factor SAT.11; the very small product of the directivity couplings C2D*C1D (FIG.15) and the gain of the amplifier implies that SAT.22≈0 (only a very small signal injected into port B reaches the amplifier to be re-injected in reverse into the slabline) and SAT.12≈1-C2R-C1R, wherein C1R and C2R are the inverse couplings of the two couplers, from the output to the isolated ports (FIG.15); those couplings are of the order of −15 dB to −30 dB (0.03 to 0.001) and will extract and withdraw a very small portion of the signal power injected into port B. As a result we have two sets of s-parameters of two cascaded tuners, the active tuner from port A to the fixed coupler and the passive tuner from the fixed coupler to port B. Since the s-parameters of the passive tuner have been already de-embedded, we can now cascade the s-parameters of the two tuners to obtain the global, small signal, hybrid tuner calibration. Assuming the passive tuner is a single probe tuner and is calibrated at 400 settings, the digital tuner has 12 diodes and the coupler is calibrated at 30 settings (3 vertical-amplitude and 10 horizontal-phase), the number of measurements will be 400+30+12+2=444 (2 are the zero matrix measurements); the generated data will contain 400×30×4096=49,152,000 data points.
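Both de-embedding steps mentioned above (cascading with [SAB0]^-1 for the passive probes and with [SDT0]^-1 for the diode states) amount, numerically, to multiplying transfer (T-) matrices derived from the measured s-parameters. The sketch below shows one common 2-port convention for that bookkeeping; the conversion formulas follow a standard textbook definition and the example matrices are generic placeholders, not measured tuner data.

```python
# Generic 2-port cascading/de-embedding sketch (illustrative only).
# S-matrices are 2x2 numpy arrays [[S11, S12], [S21, S22]].
import numpy as np

def s_to_t(s):
    s11, s12, s21, s22 = s[0, 0], s[0, 1], s[1, 0], s[1, 1]
    return np.array([[(s12 * s21 - s11 * s22) / s21, s11 / s21],
                     [-s22 / s21, 1.0 / s21]])

def t_to_s(t):
    t11, t12, t21, t22 = t[0, 0], t[0, 1], t[1, 0], t[1, 1]
    return np.array([[t12 / t22, (t11 * t22 - t12 * t21) / t22],
                     [1.0 / t22, -t21 / t22]])

def cascade(sa, sb):
    """S-parameters of network A followed by network B."""
    return t_to_s(s_to_t(sa) @ s_to_t(sb))

def de_embed(measured, fixture):
    """Remove a known input-side fixture (e.g. the initialized tuner [SAB0] or [SDT0])."""
    return t_to_s(np.linalg.inv(s_to_t(fixture)) @ s_to_t(measured))

if __name__ == "__main__":
    thru = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
    fixture = np.array([[0.05, 0.9 * np.exp(-1j * 0.7)],
                        [0.9 * np.exp(-1j * 0.7), 0.05]])   # placeholder "initialized tuner"
    measured = cascade(fixture, thru)                        # pretend measurement through the fixture
    print(np.allclose(de_embed(measured, fixture), thru))    # True: the fixture is removed again
```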
If the passive tuner includes three independent probes for 2nd and 3rd harmonic tuning, then the total number of measurements will be 1200+30+12+2=1244 and the generated data will contain 400^3×30×4096≈7.864*10^12 data points (FIG.11). However, handling this amount of data requires a super-computer. Therefore, all operations are split in steps (FIG.12). For instance, passive pre-tuning requires either the 400 points at the fundamental frequency, or an efficient search strategy in the 400^3=64,000,000 data points for 3-harmonic tuning (see ref. 7). Once the passive pre-tuning (fundamental or harmonic) is settled, the in-situ real-time search amongst the only 30×4096=122,880 points of the combined tuning states of the "coupler-digital active tuner" assembly around vector (132) inFIG.13will rapidly yield an optimum. Notice: throughout this disclosure we use complex scattering (s-) parameters in single form S=Real(S)+j*Imag(S) and in matrix form [S]=(S11, S12, S21, S22), each element being also a complex number. Thus, S includes two real numbers, [S] includes eight real numbers. Data blocks comprising a multitude M of s-parameter matrices [S(M)] are designated with brace parenthesis {[S(M)]}. Impedance synthesis (Tuning) in the digital hybrid load pull tuner system (DHLPS) occurs as follows: In a first step the small signal calibration of the passive tuning section is used to pre-tune (132) close to the overall target impedance (136) area (FIG.13); for this, first the passive tuner is moved (vector132pointing to impedance (reflection factor) (130)); the position and size of the impedance cloud (133) around the point (130) is created by the digital tuning points (131) depending on the amplitude and phase of the vector (132) and the factor "coupling coefficients C1F times C2F times the gain of the amplifier G" (C1F*C2F*G),FIG.6. The smaller this factor is, the smaller the cloud surface (described by the radius134) controllable by the digital tuning (131). The density of the digital tuning points is non-homogeneous over the Smith chart (seeFIG.16). It is expected that proper choice of the amplitude and phase of the C1F*C2F*G factor (which is in-situ adjustable via C1F=|C1F|*exp(jΦ1F)) will allow the initial small signal calibration to create and rotate (135) a slightly different large-signal tuning area (shifted due to increased DUT output power) around the optimum target point. Actual tuned impedances are measured in real time by acquiring the signal power waves <a2> and <b2> through bi-directional couplers (20) calibrated at the DUT reference plane (see ref. 6 and ref. 3), shown inFIG.2, and measured using the VNA (21). The power waves <a2> and <b2> at the DUT port not only allow calculating the actual linear or non-linear load reflection factor S11.DUT=<a2>/<b2>, presented by the hybrid load pull tuner, but also determining the other RF characteristics of the DUT such as the delivered power, gain, efficiency, linearity and spectral behavior. In short, the two small signal calibrations are applied, in a first approximation, overlapped to create an impedance: S11.DUT=S11.PASSIVE+S11.ACTIVE=(<a2.p>+<a2.a>)/<b2> (FIGS.5and6), both referred to the test port. The passive reflected signal <a2.p> is weakened on its way to the test port through signal extraction by the directivity couplings C2D and C1D, but still this is a secondary effect of the order of 5% and is already part of the passive tuner calibration.
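The split of the tuning task described above (coarse passive pre-tuning from stored calibration data, then a fast search over only the 30×4096 combined coupler/digital states) can be sketched as two nearest-point searches. Everything below, including the randomly generated state tables and the distance metric, is placeholder scaffolding meant to show the flow of the search rather than the actual system; in the real apparatus the fine step is guided by in-situ VNA measurements rather than stored data.

```python
# Two-step impedance-synthesis sketch (placeholder data): first pick the passive
# probe setting whose calibrated reflection is closest to the target, then search
# the much smaller active table (coupler x digital tuner) for the fine correction.
import numpy as np

rng = np.random.default_rng(1)

# Step 0: stand-in calibration tables.
passive_cal = {k: 0.8 * np.exp(2j * np.pi * rng.random()) * rng.random()
               for k in range(400)}                      # 400 passive probe settings
active_cal = {k: 0.15 * (rng.random() + 1j * rng.random() - 0.5 - 0.5j)
              for k in range(30 * 4096)}                 # 122,880 coupler/digital states

target = 0.55 * np.exp(1j * 2.1)                         # wanted load reflection factor

# Step 1: passive pre-tuning (coarse).
probe = min(passive_cal, key=lambda k: abs(passive_cal[k] - target))

# Step 2: fast search over the active states around the pre-tuned point.
residual = target - passive_cal[probe]
loop = min(active_cal, key=lambda k: abs(active_cal[k] - residual))

synthesized = passive_cal[probe] + active_cal[loop]
print(abs(synthesized - target))                         # small residual tuning error
```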
In conclusion the small signal calibration can place the total load pull tuning vector close to the target, from where the in-situ measured large signal digital tuning will take over. Considering that neither the passive reflection wave <a2.p>, nor the directivity couplings C1D, C2D change with nonlinear amplifier behavior, this is a valid approach. In fact the small signal calibration of the passive tuner includes the leaks of the forward and returning signal into the signal couplers: the total returned signal <a2.p> can be calculated as follows: <a2.p>=<b2>*(1−C1F)*(1−C2F)*ΓT*(1−C2R)*(1−C1R), whereby ΓT is the reflection factor of the passive tuner (FIG.5) and, assuming typical values C1R≈C2R≈0.03, C1F≈C2F≈0.1, leads to <a2.p>/<b2>≈0.76*ΓT. A complication arises when active injection is superimposed: the amplified signal that leaks through C2R towards the passive tuner and is reflected back will affect the passive tuning vector. This secondary effect shall be taken care of in the in-situ search though. As shown inFIG.13, the small signal (linear) calibration will allow placing the total reflection factor vector (load impedance) tuning space (cloud) in the general area of the target impedance (the conjugate complex of the DUT internal impedance). From now on the calibration data are not used any more, because they may not be valid, mostly due to non-linearity of the amplifier. Instead the actual load reflection factor S11.DUT=<a2>/<b2> is measured by the VNA (21) using the bidirectional couplers (20) (vector load pull,FIG.2and ref 6). This allows for a high-speed random search to be executed using only the digital tuner in the narrow area defined by the passive pre-tuning and coupler settings (FIG.13). This application discloses the concept of a digital high-speed hybrid load pull tuner system (DHLPTS) and the concept of calibrating the active and pre-matching passive tuners and in-situ large signal tuning. Obvious alternatives shall not impede on the originality of the concept | 15,731
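The quoted factor of about 0.76 follows directly from the stated typical couplings, as the short check below shows.

```python
# Check of the quoted factor: <a2.p>/<b2> = (1-C1F)(1-C2F)(1-C2R)(1-C1R) * Gamma_T
C1F = C2F = 0.1
C1R = C2R = 0.03
print((1 - C1F) * (1 - C2F) * (1 - C2R) * (1 - C1R))   # ~0.762, i.e. <a2.p> ≈ 0.76 * Gamma_T * <b2>
```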
11863164 | DESCRIPTION OF EMBODIMENTS Hereinafter, the disclosed technology will be specifically described based on the drawings illustrating an embodiment of the disclosed technology. FIG.1is a block diagram describing an overall configuration of a quantum information processing system according to the embodiment. The quantum information processing system according to the embodiment includes a classical computer unit10that functions as a classical computer that processes binary bit data and a quantum computer unit20that includes a quantum system composed of a plurality of quantum bits mutually interacting with one another and outputs a signal acquired based on superposition of states of respective quantum bits. The classical computer unit10includes a control unit11, a storage unit12, an input/output unit13, an observation unit14, and a learning unit15. The control unit11includes, for example, a central processing unit (CPU), a read only memory (ROM), a random access memory (RAM), and the like. The CPU in the control unit11controls operation of the classical computer unit10and the quantum computer unit20by executing various types of programs that are stored in the ROM or the storage unit12in advance. The RAM in the control unit11temporarily stores data and the like generated during execution of the various types of programs. Although, in the embodiment, a configuration for controlling operation of the quantum computer unit20, using the control unit11, which includes the CPU, the ROM, the RAM, and the like, will be described, the configuration of the control unit11is not limited to this configuration and the control unit11is only required to be any processing circuit that includes a multicore CPU, a graphic processing unit (GPU), a microcomputer, a field programmable gate array (FPGA), an analog circuit, and the like. The control unit11may also include a clock that outputs date and time information, a timer that measures elapsed time from a point of time at which a measurement start instruction is given to a point of time at which a measurement end instruction is given, a counter that counts a number, and the like. Although, in the embodiment, the classical computer unit10is assumed to have a configuration including the control unit11, the storage unit12, the input/output unit13, the observation unit14, and the learning unit15, it is obvious that functions of the learning unit15may, for example, be achieved by another computer, a virtual computer, or the like that is independent of the classical computer unit10. The storage unit12includes a storage device, such as a hard disk drive (HDD), and stores various types of programs and data. Programs that the storage unit12stores include an operating system (OS)12A that is a program for controlling overall operation of the classical computer unit10and a program (quantum program)12B for controlling operation of the quantum computer unit20. Data that the storage unit12stores include input data to be input to the quantum computer unit20, control data that are necessary to control the operation of the quantum computer unit20, and the like. The quantum program12B may be a program provided by a non-transitory recording medium M that records the program in a readable manner. The recording medium M is a portable memory, such as a CD-ROM, a USB memory, a secure digital (SD) card, a micro SD card, and a COMPACTFLASH (registered trademark). 
In this case, the classical computer unit10included in the quantum information processing system may read the quantum program12B from the recording medium M, using a not-illustrated reading device and install the read quantum program12B in the storage unit12. In a case in which the classical computer unit10includes a communication unit and is capable of communicating with an external communication device through the communication unit, the quantum program12B may be provided by communication through the communication unit. In this case, the classical computer unit10may acquire the quantum program12B from the external communication device through the communication unit and install the acquired quantum program12B in the storage unit12. The input/output unit13includes an input/output interface to which an input device, such as a keyboard and a mouse, and an output device, such as a display device, are connected. The input/output unit13transmits data input through the input device to the control unit11, and the control unit11performs various types of processing, based on the data from the input/output unit13. In a case in which the control unit11acquires data of which a user needs to be informed, such as output data output from the quantum computer unit20, the control unit11outputs the data to the output device through the input/output unit13. The observation unit14is connected to an output layer23of the quantum computer unit20and acquires output data read from a quantum circuit22, through the output layer23. The observation unit14transmits the acquired output data to the control unit11. The learning unit15includes a memory (not illustrated) for storing teacher data that indicate ideal output data with respect to input data to be input to the quantum computer unit20. The learning unit15is connected to the output layer23and learns a circuit parameter θ that defines a circuit configuration of the quantum circuit22, based on output data read through the output layer23and the teacher data stored in advance. The circuit parameter θ determined by the learning unit15is fed back to the quantum circuit22through the control unit11. The circuit parameter θ does not have to be a single parameter and may be a parameter group composed of a plurality of parameters. Although, in the embodiment, the classical computer unit10is assumed to have a configuration including the learning unit15as an independent constituent element, the control unit11may be provided with functions of the learning unit15. In this case, processing steps at the time of learning the circuit parameter θ is provided by the quantum program12B, and the teacher data is, for example, stored in the storage unit12. The control unit11achieves the functions of the learning unit15by executing the quantum program12B in a learning phase and thereby learning the circuit parameter θ, which defines the quantum circuit22, based on the output data from the quantum computer unit20acquired through the observation unit14and the teacher data read from the storage unit12and performing feedback of the circuit parameter θ to the quantum circuit22. The quantum computer unit20includes an input layer21, the quantum circuit22, and the output layer23. The input layer21outputs a signal based on input data input from the classical computer unit10to quantum bits that the quantum system in the quantum circuit22includes. 
A signal to be output to the quantum bits may be a binary signal taking a value of 0 or 1 (or −1 or +1) or a continuously variable signal taking a value in a range of from 0 to 1 (or from −1 to +1). The quantum circuit22includes a quantum system that is configured with a plurality of quantum bits that mutually interact with one another and the circuit configuration of which is defined by the circuit parameter θ. As the quantum system, any controllable quantum system including a physical system that behaves based on quantum mechanics and is represented by, for example, a liquid or solid nuclear magnetic resonance (NMR) quantum spin ensemble, a superconducting quantum circuit, trapped ions, quantum dots, or neutral atoms on an optical lattice can be used. The circuit parameter θ, which defines the circuit configuration of the quantum circuit22, is, for example, an operator that rotates one of the quantum bits about any rotation axis by a preset angle. The output layer23includes an observation means, such as a single electron transistor, a superconducting amplifier and detector, a coil detecting electromagnetic waves, and a photon detector, corresponding to each quantum bit. The output layer23reads output data using the circuit parameter θ, which is determined by the learning unit15, by observing a state of each quantum bit and applying an appropriately selected output function F (which will be described later) to the observed states of the quantum bits. On this occasion, the output layer23may acquire an average of a physical quantity by performing sampling through repeating observation multiple times, if necessary. Although the above-described quantum information processing system is assumed to have a configuration in which the classical computer unit10and the quantum computer unit20are separated from each other for convenience, the quantum information processing system does not necessarily have to include the classical computer unit10and the quantum computer unit20separately and may be constructed as a system (device) that integrally includes the classical computer unit10and the quantum computer unit20. FIG.2is an explanatory diagram describing dynamics of the quantum system that the quantum circuit22includes. It is assumed that the quantum system, which the quantum circuit22includes, includes as constituent elements N (N is an integer of 2 or more) quantum bits. In order to utilize quantum dynamics for information processing, input signals xi(I=1, 2, 3, . . . ) to the quantum system, which the quantum circuit22includes, are introduced. Each input signal xirepresents a static digital signal, such as an image signal. In the quantum circuit22, in a case in which an input signal xiis given to initialized quantum bits (qubits), an input state |ψin(xi)> of the qubits is replaced with |ψin(xi)>=V(xi)|0>. In this expression, V(xi) denotes a unitary input gate. In the embodiment, the control unit11applies a unitary operator U(θ) that is parameterized by the circuit parameter θ of the quantum circuit22to the input state. This operation causes |ψout(xi, θ)>=U(θ)|ψin(xi)> to be acquired as an output state. In this expression, U(θ) may be a quantum circuit composed of basic quantum operators with parameters, such as rotation angles, and may include time evolution generated by a Hamiltonian. The quantum circuit22may include not only the unitary operator U(θ) but also another quantum circuit that does not depend on the circuit parameter θ or any processing circuit. 
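To make the chain |ψin(xi)> = V(xi)|0>, |ψout(xi, θ)> = U(θ)|ψin(xi)> and the subsequent observation concrete, the sketch below simulates a very small version of that pipeline with plain NumPy. The specific gate choices (Y-rotation encoding, one parameterized Y-rotation layer plus fixed controlled-Z entanglers) and the identity output function are our own toy assumptions, not the circuits of the embodiment.

```python
# Toy NumPy statevector sketch (not the embodiment's circuit): encode x with
# RY rotations (input gate V(x)), apply a parameterized RY layer plus fixed CZ
# entanglers (U(theta)), then read out <Z_j> for each qubit.
import numpy as np
from functools import reduce

N = 3                                     # number of qubits
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def on_qubit(op, j):
    """Embed a single-qubit operator on qubit j of an N-qubit register."""
    return reduce(np.kron, [op if k == j else I2 for k in range(N)])

def cz(j, k):
    """Controlled-Z between qubits j and k (diagonal in the computational basis)."""
    diag = np.ones(2 ** N)
    for idx in range(2 ** N):
        bits = format(idx, f"0{N}b")
        if bits[j] == "1" and bits[k] == "1":
            diag[idx] = -1.0
    return np.diag(diag)

def output_signal(x, theta):
    psi = np.zeros(2 ** N); psi[0] = 1.0                      # |0...0>
    V = reduce(np.matmul, [on_qubit(ry(xi), j) for j, xi in enumerate(x)])
    U = reduce(np.matmul, [on_qubit(ry(t), j) for j, t in enumerate(theta)])
    U = cz(0, 1) @ cz(1, 2) @ U                               # fixed entangling part
    psi = U @ (V @ psi)                                       # |psi_out(x, theta)>
    return np.array([psi @ on_qubit(Z, j) @ psi for j in range(N)])   # <Z_j> per qubit

print(output_signal(x=[0.4, -0.2, 0.9], theta=[0.1, 0.5, -0.3]))
```

Here the output function F is simply the vector of the three expectation values; in practice F and the observables would be chosen to suit the task.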
A signal acquired from each qubit is defined by a result of projective measurement of a physical quantity that is locally observable in the qubit or an average of the physical quantity acquired by repeatedly sampling the physical quantity. A Pauli operator {Bj} that acts on a j-th qubit is used as an observable physical quantity herein. For example, in a case in which spin is used as a qubit, the observable physical quantity corresponds to magnetic susceptibility of the nuclear spin. An output signal that is acquired by applying the Pauli operators {Bj} is, for example, expressed by y(xi, θ)=F({Bj(xi, θ)}). In this expression, F denotes an appropriate output function. In the learning phase in which the circuit parameter θ in the quantum circuit22is adjusted, optimization of the circuit parameter θ is performed using a cost function. The cost function is expressed by, for example, the following Formula, using, for example, an output signal yifrom the quantum computer unit20and a teacher signal f(xi) that indicates an ideal output signal with respect to an input signal xi.

L = Σi |f(xi) − yi|^2   [Math. 1]

In the above Formula, yiis an output signal that is read from the quantum computer unit20in a case in which the input signal xiand a circuit parameter θ={θi} are given. The operator |·| denotes the Euclidean norm. In a case in which the learning unit15acquires the output signal yi, which is an output from the quantum computer unit20, in the learning phase in which the circuit parameter θ of the quantum circuit22is optimized, the learning unit15adjusts the circuit parameter θ in such a way as to minimize the above-described cost function, which is set using the output signal yiand the teacher signal f(xi). In the embodiment, the cost function is not limited to Formula of Math. 1. For example, a cost function expressed by the following Formula may be used.

L = −Σi f(xi) log yi   [Math. 2]

Operation of the quantum information processing system at the time of adjusting the circuit parameter θ in the quantum circuit22will be described below. FIG.3is a flowchart describing processing steps that are performed in the quantum information processing system according to the embodiment. The operation described by the flowchart inFIG.3is achieved by the control unit11reading and executing various types of programs, such as the OS12A and the quantum program12B, stored in the storage unit12and controlling operation of the respective units that the classical computer unit10and the quantum computer unit20include. The control unit11determines whether or not the operation is currently in a learning phase in which the circuit parameter θ of the quantum circuit22is learned (step S101). In the case of determining that the operation is not in the learning phase (S101: NO), the control unit11terminates the processing performed in accordance with the flowchart without performing the following processing. In a case in which the control unit11, for example, accepts a change instruction to transition from an operation phase to the learning phase through the input/output unit13, the control unit11determines that the operation is currently in the learning phase (S101: YES) and gives an input signal based on input data {xi} to be processed to the quantum circuit22through the input layer21of the quantum computer unit20(step S102). That is, applying a unitary input gate V(xi) to initialized qubits |0> causes |ψin(xi)>=V(xi)|0> to be acquired as an input state.
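The two cost functions above can be transcribed directly. In the sketch below the teacher and output values are placeholders, and for Math. 2 the product f(xi)·log yi is taken elementwise and summed, which is one possible reading when the outputs are vectors.

```python
# Direct transcription of the two cost functions (Math. 1 and Math. 2).
import numpy as np

def squared_error_cost(teacher, outputs):
    """L = sum_i |f(x_i) - y_i|^2  (Math. 1); |.| is the Euclidean norm."""
    return sum(np.linalg.norm(np.asarray(f) - np.asarray(y)) ** 2
               for f, y in zip(teacher, outputs))

def log_cost(teacher, outputs):
    """L = -sum_i f(x_i) * log(y_i)  (Math. 2); outputs must be positive."""
    return -sum(np.sum(np.asarray(f) * np.log(np.asarray(y)))
                for f, y in zip(teacher, outputs))

teacher = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # placeholder ideal outputs f(x_i)
outputs = [np.array([0.8, 0.2]), np.array([0.3, 0.7])]   # placeholder circuit outputs y_i
print(squared_error_cost(teacher, outputs), log_cost(teacher, outputs))
```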
It may also be assumed that, using as an input g(xi) that is obtained by applying an appropriate nonlinear function g to the input signal xi, |ψin(xi)>=V(g(xi))|0> is acquired. In the quantum circuit22, nonlinearity is induced by a tensor product structure of the quantum system and the unitary input gate V(xi). The nonlinearity may also be induced by performing quantum analog-digital conversion on a quantum state that is acquired by applying the unitary operator U(θ) to an input state, storing analog information of a wave function in a quantum random access memory as digital quantum information, performing nonlinear conversion with the quantum state treated as digital quantum information, and performing the quantum analog-digital conversion again. That is, the quantum circuit22may include not only the unitary operator U(θ) but also a first quantum analog-digital conversion A that converts analog information to digital information, another quantum circuit R relating to nonlinear operation, and a second quantum analog-digital conversion circuit D that converts digital information to analog information. The control unit11applies the unitary operator U(θ), which is parameterized by the circuit parameter θ of the quantum circuit22, to the input state. This operation causes |ψout(xi,θ)>=U(θ)|ψin(xi)> to be acquired as an output state. Next, the observation unit14observes a state of a selected qubit of the quantum circuit22(step S103) and acquires an output signal through the output layer23(step S104). The output layer23may acquire as an output signal an expectation of a physical quantity by repeatedly performing sampling on an observation result multiple times, if necessary. An output signal yi=y(xi, θ) can be expressed as y(xi,θ)=F({Bj(xi,θ)}), using a Pauli operator {Bj}. In this expression, F denotes an appropriate output function. The learning unit15calculates a cost function L(f(xi), y(xi, θ)) that is set based on the output signal yi, which is input through the observation unit14, and a teacher signal f(xi) that indicates an ideal output with respect to the input signal xi(step S105). Next, the learning unit15determines whether or not the calculated value of the cost function L is equal to or less than a threshold value ε (step S106). The threshold value ε is a threshold value for determining whether or not the cost function has converged, and an appropriate minuscule value is set to the threshold value ε. In a case in which it is determined that the calculated value of the cost function L is not equal to or less than the threshold value ε (S106: NO), the learning unit15changes the circuit parameter θ (step S107). For example, in a case in which the circuit parameter θ includes an operator that rotates a qubit constituting the quantum circuit22about any rotation axis, the learning unit15may change the circuit parameter θ by changing the rotation angle of the operator. In a case in which the quantum circuit22is a quantum annealing machine or a quantum circuit produced by digitizing a quantum annealing machine, the learning unit15may change the circuit parameter θ by changing coupling constants between qubits. The learning unit15may set the circuit parameter θ by keeping values of the cost function L that were calculated in step S105and using a gradient method in such a way that a cost function value Lncalculated at this time becomes less than a cost function value Ln-1calculated at the previous time. 
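The loop of steps S102 through S107 can be summarized in a short sketch. The quantum evaluation is abstracted behind a run_circuit(x, theta) callable standing in for the input layer, quantum circuit, and output layer; the threshold, step size, and finite-difference parameter update are illustrative assumptions rather than the specific gradient procedure of the embodiment (the parameter-shift form is given next).

```python
import numpy as np

def learn(run_circuit, xs, targets, theta, eps_threshold=1e-3,
          lr=0.1, delta=1e-3, max_iters=1000):
    """Adjust theta until the squared-error cost falls below eps_threshold (steps S102-S107)."""
    targets = np.asarray(targets, dtype=float)

    def cost(th):
        # S102-S104: feed each input, observe the qubits, read the output signal.
        ys = np.array([run_circuit(x, th) for x in xs])
        # S105: cost from the outputs and the teacher signal.
        return float(np.sum((targets - ys) ** 2))

    for _ in range(max_iters):
        current = cost(theta)
        if current <= eps_threshold:          # S106: converged?
            break                             # S108: keep this theta for the operation phase
        grad = np.zeros_like(theta)           # S107: change theta (finite differences here)
        for k in range(len(theta)):
            shifted = theta.copy()
            shifted[k] += delta
            grad[k] = (cost(shifted) - current) / delta
        theta = theta - lr * grad
    return theta

# Toy stand-in for the quantum evaluation: a single "rotation angle" parameter.
toy = lambda x, th: float(np.sin(th[0] * x))
print(learn(toy, xs=[0.5, 1.0], targets=[0.25, 0.5], theta=np.array([0.8])))
```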
A gradient value of the cost function with respect to a parameter can be acquired by appropriately changing the circuit configuration of the quantum circuit22partially and performing direct observation using the same method as regular reading. The learning unit15may adjust the circuit parameter θ, using a gradient that is calculated based on a difference between an output of the quantum circuit22in a case in which the circuit parameter θ is increased by a set value ε (for example, π/2 at which the difference is maximized) and an output of the quantum circuit22in a case in which the circuit parameter θ is decreased by ε. A value calculated from the output of the quantum circuit22can, for example, be expressed as the following Formula. ⟨A(x, {θk})⟩ = ⟨0|⊗n V†(x) U({θk})† A U({θk}) V(x) |0⟩⊗n   [Math. 3] In this Formula, V(x) is a quantum operation for input, and U({θk}) is a parameterized quantum circuit. On this occasion, in a case in which it is assumed that teacher data are denoted by {x(j), y(j)}, the cost function L({θk}) can be expressed as the following Formula. L({θk}) = ∑j (⟨A(x(j), {θk})⟩ − y(j))²   [Math. 4] The gradient of the cost function L({θk}) is expressed by the following Formula. ∂L({θk})/∂θl = ∂/∂θl ∑j (⟨A(x(j), {θk})⟩ − y(j))² = ∑j 2(⟨A(x(j), {θk})⟩ − y(j)) ∂⟨A(x(j), {θk})⟩/∂θl   [Math. 5] That is, in order that the cost function is an analytic function, it is sufficient that a partial differential of the output ⟨A⟩ can be calculated. For example, in a case in which U({θk}) is expressed by Formula of Math. 6 and if Pk satisfies Pk² = I, it is possible to calculate the partial differential of the cost function L({θk}) analytically and the partial differential of the output ⟨A⟩ can be expressed as Formula of Math. 7. In addition, ε in Math. 7 is a set value that is set in advance. U({θk}) = ∏k Wk e^(−i(θk/2)Pk)   [Math. 6] ∂⟨A(x(j), {θk})⟩/∂θl = (1/(2 sin ε)) (⟨A(x(j), {θ1, …, θl + ε, θl+1, …})⟩ − ⟨A(x(j), {θ1, …, θl − ε, θl+1, …})⟩)   [Math. 7] As expressed by Math. 7, the learning unit15is able to calculate a gradient, based on a difference between an output of the quantum circuit22in a case in which the l-th circuit parameter is shifted by +ε and an output of the quantum circuit22in a case in which the l-th circuit parameter is shifted by −ε. The learning unit15is only required to adjust the quantum parameter θ, based on the calculated gradient. The learning unit15outputs the changed value of the circuit parameter θ to the control unit11. The control unit11changes the circuit configuration of the quantum circuit22, based on the circuit parameter θ output from the learning unit15and returns the processing to step S102. In a case in which it is determined that the calculated value of the cost function L is equal to or less than the threshold value ε (S106: YES), the learning unit15determines the value of the circuit parameter θ at this time as a value of the circuit parameter θ to be used in the operation phase (step S108). The learning unit15outputs the determined circuit parameter θ to the control unit11and terminates the processing performed in accordance with the flowchart. Results of numerical simulations in which typical machine learning tasks were performed using the quantum information processing system according to the embodiment will be described below. FIGS.4A,4B,5A, and5Bare graphs illustrating results of fitting to nonlinear functions.FIGS.4A and4Billustrate results of fitting to P(x) = x² and sin x, respectively, andFIGS.5A and5Billustrate results of fitting to P(x) = e^x and |x|, respectively.
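The gradient expressions of Math. 5 and Math. 7 translate directly into code: each gradient component costs two additional circuit evaluations with the l-th parameter shifted by +ε and −ε. The expectation(x, theta) callable below is a placeholder for reading ⟨A(x, {θk})⟩ from the quantum circuit; the toy expectation at the end is an assumption used only to check the identity.

```python
import numpy as np

def parameter_shift_grad(expectation, x, theta, eps=np.pi / 2):
    """d<A>/d(theta_l) = (<A>(theta_l + eps) - <A>(theta_l - eps)) / (2 sin eps)   (Math. 7)."""
    grad = np.zeros_like(theta)
    for l in range(len(theta)):
        plus, minus = theta.copy(), theta.copy()
        plus[l] += eps
        minus[l] -= eps
        grad[l] = (expectation(x, plus) - expectation(x, minus)) / (2.0 * np.sin(eps))
    return grad

def cost_gradient(expectation, data, theta, eps=np.pi / 2):
    """dL/d(theta_l) = sum_j 2 (<A(x_j)> - y_j) d<A(x_j)>/d(theta_l)   (Math. 5)."""
    grad = np.zeros_like(theta)
    for x_j, y_j in data:
        residual = expectation(x_j, theta) - y_j
        grad += 2.0 * residual * parameter_shift_grad(expectation, x_j, theta, eps)
    return grad

# Toy expectation <A> = x cos(theta); the exact derivative -x sin(theta) is recovered.
f = lambda x, th: x * np.cos(th[0])
print(parameter_shift_grad(f, 2.0, np.array([0.3])))   # ~[-0.591]
```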
The abscissa and ordinate of each graph represent an input value x to a function and an output value P(x) of the function, respectively. The alternate long and short dash line and solid line in each graph indicate an output (initial data) of the quantum computer unit20in a case in which the circuit parameter θ is set at random and an output (final data) of the quantum computer unit20in a case in which the circuit parameter θ is optimized, respectively. Filled circles in each graph indicate teacher data that were used at the time of optimizing the circuit parameter θ. It is revealed that all the functions can be reproduced with high precision by optimizing the circuit parameter θ in the quantum circuit22, using a framework of quantum circuit learning according to the embodiment. FIGS.6A and6Bare explanatory diagrams describing an application example to a classification problem.FIG.6Aillustrates an example of teacher data. Filled circles and unfilled circles illustrated inFIG.6Arepresent points belonging to a class 0 and points belonging to a class 1, respectively. The example inFIG.6Aindicates that in total 200 points representing the teacher data (each of the numbers of points belonging to the class 0 and points belonging to the class 1 is 100) are distributed in a range of x=−1 to +1 and y=−1 to +1. FIG.6Billustrates optimized output values from the qubits after Softmax conversion has been performed. InFIG.6B, an output value “0.5” indicates a threshold value in the classification, and a value equal to or greater than the threshold value (that is, a value of 0.5 or greater) represents a point to be classified into the class 1 and a value less than the threshold value (that is, a value of less than 0.5) represents a point to be classified into the class 0. The output result inFIG.6Bindicates that points in a region on the outside of an annular region indicated by gray points are points to be classified into the class 1 and points in a region on the inside of the annular region are points to be classified into the class 0, and it is revealed that the distribution of the teacher data illustrated inFIG.6Ais appropriately classified. As illustrated inFIG.6B, it is possible to solve a nonlinear classification problem by optimizing the circuit parameter θ in the quantum circuit22, using the framework of quantum circuit learning, in the embodiment. As described above, it is possible to adjust the circuit parameter of the quantum circuit22, which is used for generalization, prediction, and the like, to an appropriate value, in the embodiment. This capability enables a quantum algorithm to be provided that operates in a quantum computer expected to be achieved in the near future and capable of running a shallow circuit using 50 to 100 qubits and that is capable of executing practical machine learning tasks. This capability also enables a systematic adjustment method in a case in which the number of circuit parameters has increased to be provided and a setting method of a cost function and an embedding method of input into a quantum state in the case of performing supervised learning to be provided. The framework of quantum circuit learning according to the embodiment may be used for optimization of precision improvement in the quantum circuit22. 
For example, a situation is considered in which, although a quantum calculation W is expected to be executed with as high fidelity as possible, a quantum calculation W′ that is experimentally achieved is caused to be a quantum calculation that is substantially different from the ideal quantum calculation W and has a low fidelity because the quantum calculation W′ includes an unknown parameter. In this case, an input-output relationship {xi} and {yi} for the ideal quantum calculation W is prepared in advance. The circuit parameter θ is determined using the framework of quantum circuit learning according to the embodiment in such a way that the quantum circuit22(unitary operator U(θ)) that has a well-controlled circuit parameter θ is made to operate on the quantum calculation W′, which is experimentally achieved, and U(θ)W′|xi> that is acquired through the operation indicates the ideal output signal yi. Use of the determined circuit parameter θ enables U(θ)W′ that has a high fidelity (that is close to the ideal quantum calculation W) to be achieved and fidelity of quantum calculation to be improved. The embodiment disclosed herein should be considered as illustrative in all aspects rather than restrictive. The scope of the disclosed technology is defined by the terms of claims, rather than the description above, and is intended to include any modifications within the scope and meaning equivalent to the terms of claims. In the case of related technology, designing a quantum circuit properly and cutting off noise from the outside and thereby achieving a universal quantum computer have remained to be difficult tasks. There is also a problem in that no application method of quantum circuits to information processing tasks having high general versatility like ones dealt with in the machine learning field has been established. The disclosed technology has been made in consideration of the above-described problems, and an object of the disclosed technology is to provide a quantum circuit learning device, a quantum circuit learning method, a computer program, and a recording medium that are capable of adjusting a circuit parameter in a quantum circuit and applicable to information processing tasks. The disclosed technology enables a circuit parameter in a quantum circuit to be adjusted and application to information processing tasks to be achieved. The disclosures of Japanese Patent Application No. 2018-032118, Feb. 26, 2018 is incorporated herein by reference in their entirety. All publications, patent applications, and technical standards mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent application, or technical standard was specifically and individually indicated to be incorporated by reference. | 26,971 |
11863165 | DESCRIPTION OF EXAMPLE EMBODIMENTS OF THE DISCLOSURE Overview The trend in wireless communication receivers is to capture more and more bandwidth to support higher throughput, and to directly sample the radio frequency (RF) signal to enable re-configurability and lower cost. Other applications like instrumentation also demand the ability to digitize wide bandwidth RF signals. These applications benefit from input circuitry which can perform well with high speed, wide bandwidth RF signals. An input buffer and bootstrapped switch are designed to service such applications, and can be implemented in 28 nm complementary metal-oxide (CMOS) technology. High Speed Analog-to-Digital Converters ADCs are electronic devices that convert a continuous physical quantity carried by an analog signal to a digital output or number that represents the quantity's amplitude (or to a digital signal carrying that digital number). An ADC can be defined by the following application requirements: its bandwidth (the range of frequencies of analog signals it can properly convert to a digital signal) and its resolution (the number of discrete levels the maximum analog signal can be divided into and represented in the digital signal). An ADC also has various specifications for quantifying ADC dynamic performance, including signal-to-noise-and-distortion ratio SINAD, effective number of bits ENOB, signal to noise ratio SNR, total harmonic distortion THD, total harmonic distortion plus noise THD+N, and spurious free dynamic range SFDR. Analog-to-digital converters (ADCs) have many different designs, which can be chosen based on the application requirements and specifications. High speed applications are particularly important in communications and instrumentation. The input signal can have a frequency in the gigahertz range, and the ADC may need to sample in the range of Giga-samples per second. High frequency input signals can impose many requirements on the circuits receiving the input signal, i.e., the “front end” circuitry of the ADC. The circuit not only has to be fast, for some applications, the circuit needs to meet certain performance requirements, such as SNR and SFDR. Designing an ADC that meets both speed and performance requirements is not trivial. FIG.1shows a front end to an analog-to-digital converter, according to some embodiments of the disclosure. Typically, an input signal VIN(e.g., a high frequency input signal in the gigahertz range) is provided to an input buffer102. The output VINXof the input buffer is then provided to a sampler106where the input signal, in the form of VINXfrom the output of the input buffer, is sampled onto a sampling capacitor CS112 A transistor MN108(e.g., an N-type complementary metal-oxide field-effect (CMOS) transistor, or NMOS transistor) is provided to allow the input signal VINXto be provided to the sampling capacitor CS. Transistor MN108is sometimes referred herein as the sampling switch. During sampling, transistor MN108is turned on, and switch110is closed. The output VINXof the input buffer may pass through a transmission line (“T-LINE”)104going from the output of the input buffer102to the sampler106. In some cases where the ADC includes a plurality of ADCs in parallel (e.g., where the ADC is a time-interleaved ADC or a randomized time-interleaved ADC), there are multiple (matched) samplers, including sampler106, in parallel. Multiple (matched) transmission lines can be included to provide the output signal VINXfrom a common input buffer102to each sampler. 
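Before turning to the interleaved case, the front end of FIG. 1 can be viewed behaviorally as an ideal buffer followed by a switch-and-capacitor track-and-hold. The first-order RC tracking model and every numeric value below (on-resistance, capacitance, clock rate, input frequency) are assumptions for illustration, not values from the embodiment.

```python
import numpy as np

def track_and_hold(vinx, clk, dt, r_on=50.0, c_s=200e-15):
    """First-order model of the sampler: CS tracks VINX through the switch while clk is high."""
    v_cap, held = 0.0, []
    alpha = dt / (r_on * c_s)                 # Euler step for the RC tracking behavior
    for v, phase in zip(vinx, clk):
        if phase:                             # sampling switch MN108 on: track the buffer output
            v_cap += (v - v_cap) * alpha
        held.append(v_cap)                    # switch off: the capacitor holds its last value
    return np.array(held)

dt = 1e-12                                    # 1 ps step (assumed)
t = np.arange(0.0, 2e-9, dt)
vinx = 0.5 * np.sin(2 * np.pi * 1e9 * t)      # 1 GHz buffered input (assumed)
clk = ((t * 10e9) % 1.0) < 0.5                # 10 GS/s sampling clock, 50% duty (assumed)
sampled = track_and_hold(vinx, clk, dt)
```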
Time-interleaved ADCS or randomized time-interleaved ADCs can sample the input signal VINXone at a time. In some cases, a reference ADC and one of the time-interleaved ADCs sample the output signal VINXat substantially the same time. For time-interleaved ADCs or randomized time-interleaved ADCs, some of the samplers may be off at any given time while one or more samplers loads the input buffer. To reduce degradation of SFDR, the back gates of the transistors in the samplers coupled to receive the input signal VINX(e.g., transistor MN108) can be tied to a negative voltage, such as −1 volts, to minimize the non-linearity in those transistors. Bootstrapped Switching Circuit Referring back toFIG.1, the timing of the transistor MN108turning on quickly enough to allow VINXto be sampled onto the sampling capacitor CS112is critical, especially for high speed applications. Consider an example where an ADC has a sampling rate of 10 Giga-samples per second, the transistor MN108must turn on quickly enough to allow sampling of the input signal VINXonto sampling capacitor Cs112with only a hundred of picoseconds between samples. The timing for turning on transistor MN108can depend on the inherent transistor characteristics of transistor MN108, and also on the signal VBSTRPdriving MN108at the gate with respect to the signal VINXat the source. Examples herein are described where signals are referred to as going high or going low, which refers to different logic levels of the signals. FIG.2shows a bootstrapped switching circuit200, according to some embodiments of the disclosure. The bootstrapped switching circuit includes the transistor MN108fromFIG.1, which receives input signal VINXat its source, and its drain is connected to one plate of sampling capacitor (e.g., sampling capacitor Cs112ofFIG.1). The bootstrapped switching circuit also includes a bootstrapped gate voltage generator (circuit) for generating a gate voltage signal VBSTRPdriving the gate of transistor MN108(the sampling switch). The bootstrapped gate voltage generator generates the gate voltage signal VBSTRPin a manner that ensures the transistor MN108is turned on quickly. The bootstrapped gate voltage generator can receive VINX, and include a boot capacitor for generating a boosted voltage of VINX+VBOOT. The bootstrapped gate voltage generator has a positive feedback loop. The positive feedback loop takes VINXas input to the positive feedback loop, and the positive feedback loop includes the boot capacitor in the positive feedback loop path. An output of the positive feedback loop generates the gate voltage signal VBSTRPdriving the gate of transistor MN108(the sampling switch). The positive feedback loop serves to bring the gate voltage signal VBSTRPhigh quickly to ensure fast turn on of the transistor MN108. The positive feedback loop is bootstrapped to the input signal VINX, where the goal of the positive feedback loop is to drive gate voltage signal VBSTRPto be VINXplus the voltage VBOOT(VBOOTbeing the voltage across the boot capacitor CBOOT) to turn on transistor MN108. Specifically, the positive feedback loop drives the gate voltage signal VBSTRPto be high enough to cause sufficient voltage VGSacross the gate and the source for transistor MN108to turn on. The bootstrapped gate voltage generator is driven by a clock signal CLK, and CLKB being the inverted version of CLK. 
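The purpose of driving the gate to VINX plus VBOOT can be seen in a one-line comparison: with a bootstrapped gate the switch gate-source voltage stays at VBOOT no matter where the input sits, whereas a gate simply tied to the supply loses drive as the input rises. The supply and VBOOT values below are assumed example numbers.

```python
def switch_vgs(vinx_samples, v_boot=0.9, vdd=0.9):
    """Gate-source voltage of the sampling switch with a fixed gate drive versus a bootstrapped one."""
    rows = []
    for vinx in vinx_samples:
        vgs_fixed = vdd - vinx                 # gate simply tied to VDD: VGS collapses as VINX rises
        vgs_boot = (vinx + v_boot) - vinx      # VBSTRP = VINX + VBOOT: VGS stays at VBOOT
        rows.append((vinx, vgs_fixed, vgs_boot))
    return rows

for vinx, fixed, boot in switch_vgs([0.0, 0.25, 0.5, 0.75]):
    print(f"VINX={vinx:.2f} V  VGS fixed={fixed:.2f} V  VGS bootstrapped={boot:.2f} V")
```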
The bootstrapped gate voltage generator can also receive a charging phase clock signal CLKBBST, which controls the timing of a charging phase of the boot capacitor CBOOT. The transistor MN108is expected to turn on quickly when CLK goes high, and transistor MN108is expected to turn off when CLK goes low. During the charging phase (CLKB and CLKBBSTare both high), transistors MN224and transistor MN210(e.g., NMOS transistors) are turned on to charge a voltage VBOOTacross boot capacitor CBOOT(e.g., VBOOT=VDD−VSS). Turning transistor MN224on connects top plate of capacitor CBOOTto VDD. Turning transistor MN210on connects the bottom plate of capacitor CBOOTto VSS. If VSSis ground, then the boot capacitor CBOOTis charged to VDD. Just before the positive feedback loop is activated, node X was at VDDsince CLK was low in the previous phase (charging phase). CLK drives the gate of transistor MP214(e.g., a P-type complementary metal-oxide field-effect (CMOS) transistor, or PMOS transistor). CLK being low would make the transistor MP214on. When transistor MP214was on, the drain of transistor MP214(which is node X) was at VDD. When node X was at VDDand CLKB was high, transistor MP202(e.g., PMOS transistor) is off. Herein, transistor MP202can be referred to as the output transistor which outputs VBSTRPdriving the gate of transistor MN108(the sampling switch). VBSTRPwas at a low state, which keeps sample switch, i.e., transistor MN108off. CLK going from low to high (or CLKB goes from high to low) activates the positive feedback loop. When CLKB driving the gate of transistor MP204(e.g., PMOS transistor) goes low (i.e., CLK goes high), transistor MP204(e.g., PMOS transistor) is turned on, pulling the drain of MN208(e.g., NMOS transistor) close to VDD(goes high) and pulling the drain of MN206(e.g., NMOS transistor) high (e.g., VDD), which in turns makes the VBSTRPnode go high. VBSTRPdrives the gates of transistors MN216(e.g., NMOS transistor) and MN212(e.g., NMOS transistor). Transistor MN212can be referred to as the input transistor since transistor MN212receives the input signal VINX. VBSTRPgoing high can turn on transistor MN216(e.g., NMOS transistor) and transistor MN212(e.g., NMOS transistor). Meanwhile, transistor MP214has been turned off since CLK went high. Effectively, through the on transistors MN216and MN212, the gate of transistor MP202, i.e., node X, gets tied to VINX. In a previous phase (i.e., the charging phase), boot capacitor CBOOTis charged to have VBOOTacross the boot capacitor. When the positive feedback loop is engaged, the gate of transistor MP202can have VINX, the source of transistor MP202can have a voltage of VINX+VBOOT. Transistor MP202turns on, making VBTSTRPrise to VINX+VBOOT, which increases the voltage across the gate and the source VGS(i.e., VBSTRP−VINX=VBOOT) of the sampling switch, i.e., transistor MN108, to turn on. As VBTSTRPrises, the positive feedback of VBTSTRPrising loops through transistors MN216and MN212, which again in turn keeps VBSTRPrising further to turn on transistor MN108. As a result, the positive feedback loop enables a fast turn on of transistor MN108. In some cases, at the startup of the positive feedback loop when the gate of transistor MP202, i.e., node X, is getting tied to VINX, the two transistors MN216and MN212in the positive feedback loop assisting in the action of bringing node X, can be slow to turn on, which greatly slows down the positive feedback loop when node X does not get tied to VINXquickly enough. 
Consider when VINX(i.e., at the source of transistor MN212) is close to VDDat a particular instant in time, and the gate of transistor MN216and the gate of transistor MN212(i.e., the VBSTRPnode) is also close to VDDas soon as CLKB goes low at the startup (startup meaning CLKB has just became low, or CLK has just became high). Node X is also at VDDat the start up (since CLK was low, and node X is at VDDvia transistor MP214). This scenario can make all terminals of the transistor MN216at roughly VDD. The transistors MN216and MN212might not see enough voltage across the gate and the source (VGS) of the respective transistors to turn on. Therefore transistors MN216and MN212would barely/weakly turn on since there is not enough VGS, slowing down the positive feedback action of the loop. The loop eventually works as transistors MN216and MN212turns on more fully to pull node X closer to VINXto turn on transistor MP202, which serves to allow VINX+VBOOTto pass through transistor MP202towards the gate of transistor MN108and making VBSTRPrise. Jumpstarting the Positive Feedback Loop To address this slowdown of the positive feedback loop, a jump start circuit can be included to quickly turn on transistor MP202(the output transistor) at the startup of the positive feedback loop action to allow VINX+VBOOTto pass through transistor MP202towards the gate of transistor MN108more quickly, causing VBSTRPto rise more quickly, which in turn can turn on transistors MN216and MN212faster. The result is a much faster bootstrapped switching circuit. FIG.3shows a bootstrapped switching circuit300having accelerated turn on, according to some embodiments of the disclosure. The bootstrapped switching circuit300has a sampling switch, e.g., transistor MN108, receiving a voltage input signal, e.g., VINX, and a gate voltage, e.g., VBTSTRP. The bootstrapped switching circuit also has a bootstrapped voltage generator. The bootstrapped voltage generator generates the gate voltage, e.g., VBTSTRP, for the sampling switch. The bootstrapped switching circuit comprises a positive feedback loop to generate the gate voltage for turning on the sampling switch. The positive feedback loop can include an input transistor, e.g., transistor MN212, receiving the voltage input signal, e.g., VINX, and an output transistor, e.g., transistor MP202, outputting the gate voltage of the sampling switch. The positive feedback loop comprises a boot capacitor, e.g., CBOOT, which can be used to generate a boosted voltage, e.g., VINX+VBOOT. Because the sampling switch, e.g., transistor MN108, has VINXat its source, the boosted voltage being at the gate of the sampling switch would turn on the sampling switch. In other words, the positive feedback loop turns on the sampling switch, e.g., transistor MN108, by bringing the gate voltage to the boosted voltage generated based on the voltage input signal VINXand the voltage across the boot capacitor CBOOT. The input transistor, e.g., source of transistor MN212, is coupled to a first plate of the boot capacitor. The output transistor, e.g., source of transistor MP202, is coupled to a second plate of the boot capacitor. The positive feedback loop operates by using the gate voltage as positive feedback to drive the transistors in the loop, e.g., transistors MN212and MN216. 
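The startup bottleneck described above reduces to a simple overdrive calculation: with node X and VBSTRP both starting near VDD, the pull-down path only sees roughly VDD minus VINX of gate drive. The supply and threshold values in the sketch are assumed example numbers used to make the point, not device data.

```python
def startup_overdrive(vinx, vdd=0.9, vth=0.35):
    """Overdrive available to the pull-down path at the instant CLK goes high.

    Node X and VBSTRP both start near VDD, so MN212 (source at VINX) only sees
    about VDD - VINX of gate drive; when VINX itself is near VDD there is almost
    no overdrive and node X is pulled toward VINX only slowly.
    """
    vgs_mn212 = vdd - vinx
    return vgs_mn212 - vth          # positive: turns on readily; negative: barely/weakly on

for vinx in (0.10, 0.50, 0.85):
    print(f"VINX={vinx:.2f} V  overdrive={startup_overdrive(vinx):+.2f} V")
```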
Those transistors in turn bring the gate voltage of the output transistor, e.g., transistor MP202, to VINXand assists the output transistor, e.g., transistor MP202, with passing the boosted voltage or bringing the gate voltage to the boosted voltage. The boosted voltage can turn on the sampling switch, e.g., transistor MN108. For the exemplary positive feedback loop shown, the input transistor, e.g., transistor MN212, is driven by the gate voltage VBSTRPof the sampling switch, e.g., transistor MN108. The positive feedback loop further comprises a first transistor, e.g., transistor MN216, coupled to the gate of the output transistor, e.g., transistor MP202, and a drain of the input transistor, e.g., transistor MN212. The first transistor is also driven by the gate voltage of the sampling switch as well. Together, the first transistor and the input transistor, when turned on, brings node X to VINX during the positive feedback loop action. The bootstrapped switching circuit also includes a jump start circuit302to turn on the output transistor for a limited period of time during which the input transistor is turning on at a startup of the positive feedback loop. The jump start circuit302is coupled to node X, e.g., at the gate of transistor MP202, where transistor MP202is the output transistor of the positive feedback loop. In some embodiments, the jump start circuit302, e.g., provides/outputs a signal at node X, to turn on the transistor MP202momentarily when CLKB goes low to jump start the positive feedback loop action. The jump start circuit302ceases to turn on the output transistor, e.g., transistor MP202, after the limited period of time and allows the positive feedback loop to operate. Phrased differently, the jump start circuit302engages the output transistor MP202for when the positive feedback loop action begins, and disengages from the output transistor MP202so that the positive feedback loop action can engage to drive the output transistor MP202(allowing the positive feedback loop action to bring node X to VINX). This jump start circuit302can help the positive feedback loop move faster during the (short period of) time when transistors MN216and MN212are slow to turn on. The jump start circuit302can jump start the positive feedback loop action by pulling the node X towards a low logic level (e.g., ground or some other bias voltage) momentarily at the gate of transistor MP202so that transistor MP202turns on to allow VINX+VBOOT(i.e., top plate voltage of the boot capacitor CBOOT) to pass through output transistor MP202towards the gate of transistor MN108more quickly, causing VBSTRPto rise more quickly. Note that jump start circuit302only pulls the node X towards a low logic level momentarily but preferably does not let node X get to ground or a low logic level completely. Pulling node X to ground completely can cause unwanted stress on transistor MP202, since the source of transistor MP202sees VINX+VBOOT. Furthermore, the jump start circuit302quickly “lets go” of node X (or cease the pulling of node X towards the low logic level) to allow the positive feedback loop to operate, and preferably “lets go” prior to transistors MN216and MN212engaging fully to tie node X to VINX. The timing of the jump start circuit302can vary depending on the implementation. At the startup of the positive feedback loop, and just prior to CLKB going low, node X is at VDDto keep output transistor MP202off when boot capacitor CBOOTis charging and to keep VBSTRTPlow. 
However, when node X starts at VDDat the startup of the positive feedback loop action, node X slows down the feedback mechanism. The jump start circuit302quickly turns on transistor MP202by pulling node X towards a suitable logic level so that node X starting at VDDno longer impedes the speed of the feedback loop action. In some cases, an additional transistor MN218(e.g., NMOS transistor), with its gate connected to CLK, its source connected to the drain of input transistor, e.g., transistor MN212(and the source of transistor MN216), and its drain connected to node X (i.e., gate of the output transistor MP202), can be included to assist tying node X to VINXduring the positive feedback loop action. The additional transistor is controlled by a clock signal which activates the positive feedback loop, e.g., CLK. Transistor MN218is on when CLK goes high at the startup to assist tying node X to VINX, in an attempt to overcome the slow turn on of transistor MN216. The jump start circuit302operates differently from the additional transistor MN218, and the jump start circuit302can provide a greater amount of increase in speed of the bootstrapped switching circuit than the additional transistor MN218alone. The timing of pulling down node X towards a low logic level and quickly letting go take into account or depend on factors such as the circuit design, the process in which the circuit is fabricated, and parasitics in the bootstrapped switching circuit. The timing can be determined from simulations or testing of the circuit. The timing can be variable or controllable. In some cases, the timing can depend on one or more voltage levels or signals in the bootstrapped switching circuit, which may indicate when the jump start circuit302should begin the pull down action and/or cease the pull down action. If the transistor MP202is an NMOS transistor (in a complementary/equivalent implementation), the jump start circuit302can provide a momentary pull up function to quickly jump start the feedback loop. Exemplary Implementations of the Jump Start Circuit FIGS.4A-Bshow an exemplary implementation for a jump start circuit, according to some embodiments of the disclosure. In this example shown inFIG.4A, the jump start circuit includes a transistor MN404(e.g., NMOS transistor). Transistor MN404receives CLKB (used for activating the positive feedback loop, in the form of CLK and CLKB) at the source and CLKBDELat the gate. CLKB goes low at the startup of the positive feedback loop. CLKBDELis a delayed version of CLKB, and thus for a short period of time, CLKBDELremains high when CLKB goes low. During this period of time, CLKBDELbeing high when CLKB is low turns on transistor MN404and pulls node X towards CLKB's low logic level (e.g., ground). When the delay period is over, CLKBDELgoes low to turn transistor MN404off. This jump start circuit effectively pulls node X towards a low logic level and quickly lets go of node X to allow the positive feedback loop to continue its operation. In other words, the transistor is turned on by a delayed version of the clock signal to output the clock signal to turn on the output transistor for the limited period of time. As illustrated byFIG.4B, the jump start circuit can include two inverters for generating the delayed version of the clock signal CLKBDELbased on the clock signal CLKB. As result, CLKBDELcan have the same polarity of CLKB but with two inverter delays. 
Other implementations for generating CLKBDELwith a desired amount of delay are envisioned by the disclosure, including using a pass gate, resistor-capacitor delay circuits, etc. The implementation shown inFIG.4Bis not meant to be limiting. FIGS.5A-Cshow another exemplary implementation for a jump start circuit, according to some embodiments of the disclosure. In this example shown inFIG.5A, the jump start circuit includes a switch501controlled by control signal CTRL. The switch501connects a gate of the output transistor (e.g., transistor MP202) to a bias voltage VONfor turning on the output transistor. The control signal can have a pulse to close the switch501. The pulse can serve to jump start the output transistor for a limited period of time (pulling the gate to the bias voltage and letting go of the gate to allow the positive feedback loop to operate).FIG.5Bshows an exemplary waveform for the control signal CTRL, which has a short pulse used to close the switch and pull node X towards bias voltage VONand quickly lets go of node X (opening the switch and disconnecting node X from VON) to allow the positive feedback loop to continue its operation. Voltage VONcan be a suitable bias voltage for turning transistor MP202on, e.g., ground, or some other suitable voltage level. Switch501can be implemented using transistor(s). In some embodiments, the jump start circuit includes a sense circuit502(as shown inFIG.5C) so that a closed loop delay can be implemented. The sense circuit activates the jump start circuit based on one or more conditions of the bootstrapped switching circuit indicating the startup of the positive feedback loop. A closed loop delay means that the control signal CTRL, or the timing of the jump start circuit for pulling node X to a low logic level and/or letting go of node X can be depend on one or more conditions of the bootstrapped switching circuit. Preferably, the one or more conditions indicate the startup of the positive feedback loop. The sense circuit502can sense a voltage VSENSEand generate the control signal CTRL accordingly. The voltage VSENSEcan represent a voltage level at any suitable node in the bootstrapped switching circuit. The node can be a node in the positive feedback loop. In one example, the sense circuit502includes a comparator coupled the source of the transistor MP202to compare the voltage at the source of the transistor MP202against a predetermined threshold, or another node in the positive feedback loop. The voltage passing across the predetermined threshold can indicate the startup of the positive feedback loop. If the voltage (e.g., the source of the transistor) rises above the predetermined threshold (indicating the positive feedback loop has begun its operation), the output of the comparator can trigger the control signal CTRL accordingly to shut off the jump start action. A Method for Accelerated Turn on of a Sampling Switch FIG.6is a flow diagram illustrating a method for accelerated turn on of a sampling switch. In602, an output transistor (e.g., transistor MP202ofFIG.3) of a positive feedback loop, outputs an output voltage (e.g., VBSTRPofFIG.3) of a bootstrapped voltage generator for driving the sampling switch (e.g., transistor MN108ofFIG.3). In some embodiments, the sampling switch receives a voltage input signal (e.g., VINXto be sampled). 
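Returning briefly to the delayed-clock scheme of FIGS. 4A-B before continuing with the method of FIG. 6: it amounts to generating a short pull-down window right after CLKB falls. MN404 conducts while CLKBDEL is still high, and during the portion of that window in which CLKB has already fallen it pulls node X toward CLKB's low level. The waveforms below are ideal logic signals and the two-inverter delay is modeled as an assumed three time steps.

```python
import numpy as np

def jump_start_window(n=40, clk_period=20, delay=3):
    """Return CLKB, its delayed copy, and the window during which MN404 pulls node X low."""
    t = np.arange(n)
    clk = (t % clk_period) < (clk_period // 2)   # CLK high for the first half of each period
    clkb = ~clk
    clkb_del = np.roll(clkb, delay)              # CLKBDEL: CLKB through the two-inverter delay (assumed 3 steps)
    clkb_del[:delay] = clkb[0]
    pull_down = clkb_del & ~clkb                 # MN404 gate (CLKBDEL) high while its source (CLKB) is low
    return clkb, clkb_del, pull_down

clkb, clkb_del, window = jump_start_window()
print("".join("1" if w else "0" for w in window))   # a short pulse right after each falling edge of CLKB
```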
The positive feedback loop can receive the voltage input signal at an input transistor (e.g., transistor MN212ofFIG.3) driven by the output voltage (e.g., VBSTRPofFIG.3) output by the output transistor. The positive feedback loop can generate a boosted voltage signal (e.g., bootstrapped voltage of VINX+VBOOT) based on the voltage input signal as the output voltage of the bootstrapped voltage generator to turn on the sampling switch when the positive feedback loop is engaged. In604, a jump start circuit can pull a gate voltage of the output transistor (e.g., node X ofFIG.3) to an on-voltage level to turn on the output transistor for a period of time after the positive feedback loop is activated. In some embodiments, pulling the gate voltage of the output transistor includes changing the gate voltage from an off-voltage level to an on-voltage level. Before the positive feedback action is engaged, the gate voltage can be at VDDas illustrated byFIGS.2and3, which is considered an “off-voltage level” for transistor MP202. The jump start circuit can momentarily pull the gate voltage to an “on-voltage level”, such as a logical low voltage level to turn on the output transistor for a short period of time. In606, the jump start circuit can cease or stop the pulling of the gate voltage after the period of time. For instance, the jump start circuit can release the gate voltage of the output transistor back to a voltage being delivered by the positive feedback loop after the period of time. For instance, the jump start circuit can let the positive feedback loop operate and bring the gate voltage close to the input signal VINXto be sampled. In some embodiments, ceasing the pulling of the gate voltage after the period of time or releasing the gate voltage of the output transistor after the period of time includes allowing the positive feedback loop to bring the gate voltage to a voltage level of a voltage input signal (e.g., VINX) provided to the bootstrapped voltage generator and the sampling switch. In some embodiments, a sense circuit (e.g., sense circuit502ofFIG.5C) can sense one or more conditions indicating the positive feedback loop has been activated. The sense circuit can generate a control signal in response to sensing the one or more conditions. The control signal can trigger triggers the pulling of the gate voltage of the output transistor. An Apparatus for Accelerated Turn On of a Sampling Switch For accelerated turn on of a sampling switch, an apparatus can include sampling means (e.g., transistor MN108ofFIG.3) receiving an input signal (e.g., VINXofFIG.3) to be sampled and a control signal (e.g., VBSTRPofFIG.3) which turns the sampling means on and off. The apparatus can further include means (e.g., transistor MN210, CBOOT, and transistor MN224ofFIG.3) for generating a boosted voltage signal based on the input signal (e.g., bootstrapped voltage of VINX+VBOOT). The apparatus can include output means for outputting the control signal (e.g., transistor MP202ofFIG.3). The apparatus can include means for bringing the control signal to the boosted voltage through positive feedback action of the control signal, as illustrated byFIGS.2and3. The apparatus can include means (e.g., jump start circuit302ofFIG.3and associated examples seen inFIGS.4A-Band5A-C) for turning on the output means for a limited period of time at a startup of the positive feedback action. Input Buffer CMOS input buffers (single ended) can include a stack of NMOS transistors and a current source. 
The voltage input to the input buffer can be directly connected to a gate of the NMOS transistor (whose source is connected to the current source), and the source of the NMOS transistor is the output. In this kind of input buffer, the output is shifted by one voltage across the gate and the source VGSdownwards via the NMOS transistor buffering the voltage input from its gate to its source, i.e., the output. This voltage shift from the input to the output means that the output voltage range depend on the input voltage range. Phrased differently, there is an offset between the input voltage and the output voltage. If the input buffer is driving circuits that require a particular voltage range, this offset can be undesirable or cumbersome to address in the circuit design. FIG.7shows an exemplary input buffer, according to some embodiments of the disclosure. The input buffer can be used in the manner illustrated byFIG.1. The input buffer has an input VINfor receiving a voltage input signal. The voltage input signal can be a high frequency data signal to be converted by a data converter, such as a high speed ADC. The input buffer includes a push pull circuit outputting a voltage output signal at an output VINX. The push pull circuit comprises a first transistor of a first type, and a second transistor of a second type complementary to the first type. For instance, the first transistor can be transistor MN702(e.g., NMOS transistor) and the second transistor can be transistor MP704(e.g., PMOS transistor). The sources of the two transistors are coupled to each other, and the sources also serves as the output VINXof the input buffer providing output signal VINX. For this input buffer, the transistors MN702and MP704are not directly connected to the input VIN. Rather, the gate of transistor MN702is connected to the input VINvia level shifter703, and the gate of transistor MP704is connected to the input VINvia level shifter705. In some embodiments, the input buffer can include a first level shifter coupled to the input for shifting a voltage level of the voltage input signal by a first amount of voltage shift across the first level shifter and generating a first level shifted voltage signal to bias the first transistor. For example, level shifter703can shift VINby a first amount of voltage shift (e.g., up by some amount of voltage) across the level shifter703and generate a first level shifted voltage V1to bias the first transistor, i.e., transistor MN702. In some embodiments, the input buffer can include a second level shifter coupled to the input for shifting the voltage level of the voltage input signal by a second amount of voltage shift across the second level shifter and generating a second level shifted voltage signal to bias the second transistor. For example, level shifter705can shift VINby a second amount of voltage shift (e.g., down by some amount of voltage) across the level shifter705and generate a second level shifted voltage V2to bias the second transistor, i.e., transistor MP704. In this input buffer seen inFIG.7, the input buffer has a push pull architecture. The push pull architecture has at least an NMOS transistor MN702and PMOS transistor MP704, whose source is connected to the source of a PMOS transistor MP704. The sources are coupled together and forms the output VINX. For 28 nm CMOS process, PMOS and NMOS devices are complementary in behavior including bandwidth, capacitances, transconductance per unit current, etc. 
In some other processes, the PMOS transistors can have drastically different behavior than the NMOS transistors. This complementary push pull architecture using NMOS transistor(s) on one side and PMOS transistor(s) on the other side enables a complementary buffer to have the same behavior on the PMOS side and the NMOS side, in a process like the 28 nm CMOS process. The structure offers symmetric pull up and pull down characteristics, no matter which side is supplying a current to the output VINXto drive the load. The two sides are equal in strength, therefore achieving a symmetric pull up and pull down. From a distortion perspective, the complementary structure means that there can be less even order distortions (e.g., second order harmonics are reduced). Besides the symmetric behavior, the input buffer is efficient because the NMOS transistor MN702and PMOS transistor MP704, for a given amount of current going through the transistors, effectively doubles the transconductance of the input buffer. For the same amount of current, the NMOS transistor MN702and PMOS transistor MP704enables the input buffer to get two transconductances in parallel. For this input buffer, it is not possible to tie the gates of NMOS transistor MN702and PMOS transistor MP704together, since shorting the gate of NMOS transistor MN702and PMOS transistor MP704), neither transistor would turn on because there would not be any voltage across the gate and the source of either transistors (insufficient VGS). Therefore, at least one of the two level shifters703and705is provided between the gates of NMOS transistor MN702and PMOS transistor MP704. The level shifters pulls the gates of the two transistors apart with sufficient difference in voltage across the gate and the source to keep the transistors on. Level shifter703and level shifter705connected to VINcan be considered as (programmable) voltage shifts to bias the NMOS transistor MN702and PMOS transistor MP704at the gates of the respective transistors. In other words, the first amount of voltage shift can be programmable, and the second amount of voltage shift can be programmable. As used herein, a level shifter is a circuit which shifts a voltage level of an input to the level shifter by an amount to generate a level shifted voltage level at the output of the level shifter. Biasing the NMOS transistor MN702and PMOS transistor MP704, i.e., setting appropriate voltages V1and V2, is not trivial. If the two gates are too far apart, too much current might flow through the two transistors. But the two gates are not far enough apart (without enough VGSfor both transistors, i.e., less than two VGS's) the transistors might not be turned on enough. Preferably, a desirable amount of current flows through the transistors. To ensure that the transistors have the desirable amount of current flowing through the transistors, a replica bias block can be used to set the voltages of level shifter703and level shifter705to ensure the NMOS transistor MN702and PMOS transistor MP704are running at the desired current. Preferably, the difference in voltage between the gate of the NMOS transistor MN702and the gate of PMOS transistor MP704has to be at least two VGS, e.g., threshold voltage VGSof the NMOS transistor MN702and threshold VGSof the PMOS transistor MP704, and set to ensure a desired amount of current is running through the NMOS transistor MN702and PMOS transistor MP704. 
In some embodiments, a sum of the first amount of voltage shift (e.g., of level shifter703) and the second amount of voltage shift (e.g., of level shifter705) is at least a sum of a first threshold voltage of the first transistor (e.g., transistor MN702) and a second threshold voltage of the second transistor (e.g., transistor MP704). Input to Output Offset and Design Considerations for the Level Shifters As a result of level shifter(s), the input VINand the output VINXare independent, and the voltage range for the input and the voltage range for the output no longer have to depend on each other or have to be the same. Any offset between the input and the output can be selected by implementing appropriate level shifters (i.e., implementing level shifters703and705appropriately). By selecting appropriate first amount of voltage shift and second amount of voltage shift, the voltage output signal at VINXcan be offset or have an offset from the voltage input signal at VIN. In one example, the voltage input signal can be centered at 0.5 volts, and the voltage input signal can be centered at 0.25 volts. The input buffer is more flexible. In some cases, the input voltage at VINand the output voltage at VINXcan be roughly the same voltage. For instance, VINgoes up with level shifter703, and down a gate to source voltage VGSof transistor MN702at the output VINX. VINgoes down with level shifter705, and up a gate to source voltage VGSof transistor MP704at the output VINX. There is no input to output offset if appropriate level shifters are used. This feature is not available in other input buffers implementing a single source follower. However, the input to output offset does not have to be zero either. Having the two level shifters means that the voltage range of the input VINcan be different from the voltage range of the output VINX. With the two level shifters, as long as the difference in voltage between the gate of the NMOS transistor MN702and the gate PMOS transistor MP704is appropriate (i.e., biasing the transistors to have the desired current running through them), the input to output voltages can be adjusted to fit the application (e.g., if the offset is desirable). The input to output offset can be variable. Used herein, variable means different over time, or different from one application to another application. The voltage shifts being provided by the level shifters can also be variable (and vice versa). A degree of freedom of the input buffer is that the level shifters703and705can be adjusted to have the particular output voltage range or voltage level. In some embodiments, level shifters703and level shifters705(and other level shifters disclosed here) are variable or programmable. In some embodiments, one amount of voltage shift by a level shifter can differ from another amount of voltage shift by another level shifter in the input buffer. The amount of voltage shift can be user adjustable, and/or on-chip controllable. The amount of voltage shift can be optimized for other factors including distortions, electrostatic discharge (ESD), etc. In some cases, one of level shifters703and level shifters705can be entirely omitted, where either the voltage at the gate of NMOS transistor MN702or the voltage at the gate of PMOS transistor MP704is level shifted to achieve the appropriate voltage difference between the gates of the two transistors. 
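The biasing relationships above can be written out directly: the output must satisfy VINX = (VIN + LS1) − VGS,N = (VIN − LS2) + |VGS,P|, so the level-shift amounts set the input-to-output offset while their sum sets the gate-to-gate spacing that fixes the quiescent current. The 0.40 V gate-source voltages below are assumed example values, not device data; the second call reproduces the 0.50 V-in, 0.25 V-out offset example mentioned above.

```python
def push_pull_bias(vin, ls1, ls2, vgs_n=0.40, vgs_p=0.40):
    """Check the push-pull biasing and report the resulting input-to-output offset.

    ls1 shifts the NMOS gate up from VIN, ls2 shifts the PMOS gate down from VIN.
    Consistency requires (VIN + ls1) - vgs_n == (VIN - ls2) + vgs_p, so
    ls1 + ls2 must equal (and be at least) vgs_n + vgs_p at the chosen current.
    """
    gate_spacing = ls1 + ls2
    vinx = vin + ls1 - vgs_n                # output seen from the NMOS side
    offset = vinx - vin                     # offset selected by the level-shift amounts
    enough_drive = gate_spacing >= vgs_n + vgs_p
    return vinx, offset, enough_drive

# Zero offset: each gate shifted by exactly one VGS (0.40 V devices assumed).
print(push_pull_bias(vin=0.50, ls1=0.40, ls2=0.40))   # (0.5, 0.0, True)
# A deliberate -0.25 V offset (input centered at 0.50 V, output at 0.25 V), same gate spacing.
print(push_pull_bias(vin=0.50, ls1=0.15, ls2=0.65))   # (0.25, -0.25, True)
```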
Implementing a Level Shifter One aspect of the level shifter is its ability to provide an amount of voltage shift, from the input to the gate of the transistors, independent of input frequency, or all the way to DC (i.e., zero frequency or constant input VIN). In other words, the level shifted signal would follow the input VINacross all frequencies of the input. Some other level shifters would not have such a frequency response. The level shifter can be implemented in different ways. For instance, a level shifter can include one or more of the following: one or more current sources, one or more resistors, one or more transistors, one or more diodes, one or more diode-connected transistor, one or more capacitors, one or more batteries, and one or more non-linear resistor. In some embodiments, the level shifter includes means for providing a voltage shift which is controlled by an amount of current flowing through the level shifter, and can be independent of the input frequency. For instance, a diode-connected transistor can provide a voltage level shift which depends on a current flowing through the diode-connected transistor (the current can be provided by one or more current sources). In some embodiments, the level shifter can include switched capacitor circuits. Preferably, a level shifter is implemented using passive circuit elements (as opposed to active elements involving complementary transistors as followers that shifts up or down from the input). Passive circuit elements uses less current and can be less noisy and more linear than active circuit elements. Passive circuit elements can include diode connected transistor(s), resistor(s), capacitor(s) circuits, and suitable combination thereof. FIG.8shows an exemplary level shifter, according to some embodiments of the disclosure. The exemplary level shifter includes current sources, with a resistor and a capacitor in parallel between the current sources. For instance, a level shifter mentioned herein can include one or more current sources (e.g., I1and I2), and a resistor (or resistive element, e.g., R) and a capacitor (or a capacitive element, e.g., C) in parallel with the resistor. The resistor and the capacitor in parallel with the resistor are between current sources I1and I2. Other configurations of these circuit elements are envisioned by the disclosure. Any current provided by the current sources would flow through the resistor and capacitor in parallel. The resistor and the amount of current flowing through the resistor sets the voltage shift across the level shifter (voltage shift can equal to the amount of current multiplied by the resistance). In other words, an amount of current, flowing through the resistor and provided by the current sources, sets an amount of voltage shift across the level shifter. For a programmable level shifter, the amount of current can be programmable, or the amount of resistance of the resistor can be programmable. Any one of the level shifters can be implemented in the manner described and illustrated herein. Depending on the particular application or the level shifter, the values of the different components within the level shifters may vary. Bootstrapping Back Gates of the Main Transistors Achieving high performance for an input buffer, such as good linearity, is not trivial. In some embodiments, a first back gate of the first transistor (e.g., transistor MN702) and a second back gate of the second transistor are coupled to the output VINXor follows the voltage output signal VINX. 
For instance, back gates (body) of the NMOS transistor MN702and PMOS transistor MP704are tied directly to the output VINX, i.e., the back gates are bootstrapped to the output node VINX. If the back gates of NMOS transistor MN702and PMOS transistor MP704are tied to some fixed voltage, e.g., ground and VDD, as the input VINvaries, the VGSof the two transistors would also vary. The change in voltage between the source and the back gate would change the VGSof the transistors. The variation could also modulate the threshold voltage and the capacitance of the transistor. The variation(s) can cause distortions. To avoid this issue, the back gates of NMOS transistor MN702and PMOS transistor MP704are tied or bootstrapped to the output VINX. For all values of the input signal VIN(and VINXfollowing VIN), the voltage between the back gate and the source of the transistors is zero. VGSno longer varies as the input signal VINvaries. Capacitance in the transistor can be shorted. Performance is improved. The input buffer seen inFIG.7along with at least some of the features described so far can reduce some of the non-linearities or variations (first order). Minimizing Capacitances to Improve Performance When the input buffer is driving a high frequency input signal VIN, it is preferable to minimize all the capacitances that matter, or at least make the capacitance constant. Or, if the capacitances are going to vary, it is preferable to reverse bias the junction causing the capacitance as much as possible so that the variation in the capacitance is small, or at least make the voltage across the capacitor constant to reduce the variation. Reverse biasing the junction, i.e., the voltage dependent junction capacitor, as much as possible can make the capacitance smaller and less non-linear. Tying the back gate to the source (and output VINX) of transistor MN702creates a capacitance between the back gate and the deep N-well. The N-well is at a fixed potential, and the back gate is moving around with the signal. NMOS transistor MN702can be in its own isolated P-Well (back gate), which can be inside a deep N-well isolation region. A capacitance between a back gate and a deep N-well of a first transistor (e.g., transistor MN702) can be reverse biased. For instance, the deep N-well can be tied to a high potential, so that the capacitance between the back gate (P) and the deep N-well (N) is as strongly reverse biased as possible (for reasons mentioned above). As a result, the undesirable effect of the capacitance can be reduced (e.g., making it more linear). The input buffer seen inFIG.7along with at least some of the features described so far can reduce some of the non-linearities or variations (first order). Bootstrapped Cascodes to Improve Performance If the input buffer is made using 28 nm CMOS process technology, the output conductance is poor; that is, the ratio of the transconductance GMto the conductance GDSis small and highly non-linear. This can make it undesirable to tie the drain of NMOS transistor MN702and the drain of PMOS transistor MP704to a fixed supply, because as the signal VINor VINXmoves up and down, the voltage across the transistor, i.e., VDS(drain to source voltage), also moves up and down. This can cause, e.g., 25-40 dB of distortion. One way to fix this distortion is to bootstrap the drain of NMOS transistor MN702and the drain of PMOS transistor MP704to a node that follows the signal (e.g., the input VINor the output VINX), so that it is no longer fixed to some supply voltage.
FIG. 9 shows another exemplary input buffer, according to some embodiments of the disclosure. The push pull circuit of the input buffer further includes a third transistor of the first type (e.g., transistor MN706) in cascode configuration with the first transistor (e.g., transistor MN702), and a fourth transistor of the second type (e.g., transistor MP708) in cascode configuration with the second transistor (e.g., transistor MP704). One or more bootstrapped cascodes, e.g., transistors in cascode configuration with the first/second transistor, can be provided to boost the effective output impedance and therefore the SFDR. The cascodes can require the use of higher supply voltages to improve the performance of the input buffer. Additional cascodes further improve performance.

The first cascode is transistor MN706 (e.g., NMOS transistor), which is another follower tied to the input VIN. The gate of transistor MN706 can be tied to the input VIN via level shifter 707 and level shifter 703 in series (as shown). In some embodiments, level shifter 707 can be directly coupled to the input VIN. The level shifter 707, or the level shifters 707 and 703 in series, can serve as a third level shifter coupled to the input VIN for shifting the voltage level of the voltage input signal by a third amount of voltage shift across the third level shifter and generating a third level shifted voltage signal V3 to bias the third transistor, e.g., transistor MN706. The first cascode MN706, whose gate is driven by the input VIN (going up and down), has a specific level shifter 707 such that its output voltage (the source of MN706) provides enough VDS for transistor MN702 to operate in saturation under all conditions. Transistor MN706 is bootstrapped to the input VIN to isolate the transistor MN702 from variation in VDS. If the drain of transistor MN706 (exactly) follows the input or the output, then VDS would be substantially constant (no variation). Depending on the level of distortion tolerated, more cascode(s) can be added to serve this function, such as transistor MN710 (e.g., NMOS transistor). Each cascode can provide an additional 20 dB in performance.

Since the input buffer has a complementary design, any cascode added on the NMOS side is also added on the PMOS side. Accordingly, transistor MP708 (e.g., PMOS transistor) can be added to bootstrap and fix the VDS of transistor MP704. The gate of transistor MP708 can be tied to the input VIN via level shifter 709 and level shifter 705 in series. In some embodiments, level shifter 709 can be directly coupled to the input VIN. The level shifter 709, or the level shifters 709 and 705 in series, can serve as a fourth level shifter coupled to the input VIN for shifting the voltage level of the voltage input signal by a fourth amount of voltage shift across the fourth level shifter and generating a fourth level shifted voltage signal V4 to bias the fourth transistor, e.g., transistor MP708.

In the example shown, the push pull circuit of the input buffer further includes a fifth transistor of the first type (e.g., transistor MN710) in cascode configuration with the third transistor (e.g., transistor MN706), and a sixth transistor of the second type (e.g., transistor MP712) in cascode configuration with the fourth transistor (e.g., transistor MP708). In other words, the second cascode on the NMOS side, i.e., transistor MN710, finally connects to the supply. Also, a second cascode on the PMOS side, i.e., transistor MP712 (e.g., PMOS transistor), finally connects to the supply.
The gate of the uppermost cascode MN710 is driven from the source of the first cascode on the NMOS side, e.g., via level shifter 711. Level shifter 711 can be a fifth level shifter coupled to a source of the third transistor (e.g., transistor MN706) for shifting a voltage at the source of the third transistor by a fifth amount of voltage shift across the fifth level shifter and generating a fifth level shifted voltage signal V5 to bias the fifth transistor (e.g., transistor MN710). The gate of the lowermost cascode MP712 is driven from the source of the first cascode on the PMOS side, e.g., via level shifter 713. Level shifter 713 can be a sixth level shifter coupled to a source of the fourth transistor (e.g., transistor MP708) for shifting a voltage at the source of the fourth transistor by a sixth amount of voltage shift across the sixth level shifter and generating a sixth level shifted voltage signal V6 to bias the sixth transistor (e.g., transistor MP712). This bootstrapping scheme (e.g., bootstrapping to the sources of the third/fourth transistor and the drains of the first/second transistor) unloads the buffer input and output (both of which are candidates to bootstrap from) from the non-bootstrapped gate-drain capacitance of the upper cascode connected to the supply, which could be a significant source of distortion.

In the examples shown in FIGS. 7 and 9, the bootstrapping is done primarily by tying the gates of the transistors to the input (or some other node which follows the input). This feature was selected to reduce possible ringing, which can be caused by bootstrapping the gates to the output. While bootstrapping to the input can load the input and add extra parasitics, high speed applications may prefer an input buffer that suffers from less ringing. While there could potentially be some ringing from the upper cascode since it is bootstrapped to the source of the first cascode, the ringing may be tolerated over an alternative solution where the distortions at the source of the upper cascode could distort the input VIN and output VINX if it were bootstrapped to the input or the output.

Further, the back gates of the various cascode transistors in the input buffer are bootstrapped as shown in FIG. 9 to improve SFDR. Similar to the description of the back gates of transistors MN702 and MP704, the back gates of the cascodes are preferably bootstrapped as well (i.e., it is undesirable for the voltage across the back gate and the source to vary). Unfortunately, in some implementations, VSS is negative, which means that the drain of the transistor MP708 swings negative. In 28 nm CMOS process technology, the N-wells of the PMOS transistors sit in the substrate, and the substrate is at 0 volts. If an N-well goes negative, it forward biases the diode between the P substrate (at 0 volts) and all the N-wells (cathode end of the diode). If the N terminal goes below ground, it forward biases the diode and causes distortions. Tying the back gates of the cascodes on the PMOS side to the sources of the respective cascodes (same cascode) can therefore cause distortions. The solution is to tie the back gates of the cascodes to each other, i.e., the back gate of an NMOS cascode is connected to a source of the corresponding/complementary PMOS cascode, and vice versa. The sources are following the input, and thus tying them to each other helps to bootstrap the back gates of the cascodes (to the input).
Denoted by VBGN1, a back gate of the third transistor (e.g., transistor MN706) is coupled to a source of the fourth transistor (e.g., transistor MP708). Denoted by VBGP1, a back gate of the fourth transistor (e.g., transistor MP708) is coupled to a source of the third transistor (e.g., transistor MN706). Denoted by VBGN2, a back gate of the fifth transistor (e.g., transistor MN710) is coupled to a source of the sixth transistor (e.g., transistor MP712). Denoted by VBGP2, a back gate of the sixth transistor (e.g., transistor MP712) is coupled to a source of the fifth transistor (e.g., transistor MN710). Tying the back gate to the output is less desirable because it would load it with a non-linear capacitance. Linearity is improved since there is now a large voltage across the junction. While the cascodes on the NMOS side could tie their back gates to their respective sources, tying the back gates to the sources of the complementary cascodes is preferable to achieve a complementary design and equalize loading for symmetric pull up and pull down behavior.

Method for Buffering a Voltage Input Signal

FIG. 10 is a flow diagram for buffering an input signal, according to some embodiments of the disclosure. In 1002, a first voltage shift set by (one or more current sources of) a first level shifter shifts the voltage input signal to generate a first signal. In 1002, a second voltage shift set by (one or more current sources of) a second level shifter shifts the voltage input signal to generate a second signal. The first voltage shift and second voltage shift can represent the level shifters 703 and 705 of FIGS. 7 and 9. The first signal and the second signal can represent V1 and V2 of FIGS. 7 and 9. In 1004, the first signal biases a first transistor of a first type. In 1004, the second signal biases a second transistor of a second type complementary to the first type. The first transistor and the second transistor are coupled in a push pull architecture, as illustrated by transistor MN702 and transistor MP704 of FIGS. 7 and 9. In 1006, the first transistor and the second transistor output a voltage output signal, e.g., VINX of FIGS. 7 and 9.

In some embodiments, a third signal biases a first cascode transistor coupled to the first transistor. The third signal can follow the voltage input signal. In some embodiments, a fourth signal biases a second cascode transistor coupled to the second transistor. The fourth signal can follow the voltage input signal. For instance, the third/fourth signal can be the signal V3 or V4 of FIG. 9. In some embodiments, a fifth signal biases a third cascode transistor coupled to the first cascode transistor. The fifth signal can also follow the voltage input signal. In some embodiments, a sixth signal biases a fourth cascode transistor coupled to the second cascode transistor. The sixth signal can also follow the voltage input signal. For instance, the fifth/sixth signal can be the signal V5 or V6 of FIG. 9.

Apparatus for Buffering an Input Signal

An apparatus for buffering an input signal can include means for implementing the methods described herein. In some embodiments, the apparatus includes means for receiving an input signal. For instance, an input node can be provided to receive an input signal (e.g., VIN of FIGS. 1, 7, and 9), such as a high frequency signal to be converted by a data converter. The apparatus can further include push pull means for generating an output signal.
Push pull means can include the push pull circuit and push pull architecture described herein (e.g., transistors seen inFIGS.7and9). The apparatus can further include means for generating a first signal for biasing a first transistor of the push pull means. The first signal follows the input signal across all frequencies of the input signal. Further means can be included for generating other signals for biasing other transistors of the push pull means. The means for generating signals for biasing transistors can include level shifters described in relation toFIGS.7-9. The means for generating signals for biasing transistors (bootstrapping the transistors to the input) are distinguishable from other circuits which generate a biasing signal based on fixed/predetermined bias voltages. The means for generating the signals for biasing transistors follows the input signal or is bootstrapped to the input signal across all frequencies of the input signal, i.e., all the way to DC. In contrast, the other circuits which generate a biasing signal based on fixed/predetermined bias voltages do not follow the input signal across all frequencies of the input signal. For those other circuits, signals for biasing transistors can be generated using a fixed biasing voltage and a resistor, and a capacitor in series with the input. Such signals for biasing transistors does not buffer or follow the input signal at low frequencies because the capacitor has a high impedance at low frequencies and the resistor dominates. Therefore, the non-bootstrapped biasing signal would be set by the fixed biasing voltage and the resistor at low frequencies (and does not respond to the input signal). In contrast, the level shifters described herein as means for generating the (bootstrapped) signals for biasing transistors can respond to the input signal across all frequencies (at low and high frequencies), since the level shifters described herein have a different frequency response. Examples Example 1 is an input buffer comprising: a input receiving a voltage input signal; a push pull circuit outputting a voltage output signal at an output, wherein the push pull circuit comprises a first transistor of a first type, a second transistor of a second type complementary to the first type; and a first level shifter coupled to the input for shifting a voltage level of the voltage input signal by a first amount of voltage shift across the first level shifter and generating a first level shifted voltage signal to bias the first transistor, wherein the first amount of voltage shift provided by the first level shifter is independent of a frequency of the voltage input signal. In Example 2, Example 1 can further include a second level shifter coupled to the input for shifting the voltage level of the voltage input signal by a second amount of voltage shift across the second level shifter and generating a second level shifted voltage signal to bias the second transistor. In Example 3, Example 1 or 2 can further include the first amount of voltage shift being programmable. In Example 4, any one of Examples 1-3 can further include an amount of current, flowing through a resistive element and provided by one or more current sources, setting the first amount of voltage shift across the first level shifter. 
In Example 5, any one of Examples 1-4 can further include a sum of the first amount of voltage shift and the second amount of voltage shift being at least a sum of a first threshold voltage of the first transistor and a second threshold voltage of the second transistor. In Example 6, any one of Examples 1-5 can further include the first amount of voltage shift being different from the second amount of voltage shift. In Example 7, any one of Examples 1-6 can further include the voltage output signal being offset from the voltage input signal. In Example 8, any one of Examples 1-7 can further include a first back gate of the first transistor and a second back gate of the second transistor being coupled to the output or follows the voltage output signal. In Example 9, any one of Examples 1-8 can further include a capacitance between a back gate and a deep N-well of a first transistor being reversed biased. In Example 10, any one of Examples 1-9 can further include the push pull circuit further comprising: a third transistor of the first type in cascode configuration with the first transistor; and a fourth transistor of the second type in cascode configuration with the second transistor. In Example 11, any one of Examples 1-10 can further include a third level shifter coupled to the input for shifting the voltage level of the voltage input signal by a third amount of voltage shift across the third level shifter and generating a third level shifted voltage signal to bias the third transistor. In Example 12, any one of Examples 1-11 can further include the push pull circuit further comprising: a fifth transistor of the first type in cascode configuration with the third transistor; and a sixth transistor of the second type in cascode configuration with the fourth transistor. In Example 13, any one of Examples 1-12 can further include a fourth level shifter coupled to a source of the third transistor for shifting a voltage at the source of the third transistor by a fourth amount of voltage shift across the fourth level shifter and generating a fourth level shifted voltage signal to bias the fifth transistor. In Example 14, any one of Examples 1-12 can further include: a back gate of the third transistor being coupled to a source of the fourth transistor; and a back gate of the fourth transistor being coupled to a source of the third transistor. In Example 15, any one of Examples 1-14 can further include: a back gate of the fifth transistor being coupled to a source of the sixth transistor; and a back gate of the sixth transistor being coupled to a source of the fifth transistor. Example 16 is a method for buffering a voltage input signal, the method comprising: level shifting the voltage input signal by a first voltage shift of a first level shifter to generate a first signal, wherein the first voltage shift is independent of a frequency of the voltage input signal; biasing, by the first signal, a first transistor of a first type; biasing, by a second signal, a second transistor of a second type complementary to the first type, wherein the first transistor and the second transistor are coupled in a push pull architecture; and outputting, by the first transistor and the second transistor, a voltage output signal. In Example 17, Example 16 can further include level shifting the voltage input signal by a second voltage shift set by a second level shifter to generate the second signal. 
In Example 18, Example 16 or 17 can further include biasing, by a third signal, a first cascode transistor coupled to the first transistor, wherein the third signal follows the voltage input signal. In Example 19, any one of Examples 16-18 can further include biasing, by a fourth signal, a second cascode transistor coupled to the first cascode transistor, wherein the fourth signal follows the voltage input signal. Example 20 is an apparatus comprising: means for receiving an input signal; push pull means for generating an output signal; and (passive) means for generating a first signal for biasing a first transistor of the push pull means, wherein the first signal follows the input signal across all frequencies of the input signal. Example 21 is an apparatus comprising means for implementing/carrying out any one of the methods in Examples 16-19. Example 101 is a bootstrapped switching circuit with accelerated turn on, comprising: a sampling switch receiving a voltage input signal and a gate voltage; a bootstrapped voltage generator comprising a positive feedback loop to generate the gate voltage for turning on the sampling switch, said positive feedback loop comprising an input transistor receiving the voltage input signal and an output transistor outputting the gate voltage of the sampling switch; and a jump start circuit to turn on the output transistor for a limited period of time during which the input transistor is turning on at a startup of the positive feedback loop. In Example 102, Example 101 can further include the jump start circuit being coupled to a gate of the output transistor. In Example 103, Example 101-102 can further include the jump start circuit ceasing to turn on the output transistor after the limited period of time and allows the positive feedback loop to operate. In Example 104, any one of Examples 101-103 can further include: the jump start circuit comprising a transistor receiving a clock signal used for activating the positive feedback loop; and the transistor being turned on by a delayed version of the clock signal to output the clock signal to turn on the output transistor for the limited period of time. In Example 105, any one of Examples 101-104 can further include the jump start circuit further comprising two inverters for generating the delayed version of the clock signal based on the clock signal. In Example 106, any one of Examples 101-105 can further include: the jump start circuit comprising a switch for connecting a gate of the output transistor to a bias voltage for turning on the output transistor; and the switch is controlled by a control signal having a pulse to close the switch. In Example 107, any one of Examples 101-106 can further include the jump start circuit comprising a sense circuit for activating the jump start circuit based on one or more conditions of the bootstrapped switching circuit indicating the startup of the positive feedback loop. In Example 108, any one of Examples 101-107 can further include the sense circuit sensing a voltage representing a voltage level at a node in the bootstrapped switching circuit. In Example 109, any one of Examples 101-108 can further include the node is at a node in the positive feedback loop. In Example 110, any one of Examples 101-109 can further include the sense circuit comprising a comparator comparing the voltage against a predetermined threshold indicating the startup of the positive feedback loop. 
In Example 111, any one of Examples 101-110 can further include: the positive feedback loop comprising a boot capacitor; and the positive feedback loop turning on the sampling switch by bringing the gate voltage to a boosted voltage generated based on the voltage input signal and a voltage across the boot capacitor. In Example 112, any one of Examples 101-111 can further include: the input transistor being coupled to a first plate of the boot capacitor; and the output transistor being coupled to a second plate of the boot capacitor. In Example 113, any one of Examples 101-112 can further include: the input transistor being driven by the gate voltage of the sampling switch; and the positive feedback loop further comprising a first transistor coupled to a gate of the output transistor and a drain of the input transistor, wherein the first transistor is driven by the gate voltage of the sampling switch. In Example 114, any one of Examples 101-113 can further include: the positive feedback loop further comprising: an additional transistor coupled to a gate of the output transistor and a drain of the input transistor, wherein the additional transistor is controlled by a clock signal which activates the positive feedback loop. Example 115 is a method for accelerated turn on of a sampling switch, comprising: outputting, by an output transistor of a positive feedback loop, an output voltage of a bootstrapped voltage generator for driving the sampling switch; pulling a gate voltage of the output transistor to an on-voltage level to turn on the output transistor for a period of time after the positive feedback loop is activated; and ceasing the pulling of the gate voltage after the period of time. In Example 116, Example 115 can further include: the sampling switch receiving a voltage input signal; and the positive feedback loop receiving the voltage input signal at an input transistor driven by the output voltage output by the output transistor, and generates a boosted voltage signal based on the voltage input signal as the output voltage of the bootstrapped voltage generator to turn on the sampling switch when the positive feedback loop is engaged. In Example 117, Example 115 or 116 can further include pulling the gate voltage of the output transistor comprising changing the gate voltage from an off-voltage level to an on-voltage level. In Example 118, any one of Examples 115-117 can further include: allowing the positive feedback loop to bring the gate voltage to a voltage level of a voltage input signal provided to the bootstrapped voltage generator and the sampling switch after the period of time. In Example 119, any one of Examples 115-118 can further include: sensing one or more conditions indicating the positive feedback loop has been activated; and generating a control signal in response to sensing the one or more conditions, wherein the control signal triggers the pulling of the gate voltage of the output transistor. Example 120 is an apparatus comprising: sampling means receiving an input signal to be sampled and a control signal which turns the sampling means on and off; means for generating a boosted voltage based on the input signal; output means for outputting the control signal; means for bringing the control signal to the boosted voltage through positive feedback action of the control signal; and means for turning on the output means for a limited period of time at a startup of the positive feedback action. 
Example 121 is an apparatus comprising means for implementing/carrying out any one of the methods in Examples 115-119.

Variations and Implementations

A source of a transistor, e.g., a metal-oxide-semiconductor field-effect transistor (MOSFET), is where charge carriers enter a channel of the transistor. A drain of the transistor is where the charge carriers leave the channel. In some cases, the source and the drain can be considered as two terminals of the transistor. A gate of a transistor can be considered a control terminal of the transistor, because the gate can control the conductivity of the channel (e.g., an amount of current through the transistor). A back gate (body) of a transistor can also be considered as a control terminal of the transistor. Gates and back gates can be used as terminals for biasing a transistor.

Note that the activities discussed above with reference to the FIGURES are applicable to any integrated circuits that involve processing analog signals and converting the analog signals into digital data using one or more ADCs. In certain contexts, the features discussed herein relate to ADCs in general, including, e.g., ADCs of various flavors such as pipeline ADCs, delta sigma ADCs, successive approximation register ADCs, multi-stage ADCs, time-interleaved ADCs, randomized time-interleaved ADCs, etc. The features can be particularly beneficial to high speed ADCs, where input frequencies are relatively high, in the gigahertz range. The ADC can be applicable to medical systems, scientific instrumentation, wireless and wired communications systems (especially systems requiring a high sampling rate), radar, industrial process control, audio and video equipment, instrumentation, and other systems which use ADCs. The level of performance offered by high speed ADCs can be particularly beneficial to products and systems in demanding markets such as high speed communications, medical imaging, synthetic aperture radar, digital beam-forming communication systems, broadband communication systems, high performance imaging, and advanced test/measurement systems (oscilloscopes).

The present disclosure encompasses apparatuses which can perform the various methods described herein. Such apparatuses can include the circuitry illustrated by the FIGURES and described herein. Parts of various apparatuses can include electronic circuitry to perform the functions described herein. The circuitry can operate in the analog domain, the digital domain, or a mixed-signal domain. In some cases, one or more parts of the apparatus can be provided by a processor specially configured for carrying out the functions described herein (e.g., control-related functions, timing-related functions). In some cases, that processor can be an on-chip processor with the ADC. The processor may include one or more application specific components, or may include programmable logic gates which are configured to carry out the functions described herein. In some instances, the processor may be configured to carry out the functions described herein by executing one or more instructions stored on a non-transitory computer medium.

In the discussions of the embodiments herein, the parts and components can readily be replaced, substituted, or otherwise modified in order to accommodate particular circuitry needs. Moreover, it should be noted that the use of complementary electronic devices, hardware, etc. offers an equally viable option for implementing the teachings of the present disclosure.
For instance, complementary configurations using PMOS transistor(s) (p-type metal-oxide semiconductor transistor(s)) to replace NMOS transistor(s) (n-type metal-oxide semiconductor transistor(s)), or vice versa, are envisioned by the disclosure. For instance, the present disclosure/claims encompasses implementations where all NMOS devices are replaced by PMOS devices, or vice versa. Connections and the circuit can be reconfigured to achieve the same function. These implementations are equivalent to the disclosed implementations using complementary transistor devices because the implementations would perform substantially the same function in substantially the same way to yield substantially the same result. It is understood by one skilled in the art that a transistor device can be generalized as a device having three (main) terminals. Furthermore, it is understood by one skilled in the art that a switch, a transistor, or transistor device, during operation, can have a characteristic behavior of transistors corresponding to devices such as NMOS, PMOS devices (and any other equivalent transistor devices).

In one example embodiment, any number of components of the FIGURES may be implemented on a board of an associated electronic device. The board can be a general circuit board that can hold various components of the internal electronic system of the electronic device and, further, provide connectors for other peripherals. More specifically, the board can provide the electrical connections by which the other components of the system can communicate electrically. Any suitable processors (inclusive of digital signal processors, microprocessors, supporting chipsets, etc.), computer-readable non-transitory memory elements, etc. can be suitably coupled to the board based on particular configuration needs, processing demands, computer designs, etc. Other components such as external storage, additional sensors, controllers for audio/video display, and peripheral devices may be attached to the board as plug-in cards, via cables, or integrated into the board itself.

In various embodiments, the functionalities described herein may be implemented in emulation form as software or firmware running within one or more configurable (e.g., programmable) elements arranged in a structure that supports these functions. The software or firmware providing the emulation may be provided on a non-transitory computer-readable storage medium comprising instructions to allow a processor to carry out those functionalities. In another example embodiment, the components of the FIGURES may be implemented as stand-alone modules (e.g., a device with associated components and circuitry configured to perform a specific application or function) or implemented as plug-in modules into application specific hardware of electronic devices.

Note that particular embodiments of the present disclosure may be readily included in a system on-chip (SOC) package, either in part, or in whole. An SOC represents an IC that integrates components of a computer or other electronic system into a single chip. It may contain digital, analog, mixed-signal, and often radio frequency functions: all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of separate ICs located within a single electronic package and configured to interact closely with each other through the electronic package.
In various other embodiments, the error calibration functionalities may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips. It is also imperative to note that all of the specifications, dimensions, and relationships outlined herein (e.g., the number of processors, logic operations, etc.) have only been offered for purposes of example and teaching only. Such information may be varied considerably without departing from the spirit of the present disclosure, or the scope of the appended claims (if any) or examples described herein. The specifications apply only to one non-limiting example and, accordingly, they should be construed as such. In the foregoing description, example embodiments have been described with reference to particular processor and/or component arrangements. Various modifications and changes may be made to such embodiments without departing from the scope of the appended claims (if any) or examples described herein. The description and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense. Note that with the numerous examples provided herein, interaction may be described in terms of two, three, four, or more electrical components or parts. However, this has been done for purposes of clarity and example only. It should be appreciated that the system can be consolidated in any suitable manner. Along similar design alternatives, any of the illustrated components, modules, blocks, and elements of the FIGURES may be combined in various possible configurations, all of which are clearly within the broad scope of this Specification. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of electrical elements. It should be appreciated that the electrical circuits of the FIGURES and its teachings are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the electrical circuits as potentially applied to a myriad of other architectures. Note that in this Specification, references to various features (e.g., elements, structures, modules, components, steps, operations, characteristics, etc.) included in “one embodiment”, “example embodiment”, “an embodiment”, “another embodiment”, “some embodiments”, “various embodiments”, “other embodiments”, “alternative embodiment”, and the like are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments. It is also important to note that the functions described herein illustrate only some of the possible functions that may be executed by, or within, systems/circuits illustrated in the FIGURES. Some of these operations may be deleted or removed where appropriate, or these operations may be modified or changed considerably without departing from the scope of the present disclosure. In addition, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. 
Substantial flexibility is provided by embodiments described herein in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the present disclosure. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims (if any) or examples described herein. Note that all optional features of the apparatus described above may also be implemented with respect to the method or process described herein and specifics in the examples may be used anywhere in one or more embodiments. | 80,285 |
11863166 | DESCRIPTION OF EMBODIMENTS Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings. In the following description, a plurality of embodiments will be described; however, it is initially expected at the time of filing the present application that the configurations described in each embodiment may be appropriately combined as long as they are not contradictory to each other. It should be noted that the same or corresponding portions in the drawings are denoted by the same reference numerals, and the description thereof will not be repeated. First Embodiment The configuration of a power semiconductor module according to the first embodiment will be described with reference toFIGS.1to4.FIG.1is a plan view schematically illustrating the configuration of a power semiconductor module according to a first embodiment.FIG.2is a cross-sectional view schematically illustrating a part of the power semiconductor module illustrated inFIG.1.FIG.3is a plan view schematically illustrating the configuration of a semiconductor element illustrated inFIG.1.FIG.4is an equivalent circuit diagram illustrating an electrical configuration of the power semiconductor module illustrated inFIG.1. With reference toFIGS.1to3, the power semiconductor module100includes a semiconductor element D, a drain pattern4, a source pattern5, a gate control pattern6, a source control pattern7, a source wire8, a gate wire9, a source control wire10, an insulating substrate11, and a base plate12. The base plate12is a heat dissipation plate which is made of metal and is configured to dissipate heat inside the module to the outside. The insulating substrate11is made of ceramic and is disposed on the base plate12. The insulating substrate11is not necessarily made of ceramic, it may be made of a metal substrate provided with a resin insulating layer. As illustrated inFIG.2, a back surface pattern24is bonded to the back surface (the lower surface in the drawing) of the insulating substrate11by brazing or the like, and the insulating substrate11is bonded to the upper surface of the base plate12via solder23. Various wiring patterns (such as the drain Pattern4, the source pattern5, the gate control pattern6, and the source control pattern7) are bonded to the front surface (the upper surface in the drawing) of the insulating substrate11by brazing or the like. The semiconductor element D is bonded to the drain pattern4via solder25. Other bonding materials may be used instead of the solders23and25. With reference toFIG.4, the semiconductor element D includes a semiconductor switching element1, a diode2, and a gate resistor3. The semiconductor switching element1is a MOSFET (Metal Oxide Semiconductor Field Effect Transistor). The diode2is electrically connected in anti-parallel to the semiconductor switching element1. The diode2is a body diode (it may be referred to as a parasitic diode or an internal diode) formed inside the semiconductor switching element1. The gate resistor3is connected to a gate electrode of the semiconductor switching element1. The gate resistor3is an internal gate resistor formed inside the semiconductor switching element1. In the present embodiment, although the semiconductor switching element1, the diode2, and the gate resistor3are integrally formed as the semiconductor element D as described in the above, the diode2may be an external diode outside the semiconductor switching element1, and the gate resistor3may be an external gate resistor. 
Although the semiconductor switching element1is a MOSFET as described in the above, it may be a bipolar transistor such as an IGBT (Insulated Gate Bipolar Transistor). In this case, the semiconductor switching element1, the diode2and the gate resistor3may be constituted by separate chips. The semiconductor element D is made of a wide bandgap semiconductor. The wide bandgap semiconductor may be, for example, silicon carbide (SiC), gallium nitride (GaN), or diamond (C). Since the wide bandgap semiconductor is superior in voltage resistance as compared with a conventional silicon semiconductor, if the semiconductor element D is made of a wide bandgap semiconductor, it is possible for the semiconductor element D to resist the same voltage with a half thickness or less as compared with a conventional silicon semiconductor. As a result, it is possible to reduce the size of a chip constituting the semiconductor element D. Furthermore, since the thickness is made smaller as compared with a conventional silicon semiconductor, the resistance is reduced accordingly, which makes it possible to reduce the loss. A drain pad (not shown) of the semiconductor element D is bonded to the drain pattern4via the solder25(FIG.2), and a source pad27(FIG.3) of the semiconductor element D is connected to one end of the source wire8, and the other end of the source wire8is connected to the source pattern5. One end of the source control wire10is connected to the source pad27, and the other end of the source control wire10is connected to the source control pattern7. One end of the gate wire9is connected to a gate pad28(FIG.3) of the semiconductor element D, and the other end of the gate wire9is connected to the gate control pattern6. The drain pattern4and the source pattern5are connected to the drain terminal13and the source terminal14(FIG.4), respectively, and the gate control pattern6and the source control pattern7are connected to a gate terminal15and a source control terminal16(FIG.4), respectively. The drain terminal13, the source terminal14, the gate terminal15, and the source control terminal16(which are not shown inFIG.1) are exposed to the outside of the power semiconductor module100. The power semiconductor module100according to the first embodiment further includes a capacitor17, a capacitor arrangement pattern18, and a wire19. Hereinafter, the reason why the capacitor17, the capacitor arrangement pattern18, and the wire19are provided will be described. In the power semiconductor module having the configuration mentioned above, undesired gate oscillation or noise may occur in the gate voltage of the semiconductor switching element during the switching operation of the semiconductor switching element. For example, in double-pulse switching by using an L load (inductance), a large amplitude oscillation may occur in the gate-source voltage of a semiconductor switching element when the semiconductor switching element is turned on or turned off. It is considered that such oscillation is caused by the parasitic capacitance of the semiconductor switching element and the parasitic inductance of the wiring connected to the semiconductor switching element, and is called gate oscillation. The gate oscillation may damage the oxide film of the semiconductor switching element, resulting in degradation or destruction to the semiconductor switching element as well as radiation noise to the outside of the module and propagation noise to an external circuit. 
In addition, such a phenomenon may occur when a plurality of semiconductor switching elements are connected in parallel to each other. As an approach to suppressing the gate oscillation, it is known to add a gate resistor, or dispose a capacitor between the gate and the source, or provide a ferrite core in the gate wiring. The present disclosure focuses on the approach of disposing a capacitor between the gate and the source. When a capacitor is disposed between the gate and the source, a low pass filter is formed in the portion where the capacitor is disposed, whereby the gate oscillation is suppressed. On the other hand, when such a capacitor is provided, the apparent input capacitance of the semiconductor switching element increases, which decreases the switching speed of the semiconductor switching element. Therefore, depending on the product type, it is possible to prioritize the switching speed without forming a filter. Further, the performance of the formed filter is not determined only by the capacitance of the capacitor; it may also be affected by the parasitic inductance of peripheral patterns and the characteristics of the semiconductor switching element. Therefore, when a filter is to be formed, it is required to design the filter while paying attention to the parasitic inductance of the peripheral patterns or the like. However, if the manufacturing process is made different for each module depending on the performance of a filter and whether or not a filter is necessary, the manufacturing process becomes complicated, which increases the manufacturing cost.

Therefore, the power semiconductor module 100 according to the first embodiment includes a capacitor 17, a capacitor arrangement pattern 18, and a wire 19, which makes it possible to easily adjust the filter formed from the capacitor 17 and easily connect the filter to or disconnect the filter from the power semiconductor module. Specifically, the capacitor arrangement pattern 18 is arranged close to both the gate control pattern 6 and the source control pattern 7. In the present embodiment, as illustrated in FIG. 1, the gate control pattern 6 and the source control pattern 7 are arranged parallel to each other on a lateral side of the drain pattern 4 (the upper side in FIG. 1). The length of the gate control pattern 6 is shorter than the length of the source control pattern 7, and the capacitor arrangement pattern 18 is arranged in a space formed by the difference in length. One end of the capacitor 17 is bonded to the capacitor arrangement pattern 18 by solder or the like, and the other end of the capacitor 17 is bonded to the source control pattern 7 by solder or the like. The capacitor arrangement pattern 18 and the gate control pattern 6 are connected to each other by the wire 19.

With such a configuration, the capacitor 17 is arranged between the gate and the source of the semiconductor switching element 1, and an LC filter (low-pass filter) is formed by the inductance of each pattern and the wire 19 and the capacitance of the capacitor 17. The performance of the LC filter may be adjusted by changing the wiring inductance through appropriate modification of the length and/or the diameter of the wire 19. In other words, it is possible to easily adjust the performance of the filter formed from the capacitor 17 simply by changing (adjusting) the length and/or the diameter of the wire 19. When a filter is not required, e.g., so as to prioritize the switching speed of the semiconductor switching element 1, the wire 19 need not be bonded.
Thus, the filter (the capacitor17) may be easily disconnected from the circuit. In the present embodiment, a circuit composed of a passive element or an active element or a combination thereof which forms a filter together with the inductance of the wire19and each pattern may be used instead of the capacitor17or together with the capacitor17. For example, a diode having a capacitance component, a MOSFET with a variable capacitance and resistance under external control, or the like may be used instead of the capacitor17. First Modification of First Embodiment In the first embodiment, the capacitor arrangement pattern18is provided on a lateral side of the gate control pattern6which is arranged in parallel to the source control pattern7, and the capacitor arrangement pattern18is connected to the gate control pattern6by the wire19, but the capacitor arrangement pattern may be provided on a lateral side of the source control pattern7. FIG.5is a plan view schematically illustrating the configuration of a power semiconductor module according to a first modification of the first embodiment, andFIG.6is an equivalent circuit diagram illustrating an electrical configuration of the power semiconductor module illustrated inFIG.5. With reference toFIGS.5and6, the power semiconductor module100A includes a capacitor arrangement pattern31instead of the capacitor arrangement pattern18in the power semiconductor module100according to the first embodiment illustrated inFIG.1. The capacitor arrangement pattern31is arranged close to both the gate control pattern6and the source control pattern7. In the present modification, the length of the source control pattern7is shorter than the length of the gate control pattern6, and the capacitor arrangement pattern31is arranged in a space formed by the difference in length. One end of the capacitor17is bonded to the capacitor arrangement pattern31by solder or the like, and the other end of the capacitor17is bonded to the source control pattern7by solder or the like. The capacitor arrangement pattern31and the gate control pattern6are connected to each other by the wire19. With such a configuration, it is possible to easily adjust the performance of the filter formed from the capacitor17simply by changing (adjusting) the length and/or the diameter of the wire19as in the first embodiment. Further, when a filter is not required so as to prioritize the switching speed, the filter (the capacitor17) may be easily disconnected from the circuit simply by not bonding the wire19. Second Modification of First Embodiment The capacitor arrangement pattern may be provided on both a lateral side of the gate control pattern6and a lateral side of the source control pattern7. FIG.7is a plan view schematically illustrating the configuration of a power semiconductor module according to a second modification of the first embodiment, andFIG.8is an equivalent circuit diagram illustrating an electrical configuration of the power semiconductor module illustrated inFIG.7. With reference toFIGS.7and8, the power semiconductor module100B further includes a capacitor arrangement pattern31and wires19A and19B instead of the wire19in the power semiconductor module100according to the first embodiment illustrated inFIG.1. The capacitor arrangement pattern31is the same as that described in the first modification and illustrated inFIG.5. 
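As a rough illustration of how the capacitance of the capacitor 17 and the inductance contributed by the wire 19 and the surrounding patterns set the corner frequency of the low-pass filter described above, the short Python sketch below uses hypothetical component values (not taken from the embodiments) and the standard LC corner formula:

import math

# Hypothetical values for illustration only (not from the embodiments).
C_filter = 10e-9        # capacitor 17 between gate and source, in farads
L_wire_short = 5e-9     # inductance of a short/thick bond wire, in henries
L_wire_long = 15e-9     # inductance of a longer/thinner bond wire, in henries

def lc_corner_hz(L, C):
    # Corner frequency of a series-L, shunt-C low-pass filter: 1/(2*pi*sqrt(L*C)).
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

for label, L in (("short/thick wire", L_wire_short), ("long/thin wire", L_wire_long)):
    print(f"{label}: L = {L * 1e9:.0f} nH -> corner = {lc_corner_hz(L, C_filter) / 1e6:.1f} MHz")

# Lengthening or thinning the wire raises its inductance and lowers the corner
# frequency, which is the adjustment mechanism described for the wire 19.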
One end of the capacitor17is bonded to the capacitor arrangement pattern18by solder or the like, and the other end of the capacitor17is bonded to the capacitor arrangement pattern31. The capacitor arrangement pattern18and the gate control pattern6are connected to each other by the wire19A, and the capacitor arrangement pattern31and the source control pattern7are connected to each other by the wire19B. With such a configuration, it is possible to easily adjust the performance of the filter formed from the capacitor17simply by changing (adjusting) the length and/or the diameter of the wires19A and19B as in the first embodiment and the first modification. When a filter is not required so as to prioritize the switching speed, the filter (the capacitor17) may be easily disconnected from the circuit simply by not bonding at least one of the wires19A and19B. Further, according to the second modification, since two wires19A and19B are used, it is possible to increase (enlarge) the adjustable range of the inductance of the filter formed from the capacitor17. Third Modification of First Embodiment Each configuration including the capacitor17, the capacitor arrangement pattern18(31) and the wire19(19A,19B) as mentioned above is suitable for a power semiconductor module composed of a plurality of semiconductor elements that operate in parallel. In other words, in order to implement a power semiconductor module that performs switching operation at a large current, a plurality of semiconductor elements are mounted in the module and are operated in parallel. However, in this case, the gate control pattern or the like may serve as an amplification path between adjacent semiconductor elements to amplify a high-frequency portion of a gate voltage, which may cause gate oscillation to occur. Therefore, in the third modification, the capacitor17, the capacitor arrangement pattern18and the wire19illustrated inFIG.1are provided in a power semiconductor module composed of a plurality of semiconductor elements that operate in parallel. FIG.9is a plan view schematically illustrating the configuration of a power semiconductor module according to a third modification of the first embodiment, andFIG.10is an equivalent circuit diagram illustrating an electrical configuration of the power semiconductor module illustrated inFIG.9. With reference toFIGS.9and10, the power semiconductor module100C includes a plurality of semiconductor elements DA and DB instead of the semiconductor element D in the power semiconductor module100illustrated inFIG.1. The semiconductor element DB and the semiconductor element DA are connected in parallel to each other. In the present modification, it is described that the power semiconductor module100C includes two semiconductor elements DA and DB, but the number of the semiconductor elements connected in parallel may be three or more. The semiconductor element DA includes a semiconductor switching element1A, a diode2A, and a gate resistor3A. The semiconductor element DB includes a semiconductor switching element1B, a diode2B, and a gate resistor3B. The configuration of each of the semiconductor elements DA and DB is the same as that of the semiconductor element D described above. One end of a source wire8A is connected to the source pad of the semiconductor element DA, and the other end of the source wire8A is connected to the source pattern5. 
Further, one end of a source control wire10A is connected to the source pad of the semiconductor element DA, and the other end of the source control wire10A is connected to the source control pattern7. One end of a gate wire9A is connected to the gate pad of the semiconductor element DA, and the other end of the gate wire9A is connected to the gate control pattern6. One end of a source wire8B is connected to the source pad of the semiconductor element DB, and the other end of the source wire8B is connected to the source pattern5. Further, one end of a source control wire10B is connected to the source pad of the semiconductor element DB, and the other end of the source control wire10B is connected to the source control pattern7. One end of a gate wire9B is connected to the gate pad of the semiconductor element DB, and the other end of the gate wire9B is connected to the gate control pattern6. With this configuration, the semiconductor elements DA and DB are allowed to perform parallel operations. The capacitor17, the capacitor arrangement pattern18, and the wire19are the same as those provided in the power semiconductor module100illustrated inFIG.1. According to the third modification, in the power semiconductor module100C composed of a plurality of semiconductor elements that operate in parallel, it is possible to effectively suppress the gate oscillation in the semiconductor elements DA and DB by forming a filter with the capacitor17. Further, it is possible to easily adjust the performance of the filter formed from the capacitor17simply by changing (adjusting) the length and/or the diameter of the wire19. When a filter is not required so as to prioritize the switching speed, the filter (the capacitor17) may be easily disconnected from the circuit simply by not bonding the wire19. In the modification described above, the semiconductor elements DA and DB are provided instead of the semiconductor element D in the power semiconductor module100according to the first embodiment, but the semiconductor elements DA and DB may be provided instead of the semiconductor element D in the power semiconductor module100A according to the first modification or the power semiconductor module100B according to the second modification. Second Embodiment It is important to monitor an internal temperature of a power semiconductor module. In particular, in a power semiconductor module which operates at a large current, the internal temperature of a semiconductor element and/or a peripheral component may exceed an allowable temperature, resulting in deterioration or failure of the semiconductor element and/or the peripheral component. The internal temperature of the power semiconductor module is generally monitored in such manner that the thermal resistance of the power semiconductor module is determined in advance, and the internal temperature of the module is estimated from the input power and the base temperature of the module. However, due to the degree of warp of the base plate, the variation in the thickness of grease between the cooler and the module or the like, the internal temperature may not be accurately estimated. In addition, if the thermal resistance determined in advance is different from the actual thermal resistance of the module, the internal temperature may not be accurately estimated. In order to increase the accuracy of estimating the internal temperature, a thermistor is generally mounted inside the power semiconductor module to detect the internal temperature. 
The internal temperature of the module may be detected by using a detection circuit to detect the resistance value of the thermistor which changes in response to the internal temperature. It is empirically known that the gate oscillation described in the first embodiment tends to occur in a semiconductor element at a high temperature. Thus, in the second embodiment, an NTC (Negative Temperature Coefficient) thermistor is provided in the power semiconductor module. The NTC thermistor is connected in series to a capacitor17which is provided for the purpose of suppressing gate oscillation. As illustrated inFIG.11, the NTC thermistor has a large resistance value at a low temperature and a small resistance value at a high temperature. Therefore, by connecting the NTC thermistor in series to the capacitor17, it is possible to prevent the capacitor17from functioning at a low temperature and allow the capacitor17to function at a high temperature so as to suppress the gate oscillation. The internal temperature of the power semiconductor module also affects the switching speed of the semiconductor element. When the internal temperature of a MOSFET increases, the turn-on speed thereof increases due to a decrease in a gate threshold voltage Vth, which may cause the current waveform to oscillate. In addition, it is not preferable that the behavior of the semiconductor module changes in response to the internal temperature, and it is desirable that the switching speed of the semiconductor element is not affected by the internal temperature. Therefore, as described above, by connecting an NTC thermistor in series to the capacitor17, it is possible to allow the capacitor17to function at a high temperature so as to prevent the switching speed from increasing. FIG.12is a plan view schematically illustrating the configuration of a power semiconductor module according to a second embodiment, andFIG.13is an equivalent circuit diagram illustrating an electrical configuration of the power semiconductor module illustrated inFIG.12. With reference toFIGS.12and13, the power semiconductor module100D includes a semiconductor element D, a drain pattern4, a source pattern5, a gate control pattern6, a source control pattern7, a source wire8, a gate wire9, a source control wire10, an insulating substrate11, and a base plate12. The power semiconductor module100D further includes a capacitor17, a thermistor20, and an arrangement pattern32for arranging the capacitor17and the thermistor20. The thermistor20is an NTC thermistor. The arrangement pattern32is arranged close to both the gate control pattern6and the source control pattern7. In the present embodiment, the length of the source control pattern7is shorter than the length of the gate control pattern6, and the arrangement pattern32is arranged in a space formed by the difference in length. One end of the thermistor20and one end of the capacitor17are bonded to the arrangement pattern32by solder or the like, the other end of the thermistor20is bonded to the gate control pattern6, and the other end of the capacitor17is bonded to the source control pattern7. The other components except the capacitor17, the thermistor20and the arrangement pattern32are the same as those described in the first embodiment. With the configuration mentioned above, the capacitor17and the thermistor20are arranged in series between the gate and the source of the semiconductor switching element1. Thereby, an LC filter is formed. 
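A rough numerical sketch of the temperature dependence of the series NTC thermistor, using a generic Beta model with hypothetical parameter values (not taken from FIG. 11 or the embodiments), is given below in Python:

import math

# Hypothetical NTC thermistor parameters for illustration only.
R25 = 10e3     # resistance at 25 degC, in ohms
BETA = 3900.0  # Beta constant of the NTC material, in kelvin

def ntc_resistance(temp_c, r25=R25, beta=BETA):
    # Beta model: R(T) = R25 * exp(B * (1/T - 1/T25)), temperatures in kelvin.
    t_k = temp_c + 273.15
    t25_k = 25.0 + 273.15
    return r25 * math.exp(beta * (1.0 / t_k - 1.0 / t25_k))

for temp_c in (-25, 25, 75, 125):
    print(f"{temp_c:>4} degC -> R = {ntc_resistance(temp_c) / 1e3:8.2f} kOhm")

# At low temperature the large series resistance effectively blocks the
# gate-source capacitor; at high temperature the resistance collapses and the
# capacitor (and hence the filter) becomes effective.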
The LC filter prevents the capacitor17from functioning at a low temperature and allows the capacitor17to function at a high temperature, which makes it possible to effectively suppress the gate oscillation. When the temperature is high, the capacitor17functions, and thereby the input capacitance of the semiconductor switching element1increases. Accordingly, it is possible to prevent the switching speed from increasing at a high temperature, which makes it possible to prevent the behavior of the power semiconductor module100D from changing in accordance with the internal temperature. In the present embodiment, a circuit composed of a passive element or an active element or a combination thereof which forms a filter may be used instead of the capacitor17or together with the capacitor17. First Modification of Second Embodiment In the second embodiment, it is described that the arrangement pattern32is provided on a lateral side of the source control pattern7which is arranged in parallel to the gate control pattern6, but the arrangement pattern may be provided on a lateral side of the gate control pattern6. FIG.14is a plan view schematically illustrating the configuration of a power semiconductor module according to a first modification of the second embodiment. With reference toFIG.14, the power semiconductor module100E includes an arrangement pattern33instead of the arrangement pattern32in the power semiconductor module100D illustrated inFIG.12. The arrangement pattern33is arranged close to both the gate control pattern6and the source control pattern7. In the present modification, the length of the gate control pattern6is shorter than the length of the source control pattern7, and the arrangement pattern33is arranged in a space formed by the difference in length. One end of the thermistor20and one end of the capacitor17are bonded to the arrangement pattern33by solder or the like, the other end of the thermistor20is bonded to the gate control pattern6, and the other end of the capacitor17is bonded to the source control pattern7. The equivalent circuit illustrating the electrical configuration of the power semiconductor module100E is the same as that of the power semiconductor module100D according to the second embodiment illustrated inFIG.13. According to such a configuration, it is possible to prevent capacitor17from functioning at a low temperature and allow the capacitor17to function at a high temperature so as to suppress the gate oscillation as in the second embodiment. Accordingly, it is possible to prevent the switching speed of the semiconductor switching element1from increasing at a high temperature, which makes it possible to prevent the behavior of the power semiconductor module100E from changing in accordance with the temperature. Second Modification of Second Embodiment In the second embodiment which is provided with a capacitor17and a thermistor20, a wire may be further provided. Thereby, it is possible to easily adjust the performance of the filter formed from the capacitor17and easily disconnect the filter (the capacitor17) from the circuit. FIG.15is a plan view schematically illustrating the configuration of a power semiconductor module according to a second modification of the second embodiment, andFIG.16is an equivalent circuit diagram illustrating an electrical configuration of the power semiconductor module illustrated inFIG.15. 
With reference toFIGS.15and16, the power semiconductor module100F further includes arrangement patterns33and36, and wires19A and19B in the power semiconductor module100D according to the second embodiment illustrated inFIG.12. The arrangement pattern36is arranged between the arrangement pattern32and the source control pattern7. The arrangement pattern33is the same as that described in the first modification and illustrated inFIG.14. One end of the thermistor20and one end of the capacitor17are bonded to the arrangement pattern32, the other end of the thermistor20is bonded to the arrangement pattern33, and the other end of the capacitor17is bonded to the arrangement pattern36. Further, the arrangement pattern33and the gate control pattern6are connected to each other by the wire19A, and the arrangement pattern36and the source control pattern7are connected to each other by the wire19B. According to the second modification, in addition to the effect described in the second embodiment, it is possible to easily adjust the performance of the filter formed from the capacitor17simply by changing (adjusting) the length and/or the diameter of the wires19A and19B. When a filter is not required so as to prioritize the switching speed, the filter (the capacitor17) may be easily disconnected from the circuit simply by not bonding at least one of the wires19A and19B. Further, since two wires19A and19B are used, it is possible to increase (enlarge) the adjustable range of the inductance of the filter formed from the capacitor17. Although two wires19A and19B are provided in the present modification as described above, only one of the wires19A and19B may be provided. In other words, the arrangement pattern36and the wire19B may not be provided, and the other end of the capacitor17may be directly bonded to the source control pattern7. Alternatively, the arrangement pattern33and the wire19A may not be provided, and the other end of the thermistor20may be directly bonded to the gate control pattern6. Third Modification of Second Embodiment In the second modification, it is described that the additional arrangement pattern36is arranged between the arrangement pattern32and the source control pattern7, but the additional arrangement pattern may be arranged between the arrangement pattern33and the gate control pattern6. FIG.17is a plan view schematically illustrating the configuration of a power semiconductor module according to a third modification of the second embodiment. With reference toFIG.17, the power semiconductor module100G includes an arrangement pattern38instead of the arrangement pattern36in the power semiconductor module100F according to the second modification illustrated inFIG.15. The arrangement pattern38is arranged between the arrangement pattern33and the gate control pattern6. One end of the thermistor20and one end of the capacitor17are bonded to the arrangement pattern33, the other end of the thermistor20is bonded to the arrangement pattern38, and the other end of the capacitor17is bonded to the arrangement pattern32. Further, the arrangement pattern38and the gate control pattern6are connected to each other by the wire19A, and the arrangement pattern32and the source control pattern7are connected to each other by the wire19B. The equivalent circuit illustrating the electrical configuration of the power semiconductor module100G is the same as that of the power semiconductor module100F according to the second modification of the second embodiment illustrated inFIG.16. 
According to the third modification, the same effect as that of the second modification may be obtained. Although it is described in the above that two wires19A and19B are provided in the present modification, only one of the wires19A and19B may be provided. In other words, the arrangement pattern32and the wire19B may not be provided, and the other end of the capacitor17may be directly bonded to the source control pattern7. Alternatively, the arrangement pattern38and the wire19A may not be provided, and the other end of the thermistor20may be directly bonded to the gate control pattern6. Fourth Modification of Second Embodiment In the fourth modification, a capacitor17and a thermistor20are provided in a power semiconductor module including a plurality of semiconductor elements that operate in parallel. FIG.18is a plan view schematically illustrating the configuration of a power semiconductor module according to a fourth modification of the second embodiment, andFIG.19is an equivalent circuit diagram illustrating an electrical configuration of the power semiconductor module illustrated inFIG.18. With reference toFIGS.18and19, the power semiconductor module100H includes a plurality of semiconductor elements DA and DB instead of the semiconductor element D in the power semiconductor module100D illustrated inFIG.12. The semiconductor elements DA and DB are the same as those described with reference toFIG.9. According to the fourth modification, it is also possible for the power semiconductor module100H including a plurality of semiconductor elements operating in parallel to obtain the same effect as that of the second embodiment. Although it is described in the above that the semiconductor elements DA and DB are provided instead of the semiconductor element D in the power semiconductor module100D according to the second embodiment, the semiconductor elements DA and DB may be provided instead of the semiconductor element D in the power semiconductor module100E according to the first modification, the power semiconductor module100F according to the second modification, or the power semiconductor module100G according to the third modification. In the second embodiment and the first to fourth modifications thereof, the semiconductor element D (or the semiconductor element DA or the semiconductor element DB) is made of a wide bandgap semiconductor (such as a SiC-MOSFET). Since it is empirically known that gate oscillation is likely to occur in the wide bandgap semiconductor at a high temperature, an NTC thermistor having a small resistance value at a high temperature is used as the thermistor20so as to form a filter with the capacitor17at a high temperature. On the other hand, when a semiconductor element is made of a conventional silicon semiconductor (such as a Si-IGBT), gate oscillation may occur in the silicon semiconductor at a low temperature. Therefore, the gate oscillation is more prominent at a low temperature than at a high temperature. When it is desired to suppress the gate oscillation generated at a low temperature, as illustrated inFIG.20, a PTC (Positive Temperature Coefficient) thermistor having a small resistance at a low temperature and a large resistance at a high temperature may be used as the thermistor20. Third Embodiment The third embodiment discloses a circuit for detecting the temperature of the thermistor20in the power semiconductor module according to the second embodiment and the modifications thereof in which the capacitor17and the thermistor20are provided.
FIG.21is a circuit diagram illustrating an electrical configuration of a power semiconductor element according to a comparative example.FIG.21illustrates a conventional temperature detection circuit as a comparative example of the temperature detection circuit for detecting the temperature of the thermistor20. With reference toFIG.21, the power semiconductor device500includes a power semiconductor module100Z, a driving circuit40, and a temperature detection circuit42. The power semiconductor module100Z includes a semiconductor switching element1, a diode2, a gate resistor3, a capacitor17, and a thermistor20. The capacitor17is electrically connected between the gate electrode and the source electrode of the semiconductor switching element1. The driving circuit40is connected to a gate terminal15and a source control terminal16of the power semiconductor module100Z. The gate terminal15is electrically connected to a gate control pattern (not shown) of the power semiconductor module100Z, and the source control terminal16is electrically connected to a source control pattern (not shown) of the power semiconductor module100Z. The driving circuit40generates a gate voltage for driving the semiconductor switching element1. One end of the thermistor20is connected to the source electrode of the semiconductor switching element1and the other end thereof is connected to the thermistor detection terminal35. The temperature detection circuit42is connected to the source control terminal16and the thermistor detection terminal35. The temperature detection circuit42includes a voltage source44, a resistance element46, and a resistance value detection circuit48. The voltage source44supplies a constant voltage. The resistance value detection circuit48calculates the resistance value of the thermistor20by detecting a voltage of the resistance element46. Specifically, since the voltage of the resistance element46changes in response to the resistance value of the thermistor20, if the relationship between the voltage of the resistance element46and the resistance value of the thermistor20is determined in advance, the resistance value of the thermistor20may be determined from the detection voltage of the resistance value detection circuit48. Then, the resistance value of the thermistor20may be converted into a temperature by using the relationship illustrated inFIG.11. However, the temperature detection circuit42requires a dedicated voltage source44to detect the resistance value of the thermistor20. Therefore, the third embodiment discloses a temperature detection circuit that does not use such a voltage source. FIG.22is a circuit diagram illustrating an electrical configuration of a power semiconductor device according to a third embodiment. With reference toFIG.22, the power semiconductor device110includes a power semiconductor module100D (seeFIGS.12and13), a driving circuit40, and a temperature detection circuit50. The power semiconductor module100D is the same as that described in the second embodiment (seeFIGS.12and13). The driving circuit40is the same as that described with reference toFIG.21. A temperature detection circuit50is connected to a detection terminal34and a source control terminal16of the power semiconductor module100D. The detection terminal34is connected to a connection node between the thermistor20and the capacitor17. Specifically, the detection terminal34is electrically connected to the arrangement pattern32illustrated inFIG.12. 
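Returning briefly to the comparative example ofFIG.21, the resistance of the thermistor20can be recovered from the voltage measured across the resistance element46. The sketch below assumes a simple series connection of the voltage source44, the thermistor20and the resistance element46, which is one plausible reading of the circuit; it only illustrates the computation and is not the patent's implementation:

```python
def thermistor_resistance_from_divider(v_source, v_r46, r46_ohm):
    """Back out the thermistor resistance from the voltage across resistance element 46,
    assuming a simple series connection of the source, the thermistor and element 46."""
    if v_r46 <= 0 or v_r46 >= v_source:
        raise ValueError("measured voltage must lie strictly between 0 and the source voltage")
    i = v_r46 / r46_ohm            # the same current flows through the thermistor
    return (v_source - v_r46) / i  # thermistor voltage divided by the common current

# Example: 5 V source, 1 kOhm sense resistor, 1.25 V measured across it -> 3 kOhm thermistor.
print(thermistor_resistance_from_divider(5.0, 1.25, 1_000.0))
```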
The source control terminal16is electrically connected to the source control pattern7illustrated inFIG.12. The temperature detection circuit50detects an inter-terminal voltage of the capacitor17, and detects a temperature of the thermistor20based on a change in the detected voltage. Hereinafter, a method of detecting the temperature of the thermistor20by the temperature detection circuit50will be described in detail. FIG.23is a diagram illustrating the relationship between a time constant and a temperature of a RC circuit composed of the thermistor20and the capacitor17illustrated inFIG.22. With reference toFIG.23, as the temperature increases, the time constant of the RC circuit composed of the thermistor20and the capacitor17decreases, and as the temperature decreases, the time constant of the RC circuit increases. FIG.24is a waveform diagram illustrating a change in an inter-terminal voltage Vc of the capacitor17in response to a change in a gate-source voltage Vgs. With reference toFIG.24, the gate-source voltage Vgs (hereinafter will be simply referred to as the “gate voltage Vgs”) is simply illustrated as a rectangular wave, and the gate voltage Vgs rises at timing t1. In the inter-terminal voltage Vc (hereinafter will be simply referred to as the “voltage Vc”) of the capacitor17, a solid line k1indicates a change in the voltage Vc at a relatively low temperature, and a dotted line k2indicates a change in the voltage Vc at a relatively high temperature. As illustrated inFIG.23, as the temperature decreases, the time constant of the RC circuit increases, and thereby, the voltage Vc should reach a predetermined voltage at timing t3. However, as the temperature increases, the time constant of the RC circuit decreases, and thereby, the voltage Vc reaches the predetermined voltage at timing t2earlier than timing t3. Accordingly, if the relationship between the temperature and the time from a timing when the gate voltage Vgs changes to a timing when the voltage Vc reaches the predetermined voltage is determined in advance, the temperature detection circuit50may detect the temperature of the thermistor20by detecting an interval from timing t1when the gate voltage Vgs changes to a timing when the voltage Vc reaches the predetermined voltage on the basis of the relationship. According to the third embodiment, it is possible to detect the temperature of the thermistor20without using a dedicated voltage source44as illustrated inFIG.21. Although in the above, it is described that the temperature detection circuit50is connected to the detection terminal34and the source control terminal16of the power semiconductor module100D, the temperature detection circuit50may be connected to the source terminal14instead of the source control terminal16. Even with such a configuration, the temperature of the thermistor20may be detected by using the temperature detection circuit50. Although in the above, it is described that the driving circuit40and the temperature detection circuit50are provided as being separate from the power semiconductor module100D, the driving circuit40and/or the temperature detection circuit50may be built in the power semiconductor module100D. 
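A hedged way to picture the third embodiment's timing-based detection is a first-order RC model: the thermistor resistance sets the time constant, so the delay until the capacitor voltage Vc reaches the predetermined threshold encodes the temperature. The formula and all component values below are illustrative assumptions (the real circuit also contains the gate resistor3and parasitics, which are ignored here):

```python
import math

def time_to_threshold(r_thermistor_ohm, c_farad, v_gate_step, v_threshold):
    """First-order RC model: time for the capacitor voltage Vc to reach a predetermined
    threshold after the gate voltage steps up (gate resistor and parasitics ignored)."""
    tau = r_thermistor_ohm * c_farad
    return tau * math.log(v_gate_step / (v_gate_step - v_threshold))

def temperature_from_delay(measured_delay_s, calibration):
    """Look up a temperature from the measured delay using a pre-determined calibration table
    of (delay_s, temp_c) pairs sorted by delay (shorter delay -> higher temperature)."""
    for delay_s, temp_c in calibration:
        if measured_delay_s <= delay_s:
            return temp_c
    return calibration[-1][1]

# A hot thermistor (low R) charges the capacitor faster than a cold one (high R).
print(time_to_threshold(1_000.0, 10e-9, 15.0, 7.5))    # high temperature: short delay
print(time_to_threshold(20_000.0, 10e-9, 15.0, 7.5))   # low temperature: long delay

calib = [(1e-5, 150.0), (5e-5, 75.0), (2e-4, 25.0)]     # assumed calibration data
print(temperature_from_delay(6.93e-6, calib))           # -> 150.0
```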
Further, although it is described in the above that the power semiconductor device110includes the power semiconductor module100D according to the second embodiment, it may include any of the power semiconductor module100E according to the first modification, the power semiconductor module100F according to the second modification, the power semiconductor module100G according to the third modification, and the power semiconductor module100H according to the fourth modification of the second embodiment instead of the power semiconductor module100D. Modification of Third Embodiment Although in the third embodiment it is described that the temperature detection circuit50detects the temperature of the thermistor20by detecting the voltage Vc of the capacitor17, the temperature detection circuit50may be configured to detect the temperature of the thermistor20by detecting the voltage of the thermistor20. FIG.25is a circuit diagram illustrating an electrical configuration of a power semiconductor element according to a modification of the third embodiment. With reference toFIG.25, the power semiconductor device110A is different from the power semiconductor device110according to the third embodiment illustrated inFIG.22in that the temperature detection circuit50is connected to the gate terminal15and the detection terminal34of the power semiconductor module100D. The temperature detection circuit50detects an inter-terminal voltage of the thermistor20, and detects a temperature of the thermistor20based on a change in the detected voltage. Hereinafter, a method of detecting the temperature of the thermistor20by the temperature detection circuit50according to the present modification will be described. FIG.26is a waveform diagram illustrating a change in an inter-terminal voltage Vt of the thermistor20in response to a change in the gate-source voltage Vgs. With reference toFIG.26, the gate voltage Vgs is simply illustrated as a rectangular wave, and the gate voltage Vgs rises at timing t11. In the inter-terminal voltage Vt (hereinafter simply referred to as the "voltage Vt") of the thermistor20, a solid line k3indicates a change in the voltage Vt at a relatively low temperature, and a dotted line k4indicates a change in the voltage Vt at a relatively high temperature. As illustrated inFIG.23, as the temperature decreases, the time constant of the RC circuit increases, and thereby, the voltage Vt should reach a predetermined voltage at timing t13. However, as the temperature increases, the time constant of the RC circuit decreases, and thereby, the voltage Vt reaches the predetermined voltage at timing t12earlier than timing t13. Accordingly, if the relationship between the temperature and the time from a timing when the gate voltage Vgs changes to a timing when the voltage Vt reaches the predetermined voltage is determined in advance, the temperature detection circuit50may detect the temperature of the thermistor20by detecting an interval from timing t11when the gate voltage Vgs changes to a timing when the voltage Vt reaches the predetermined voltage on the basis of the relationship. According to the modification of the third embodiment, it is also possible to detect the temperature of the thermistor20without using a dedicated voltage source44as illustrated inFIG.21.
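Under the same assumed first-order model, the modification simply observes the complementary waveform: the thermistor voltage Vt decays from the gate step toward zero, so the time until Vt falls below a threshold again encodes the temperature. A brief sketch (values are again illustrative):

```python
import math

def time_until_vt_drops_to(v_threshold, r_thermistor_ohm, c_farad, v_gate_step):
    """Same assumed first-order model as before, but observed across the thermistor:
    Vt decays as Vgs * exp(-t / (R*C)) after the gate step, so the threshold crossing
    again encodes the thermistor temperature."""
    tau = r_thermistor_ohm * c_farad
    return tau * math.log(v_gate_step / v_threshold)

print(time_until_vt_drops_to(7.5, 1_000.0, 10e-9, 15.0))    # hot: Vt collapses quickly
print(time_until_vt_drops_to(7.5, 20_000.0, 10e-9, 15.0))   # cold: Vt stays high longer
```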
Also in the present modification, although it is described that the power semiconductor device110A includes the power semiconductor module100D according to the second embodiment, it may include any of the power semiconductor module100E according to the first modification, the power semiconductor module100F according to the second modification, the power semiconductor module100G according to the third modification, and the power semiconductor module100H according to the fourth modification of the second embodiment instead of the power semiconductor module100D. Fourth Embodiment As described above, it is empirically known that gate oscillation tends to occur in a semiconductor element at a high temperature; actually, however, the gate oscillation does not always occur at a high temperature, but tends to occur at the time of switching the semiconductor switching element1. Therefore, the gate oscillation may occur at the time of switching the semiconductor switching element1even at a low temperature. On the other hand, as described above, a filter which is formed from the capacitor17for suppressing gate oscillation may disadvantageously decrease the switching speed, and it is not desirable for the filter to operate constantly. Thus, the fourth embodiment discloses a filter capable of functioning only at a timing of switching the semiconductor switching element1where gate oscillation is likely to occur. FIG.27is an equivalent circuit diagram illustrating an electrical configuration of a power semiconductor module according to a fourth embodiment. With reference toFIG.27, the power semiconductor module100I includes a semiconductor switching element1, a diode2, a gate resistor3, a capacitor17, and a filter-forming switching element60. The filter-forming switching element60(hereinafter may be simply referred to as the "switching element60") is connected in series to the capacitor17, and the capacitor17and the switching element60connected in series are electrically connected between the gate electrode and the source electrode of the semiconductor switching element1. The switching element60is driven by a driving circuit (not shown) connected to an external terminal62. When the switching element60is turned off, since the switching element60has a high resistance, the filter formed from the capacitor17does not function. On the other hand, when the switching element60is turned on, since the resistance value of the switching element60decreases, the filter formed from the capacitor17functions. FIG.28is a waveform diagram illustrating the relationship between the gate voltage Vgs and operations of the filter-forming switching element60. With reference toFIG.28, in the present embodiment, the gate voltage Vgs begins to rise at timing t21, and the gate oscillation occurs immediately after timing t22at which the rise of the gate voltage Vgs ends. At timing t24, the gate voltage Vgs begins to decrease, and the gate oscillation also occurs immediately after timing t25at which a Miller period ends. Therefore, in the present embodiment, the switching element60is turned on during a period (a first period) from timing t22to timing t23during which the semiconductor switching element1is turned on and the gate oscillation is likely to occur, and during a period (a second period) from timing t25to timing t26during which the semiconductor switching element1is turned off and the gate oscillation is likely to occur, and the switching element60is turned off during the other periods.
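The timing behavior ofFIG.28can be pictured with a small control sketch: the filter-forming switching element60is enabled only inside windows placed just after the rising and falling gate transitions. The function below is purely illustrative; the window offsets would in practice be determined in advance by an evaluation experiment, as noted in the following paragraph, and the names and values are assumptions:

```python
def filter_switch_state(t, t_on_edge, t_off_edge, on_window=(1e-6, 3e-6), off_window=(1e-6, 3e-6)):
    """Return True if the filter-forming switching element should be ON at time t.
    Windows are (delay_after_edge, end_after_edge) pairs, e.g. (t22 - t21, t23 - t21);
    the values would be determined in advance by an evaluation experiment."""
    in_turn_on_window = t_on_edge is not None and (t_on_edge + on_window[0]) <= t <= (t_on_edge + on_window[1])
    in_turn_off_window = t_off_edge is not None and (t_off_edge + off_window[0]) <= t <= (t_off_edge + off_window[1])
    return in_turn_on_window or in_turn_off_window

# Example: gate commanded high at t=0, low at t=50 us; the filter is enabled only around the transients.
for t_us in (0.5, 2.0, 10.0, 51.5, 60.0):
    print(t_us, filter_switch_state(t_us * 1e-6, 0.0, 50e-6))
```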
As a result, the capacitance of the filter formed from the capacitor17becomes great only during the period from the timing t22to the timing t23and the period from the timing t25to the timing t26during which the gate oscillation is likely to occur, and the filter functions only during these periods. On the other hand, during the other periods, the switching element60is turned off, the capacitance of the filter becomes small, and thereby, the filter does not function. Thus, it is possible to effectively suppress the gate oscillation by adjusting the ON period of the filter-forming switching element60in response to the timing at which the gate oscillation occurs. The timing at which the gate oscillation occurs may be determined in advance by an evaluation experiment or the like. FIG.29is a plan view schematically illustrating the configuration of a power semiconductor module according to the fourth embodiment. With reference toFIG.29, the power semiconductor module100I includes a semiconductor element D, a drain pattern4, a source pattern5, a gate control pattern6, a source control pattern7, a source wire8, a gate wire9, a source control wire10, an insulating substrate11, and a base plate12. The power semiconductor module100I further includes a capacitor17, a switching element60, a drain pattern70, a gate control pattern76, and wires78and80. One end of the capacitor17is bonded to the drain pattern70by solder or the like, and the other end of the capacitor17is bonded to the gate control pattern6. A drain pad (not shown) of the switching element60is bonded to the drain pattern70by solder or the like. A gate pad72of the switching element60is connected to the gate control pattern76by the wire78, and a source pad74of the switching element60is connected to the source control pattern7by the wire80. The other components are the same as those of the power semiconductor module100illustrated inFIG.1. With such a configuration, the electric circuit illustrated inFIG.27is implemented. As described above, according to the fourth embodiment, it is possible to adjust the ON period of the filter-forming switching element60in response to the timing at which gate oscillation occurs, which makes it possible to effectively suppress the gate oscillation. Fifth Embodiment The fifth embodiment relates to a power converter to which the power semiconductor module according to each of the embodiments described above and the modifications thereof is applied. The present invention is not particularly limited to a specific power converter, and in the fifth embodiment, the power converter will be described as a three-phase inverter to which the present invention is applied. FIG.30is a block diagram illustrating the configuration of a power conversion system to which a power converter according to the fifth embodiment is applied. With reference toFIG.30, the power conversion system includes a power converter200, a load300, and a power supply400. The power supply400is a DC power supply, and is configured to supply DC power to the power converter200. The power supply400may be any power supply. For example, the power supply400may be a DC system, a solar cell or a storage battery, or may be a rectifier circuit or an AC/DC converter connected to an AC system. The power supply400may also be a DC/DC converter configured to convert DC power output from a DC system into another DC power.
The power converter200is a three-phase inverter connected between the power supply400and the load300, and is configured to convert DC power supplied from the power supply400into AC power and supply the AC power to the load300. The power converter200includes a main conversion circuit201configured to convert DC power into AC power and output the AC power, and a control circuit202configured to output a control signal for controlling the main conversion circuit201to the main conversion circuit201. The load300is a three-phase electric motor driven by the AC power supplied from the power converter200. The load300is not particularly limited; it may be an electric motor mounted on various electric apparatuses such as an electric motor for a hybrid vehicle, an electric vehicle, a railroad vehicle, an elevator, or an air conditioner. Hereinafter, the power converter200will be described in detail. The main conversion circuit201includes switching elements (not shown) and freewheel diodes (not shown). When the switching element is switched, the main conversion circuit201converts DC power supplied from the power supply400into AC power and supplies the AC power to the load300. The main conversion circuit201may have various circuit configurations. The main conversion circuit201according to the fifth embodiment is a two-level three-phase full bridge circuit, and may include six switching elements and six freewheel diodes connected in anti-parallel to the switching elements, respectively. Each switching element and each diode provided in the main conversion circuit201may be configured to include a power semiconductor module100(100A to100I) according to any one of the embodiments mentioned above and modifications thereof. Among the six switching elements, every two switching elements are connected in series so as to form upper and lower arms, and each pair of upper and lower arms forms one phase (U phase, V phase and W phase) of the full bridge circuit. The output terminals of the upper and lower arms, in other words, the three output terminals of the main conversion circuit201, are connected to the load300. The main conversion circuit201includes a driving circuit (not shown) for driving each switching element. As described in the third embodiment, the driving circuit may be provided separately from the power semiconductor module100(100A to100I) or may be built in the power semiconductor module100(100A to100I). The driving circuit generates a driving signal for driving the switching elements provided in the main conversion circuit201, and supplies the driving signal to control electrodes of the switching elements included in the main conversion circuit201. Specifically, the driving circuit, in accordance with a control signal from the control circuit202, outputs a driving signal for turning on the switching element and a driving signal for turning off the switching element to the control electrode of each switching element. In the case of maintaining the switching element in the ON state, the driving signal is a voltage signal (ON signal) equal to or higher than the threshold voltage of the switching element, and in the case of maintaining the switching element in the OFF state, the driving signal is a voltage signal (OFF signal) equal to or lower than the threshold voltage of the switching element. The control circuit202controls the switching elements of the main conversion circuit201so as to supply a desired power to the load300.
Specifically, the control circuit202calculates a time (ON time) during which each switching element of the main conversion circuit201should be turned on, based on the power to be supplied to the load300. For example, the main conversion circuit201may be controlled by pulse width modulation (PWM) control which modulates the ON time of the switching element based on a voltage to be output. Then, the control circuit202outputs a control command (control signal) to the driving circuit included in the main conversion circuit201so that an ON signal is output to each switching element which should be turned on at each time and an OFF signal is output to each switching element which should be turned off at each time. The driving circuit outputs an ON signal or an OFF signal to the control electrode of each switching element as the driving signal in accordance with the control signal. In the power converter according to the fifth embodiment, since the power semiconductor module100(100A to100I) according to each of the embodiments mentioned above and the modifications thereof is applied to the switching elements and diodes of the main conversion circuit201, the same effect as that of the power semiconductor modules100(100A to100I) may be achieved. In the present embodiment, as an example, it is described that the present invention is applied to a two-level three-phase inverter, but the present invention is not limited thereto; it may be applied to various power converters. Although the power converter according to the present embodiment is a two-level power converter, the power converter according to the present embodiment may be a three-level power converter or a multi-level power converter. When the power converter supplies power to a single-phase load, the present invention may be applied to a single-phase inverter. When the power converter is configured to supply power to a DC load or the like, the present invention may be applied to a DC/DC converter or an AC/DC converter. The power converter to which the present invention is applied is not limited to the case where the load is an electric motor, and it may be used as, for example, a power supply for an electric discharge machine or a laser machine, or a power supply for an induction cooker or a non-contact power supply system. The power converter to which the present invention is applied may be used as a power conditioner for a solar power generation system, a power storage system, or the like. Although in each of the embodiments mentioned above and the modifications thereof it is described that the semiconductor element D (DA, DB) is made of a wide bandgap semiconductor, the present invention is not limited to a power semiconductor module and a power converter in which the semiconductor element is made of a wide bandgap semiconductor; it also encompasses a power semiconductor module and a power converter made of conventional silicon-based semiconductor elements. The embodiments disclosed herein are intended to be carried out in any appropriate combination unless they are technically inconsistent with each other. The embodiments disclosed herein are merely by way of example and are not restrictive. The scope of the present invention is defined by the terms of the claims, rather than the description above, and is intended to include any modifications within the meaning and scope equivalent to the terms of the claims.
REFERENCE SIGNS LIST 1,1A,1B: semiconductor switching elements;2,2A,2B: diode;3,3A,3B: gate resistor;4,70: drain pattern;5: source pattern;6,76: gate control pattern;7: source control pattern;8,8A,8B: source wire;9,9A,9B: gate wire;10,10A,10B: source control wire;11: insulating substrate;12: base plate;13: drain terminal;14: source terminal;15: gate terminal;16: source control terminal;17: capacitor;18,31: capacitor arrangement pattern;19,19A,19B,78,80: wire;20: thermistor;23,25: solder;24: back surface pattern;26: insulating film;27,74: source pad;28,72: gate pad;32,33,36,38: arrangement pattern;34: detection terminal;35: thermistor detection terminal;40: driving circuit;42,50: temperature detection circuit;44: voltage source;46: resistance element;48: resistance value detection circuit;60: filter-forming switching element;62: external terminal;100,100A to100I,100Z: power semiconductor module;110,110A,500: power semiconductor device;200: power converter;201: main conversion circuit;202: control circuit;300: load;400: power supply | 59,697 |
11863167 | DETAILED DESCRIPTION Some preferred embodiments of the present invention will now be described in greater detail. However, it should be recognized that the preferred embodiments of the present invention are provided for illustration rather than limitation. In addition, the present invention can be practiced in a wide range of other embodiments besides those explicitly described, and the scope of the present invention is not expressly limited except as specified in the accompanying claims. Conventionally, as depicted inFIG.1, an external inductive load L1connected to the high voltage power supply line VIN is driven by an external main power switching transistor102and a suitable drive circuit10. The drive circuit10includes a first transistor101and a second transistor103, both in switch form, to act as an upper drive transistor and a lower drive transistor respectively, and the drive circuit structure is used to drive the switching action of the main power switching transistor102. As known to the skilled person in the art, MOSFET devices have been increasingly used in electronic circuits due to their characteristic of being easily driven and their ability to handle high currents and voltages at high switching frequencies. The main power switching transistor102is generally a power MOSFET when process conditions permit. The power dissipation in the main power switching transistor102can be reduced by increasing the switching speed, but this increases the generation of electromagnetic interference (EMI). The gate terminal G of the main power switching transistor102, a power MOSFET, is driven by the upper and lower drive transistors, which are part of the drive circuit10and are controlled through the control signals S1and S2respectively. The external main power switching transistor102has a grounded source terminal S and a drain terminal D connected to the external inductive load L1, and the upper drive transistor (first transistor)101and the lower drive transistor (second transistor)103cannot be turned on or off at the same time. At the moment when the upper drive transistor is turned on, the upper drive transistor charges the gate capacitance CG(including the gate-source capacitance CGSand the gate-drain capacitance CGD) of the main power switching transistor102with its maximum current capability, so that the main power switching transistor102can be rapidly turned on, which will cause the drain-source current of the external main power switching transistor102to have a large rising slope during the turn-on process. That means that as voltage and current slopes increase during switching transients, so do EMI levels; therefore, power drive circuit design requires a reasonable compromise between device characteristics, power loss, and EMI. The switching speed of a MOSFET device is strictly related to the amount of the charge being transferred into the dynamic capacitance CGwithin the gate terminal, which is equal to the sum of the dynamic capacitances between the gate-source and gate-drain, i.e. CG=CGS+CGD.
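As a rough, hedged illustration of the relation between gate charge and switching speed stated above, the sketch below estimates a turn-on time as the gate charge divided by the available drive current, using CG=CGS+CGD; it is a simplified first-order calculation with assumed values, not a device model from the disclosure (real MOSFETs are normally characterized by a total gate charge including the Miller plateau):

```python
def approx_switching_time(c_gs_farad, c_gd_farad, delta_vgs, drive_current_a):
    """Rough first-order estimate: the time to (dis)charge the gate is the gate charge
    divided by the available drive current. Real devices are usually characterized by a
    total gate charge Q_G rather than constant capacitances."""
    c_g = c_gs_farad + c_gd_farad          # CG = CGS + CGD as stated above
    gate_charge = c_g * delta_vgs          # simplified, ignores the Miller charge plateau
    return gate_charge / drive_current_a

# A stronger drive current -> faster switching -> lower switching loss but higher di/dt and EMI.
print(approx_switching_time(2e-9, 0.5e-9, 10.0, 1.0))   # ~25 ns with a 1 A driver
print(approx_switching_time(2e-9, 0.5e-9, 10.0, 0.1))   # ~250 ns with a 0.1 A driver
```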
The present invention aims to propose a drive circuitry for the power switching transistor of a switching power supply, which can provide a compromise between the driving force and the electromagnetic compatibility (EMC) of the drive circuitry for the power switching transistor through a reasonable driving stage circuit layout design, while ensuring the fast switching of the main power switching transistor and improving its efficiency. In the switching power supply circuit, the EMI performance of the power supply is not only directly related to the PCB layout and transformer structure, but the main power switching transistor has a great influence on the electromagnetic interference (EMI) performance of the power supply as well. In the state of fast cyclic ON-OFF switching, the drain-source current changes rapidly. If the driving ability of the drive circuit is too strong, the di/dt of the external power switching transistor will be too large, which will lead to poor EMI performance. If the driving ability of the drive circuit is too weak, it is easy to increase the power dissipated in the drive transistor and then burn it out. Therefore, a balance or compromise between the driving force and the electromagnetic compatibility (EMC) of the drive circuitry needs to be found when designing the driving circuit. Generally, the upper arm of the driver stage only uses NMOS transistors, so the voltage value of the power supply VCC has a great influence on the driving ability, because the working range of VCC is generally relatively large (8V-28V in this disclosure). If the driving force is set appropriately for VCC equal to 8V, the driving force will be too strong when VCC becomes larger. In the drive circuit design proposed in this invention, the upper arm of the driver stage utilizes NMOS and PMOS transistors coupled in parallel. When VCC is lower than a certain voltage (10V in this invention), the driving force of the NMOS transistor is relatively weak, and the NMOS and PMOS transistors are driven together. When VCC is greater than a certain voltage (10V in the present invention), the driving force of the NMOS transistor is strong, so the PMOS transistor is designed to turn on only after the driving output OUT is slightly greater than the Miller plateau voltage. In this way, our design finds a balance/compromise between driving force and EMI. Please refer toFIG.2, which illustrates a schematic drive circuit200for a power switching transistor according to the design concept mentioned in the preceding paragraph. According to a preferred embodiment, the external inductive load L1connected to the high voltage power supply line VIN is driven by the external power switching transistor M0and the driving circuit200. The drive circuit200at least includes a first pull-up drive transistor NMOS_1, a second pull-up drive transistor PMOS_1, a first pull-down drive transistor NMOS_2, and a resistor divider network211, where the first pull-up drive transistor NMOS_1(which is an NMOS transistor) is connected in parallel with the second pull-up drive transistor PMOS_1(which is a PMOS transistor) to act as the upper arm of the driver stage, and the drain of the first pull-up drive transistor NMOS_1and the source of the second pull-up drive transistor PMOS_1are coupled to the power supply VCC.
The first pull-down drive transistor NMOS_2is connected in series with the first pull-up drive transistor NMOS_1to act as the lower arm of the driver stage, where the drain of the first pull-down drive transistor NMOS_2(which is an NMOS transistor) is connected to the source of the first pull-up drive transistor NMOS_1and the drain of the second pull-up drive transistor PMOS_1. In one embodiment, the first pull-down drive transistor NMOS_2can be coupled to the source of the first pull-up drive transistor NMOS_1directly or serially through a resistor R1, and the source of the first pull-down drive transistor NMOS_2is grounded (GND). The resistor divider network211is a voltage divider circuit composed of a plurality of resistors R2, . . . , Rn connected in series. One end of the voltage divider network211is coupled to the drain of the first pull-down drive transistor NMOS_2and the drain of the second pull-up drive transistor PMOS_1, the other end of the voltage divider network211is grounded (GND), and the voltage divider network211outputs the sampling voltage VOUT_DIVat the output terminal OUT_DRV when the drive circuit200is operating. The external power switching transistor M0can be, but is not limited to, an NMOS transistor, the gate of which is connected to the source of the first pull-up drive transistor NMOS_1, the drain of the second pull-up drive transistor PMOS_1and the drain of the first pull-down drive transistor NMOS_2. In one embodiment, the source of the external power switching transistor M0is grounded, and the external inductive load L2connected to the high voltage power supply line VIN is driven by the external power switching transistor M0and the drive circuit200. The sampling voltage VOUT_DIVrepresents the gate voltage of the external main power switching transistor M0. In a preferred embodiment, the drive circuit200further includes a non-overlapping signal generation circuit213for branching the input PWM control signal into a plurality of branched control signals and sending them to the locations required by the drive circuit as the basis for the timing control during the operation of the drive circuit200. The two branched control signals generated from the non-overlapping signal generation circuit213shown inFIG.2are non-overlapping control signals VIN_01and VIN_02, where both VIN_01and VIN_02are PWM control signals output from the corresponding output terminals IN_01and IN_02of the non-overlapping signal generation circuit213, which can be utilized to control the ON or OFF state of the first pull-up drive transistor NMOS_1, the second pull-up drive transistor PMOS_1, and the first pull-down drive transistor NMOS_2through different circuit function selections and circuit paths respectively. Technical details will be discussed in the following paragraphs. In a preferred embodiment, the non-overlapping control signals VIN_01and VIN_02can be further adjusted through a level shifter circuit coupled to the respective branched paths, and the voltage levels of VIN_01and VIN_02will be converted from low voltage to high voltage to meet the required voltage level of subsequent circuit modules. The input PWM control signal is fed from the input terminal IN of the non-overlapping signal generation circuit213, and the non-overlapping control signals VIN_01and VIN_02are respectively output through the first output terminal IN_01and the second output terminal IN_02of the non-overlapping signal generation circuit213.
In a preferred embodiment, the first pull-up driving transistor NMOS_1is turned on when the control signal is at a high level, and is turned off when the control signal is at a low level. In a preferred embodiment, the second pull-up driving transistor PMOS_1is turned on when the control signal is at a low level, and is turned off when the control signal is at a high level. In a preferred embodiment, the first pull-down driving transistor NMOS_2is turned on when the control signal is at a low level, and is turned off when the control signal is at a high level, because the control signals VIN_01and VIN_02do not overlap. Referring toFIG.3, an exemplary but non-limiting circuit diagram of the non-overlapping signal generation circuit213is shown. The non-overlapping signal generation circuit213includes NAND gates31,34and39, and inverters32,33,35,36,37,38,40,41,42and43. The first delay line includes NAND gate34and inverters35-38. The second delay line includes NAND gate39and inverters40-43. Control signals PWM_OB and PWM_O are respectively provided at the outputs of inverters38and43and are fed back to NAND gates34and39. The PWM control signal applied to the input of the non-overlapping signal generation circuit213propagates sequentially through one of the delay lines and then through the remaining delay line. In order to adjust the voltage levels of the control signals PWM_OB and PWM_O, in one embodiment, the control signals PWM_OB and PWM_O can be adjusted by a level shifter circuit (not shown) for regulating the driving range. For the connection between the circuit blocks of the voltage limiting circuit215and the switching circuit217shown inFIG.2, their corresponding exemplary circuits are depicted inFIG.4. As shown inFIG.4, the voltage limiting circuit215is configured such that a bias current Ib is input to cascaded N-type and P-type current mirrors and a current Ix is output from the P-type current mirror, and the P-type current mirror is connected to a clamp circuit in which a voltage divider formed by resistors Ra and Rb is appropriately configured to connect the NMOS transistor410and the NPN transistor411at the output end of the P-type current mirror. When the NMOS transistor410and the NPN transistor411are turned on, the output voltage of the voltage limiting circuit215reaches its highest value. That is, the voltage limiting circuit215has a clamping function and can be used to limit the output voltage. The switch circuit217includes a PMOS transistor413and an NMOS transistor415, where the drain of the PMOS transistor413is connected to the output terminal of the P-type current mirror of the voltage limiting circuit215and the drain of the NMOS transistor410of the voltage limiting circuit215, the source of the PMOS transistor413is connected to the drain of the NMOS transistor415, and the gate of the PMOS transistor413is connected to the gate of the NMOS transistor415and the control signal input terminal IN_01. The voltage limiting circuit215and the coupled switch circuit217are used to provide a first control path for controlling the first pull-up driving transistor NMOS_1of the driving circuit200(refer toFIG.2). In a preferred embodiment, the voltage limiting circuit215can confine the output clamping voltage based on the actual requirements of the designer and the working range of VCC, and its value is a preset voltage value. In a preferred embodiment, the predetermined clamping voltage is 12.5V.
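For the non-overlapping signal generation described above with reference toFIG.3, a behavioral sketch may help: two outputs are derived from one PWM sequence such that neither is asserted until the other has been de-asserted for a few samples, the samples standing in for the inverter-chain delay. This is only an illustrative model of the behavior, not the gate-level circuit of the patent, and the names and the dead-time value are assumptions:

```python
def non_overlapping_signals(pwm, dead_time_steps=2):
    """Behavioral model: each output only asserts after the PWM input has stayed in the
    corresponding state for more than dead_time_steps samples, so the two outputs never
    overlap. Purely illustrative of the delay-line behavior."""
    out_high, out_low = [], []   # drive the pull-up path / the pull-down path
    run_high = run_low = 0       # how long pwm has stayed high / low, in samples
    for bit in pwm:
        if bit:
            run_high += 1
            run_low = 0
        else:
            run_low += 1
            run_high = 0
        out_high.append(1 if run_high > dead_time_steps else 0)
        out_low.append(1 if run_low > dead_time_steps else 0)
    return out_high, out_low

pwm = [0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1]
hi, lo = non_overlapping_signals(pwm)
print(hi)
print(lo)
# At no index are both outputs 1, so the upper and lower arms are never driven simultaneously.
```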
The second pull-up driving transistor PMOS_1of the drive circuit200depicted inFIG.2can be controlled by a second control path, which is composed of the first comparison circuit219, the second comparison circuit221and the 2-to-1 selection circuit223. The first pull-down driving transistor NMOS_2of the drive circuit200depicted inFIG.2can be controlled by the control signal VIN_02generated at the second output terminal IN_02of the non-overlapping signal generation circuit213, which does not overlap with the control signal VIN_01. Please refer toFIG.5, which shows a corresponding exemplary circuit of the 2-to-1 selection circuit223inFIG.2. As shown inFIG.5, the 2-to-1 selection circuit223, in a preferred embodiment, may be a multiplexer. A multiplexer (MUX) is used to select a signal for output from multiple (here, two) digital or analog input signals. The 2-to-1 selection circuit223includes two input terminals a and b, an output terminal and a selection terminal S. Its working principle is that when the input signal of the selection terminal S is at a low level, the output terminal will output the input signal fed from the input terminal a, and when the input signal of the selection terminal S is at a high level, the output terminal will output the input signal fed from the input terminal b. In a preferred embodiment, the first comparison circuit219shown inFIG.2includes more than one comparator connected to the output terminal OUT_DRV of the drive circuit200to determine whether the output sampling voltage VOUT_DIV(corresponding to the gate voltage of M0) is greater than Va or less than Vb. There are three possible scenarios for the detected output sampling voltage VOUT_DIV. Firstly, in the case that VOUT_DIVis greater than Va, the first comparison circuit219outputs a high-level control signal; secondly, in the case that VOUT_DIVis less than Vb, the first comparison circuit219also outputs a high-level control signal; thirdly, in the case that VOUT_DIVis between Va and Vb, that is, Vb<VOUT_DIV<Va, the first comparison circuit219outputs a low-level control signal. In a preferred embodiment, the voltage value of Va is 12V, and the voltage value of Vb is 5.5V. In a preferred embodiment, the second comparison circuit221shown inFIG.2includes at least one comparator whose input terminal is connected to VCC to determine whether VCC is greater than Vc. When VCC is greater than Vc, the second comparison circuit221outputs a low-level signal to the selection terminal S of the 2-to-1 selection circuit223, and the output terminal of the 2-to-1 selection circuit223outputs the input signal fed into the input terminal a, that is, the non-overlapping control signal VIN_01generated by the IN_01terminal, to control the second pull-up drive transistor PMOS_1; when VCC is less than Vc, the second comparison circuit221outputs a high-level signal to the selection terminal S of the 2-to-1 selection circuit223, and its output terminal outputs the input signal fed into the input terminal b, that is, the control signal generated by the output terminal of the first comparison circuit219, to control the second pull-up drive transistor PMOS_1. In a preferred embodiment, the voltage value of Vc is 10V.
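The selection logic just described (first comparison circuit219, second comparison circuit221and the 2-to-1 selection circuit223) can be summarized as a small truth-table-style sketch. It abstracts the analog comparators and level shifting into logic levels and uses the preferred-embodiment thresholds Va=12V, Vb=5.5V and Vc=10V; the function names are illustrative, not from the disclosure:

```python
VA, VB, VC = 12.0, 5.5, 10.0  # thresholds taken from the preferred embodiment above

def first_comparison(v_out_div):
    """High level except when Vb < VOUT_DIV < Va (the window where PMOS_1 should conduct)."""
    return 0 if VB < v_out_div < VA else 1

def second_comparison(vcc):
    """Low level when VCC exceeds Vc, high level otherwise."""
    return 0 if vcc > VC else 1

def pmos1_gate_signal(vcc, v_out_div, vin_01):
    """2-to-1 selection: selection terminal low -> pass VIN_01 (input a),
    selection terminal high -> pass the first comparison output (input b).
    PMOS_1 is a P-type device, so a low gate signal turns it on."""
    select = second_comparison(vcc)
    return vin_01 if select == 0 else first_comparison(v_out_div)

# VCC above 10 V: PMOS_1 simply follows VIN_01 (here high -> PMOS_1 off).
print(pmos1_gate_signal(vcc=14.0, v_out_div=3.0, vin_01=1))
# VCC below 10 V: PMOS_1 turns on only while 5.5 V < VOUT_DIV < 12 V.
print(pmos1_gate_signal(vcc=8.0, v_out_div=7.0, vin_01=1))   # 0 -> PMOS_1 on
print(pmos1_gate_signal(vcc=8.0, v_out_div=13.0, vin_01=1))  # 1 -> PMOS_1 off
```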
With reference to the above description of the first comparison circuit219, the second comparison circuit221and the 2-to-1 selection circuit223, the operation is as follows. The first pull-up drive transistor NMOS_1of the drive circuit200is controlled by the first control path, which is composed of the switching circuit217connected to its gate and the voltage limiting circuit215connected to the switch circuit217. The switching circuit217is regulated by the first control signal VIN_01(output from the output terminal IN_01of the non-overlapping signal generation circuit) to remain on during the whole Ton period (i.e. when the external PWM control signal is at a high level), so that the first pull-up drive transistor NMOS_1is always turned on (ON) and the clamping voltage of 12.5V is output through the voltage limiting circuit215, and the drain of the first pull-up drive transistor NMOS_1is connected to the power supply VCC to charge the gate of the external switching power transistor M0. At this time, the first pull-down drive transistor NMOS_2receives the second control signal VIN_02(fed from the second output terminal IN_02of the non-overlapping signal generation circuit213); since it is at a low level, the first pull-down drive transistor NMOS_2is turned off. At the same time, the second pull-up drive transistor PMOS_1of the drive circuit200is controlled by the second control path, which is composed of the 2-to-1 selection circuit223connected to its gate, and the first comparison circuit219and the second comparison circuit221connected thereto. During the Ton period (i.e. when the external PWM control signal is at a high level): (i) when the VCC voltage is higher than 10V, the second comparison circuit221outputs a low-level signal to the selection terminal S of the 2-to-1 selection circuit223, and its output terminal selects the input signal fed from the input terminal a, that is, the non-overlapping control signal VIN_01generated by the IN_01terminal is used to control the second pull-up drive transistor PMOS_1, so that the second pull-up drive transistor PMOS_1is turned off (OFF) while the first pull-up drive transistor NMOS_1connected to the power supply VCC is turned on to charge the gate of the external switching power transistor M0; (ii) when the VCC voltage is lower than 10V, the second comparison circuit221outputs a high-level signal to the selection terminal S of the 2-to-1 selection circuit223, and its output terminal selects the input signal fed from the input terminal b. At this time, whether the second pull-up drive transistor PMOS_1is turned on or off depends entirely on the value of the sampling voltage VOUT_DIVinput into the first comparison circuit219(that is, on the gate voltage of the external main power switch transistor M0): when VOUT_DIVis greater than Va, the first comparison circuit219outputs a high-level control signal, so that the second pull-up drive transistor PMOS_1is turned off (OFF); when VOUT_DIVis less than Vb, the first comparison circuit219also outputs a high-level control signal, so that the second pull-up drive transistor PMOS_1is turned off (OFF); and when VOUT_DIVis between Va and Vb, i.e., Vb<VOUT_DIV<Va, the first comparison circuit219outputs a low-level control signal, so that the second pull-up drive transistor PMOS_1is turned on (ON).
When the external PWM control signal is at a low level, the first pull-up drive transistor NMOS_1and the second pull-up drive transistor PMOS_1are both turned off (OFF), the first pull-down drive transistor NMOS_2is turned on (ON), and the external power switching transistor M0is discharged through the first pull-down drive transistor NMOS_2. That is to say, when VCC is lower than a certain voltage (10V in the present invention), that is, a threshold voltage value, the driving force of the first pull-up drive transistor NMOS_1is relatively weak, and both the first pull-up drive transistor NMOS_1and the second pull-up drive transistor PMOS_1are driven together, which can prevent the external power switching transistor M0from being burned out due to the power increase caused by a slow conduction process. When VCC is greater than a certain voltage (10V in the present invention), the driving force of the first pull-up drive transistor NMOS_1is relatively strong, and PMOS_1is turned off (OFF) while the driving output VOUT_DIVis less than Vb (5.5V), which provides a soft drive effect so that the main power switch transistor M0is not turned on too quickly during the Miller plateau, improving EMI. After VOUT_DIVbecomes slightly larger than the Miller plateau voltage, that is, when the drive output satisfies 5.5V<VOUT_DIV<12V, PMOS_1is then turned on; when VOUT_DIVis greater than 12V, PMOS_1is turned off (OFF). In one embodiment, Vb (5.5V) is the first reference voltage, and Va (12V) is the second reference voltage. While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example and not limitation. Numerous modifications and variations within the scope of the invention are possible. The present invention should only be defined in accordance with the following claims and their equivalents.
11863168 | DETAILED DESCRIPTION In the following, various embodiments will be described in detail referring to the attached drawings. The embodiments described hereinafter are to be taken as examples only and are not to be construed as limiting. For example, while in embodiments specific arrangements or components are provided, in other embodiments other configurations may be used. Besides features (or for example components, elements, acts, events or the like) explicitly shown and described, in other embodiments additional features may be provided, for example features used in conventional switch devices using phase change materials. For example, embodiments described herein relate to a switch arrangement for supplying power to one or more heaters and operation of the switch arrangement, and other components and features, like spatial arrangement of heaters and phase change material, radio frequency (RF) circuitry using the switch device and the like, may be implemented in a conventional manner. Such RF circuitry may be integrated with the described switch devices on the same substrate, but may also be provided separately, for example on one or more separate chip dies, which in some implementations then may be combined with a switch device in a common package. Also, manufacturing implementations like providing phase change material on a substrate like a silicon substrate to implement a phase change switch, providing phase change material in a trench in a silicon substrate for manufacturing the switch device and the like may be performed in any conventional manner. A switch based on a phase change material (PCM) will be referred to as a phase change switch or, in short, a PCM switch herein. As explained in the introductory portion, such phase change switches may be set to a crystalline phase state or an amorphous phase state, thus changing the resistance of the phase change material and therefore of the switch by several orders of magnitude. In this way, for example an on-resistance of a switch in a range of 1 to 100 Ω may be achieved, whereas an off-resistance may be several orders of magnitude higher, for example at least in the kiloohm range. Implementation details described with respect to one of the embodiments are also applicable to other embodiments. Embodiments discussed herein include switch arrangements. A switch arrangement generally includes a plurality of switches. Switches may be switched on to be electrically conducting between terminals with a low resistance, or switched off to essentially provide an electrical isolation between terminals. Such switches may be implemented using for example one or more transistors like bipolar junction transistors, field-effect transistors or insulated gate bipolar transistors in any conventional manner. A set, as used herein, refers to one or more entities. For example, a set of heaters refers to one or more heaters. In other words, in some embodiments a set may include only a single entity, for example a single heater. Turning now to the Figures,FIG.1is a block diagram illustrating a switch device according to an embodiment. The switch device ofFIG.1includes one or more phase change switches, which may be coupled in series or in parallel. Examples will be explained further below. 
These phase change switches include phase change material11, which depending on its phase state (crystalline or amorphous, see above explanations) provides either a low electrical resistance between terminals10and12or a high electrical resistance between terminals10and12. A set of heaters13is arranged to heat phase change material11to perform a set or reset operation, or in other words to switch one or more phase change switches on or off, as explained above. The set of heaters13is supplied with power from a power source15via a switch arrangement14. Power source15in some embodiments may be a pulsed power source configured to generate pulses of electrical power. Switch arrangement14includes a plurality of switches that may be controlled to provide power from power source15to the set of heaters13selectively, i.e. in various manners. Power source15and switch arrangement14are controlled by a controller16. Controller16may for example be a microcontroller, an application-specific circuit, a programmed microprocessor or the like, which is configured to control switch arrangement14and power source15accordingly. Controlling switch arrangement14ofFIG.1to selectively provide power to the set of heaters13from power source15will be further illustrated referring toFIG.2, which illustrates a method according to some embodiments. The method ofFIG.2may for example be implemented in the switch device ofFIG.1, for example by configuring controller16accordingly to control switch arrangement14. The method ofFIG.2may also be implemented using other switch devices like the switch devices explained further below. At20to23,FIG.2shows various ways a switch arrangement like switch arrangement14may be controlled. The various possibilities at20to23may be implemented separately in different embodiments, but two or more of these possibilities may also be combined in a single embodiment. At20, the switch arrangement may be controlled to selectively provide current through one or more heaters of the set of heaters in a first direction or in a second direction opposite the first direction. Using different directions of current flow through the heaters in some embodiments may mitigate electromigration issues. At21, the switch arrangement is controlled to form a pulse of electrical power. While in some embodiments the power source used (power source15or another power source) may itself be a pulsed power source, in other embodiments the switches of switch arrangement14may be opened and closed to form a pulse of power. In this case, the power source used, like power source15, may be a continuous power source. At22and23, the set of heaters includes a plurality of heaters. At22, heaters are supplied with power sequentially, i.e. not all heaters receive power at the same time, but receive power sequentially one after the other, or one subset after the other. Therefore, "supplying heaters sequentially" also includes the possibility of supplying subsets sequentially, for example in case of four heaters first supplying the first and second heater with power and then supplying the third and fourth heater with power. "Sequentially" as used herein may mean "non-overlapping sequentially", i.e. only after one heater or subset of heaters has received power does a next heater or next subset receive power, possibly with a pause time therebetween. 
In some embodiments, this sequential supplying of heaters with power may be performed during a reset pulse, where generally higher temperatures of the phase change material and therefore higher power to the heaters are provided. At23, heaters are supplied with power simultaneously, for example by coupling heaters of the set of heaters in series or in parallel to receive power from power source15. In some embodiments, the sequential supplying at22and the simultaneous supplying at23may be performed selectively, such that for example the sequential supplying at22is performed for a reset operation, whereas the simultaneous supplying at23is performed for a set operation. More specific examples for the various possibilities explained forFIG.2will now be explained with reference toFIGS.3to13. InFIGS.3to13, like, similar or functionally corresponding elements are designated with the same reference numerals and will not be described repeatedly in detail to avoid repetitions. FIG.3is a circuit diagram illustrating a switch device30according to an embodiment. Switch device30shows a single pole single throw (SPST) configuration to selectively couple an input31with an output32. Input31in operation may receive a radio frequency input signal RF-IN, and output32may, when switch device30is on, output an output signal RF-OUT. Reference numeral37generally designates conductive material, for example a structured metal layer. In the example of switch device30, four phase change switches are coupled in parallel between input31and output32. Each phase change switch has a respective heater33A,33B,33C,33D, collectively referred to as heaters33, and a corresponding phase change material (not explicitly shown inFIG.3for clarity's sake, see phase change material11ofFIG.1) adjacent to the heater and coupled to conductive material37. By setting or resetting the phase state of the respective phase change material by operating heaters33, a low resistance between input31and output32(switched on state, set state of the phase change material) or a high electrical resistance (switched off state, reset state of the phase change material) may be achieved. For performing set or reset operations, heaters33are supplied with power from a pulse generation unit36, which is an example of a power source. Pulse generation unit36generates pulses of electrical power controlled by a controller (not shown inFIG.3, but see controller16ofFIG.1). A first terminal of pulse generation unit36(for example first pole) is coupled to terminals of heaters33via first switches34A to34E, collectively referred to as first switches34, and a second terminal (for example second pole) of pulse generation unit36is coupled to terminals of heaters33via second switches35A to35E, collectively referred to as second switches35. First switches34and second switches35together form an example of a switch arrangement like switch arrangement14ofFIG.1and may be used to selectively provide heaters33with power. For example, for supplying heaters sequentially with power as at22inFIG.2, one of the first switches34and one of the second switches35may be closed to supply an individual heater with power. This is illustratively shown inFIG.3for heater33B, where first switch34C and second switch35B are closed to provide a current flow through heater33B as indicated via arrows. 
To supply all four heaters33A to33D with power sequentially, for example switches34B and35A are first closed to supply heater33A with power, then switches34C and35B are closed as shown, then switches34D and35C are closed to supply heater33C with power, and finally switches34E and35D are closed to supply heater33D with power. It should be noted that the heaters do not need to be supplied with power in this order, and other orders are also possible. Conversely, to provide all heaters with power simultaneously, for example first switch34E and second switch35A may be closed, leading to a series connection of all heaters33A to33D. Also, subsets of heaters may be supplied with power in series. For example, by closing switches34C,35A, heaters33A and33B may be supplied with power. Furthermore, the current flow may also be reversed. For example, for a reverse current flow through heater33B, instead of switches34C and35B, switches34B and35C may be closed. Finally, in some embodiments pulses may be formed using first switches34and second switches35, by closing at least one of the switches (in the example shown switch34C or35B) only for a required pulse duration, while a power source instead of pulse generation unit36provides power continuously. The concept of applying power sequentially or simultaneously will be further illustrated referring toFIGS.4A to4C, and a short sketch of the corresponding switch sequencing is given after this discussion. FIG.4Ashows power and voltage over time for using a single pulse40as a reset pulse (i.e. causing a change from a crystalline phase state to an amorphous phase state) in a case where all heaters, for example heaters33A to33D, are coupled in series (switches35A and34E closed, or switches34A and35E closed). Here, a comparatively high power and voltage are required to rapidly heat the phase change material above its melting point in all four phase change switches. Because of the series connection, the voltage drop over each heater is approximately one fourth of the applied voltage. FIG.4Bshows a case according to an embodiment where four heaters like heaters33A to33D are supplied with power sequentially. InFIG.4B, four pulses41A to41D are applied, one to each heater (i.e. for example, as explained above, switches35A and34B are closed first, then switches35B and34C are closed, etc.). In this case, to achieve the same voltage drop as inFIG.4Afor each heater, the applied voltage only needs to be approximately one fourth of the voltage inFIG.4A, and assuming the same current, this means that the applied power is also only one fourth. This may facilitate design of the power source, as it need not be designed for higher voltages and powers, and may also affect the dimensioning of electrical connections, switches etc. On the other hand, the overall time for resetting all four phase change switches is also longer, approximately by a factor of four. However, as the time duration Δt of reset pulses generally is comparatively short, this is acceptable for many applications. FIG.4Cshows an example of a set pulse42in a case where all heaters are coupled in series (as inFIG.4A). Generally, for a set pulse lower powers are required (heating to lower temperatures) over a longer time in order to effect recrystallization of the phase change material. As here generally lower powers are required, in embodiments the heaters are coupled in series and therefore supplied with power simultaneously, whereas for a reset pulse as shown inFIG.4Bthe heaters are supplied with power sequentially. 
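The switch sequencing ofFIG.3can be summarised in the short sketch below. It is only an illustration: the controller callbacks close_switches() and pulse() and the dictionary layout are hypothetical, while the switch pairs themselves are those given in the description above.

# Forward-direction switch pairs for the four heaters of FIG.3 (one first
# switch and one second switch per heater, as described above).
FORWARD_PAIRS = {
    "33A": ("34B", "35A"),
    "33B": ("34C", "35B"),
    "33C": ("34D", "35C"),
    "33D": ("34E", "35D"),
}

def reset_sequentially(close_switches, pulse):
    # Reset operation as in FIG.4B: one heater at a time, so the source only
    # needs roughly one fourth of the voltage and power of the series case,
    # at the cost of roughly four times the overall reset time.
    for heater, pair in FORWARD_PAIRS.items():
        close_switches(pair)   # e.g. close switches 34C and 35B for heater 33B
        pulse()                # apply one reset pulse to this heater

def set_simultaneously(close_switches, pulse):
    # Set operation as in FIG.4C: close switches 34E and 35A so that all four
    # heaters are connected in series and receive one longer, lower-power pulse.
    close_switches(("34E", "35A"))
    pulse()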
FIG.3shows a single pole single throw configuration with a single input31and a single output32. Four parallel branches with respective heaters are used. This is merely an example, and other configurations may be used as well. For example, inFIG.3more than four or fewer than four parallel branches may be used. Moreover, other configurations than single pole single throw may be used. Some examples of such other configurations will be described next with reference toFIGS.5to8. FIG.5illustrates a switch device50with an asymmetric single pole double throw (SPDT) configuration with six PCM switches. An input31may be selectively coupled to a first output32A to output a signal RF1, to a second output32B to output a signal RF2, or to both. A path from input31to first output32A has two parallel branches with heaters33A,33B and a corresponding phase change material (again not shown), and a path from input31to second output32B has four parallel branches with heaters33C to33F (and corresponding phase change material). First switches34A to34G and second switches35A to35D are provided. Using the switches, similar to what was explained with reference toFIG.3, heaters33may be selectively supplied with power from pulse generation unit36, for example sequentially, simultaneously and in different directions. For example, inFIG.5first switch34B and second switch35D are shown as closed, such that current flows through heater33D as indicated by arrows. FIG.6shows a symmetric single pole triple throw (SP3T) configuration of a switch device60, with input31, a first output32A for outputting a signal RF1, a second output32B for outputting a signal RF2and a third output32C for outputting a signal RF3. Two parallel branches are provided to each of outputs32A,32B,32C, with a respective one of heaters33A to33F provided in each branch. First switches34A to34G and second switches35A to35G are provided to selectively provide power to the heaters, as explained above. In the example shown, switches34E and35D are closed to provide a current flow through heater33D as indicated by arrows. FIG.7shows a single pole single throw configuration of a switch device70with input31and output32. The configuration ofFIG.7has three branches in parallel from input31to an intermediate conducting area71and another three branches from intermediate conducting area71to output32. Each of the branches has a respective heater33A to33F with corresponding phase change material. First switches34A to34F and second switches35A to35F are used to selectively supply power. For example, inFIG.7power is supplied to heater33D by closing switches34D,35C. A configuration with the intermediate conducting area71as shown inFIG.7may also be referred to as a double stacked configuration. FIG.8shows another double stacked configuration of a single pole single throw switch device80with two paths from input31to intermediate conducting area71and another two paths from intermediate conducting area71to output32. In this case, the switch arrangements and pulse generating units are separate for the path from input31to intermediate conducting area71on the one hand and the path from intermediate conducting area71to output32on the other hand, i.e. a first pulse generating unit36A and a second pulse generating unit36B are provided. 
Via switches34A to34C and35A to35C, first and second heaters33A,33B may selectively be supplied with power from first pulse generating unit36A, and via switches34D to34F and35D to35F, power may selectively be provided from second pulse generating unit36B to heaters33C,33D. In the example shown, by closing switches34B and35C current flows through heater33B, and by closing switches34D and35D, current flows through heater33C, as indicated by respective arrows. Features from the embodiments ofFIGS.3and5-8may also be combined. For example, a double stacked configuration with intermediate conducting area71may also be provided for SPDT or SP3T configurations. Generally, as can be seen fromFIGS.3and5to8, different numbers of parallel paths, stacked configurations like double stacked configurations, and different numbers of outputs may be used, e.g. generally single pole multi throw configurations. In other embodiments, also the number of inputs may vary, and two or more inputs may be provided. Furthermore, the specific numbers of parallel paths, inputs and outputs shown are not to be construed as limiting, but merely as examples, and more or fewer inputs/outputs or more or fewer parallel paths than shown may also be used. In the embodiments discussed with reference toFIGS.3and5to8, each heater is surrounded by an "H-configuration" of switches, for example heater33A in each of these embodiments by switches34A,34B,35A,35B. In this way, each terminal of the respective heater may be coupled with each terminal of the respective power source, for example pulse generating unit, which gives a high flexibility. In other embodiments, some switches may be omitted. This may reduce flexibility in some embodiments regarding current flow, but may lead to reduced manufacturing costs as fewer switches are needed. FIGS.9and10show corresponding examples.FIGS.9and10are modifications of the embodiment ofFIG.3and each show a single pole single throw switch device with four parallel branches and associated heaters33A to33D. The variations shown inFIGS.9and10regarding the number of switches are also applicable to the other embodiments. For example, also the embodiments ofFIGS.5to8may be implemented with a reduced number of switches. In a switch device90ofFIG.9, first switches34A,34B and34C and second switches35A,35B and35C are provided as shown. With this arrangement, power may be provided either sequentially to heaters33A to33D or simultaneously. For providing power to heater33A, switches34A and35A are closed, for providing power to heater33B as shown, switches35A and34B are closed leading to a current flow as indicated by arrows, for providing power to heater33C switches34B and35B are closed, and for providing power to heater33D switches35B and34C are closed. Unlike for exampleFIG.3, for each heater only one direction of current flow is possible. For providing power to all heaters simultaneously, switches34A and35C are closed such that current flows through the series connection of heaters33A to33D. FIG.10shows a switch device1000according to another embodiment. Here, only second switches35A to35D are provided. In this configuration, heaters may be supplied with power sequentially, by closing switches35A to35D sequentially. In the example ofFIG.10, switch35D is closed, thus providing power to heater33B as illustrated by arrows. 
Alternatively, for example all switches35A to35D may be closed, thus providing a parallel circuit of heaters33A to33D and enabling power to be provided simultaneously (in this case through a parallel connection and not through a series connection as inFIG.9). Also here, the direction of the current flow for each heater cannot be changed by operating the switch arrangement. Next, issues regarding electromigration and changing the direction of current flow will be explained referring toFIGS.11to13. In some embodiments, a higher voltage than a normal supply voltage is used to supply the heaters. Generally, for supplying the heaters, essentially the delivered power, i.e. the product of voltage and current, is relevant. Providing a higher voltage (and a reduced current) may be beneficial in terms of electromigration, as the lifetime of a heater limited by electromigration scales with the inverse square of the current, such that reducing the current reduces electromigration, which in turn may increase the lifetime of the heater. An example is shown inFIG.11. Here, a device in which the phase change switch is to be incorporated is supplied with a supply voltage of for example 3.3 volt. This is boosted to a higher voltage Vhigh by a circuit1101, for example a charge pump. Vhigh may for example be 10 volt. This higher voltage of 10 volt is then used for heater pulse generation. Electromigration is the transport of material caused by the gradual movement of atoms in a conductor due to the momentum transfer between conducting electrons and diffusing metal atoms. By balancing the current direction, i.e. not always using the same current direction for heating a heater, the effect of electromigration may be reduced. As already mentioned, this may be achieved by switch arrangements for example shown inFIG.3. This is described now in more detail with reference toFIGS.12A and12B. FIG.12Ashows a single heater33with first switches34A,34B and second switches35A,35B. The switches are coupled between a positive supply voltage Vhigh (for example generated as inFIG.11, but not limited thereto) and ground. Pulse generation unit36generates pulses. It should be noted that this coupling as shown inFIG.12Amay also be used in the embodiments ofFIGS.3to10. In other embodiments, pulse generating unit36may be coupled between the switches and Vhigh, or may be coupled as shown inFIGS.3to10to output both Vhigh and a lower voltage like ground. InFIG.12A, switches34A and35B are closed, and switches35A and34B are open. This leads to a current flow from Vhigh to ground through heater33as indicated by an arrow. In the case ofFIG.12B, switches34B and35A are closed, and switches34A and35B are open. This leads to a current through heater33in the opposite direction compared toFIG.12A. Therefore, by providing essentially an "H-configuration" of switches to a heater, current flow in both directions through the heater is enabled. In embodiments, for example, the current direction may then be changed every n set/reset cycles, where n is one or more. As shown inFIGS.3to8, in case of more than one heater, the "H-configurations" of adjacent heaters may "share" switches. For example, inFIG.3switches34B,35B are shared by heaters33A,33B in this sense. Also in this case, the switches may be operated to selectively provide current in a first direction or in a second direction opposite the first direction through the heaters, as already briefly explained with respect toFIG.3. 
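As a small illustration of this balancing of the current direction, the sketch below alternates between the two switch pairs ofFIGS.12A and12Bevery n set/reset cycles. The close_switches() and pulse() callbacks and the cycle counter are hypothetical; only the switch pairs and the idea of reversing the direction every n cycles come from the description above.

def drive_heater_balanced(close_switches, pulse, cycle_index, n=1):
    # Alternate the current direction through heater 33 every n set/reset
    # cycles to balance electromigration, as described for FIGS.12A/12B.
    if (cycle_index // n) % 2 == 0:
        close_switches(("34A", "35B"))   # current from Vhigh to ground (FIG.12A)
    else:
        close_switches(("34B", "35A"))   # opposite current direction (FIG.12B)
    pulse()                              # apply the set or reset pulse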
In other embodiments, separate "H-configurations" may be provided for separate heaters. An example is shown inFIG.13, which essentially duplicates the circuit ofFIGS.12A and12B. In the example ofFIG.13, separate pulse generation units36A,36B are also provided for the two heaters33A,33B, and separate switches are provided for each heater (switches34A,34B,35A and35B for heater33A and switches34C,34D,35C and35D for heater33B). In other embodiments, an AC current could be used for heating where the current direction reverses inherently, but this may be difficult to implement for short cycles. Some embodiments are defined by the following examples: Example 1. A phase change switch device, comprising: a phase change material, a set of heaters arranged to heat the phase change material, a power source, and a switch arrangement including a plurality of switches and configured to selectively provide electrical power from the power source to the set of heaters. Example 2. The phase change switch device of example 1, wherein the switch arrangement is configured for at least one of: selectively providing a current either in a first direction through at least one heater of the set of heaters or in a second direction through the at least one heater of the set of heaters, or forming a pulse of electrical power through at least one heater of the set of heaters. Example 3. The phase change switch device of example 1 or 2, wherein the at least one heater comprises a plurality of heaters, wherein the switch arrangement is configured for at least one of: supplying the plurality of heaters sequentially with electrical power from the power source, selectively supplying the plurality of heaters sequentially or simultaneously with electrical power from the power source. Example 4. The device of any one of examples 1 to 3, wherein the power source is a single power source. Example 5. The device of any one of examples 1 to 4, wherein the power source comprises a pulse generator. Example 6. The phase change switch device of any one of examples 1 to 5, wherein for at least one heater of the set of heaters, the plurality of switches includes a first switch between a first terminal of the at least one heater and a first terminal of the power source, and a second switch between the first terminal of the at least one heater and a second terminal of the power source. Example 7. The phase change switch device of example 6, wherein for the at least one heater the plurality of switches includes a third switch between a second terminal of the at least one heater and a first terminal of the power source, and a fourth switch between the second terminal of the at least one heater and a second terminal of the power source. Example 8. The phase change switch device of example 7, wherein the at least one heater includes a plurality of heaters of the set of heaters coupled in series, wherein the third and fourth switches of one heater of the plurality of heaters correspond to the first and second switches of a next heater following the one heater in the series coupling. Example 9. The phase change switch device of any one of examples 1 to 6, wherein the set of heaters includes a plurality of heaters coupled in series, wherein the plurality of switches include switches alternatingly coupled between nodes between adjacent heaters in the series coupling and either a first terminal of the power source or a second terminal of the power source. Example 10. 
The phase change switch device of any one of examples 1 to 5, wherein the set of heaters includes a plurality of heaters, wherein for each of the plurality of heaters the plurality of switches includes a respective switch coupled between a first terminal of the respective heater and a first terminal of the power source, wherein second terminals of the plurality of heaters are coupled to a second terminal of the power source. Example 11. The phase change switch device of any one of examples 1 to 10, wherein the phase change material and the plurality of heaters are configured to form one of a single pole single throw switch device between an input terminal and an output terminal or a single pole multi throw switch device between an input terminal and a plurality of output terminals. Example 12. A method of operating the phase change switch device of any one of examples 1 to 11, comprising: operating the plurality of switches of the switch arrangement for at least one of: selectively providing a current either in a first direction through at least one heater of the set of heaters or in a second direction through the at least one heater of the set of heaters, or forming a pulse of electrical power through at least one heater of the set of heaters. Example 13. A method of operating the phase change switch device of any one of examples 1 to 11, wherein the set of heaters comprises a plurality of heaters, comprising: operating the plurality of switches of the switch arrangement for at least one of: supplying the plurality of heaters sequentially with electrical power from the power source, selectively supplying the plurality of heaters sequentially or simultaneously with electrical power from the power source. Example 14. The method of example 13, wherein operating the plurality of switches of the switch arrangement for selectively supplying the plurality of heaters sequentially or in parallel with electrical power from the power source comprises operating the plurality of switches for sequentially supplying the plurality of heaters to change the phase change material to an amorphous state, and operating the plurality of switches for supplying the plurality of heaters in parallel to change the phase change material to a crystalline state. Example 15. A controller for operating the plurality of switches of the phase change switch device of any one of examples 1 to 12, wherein the controller is configured to operate the switches to perform the method of any one of examples 12 to 14. Example 16. A system, comprising the phase change switch device of any one of examples 1 to 12 and the controller of example 15. Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described without departing from the scope of the present invention. This application is intended to cover any adaptations or variations of the specific embodiments discussed herein. Therefore, it is intended that this invention be limited only by the claims and the equivalents thereof. | 30,370 |
11863169 | Embodiments of the present invention seek to address the above problems. FIG.3Ais a schematic diagram of a current-mode circuit100embodying the present invention. The current-mode circuit100comprises a switch unit10, an adjustment circuit120and a current source IREF. As shown, the switch unit10comprises a field-effect transistor S (which may be considered a switch) connected at its source terminal in series with an impedance R and configured to carry a given current, here labelled IREF. The impedance R is a variable impedance. The adjustment circuit120is configured to adjust an impedance of the variable impedance R to 'calibrate out' the effect of any mismatch of the field-effect transistor S (relative to a reference field-effect transistor, not shown) on a predetermined property of the switch unit10which is dependent on the field-effect transistor S. The predetermined property is a property of the switch unit10which is dependent on the physical configuration of the field-effect transistor S, for example as determined by its manufacture, including the dimensions of the field-effect transistor S, its threshold voltage and/or saturation current ISS. The adjustment circuit120is configured to measure the predetermined property and adjust the impedance of the variable impedance R so as to adjust the measured property to or towards a reference value. By 'calibrating out' the effect of any such mismatch on the predetermined property, a switching delay of the field-effect transistor S may be at least partly calibrated. In the case of the current-mode circuit100, the predetermined property for the switch unit10is a potential difference comprising a sum of its gate-source voltage and a potential difference across its series-connected impedance R when the field-effect transistor S is provided with a gate voltage having a defined ON voltage level to turn it ON. The field-effect transistor S and the impedance R are further connected in series with the current source in that order, as shown, the current source defining the given current IREF. The current source defining the given current IREFis labelled IREF(i.e. in the same way as the current it provides) for convenience, and current sources herein will be labelled in a similar manner. In some arrangements the given current IREFmay be a calibrated current or have been pre-calibrated against a calibrated current ICAL. A node M between the variable impedance R and the current source IREFis a measurement node, and a voltage at the measurement node is a measurement voltage VM. The adjustment circuit120comprises a comparator (not shown) configured to compare the measurement voltage VMwith a reference voltage VREF, and is configured to adjust the impedance of the variable impedance R concerned based on the comparison to bring the measurement voltage VMinto or towards a target relationship with the reference voltage VREF. The target relationship is that the measurement voltage VMand the reference voltage VREFare substantially equal, but may comprise a target ratio between those voltages (other than that they are equal). In this way, the predetermined property of the switch unit10may be calibrated to a reference value (effectively defined by the reference voltage VREF). Although only one switch unit10is shown inFIG.3A, it may be that more than one switch unit10is provided. 
The current-mode circuit100may be considered to comprise at least one switch unit10, wherein for each switch unit10or for at least one of the switch units10the impedance R is a variable impedance. In that case, the adjustment circuit120may be configured, for each switch unit10or for the at least one of the switch units10having the variable impedance R, to adjust an impedance of the variable impedance R to calibrate the predetermined property of the switch unit10concerned in a similar manner as described above. FIG.3Bis a schematic diagram of a current-mode circuit200embodying the present invention. The current-mode circuit200comprises a first switch unit10-1, a second switch unit10-2, an adjustment circuit220and current sources IREF1and IREF2. The switch units10-1and10-2correspond to the switch unit10, and the adjustment circuit220corresponds to the adjustment circuit120, and are thus denoted by similar reference signs. As shown, the switch unit10-1comprises a field-effect transistor S1 (which may be considered a switch) connected at its source terminal in series with an impedance R1and configured to carry a given current, here labelled IREF1. Similarly, the switch unit10-2comprises a field-effect transistor S2 (which may be considered a switch) connected at its source terminal in series with an impedance R2and configured to carry a given current, here labelled IREF2. For each switch unit or for at least one of the switch units the impedance is a variable impedance. To illustrate this, the impedance R1is shown as a variable impedance and the impedance R2is shown as optionally being a variable impedance (i.e., in some arrangements the impedance R2may have a fixed impedance). The adjustment circuit220is configured, for each switch unit or for the at least one of the switch units having the variable impedance, to adjust an impedance of the variable impedance to calibrate the predetermined property of the switch unit. To illustrate this, the adjustment circuit220is shown as controlling the impedance of the impedance R1and optionally controlling the impedance of the impedance R2. As withFIG.3A, the field-effect transistor S1 and the impedance R1are further connected in series with a current source IREF1in that order, the current source IREF1defining the given current IREF1. A node M1 between the variable impedance R1and the current source IREF1is a measurement node, and a voltage at the measurement node is a measurement voltage VM1. Similarly, the field-effect transistor S2 and the impedance R2are further connected in series with a current source IREF2in that order, the current source IREF2defining the given current IREF2. A node M2 between the variable impedance R2and the current source IREF2is a measurement node, and a voltage at the measurement node is a measurement voltage VM2. The adjustment circuit220comprises a comparator (not shown) configured, for each switch unit10-1,10-2, to compare the measurement voltage VM1, VM2with a reference voltage, and is configured to adjust the impedance of the variable impedance concerned based on the comparison to bring the measurement voltage into or towards a target relationship (defined similarly as before) with the reference voltage. The reference voltage may be an externally provided reference voltage VREF(as indicated) which is the same for both (or all) of the switch units10-1,10-2. 
In such a case, the adjustment circuit220may be configured, for each switch unit10-1,10-2, to adjust the impedance of the variable impedance R1, R2concerned, based on the comparison, to bring the measurement voltage VM1, VM2into or towards the target relationship with the reference voltage VREF. However, as indicated inFIG.3B, it is not essential that an external reference voltage VREFbe provided, and to illustrate this the external reference voltage VREFis indicated as optional. For example, the reference voltage for one of the switch units may be the measurement voltage for the other (or another) one of the switch units. As an example, it may be that the reference voltage for the switch unit10-1is the measurement voltage VM2. In this case, the adjustment circuit220may be configured, for the switch unit10-1, to adjust the impedance of its variable impedance R1based on the comparison to bring its measurement voltage VM1into or towards the target relationship with its reference voltage VM2. If the target relationship is that the voltages VM1and VM2are equal, and the currents IREF1and IREF2are equal, along with the gate voltages provided to the switches S1 and S2, and the configurations (e.g., sizes) of the field-effect transistors S1 and S2, it can be seen that in this way the predetermined property of the switch unit10-1may be calibrated to be substantially the same as the predetermined property of the switch unit10-2. That is, the effect of mismatch (threshold voltage mismatch) between the field-effect transistors S1 and S2, including switching delay mismatch, may be substantially reduced or compensated for. Incidentally, although inFIG.3Beach of the switch units10-1,10-2is provided with its own current source IREF1, IREF2, it may be that the measurement nodes M1 and M2 are connected together to form a common measurement node M (not shown), which may be considered a tail node and compared with the common node TAIL inFIG.2. In such a case, the given current may be provided by a common current source IREF(not shown), and the field-effect transistors S1, S2 controlled (by their gate voltages) so that when one of them is ON the other of them is OFF, and vice versa. Thus, the adjustment circuit220may be configured to control the gate voltages of the field-effect transistors S1, S2 to turn them ON one-by-one in turn so that the shared current source IREF(not shown) defines the given current for the switch unit10-1,10-2whose field-effect transistor S1, S2 is ON. Further, the adjustment circuit220may be configured, whilst a given field-effect transistor S1, S2 is ON, to adjust the impedance of the variable impedance R1, R2of that switch unit10-1,10-2. The adjustment circuit220may be configured to control the gate voltages of the field-effect transistors S1, S2 to have an ON voltage level to selectively turn them ON to carry their given current, and the ON voltage level may be the same for both of the switch units10-1,10-2. Indeed, although not shown inFIG.3A, another switch unit10may be provided connected to the measurement node M so that one of the field-effect transistors S may be ON when the other of them is OFF, and vice versa. In this regard, both of the current-mode circuits100and200may be compared with the differential switching circuit (or current-mode circuit) ofFIG.2, and may be considered suitable for use with the DAC ofFIG.1in a similar way. 
To aid in this comparison, first and second output nodes OUT1 and OUT2 are indicated at the drain terminals of the field-effect transistors S1 and S2 inFIG.3B, and may be compared with OUTA and OUTB inFIG.2. For consistency, an output node OUT is indicated at the drain terminal of the field-effect transistor S inFIG.3A. The impedances R, R1, R2ofFIGS.3A and3Bmay comprise or be resistances (or resistors). Although the impedances R, R1, R2ofFIGS.3A and3Bmay additionally or alternatively comprise capacitances and/or inductances, for simplicity they will be considered as resistances going forwards. FIG.4is a schematic diagram indicating how the impedance R may be implemented as a variable impedance. Similar considerations of course apply to the impedance R1and also R2(when it is a variable impedance). As on the left-hand side, the impedance R may be implemented as a variable resistor, and, as in the centre, as a first resistor R1in parallel with a second resistor R2, where at least the second resistor R2 is a variable resistor. That is, the first resistor R1 may be a fixed-resistance resistor. In the context of integrated circuitry, the first resistor R1 may comprise a polysilicon resistor or a diffusion resistor. The second resistor R2, as on the right-hand side, may be implemented as a transistor, such as a field-effect transistor. The gate voltage of the field-effect transistor R2 may be controlled to control its on-resistance. Merely as an example, the first resistor R1 may be a polysilicon resistor having a resistance of 50 ohms and the second resistor R2 may be implemented as a field-effect transistor having an on-resistance which is variable across the range 300 ohms to 1000 ohms. This would enable the resistance of the impedance R to be varied between approximately 43 and 48 ohms. More generally, the first resistor R1 may have a resistance of X ohms and the second resistor R2 may be controllable to have a resistance within a defined range of resistances. A mid-range resistance of the defined range of resistances may be Y ohms, where 5≤Y/X≤20, or where 10≤Y/X≤14. The range of resistances may be approximately from 5.X ohms to 25.X ohms, or approximately from 6.X ohms to 20.X ohms. Of course, increasing the resistance R increases the associated capacitance which will detrimentally affect the switching speed of the associated field-effect transistor S. Also, limiting the range of resistance R limits the calibration range. The adjustment circuits120,220ofFIGS.3A and3Bmay comprise digital units or engines (not shown) and, for each impedance R, R1, R2which is a variable impedance, a digital-to-analogue converter (not shown) connected to control the impedance dependent on a digital input signal. The adjustment circuits120,220may then be configured to control the digital input signals for the digital-to-analogue converters concerned to adjust the impedances. For example, in the case of the right-hand implementation ofFIG.4, the digital-to-analogue converter may control the gate voltage of the transistor R2. Looking back toFIGS.3A and3B, it will be appreciated that the use of the impedances R in the switch units10enables calibration of the switch units10(having the field-effect transistors or switches S) without for example needing to calibrate a gate or bulk voltage. This technique may have particular application where the field-effect transistors S are FinFET transistors, i.e. fin field-effect transistors. 
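The example figures given above can be checked quickly with the parallel-resistor formula; the small helper below is only illustrative (the 50 ohm, 300 ohm and 1000 ohm values come from the text, while the function itself is an assumption for illustration, not part of the described circuitry).

def parallel(r1, r2):
    # Equivalent resistance of two resistors in parallel.
    return r1 * r2 / (r1 + r2)

r_low = parallel(50.0, 300.0)     # about 42.9 ohms with the NMOS at 300 ohms
r_high = parallel(50.0, 1000.0)   # about 47.6 ohms with the NMOS at 1000 ohms
# i.e. the combined impedance R can be tuned roughly between 43 and 48 ohms,
# matching the range quoted in the example above.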
In particular, the voltage-controlled resistors (see R1, R2 inFIG.4, where R2 is a FET) enable a combination of a fixed 'poly' resistor R1 in parallel with an NMOS switch R2 to be used to calibrate the switch units10even in the case of FinFETs, to compensate for Vth (threshold voltage) mismatch. The NMOS switch R2 on-resistance Ron may be much higher than the resistance of the poly resistor R1, and this enables it to compensate for Ron variations due to temperature variations with sufficient calibration range. FIG.5is a schematic diagram of a current-mode circuit300embodying the present invention, being an expanded implementation of the current-mode circuits100and200. The current-mode circuit300comprises first to fourth switch units10A,10B,10C and10D, an optional fifth switch unit10E, an adjustment circuit320and a shared current source IREF. Looking back toFIG.3A, the switch units10A to10E correspond to the switch unit10, the adjustment circuit320corresponds to the adjustment circuit120, and the shared current source IREFcorresponds to the current source IREF, and they are thus denoted by similar reference signs. For consistency with the switch unit10ofFIG.3A, the switch unit10A comprises a field-effect transistor SA connected at its source terminal in series with a variable impedance R1A, R2A. In line withFIG.4, the variable impedance is implemented as a first resistor R1Ain parallel with a second resistor R2A, with the first resistor R1Aimplemented as a polysilicon resistor (of fixed resistance) and the second resistor R2Aimplemented as a transistor. The field-effect transistor SA and the variable impedance R1A, R2Aare further connected in series with the shared current source IREFwhich provides a defined current IREF, via a shared measurement node M. The switch units10B,10C,10D, and10E are each configured in the same way as the switch unit10A, with like elements denoted in the same way but with the suffix A replaced with B, C, D or E depending on the switch unit. As such, duplicate description is omitted. All of the switch units10A,10B,10C,10D, and10E share the shared measurement node M and the shared current source IREF, so that the shared measurement node M may be considered a tail node comparable to the common node TAIL inFIG.2. First and second output nodes OUTA and OUTB are provided, for comparison withFIG.2. The output node OUTA is connected to the drain terminals of the field-effect transistors SA and SB, and the output node OUTB is connected to the drain terminals of the field-effect transistors SC and SD. The shared current source IREFis connected between the shared measurement node M and ground GND. The adjustment circuit320comprises a comparator322, a digital engine (digital circuit or digital unit)324, a switch controller326and DACs A to E corresponding to the switch units10A to10E, respectively. The DACs A to E are configured to output voltage signals VAto VE, respectively, under control by the digital engine324. The voltage signals VAto VEcontrol the second resistors R2Ato R2E(implemented as field-effect transistors), respectively, to control their on-resistances. The switch controller326is configured to output gate (voltage) signals GA to GE, for controlling the gate terminals of the field-effect transistors SA to SE, respectively, under control by the digital engine324. It is recalled that the switch unit10E is optional, and thus the DAC E, voltage signal VEand gate signal GE may be considered similarly optional. 
The adjustment circuit320further comprises DACs I and R, corresponding to the shared current source IREFand the comparator322, respectively. The DAC I is configured to output a voltage signal VIto control the shared current source IREFand thus a value of the given current IREF. As will become apparent, in some arrangements the shared current source IREFmay be configured to provide the given current IREFhaving a default (and non-variable) value, in which case the DAC I need not be provided. The DAC R is configured to output a reference voltage signal VREFto be provided to one of the input terminals of the comparator322, with the other input terminal of the comparator322connected to receive a measurement voltage VMprovided at the shared measurement node M. As will become apparent, in some arrangements the comparator322may be provided with a reference voltage VREFhaving a default (and non-variable) value, in which case the DAC R also need not be provided. Also indicated inFIG.5are optional comparator circuitry330(marked “Compare”) and an optional calibrated current source340. The calibrated current source340provides a calibrated current ICAL. The comparator may be considered separate from the adjustment circuit320or part of the adjustment circuit320. As will become apparent, the comparator circuitry330may be used to compare the calibrated current ICALwith the given current IREF, with an output comparison-result signal COM being used by the digital engine324to control the voltage signal VIand thus adjust (or tune) the given current IREFto become the same (within a degree of accuracy) as the calibrated current ICAL. Where the given current IREFis tuned in this way, the switch unit10E, the DAC E and their associated control signals VEand GE may be provided. Operation of the current-mode circuit300may be understood in connection withFIGS.6to10. FIG.6is a schematic diagram of a method400for use in calibrating the given current IREFto be the same as the calibrated current ICAL. Thus, method400assumes that the switch unit10E, the DAC E and their associated control signals VEand GE are provided. As above, where the shared current source IREFis configured to provide the given current IREFhaving a default (and non-variable) value, the switch unit10E, the DAC E and their associated control signals VEand GE need not be provided, and the method400need not be carried out. Method400comprises steps S402, S403, S404, S405, S406and S407, and may be carried out by the adjustment circuit320(and the comparator circuitry330). At step S402, the digital engine324controls the switch controller326to set the gate signals GA to GE such that field-effect transistor SE is ON and field-effect transistors SA to SD are OFF. Thus, the given current IREFis carried by the switch unit10E. At step S403, the digital engine324provides the DACs E and I with default (digital) values, e.g. midscale values, so that the variable impedance R1E, R2Eadopts a default resistance value and the given current IREFsimilarly adopts a default value. At step S404, the comparator circuitry330is employed to compare the calibrated current ICALwith the given current IREF, as mentioned above, with the output comparison-result signal COM being provided to the digital engine324so that it can determine if the calibrated current ICALis the same as the given current IREF. 
If the calibrated current ICALis not the same as the given current IREF(S405, NO), the method400proceeds to step S406where the digital engine324adjusts the digital value provided to the DAC I, based on the comparison-result signal COM, to bring the given current IREFcloser to the calibrated current ICAL. Steps S404, S405, NO and S406are then repeated until the calibrated current ICALis the same as the given current IREF(S405, YES), for example to one 1LSB change in the DAC I, in which case the method400proceeds to step S407. In step S407, the existing digital value provided to the DAC I is set or recorded as being the calibrated DAC I value, i.e. the value which causes the given current IREFto be the same as the calibrated current ICAL. FIG.7is a schematic diagram of parts500of the current-mode circuit300ofFIG.5. In particular,FIG.7presents an example detailed implementation of the comparator circuitry330to aid in an understanding of the current-mode circuit300and the operation of method400. Comparator circuitry330comprises nodes331,333,335,337and339, a capacitor332, a switch334, a comparator336and two input terminals T1 and T2. The calibrated current source340is shown implemented as a transistor connected to apply the calibrated current ICALat input T1, and the shared current source IREFis shown implemented as a transistor and connected to apply the given current IREFat the input T2. Also shown inFIG.7is the digital engine324connected to receive the signal COM from an output terminal T3 of the comparator circuitry330(and of the comparator336) and to output a digital control signal to the DAC I, which in turn outputs the voltage signal VIto control the shared current source IREFin line withFIG.5. It will be understood that the digital engine324is also capable of generating other control signals as inFIG.5; however, those other control signals are omitted here for simplicity. The two inputs T1 and T2 are connected to node331, which may be considered a test node. The test node331is connected to one of the input terminals of the comparator336and the nodes333and335, and the other input terminal of the comparator336is connected to nodes337and339and a voltage source (not shown) to maintain that node at a target voltage level (Vcm). The capacitor332and the switch334are connected in parallel with one another between the two input terminals of the comparator336, with the capacitor332connected between nodes333and337, and the switch334connected between nodes335and339. In operation of the comparator circuitry330, the switch334is turned ON or closed (for example by the digital engine324or other control circuitry not shown), which connects the node335, and therefore the node331, to the node339which is held at the target voltage level (Vcm). Thus, the capacitor332is discharged and the test node331is biased to the target voltage level (Vcm). The switch334is then turned OFF or opened (for example by the digital engine324or other control circuitry not shown) and the difference between the currents at T1 and T2 (connected at the node331) will start integrating over (i.e. charging, positively or negatively) the capacitor332. Depending on the difference between the currents at T1 and T2, a voltage at the node331will move up or down. 
After a given test period (a time period chosen to be suitable for the capacitor332to be charged to a sufficient extent), the output of the comparator336will be high or low depending on the difference between the currents at T1 and T2 (which leads to a difference between the voltages at its two inputs). The comparator336thus outputs control signal COM at the output T3 (which is either high or low depending on the difference between the currents at T1 and T2) to the digital engine324. The digital engine324is configured to receive the control signal COM and to output a digital control signal for causing DAC I to adjust its voltage signal VIto adjust the given current IREFin line with steps S404, S405and S406ofFIG.6. This process may be iterated, for example in a successive approximation way (e.g., binary search). The process may be iterated until the output of the comparator336(the control signal COM) changes state (i.e. changes from low to high or vice versa) with one 1LSB change in the DAC I, for example. At that point, the currents at T1 and T2 are deemed to be calibrated to be in a defined relationship with each other (for example equal) to within the required accuracy. For example, the difference between the currents is then less than a threshold current difference. The test period (the length of time that the capacitor is allowed to charge) can be increased or decreased depending on the desired accuracy/resolution vs. speed of operation of the comparator circuitry330. FIG.8is a schematic diagram of a method600for use in calibrating the value of the reference voltage signal VREF. Thus, method600assumes that the DAC R is provided. As above, where the reference voltage signal VREFis configured to have a default (and non-variable) value, the DAC R also need not be provided and the method600need not be carried out. Method600comprises steps S602, S603, S604, S605, S606and S607, and may be carried out by the adjustment circuit320. At step S602, the digital engine324controls the switch controller326to set the gate signals GA to GE such that field-effect transistor SE is ON and field-effect transistors SA to SD are OFF. Thus, the given current IREFis again carried by the switch unit10E. At step S603, the digital engine324provides the DACs R and E with default (digital) values, e.g., midscale values, so that the variable impedance R1E, R2Eadopts a default resistance value and the reference voltage signal VREFsimilarly adopts a default value. The digital engine324also provides the DAC I with its calibrated value, assuming that method400has been carried out, so that the given current IREFis the same as the calibrated current ICAL. At step S604, the comparator322is employed to compare the reference voltage signal VREFwith the measurement voltage VM, which relates to the switch unit10E since the other switch units are OFF. An output comparison-result signal is provided from the comparator322to the digital engine324as indicated so that it can determine if the reference voltage signal VREFis the same as the measurement voltage VM. If the reference voltage signal VREFis not the same as the measurement voltage VM(S605, NO), the method600proceeds to step S606where the digital engine324adjusts the digital value provided to the DAC R, based on the comparison-result signal from the comparator322, to bring the voltage signal VREFcloser to the measurement voltage VM. 
Steps S604, S605, NO and S606are then repeated until the reference voltage signal VREFis the same as the measurement voltage VM(S605, YES), for example to one 1LSB change in the DAC R, in which case the method600proceeds to step S607. In step S607, the existing digital value provided to the DAC R is set or recorded as being the calibrated DAC R value, i.e. the value which causes the reference voltage signal VREFto be the same as the measurement voltage VM. In this way, the reference voltage signal VREFmay be taken to have a calibrated value. FIG.9is a schematic diagram of a method700for use in calibrating one or all of the switch units10A to10D. Method700comprises steps S701to S708, and may be carried out by the adjustment circuit320. At step S701, the digital engine324sets the value of a variable X to any one of A, B, C and D, to enable the switch units10A to10D to be calibrated one-by-one. Of course, if only one of the switch units is to be calibrated, the variable X may be fixed accordingly. For convenience of explanation, it will be assumed that all of the switch units10A to10D are to be calibrated, and that, in the first instance of step S701, the digital engine324sets the value of the variable X to A (to calibrate switch unit10A first), and the first pass through steps S702to S708will be described accordingly. At step S702, the digital engine324controls the switch controller326to set the gate signals GA to GE such that field-effect transistor SA is ON and field-effect transistors SE and SB to SD are OFF. Thus, the given current IREFis carried by the switch unit10A. At step S703, the digital engine324provides the DACs R and I with their calibrated values, assuming that methods400and600have been carried out, so that the reference voltage signal VREFhas its calibrated value and the given current IREFis the same as the calibrated current ICAL. The digital engine324also provides the DAC A with a default (digital) value, e.g., a midscale value, so that the variable impedance R1A, R2Aadopts a default resistance value. At step S704, the comparator322is employed to compare the measurement voltage VM(which relates to the switch unit10A since the other switch units are OFF) with the reference voltage signal VREF. An output comparison-result signal is provided from the comparator322to the digital engine324as indicated so that it can determine if the measurement voltage VMis the same as the reference voltage signal VREF. If the measurement voltage VMis not the same as the reference voltage signal VREF(S705, NO), the method700proceeds to step S706where the digital engine324adjusts the digital value provided to the DAC A, based on the comparison-result signal from the comparator322, to bring the measurement voltage VMcloser to the voltage signal VREF. Steps S704, S705, NO and S706are then repeated until the measurement voltage VMis the same as the reference voltage signal VREF(S705, YES), for example to one 1LSB change in the DAC A, in which case the method700proceeds to step S707. In step S707, the existing digital value provided to the DAC A is set or recorded as being the calibrated DAC A value, i.e., the value which causes the predetermined property of the switch unit10A to have a calibrated value, and to be the same as that of the switch unit10E. The method then proceeds to step S708, where it is checked to see if the predetermined properties of all of the field-effect transistors SA to SD (i.e. of all of the switch units10A to10D) that are intended to be calibrated have been calibrated.
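Purely to illustrate the looping structure of method700(which mirrors the compare-and-adjust loop of method600), the following sketch walks the switch units one-by-one, turns the selected unit ON with the others OFF, and adjusts that unit's dedicated DAC until the measurement voltage VMmatches the reference voltage signal VREF. The helper callables (set_gates, measure_vm), the tolerance and the step-wise adjustment polarity are assumptions made for the sketch, standing in for the hardware interactions and the 1LSB criterion described above.

def calibrate_switch_units(units, set_gates, measure_vm, v_ref, n_bits=8, tol=1e-6):
    calibrated = {}
    for x in units:                          # step S701: pick the next unit, e.g. "A", "B", "C", "D"
        set_gates(only_on=x)                 # step S702: unit X ON, the other switch units OFF
        code = 2 ** (n_bits - 1)             # step S703: start the unit's DAC at a midscale value
        for _ in range(2 ** n_bits):         # steps S704-S706: compare VM with VREF and adjust
            vm = measure_vm(x, code)
            if abs(vm - v_ref) <= tol:       # assumed tolerance standing in for the 1LSB criterion
                break
            code += 1 if vm < v_ref else -1  # assumed polarity: a larger code raises VM
        calibrated[x] = code                 # step S707: record the calibrated DAC value for unit X
    return calibrated                        # step S708: all requested units have been calibrated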
If not (S708, NO), the method returns to step S701where the digital engine324sets the value of variable X to a new one of A, B, C and D, and then passes through steps S702to S708again. For example, for the second pass through steps S702to S708, the value of the variable may be set to B, with it being set to C and then D for third and fourth passes, respectively. Once all of the switch units10A to10D that are intended to be calibrated have been calibrated (S708, YES), the method ends. In this way, the method700may obtain calibrated values for the DACs B to D (as well as for DAC A) which cause the predetermined properties of all of the field-effect transistors SA to SD (i.e. of all of the switch units10A to10D) to have calibrated values, and to be the same as one another (within the 1LSB DAC accuracy) and as that of the calibrated field-effect transistor SE (i.e. the switch unit10E). FIG.10is a schematic diagram of a method800for use in configuring the current-mode circuit300for calibrated operation. Method800comprises steps S801and S802and may be carried out by the adjustment circuit320. At step S801, the digital engine324provides the DACs A to D with their calibrated values, so that the predetermined properties of all of the field-effect transistors SA to SD (i.e. of all of the switch units10A to10D) have their calibrated values, and are the same as one another (within the 1LSB DAC accuracy). The DAC I is also provided with its calibrated value, assuming that method400has been carried out. At step S802, which corresponds to calibrated operation, the digital engine324controls the switch controller326to control the gate signals GA to GD so that the current-mode circuit300carries out its intended function. For example, when the current-mode circuit300is for use with a DAC in line with the differential switching circuit (or current-mode circuit) ofFIG.2, the gate signals GA to GD may be controlled by a thermometer-coded signal (dependent on data supplied to the DAC) so that when one of the field-effect transistors SA to SD is ON, the others are OFF, and so that from each clock cycle to the next (in synchronisation with which the gate signals change state), the field-effect transistor which is ON turns OFF and one of the field-effect transistors which are OFF turns ON, so that any distortion generated by the switching is data independent. Looking back toFIG.5, it is recalled that the calibrated current source340provides a calibrated current ICAL. The calibrated current source340may itself be calibrated, to calibrate the calibrated current ICAL, by virtue of a technique disclosed in EP3618282A1, the entire contents of which are incorporated herein by reference. The technique disclosed in EP3618282A1 enables a plurality of output current sources with (currents having) different magnitudes to be calibrated, so that the relationship in magnitude between (the currents of) each output current source is calibrated as well. The technique involves using a reference or ‘golden’ current source to calibrate a plurality of candidate current sources, and then using this plurality of candidate sources to further calibrate output current sources for use in specific applications. For example, one of those output current sources (once calibrated) may be used as the calibrated current source340. FIG.11is a schematic diagram of calibration circuitry850, being a simplified version of the calibration circuitry300of FIG. 5 of EP3618282A1.
For simplicity of explanation, a single ‘golden’ current source820, four candidate current sources CCS1 to CCS4 and output current sources OCS801to806will be considered. As described in EP3618282A1, any number of candidate current sources or output current sources may be used. In the following example, it is assumed that each output current source should output a current that is four times as large as the next output current source, i.e., with the first output current source OCS801having an output current Im that is four times as large as the second output current I4 from a second output current source OCS802and so on and so forth. Such an application may be of use in a segmented DAC. This may be summarized as follows: Im=4*I4=16*I3=64*I2=256*I1=1024*I0. In the present example, the four candidate current sources CCS1 to CCS4, each outputting a candidate current CC1 to CC4 respectively, are firstly calibrated against the ‘golden’ current source820, outputting a golden current Ic. This is achieved by sequentially comparing each candidate current source with the ‘golden’ current source using comparator circuitry830. Comparator circuitry830may be configured in the same way as the comparator circuitry330, and operate in a similar manner, so that duplicate description may be omitted. The control signal COM output by the comparator circuitry830may be used to adjust the candidate currents output by each candidate current source. For example, candidate current source CCS1 outputs candidate current CC1 which may be compared with the ‘golden’ current Ic. Candidate current source CCS1 may then be adjusted by virtue of its control signal B1 (which may be a signal for controlling a variable impedance R where the current sources are implemented as switch units in line with the switch units10described earlier herein), using the control signal COM, such that the candidate current CC1 and the ‘golden’ current Ic are substantially equal. This process may be repeated for each candidate current source in turn. Next, a first output current source OCS801is calibrated. The four calibrated candidate current sources CCS1 to CCS4 are summed by connecting their outputs together and are collectively compared to the first output current Im of the first output current source OCS801(i.e. CC1+CC2+CC3+CC4 is compared with Im). The first output current source OCS801is configured, in this example, to have an output current Im of 4*Ic, that is four times the ‘golden’ current. The first output current source OCS801is adjusted by virtue of its control signal B-MSB using the control signal COM output by the comparator circuitry830until the first output current Im and the combined candidate currents (CC1+CC2+CC3+CC4) are substantially equal. Again, the control signal B-MSB may be a signal for controlling a variable impedance R where the current source OCS801is implemented as a switch unit in line with the switch units10described earlier herein. Next, in order to calibrate the remaining output current sources, the second output current I4 of a second output current source OCS802is compared to the current of a single candidate current source (e.g. CC1), and adjusted until the current I4 equals that candidate current CC1. This results in the second output current I4 having a magnitude ¼ of that of the first output current Im, since only one of the four candidate currents is used for the comparison.
The output COM of the comparator circuitry830is used as the control signal based on which the control signal B-LSB4 of the second output current source OCS802is adjusted until the second output current I4 is substantially equal to the candidate current CC1. Next, the calibrated second output current I4 is compared to the sum of all four candidate currents (i.e. CC1+CC2+CC3+CC4 is compared with I4). However, this time, the output COM of the comparator circuitry830is used as the basis for adjusting the candidate current sources collectively, via a collective control signal input GV (which may be a gate voltage signal), such that their combination is substantially equal to the calibrated second output current I4. This step effectively reduces the magnitude of the candidate currents by a factor of four, since the calibrated second output current I4 was previously made substantially equal to only one candidate current. Similar steps are then undertaken for the remaining output current sources. For example, the third output current I3 from the third output current source OCS803is compared to a single candidate current (i.e. CC1) and the control signal B-LSB3 adjusted until I3=CC1. Once substantially equal, the calibrated third output current I3 is compared to the sum of all four candidate currents (i.e. CC1+CC2+CC3+CC4 is compared with I3), and the candidate current sources are collectively adjusted using collective control input GV, so that their sum is substantially equal to the calibrated third output current I3. This, again, effectively reduces the magnitude of the candidate currents by a factor of four since the calibrated third output current I3 was previously made substantially equal to only one candidate current. In a similar manner, I2, I1 and I0 may be calibrated. The result is calibrated output currents with a defined relationship (1:4) one to the next. As above, any one of the calibrated output current sources may be used as the calibrated current source340. As mentioned earlier, the use of the impedances R in the switch units10enables calibration of the switches (field-effect transistors) S without, for example, needing to control a gate or bulk voltage. This technique has particular application where the field-effect transistors S are FinFET transistors, i.e. fin field-effect transistors. Instead of controlling the Vth of the switch transistors S through their bulk, the calibration of mismatch between clock switches S (seeFIG.5) is achieved through regulating the tail voltage Vm of the switch S with a voltage-controlled resistor R1, R2. In the case of a DAC having multiple DAC slices (each corresponding toFIG.5, and to a current source2and differential switching circuit4pair inFIG.1), clock switches S of each DAC slice may be calibrated by comparing the tail voltage Vm of the slice versus the adjusted reference tail voltage VREFfor that slice using a dedicated calibration DAC (e.g. DAC A inFIG.5for switch SA). This may be carried out until the tail voltage Vm for the switch S concerned (e.g. SA) becomes equal to the adjusted tail reference voltage VREFwithin 1LSB precision in the calibration DAC (e.g. DAC A). Effectively the overall resistance R1, R2 (for each switch S) is adjusted to compensate for the mismatch on the switches S. The digital calibration engine324may determine whether to increment or decrement the calibration DAC (e.g.
DAC A, for switch SA) in a binary search to find the right calibration code until the tail voltage Vm of that switch S becomes equal to the adjusted tail reference voltage VREFwithin 1LSB precision. This calibration across the switches SA to SD reduces switching delay mismatch and in turn improves the linearity of the overall DAC. Incidentally, although the current-mode circuits have been described herein in relation to DAC functionality, it will be appreciated that they may also be used within an ADC.FIG.12is a schematic circuit diagram of a four-phase (i.e. multiphase) current-mode (current-steering) sampler900, which corresponds to the sampler42of FIG. 10 of EP-A1-2211468, the entire contents of which are incorporated herein by reference. The sampler forms the front-end of an ADC. The sampler900is configured to receive a differential input current signal, modeled as a current source IIN whose magnitude varies with the input signal. For differential signaling, sampler900effectively has two matching (or corresponding or complementary) sections954and956for the two differential inputs. Accordingly, there is a first set of output streams IOUTAto IOUTDin section954and a second set of matching output streams IOUTBAto IOUTBDin section956, where IOUTB denotes the complementary (bar) counterpart of IOUT, and wherein IOUTAis paired with IOUTBA, IOUTB is paired with IOUTBB, and so on and so forth. Focusing on the first section954by way of example (because the second section956operates analogously to the first section954), there are provided four n-channel FETs958Ato958D(i.e. one per stream or path) with their source terminals connected together at a common tail node960. The aforementioned current source IINis connected between common tail node960and an equivalent common tail node966of section956. A further current source IDC962is connected between the common tail node960and ground supply, and carries a constant DC current IDC. The gate terminals of the four transistors958Ato958Dare driven by four clock signals θ0to θ3, respectively, provided from a VCO (not shown). As mentioned above, section956is structurally similar to section954and thus comprises transistors964A to964D, common tail node966and current source IDC968. The clock signals θ0to θ3are assumed to be time-interleaved raised cosine waveforms provided as four voltage waveforms from the VCO. The use of four clock signals in the present case is due to a four-way-interleaving design of ADC circuitry described in more detail in EP-A1-2211468, but it will be appreciated that this is not essential. Clock signals θ0to θ3are 90° out of phase with one another, such that θ0is at 0° phase, θ1is at 90° phase, θ2is at 180° phase, and θ3is at 270° phase. The effect of sampling circuitry900, under control of clock signals θ0to θ3, is that the output currents IOUTAto IOUTDare four trains (or streams) of current pulses, the series of pulses in each train having the same period as one of the clock signals θ0to θ3, and the pulses of all four trains together being time-interleaved with one another as an effective overall train of pulses at a quarter of the period of one of the clock signals (or at four times the sampling frequency of one of the clock signals). By comparing the current-mode circuit300with, for example, the first section954, it will be understood that each of the field-effect transistors958Ato958Dmay be replaced by a switch unit corresponding to switch unit10A, and that the tail node960may serve as a measurement node M for use with the adjustment circuit320.
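As a rough behavioural illustration (not a description of the actual circuit), the following sketch models the four-way interleaving just described: four raised-cosine clock waveforms at 0°, 90°, 180° and 270° steer an input current into four output streams, each stream pulsing once per clock period, with the four pulse trains time-interleaved so that together they form an effective pulse train at four times the rate of any single clock. The waveform model, the constant input current and the numerical values are assumptions made only for the sketch.

import math

def raised_cosine(phase_deg, t, f_clk):
    # Raised-cosine clock waveform between 0 and 1 at the given phase offset.
    return 0.5 * (1.0 + math.cos(2.0 * math.pi * f_clk * t - math.radians(phase_deg)))

def sample_streams(i_in, f_clk=16e9, n_points=32):
    # Steer the (here constant) input current into four output streams according to
    # the four clock phases; the four weights always sum to the same total, so the
    # input current is divided among the streams at every instant.
    phases = [0.0, 90.0, 180.0, 270.0]
    dt = 1.0 / (f_clk * n_points)        # n_points samples per clock period
    streams = {p: [] for p in phases}
    for k in range(n_points):
        t = k * dt
        weights = [raised_cosine(p, t, f_clk) for p in phases]
        total = sum(weights)
        for p, w in zip(phases, weights):
            streams[p].append(i_in * w / total)
    return streams

streams = sample_streams(i_in=1e-3)      # each stream peaks once per period, offset by 90 degrees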
Thus, such switch units may be calibrated using the techniques described earlier herein. The second section956may be configured and calibrated in a similar fashion. Such an arrangement embodies the present invention. FIG.13is a schematic diagram of a DAC1000embodying the present invention. The DAC1000comprises any of the current-mode circuits100,200,300disclosed herein (including a modified version of the sampler900as described above). The DAC1000may output an analogue signal, as shown, based on an input digital signal. FIG.14is a schematic diagram of an ADC2000embodying the present invention. The ADC2000comprises any of the current-mode circuits100,200,300disclosed herein (including a modified version of the sampler900as described above). The ADC2000may output a digital signal, as shown, based on an input analogue signal. Any of the circuitry disclosed herein may be implemented as integrated circuitry or as an integrated circuit, for example as (or as part of) an IC chip, such as a flip chip. FIG.15is a schematic diagram of integrated circuitry3000embodying the present invention. The integrated circuitry3000may comprise the DAC1000and/or the ADC2000and/or any of the current-mode circuits100,200,300disclosed herein (including a modified version of the sampler900as described above). Integrated circuitry3000may be representative of some or all of an IC chip. The present invention extends to integrated circuitry and IC chips as mentioned above, circuit boards comprising such IC chips, and communication networks (for example, internet fiber-optic networks and wireless networks) and network equipment of such networks, comprising such circuit boards. In any of the above aspects, the various features may be implemented in hardware, or as software modules running on one or more processors/computers. The invention also provides a computer program or a computer program product comprising instructions which, when executed by a computer, cause the computer to carry out any of the methods/method steps described herein, and a non-transitory computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out any of the methods/method steps described herein. A computer program embodying the invention may be stored on a non-transitory computer-readable medium, or it could, for example, be in the form of a signal such as a downloadable data signal provided from an Internet website, or it could be in any other form. The present invention may be embodied in many different ways in the light of the above disclosure, within the spirit and scope of the appended claims. | 47,172 |
11863170 | DETAILED DESCRIPTION Aspects of the present disclosure relate to unwanted peak reduction in equalizers. In some communication systems, such as Ethernet networking communication, high speed digital data is transmitted over differential pairs of wires. Differential pairs of wires present a communication channel in which the signal loss or attenuation increases as the frequency of the signal increases. A serializer/deserializer (SerDes) pair includes a serializer located at the transmitter and a deserializer located at the receiver, where the serializer converts parallel data into serial data for transmission over the communication channel (e.g., over the differential pair of wires) and the deserializer converts the serial data back into parallel data at the receiver. An equalizer compensates for distortions introduced by the communication channel. In particular, a SerDes includes an equalizer (or separate equalizers at the transmitter and at the receiver) to boost (or amplify) the signal in a frequency-dependent manner, where higher frequency signals are boosted or amplified more than lower frequency signals. As a result, under ideal circumstances, the equalizer perfectly compensates for the frequency-dependent attenuation caused by the communication channel. In other words, assuming a perfect equalizer, the combination of the communication channel and the equalizer has a uniform or flat (equalized) frequency response. In practice, the magnitude of the frequency-dependent loss or attenuation depends on the length of the channel (e.g., the length of the wires), where longer channels result in higher loss or greater attenuation. Therefore, in order for the equalizer to be able to support a wide range of operating conditions (e.g., different network cable lengths), the equalizer may be programmable or tunable to match the particular frequency-dependent attenuation caused by the communication channel. As a specific example, a tunable equalizer can provide a different boost to higher frequency signals depending on the length of the network cable plugged into a network device that includes the serializer or deserializer. FIG.1Ais a circuit diagram illustrating one example of an equalizer that may be included in a SerDes. In the example shown inFIG.1A, a first negative (n)-type metal oxide semiconductor field effect (NMOS) transistor101and a second NMOS transistor102form a differential pair. While the example shown inFIG.1Auses NMOS transistors in the differential pair, the disclosure is not limited thereto and may similarly be implemented using positive (p)-type metal oxide semiconductor field effect transistors (PMOS) or other types of transistors such as bipolar junction transistors (BJT) and junction-gate field-effect transistors (JFET). The gate electrodes of the first NMOS transistor101and the second NMOS transistor102are connected to respective equalizer inputs to the equalizer100, including a non-inverting input labeled Vin+ and an inverting input Vin− (or vinp and vinm), and correspond to the differential pair of wires that transmit the signal that is to be equalized by the equalizer100.
Each NMOS transistor is connected with a corresponding current source (first current source111and second current source112) at the source electrode of the NMOS transistor and a corresponding load (first load121and second load122) at the drain electrode of the NMOS transistor, where the load, the NMOS transistor, and the current source are connected in series between a voltage source (Vdd) and ground. A degeneration resistance130(represented by a single resistor with a switch) is connected between the source electrodes of the NMOS transistors101and102(resistive degeneration) to set the gain or boost of the equalizer100. The degeneration resistance130may include a resistor bank that includes, for example, a plurality of resistors connected in parallel, where the switches may connect or disconnect particular resistors to the source electrodes of the NMOS transistors101and102, thereby allowing the resistance between the source electrodes of the NMOS transistors101and102to be varied in accordance with the setting of the switches. Likewise, a degeneration capacitance140(represented by a capacitor with a switch) is also connected between the source electrodes of the NMOS transistors (capacitive degeneration) to set the shape of the peaking effect (e.g., the frequency-dependent boost of the equalizer100). The capacitor bank may similarly include a plurality of capacitors arranged in parallel, such that individual capacitors can be connected or disconnected to the source electrodes of the NMOS transistors to vary the capacitance between the source electrodes. As such, by selecting different combinations of resistors and capacitors in the degeneration resistance130and the degeneration capacitance140, the equalizer100can be programmed or configured to various boost settings or equalization settings. For channels which require no equalization, the degeneration capacitance140is turned off (e.g., by disconnecting the capacitance using the switch). The output of the equalizer100is read from the drain electrodes of the NMOS transistors as a pair of voltage signals—a negative output signal Vout− (or voutm) and a positive output signal Vout+ (or voutp). FIG.1Bis a graph depicting the frequency-dependent boosts provided by an equalizer at a plurality of different boost settings (or equalization settings), where each curve corresponds to a different setting. As shown inFIG.1B, the equalizer provides substantially no boost (e.g., zero gain) at low frequencies (e.g., below about 1 GHz) and steadily increases the boost with increasing frequency of input signals, with a peak at about 20 GHz and a quickly decreasing boost after the peak. While some comparative equalizer circuits such as that shown inFIG.1Acan provide 15 dB or more boost, these equalizer circuits cannot be tuned to a low value such as 0 dB (no boost gain) due to parasitic capacitances arising from current sources, switches used to program the degeneration capacitors, and metal routing. These parasitic capacitances are represented inFIG.1Aby aggregated parasitic capacitance values Cpar. As shown inFIG.1B, even at the lowest boost setting (with the degeneration capacitance140disconnected), the parasitic capacitance Cpar causes a non-zero boost or unwanted peak 150 at 20 GHz, such that the range of boost values at about 20 GHz, as shown inFIG.1B, is between about 2 and 8. As a more general explanation of this phenomenon,FIG.1Cdepicts an analog amplifier160with resistance R in the degeneration.
As shown inFIG.1C, an input signal (Vin+) to the amplifier is applied to the gate electrode of an NMOS transistor161. The drain electrode of the NMOS transistor161is connected to a voltage source Vdd through a load163. The source electrode of the NMOS transistor161may be referred to as the degeneration node (or degeneration) of the NMOS transistor161. A current source165is connected in parallel with the resistance R167between the degeneration node and ground. The effective transconductance (Gm) of this analog amplifier160is given by: Gm=gm/(1+gm·R), where gm is the transconductance of the NMOS transistor161. However, the circuit shown inFIG.1Cdoes not account for parasitic capacitance arising from the physical structure of the circuit (e.g., due to the current source165and due to metal routing). FIG.1Ddepicts an analog amplifier with resistance R in the degeneration and explicitly showing parasitic capacitance Cpar at the degeneration node. As shown inFIG.1D, an input signal (Vinp) to the amplifier is applied to the gate electrode of an NMOS transistor171. The drain electrode of the NMOS transistor171is connected to a voltage source Vdd through a load173. The source electrode of the NMOS transistor171may be referred to as the degeneration node (or degeneration) of the NMOS transistor171. A current source175is connected in parallel with the resistance R177between the degeneration node and ground. The parasitic capacitance Cpar of the circuit is collectively shown as capacitance C179, connected between the degeneration node and ground. In the presence of this parasitic capacitance C, the effective transconductance is: Gm=gm·(1+R·jωC)/(1+gm·R+R·jωC), where j represents the imaginary unit √(−1), ω is the angular frequency of the signal, and gm is the transconductance of the NMOS transistor171. The additional pole-zero pair arising from the parasitic degeneration capacitor179causes a peaking in the frequency response of the amplifier170. Therefore, referring back toFIG.1AandFIG.1B, the parasitic capacitance Cpar limits the programmable range of boost values of this equalizer and therefore limits the range of channels (e.g., frequencies) and channel lengths that can be supported by a receiver in a communication system due to the equalizer being unable to correctly compensate for the distortion introduced by the channel. For example, this unwanted peaking can over-equalize (e.g., boost portions of the signal that do not need to be boosted), thereby leading to signal distortion, which is contrary to the desired goal of using the equalizer to remove distortion. Therefore, aspects of the present disclosure relate to an equalizer circuit that can be tuned to low boost values such as 0 dB, which is especially helpful in the case of short reach channels (e.g., shorter cables having less attenuation). In more detail, an equalizer circuit according to the present disclosure includes a main transconductance (gm) cell that provides the programmable boost to higher frequency signals and an additional gm cell or replica gm cell that is a substantial replica of the main gm cell, but with capacitive degeneration in the opposite direction to the main gm cell. As a result, by adding the output of the main gm cell (the main stage output) and the output of the additional gm cell (the replica stage output), only high frequency gain is attenuated and therefore the low frequency gain of the entire equalizer is maintained at a low boost value such as 0 dB.
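The lift in high-frequency gain introduced by the parasitic degeneration capacitance can be illustrated numerically. The short Python sketch below evaluates the magnitude of the degenerated transconductance expression given above relative to its value without parasitic capacitance; the element values are arbitrary assumptions chosen only to make the pole-zero behaviour visible, not values taken from the disclosure.

import math

gm, R, C = 20e-3, 100.0, 200e-15   # assumed values: transconductance (S), degeneration resistance (ohm), parasitic capacitance (F)

def gm_degenerated(f):
    # Effective transconductance with the parasitic degeneration capacitance C (expression above).
    jw = 1j * 2 * math.pi * f
    return gm * (1 + jw * R * C) / (1 + gm * R + jw * R * C)

gm_dc = gm / (1 + gm * R)          # degenerated transconductance with no parasitic capacitance
for f in (1e8, 1e10, 5e10):
    boost_db = 20 * math.log10(abs(gm_degenerated(f)) / gm_dc)
    print("f = %6.1f GHz : boost = %5.2f dB" % (f / 1e9, boost_db))
    # ~0 dB at low frequency, rising towards 20*log10(1 + gm*R) at high frequency,
    # which is the unwanted boost that the peaking at the output exhibits.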
In more detail, in some examples of the present disclosure, the additional gm cell (or arm) is substantially identical to the main gm cell. The capacitive degeneration present in the additional gm cell is fixed at a minimum value to cancel the peaking that arises from the parasitic capacitance of the main gm cell. For longer channels (e.g., longer wires), the degeneration capacitor in the main gm cell is switched to a higher value while keeping the capacitance in the additional gm cell at the minimum setting. Accordingly, an equalizer circuit according to the present disclosure can provide a wider range of boost values such that it is suitable for use with a wider range of channels (e.g., different frequencies and different reach of the channels or different lengths of the differential wires). Technical advantages of the present disclosure include, but are not limited to, cancelling high frequency unwanted peaking that arises only from parasitic capacitance, thereby attaining a flat frequency response for a minimum boost setting. This makes it possible to use a minimum boost setting for very low loss channels (e.g., very short reach channels or short network cables). An equalizer that can provide a full range of boost values or boost settings from 0 dB (or a substantially flat response over the working frequency range) to a high boost (e.g., 15 dB) provides a technical advantage in that it allows the same equalizer design to be used in a wide range of use cases. For example, the same equalizer circuit can be used to equalize long reach channels (e.g., network cables extending across a data center or between buildings) as well as short reach channels (e.g., network cables connecting between devices on a same physical rack). As such, an equalizer circuit according to the present disclosure can replace multiple different equalizer designs that are tailored for different specific use cases (e.g., different equalizer circuits for different channel lengths) or different communications hardware (e.g., different ports on a network switch) that are tailored for specific use cases (e.g., long reach connections versus short reach connections through the inclusion of an equalizer or exclusion of an equalizer, respectively). Accordingly, another advantage of the wide boost range of an equalizer according to the present disclosure is the reduction in the number of different integrated circuits that need to be produced for different use cases, thereby reducing waste and reducing planning requirements (e.g., avoiding having to forecast which particular equalizer designs will be in higher demand) and also increasing flexibility in the design of communication systems (e.g., allowing the same ports on communication devices such as network switches and routers to be used for short reach channels as well as long reach channels). Another technical advantage is that the use of the additional gm cell or replica cell can be leveraged to cancel high frequency noise in the input signal. A further technical advantage of the present disclosure is that it avoids the trade-off made by comparative techniques for reducing peaking, which involve a degradation of the gain-bandwidth product or an equalizer that does not work well across process corners. In particular, such comparative techniques require tedious analog simulations because these techniques are very sensitive to semiconductor manufacturing process variations.
In contrast, the present disclosure provides a circuit design for an equalizer that works well across different process corners, without requiring extensive analog simulation. FIG.2is a circuit diagram illustrating one example of an equalizer circuit200according to the present disclosure that reduces unwanted peaking. As shown inFIG.2, an equalizer circuit200according to the present disclosure includes a main transconductance (gm) cell210(or main stage) and an additional or replica gm cell250(or replica stage). The main gm cell or main stage210is substantially similar to the equalizer circuit shown inFIG.1A. In the example of the present disclosure shown inFIG.2, the main stage210includes a first NMOS transistor201and a second NMOS transistor202that form a main stage differential pair. The gate electrodes of the first NMOS transistor201and the second NMOS transistor202are connected to respective equalizer inputs to the equalizer, including a non-inverting input Vin+ and an inverting input Vin− (or vinp and vinm), and correspond to the differential pair of wires that transmit the signal that is to be equalized by the equalizer circuit200. Each NMOS transistor is connected with a corresponding current source (first current source211and second current source212) at the source electrode of the NMOS transistor and a corresponding load (first load221and second load222) at the drain electrode of the NMOS transistor, where the load, the NMOS transistor, and the current source are connected in series between a voltage source (Vdd) and ground. A main stage degeneration resistance230(represented by a single resistor231with a switch232) is connected between the source electrodes of the NMOS transistors (resistive degeneration) to set the gain or boost of the equalizer circuit200. The main stage degeneration resistance230may be implemented as a resistor bank231that includes, for example, a plurality of resistors connected in parallel, where the switches232may connect or disconnect particular resistors to the source electrodes of the NMOS transistors, thereby allowing the resistance between the source electrodes of the NMOS transistors to be varied in accordance with the setting of the switches. Alternatively, the main stage degeneration resistance230may be implemented with a fixed resistor (e.g., a fixed resistance). Likewise, a main stage degeneration capacitance240(represented by a capacitor241with a switch242) is also connected between the source electrodes of the NMOS transistors (capacitive degeneration) to set the shape of the peaking effect (e.g., the frequency-dependent boost of the equalizer circuit200). The main stage degeneration capacitance240may be implemented as a capacitor bank241that includes a plurality of capacitors arranged in parallel, such that individual capacitors can be connected or disconnected to the source electrodes of the NMOS transistors by switches242to vary the capacitance between the source electrodes. As such, by selecting different combinations of main stage degeneration resistance230and main stage degeneration capacitance240, the equalizer circuit200can be programmed or configured to various equalization settings. For channels which require no equalization, the main stage degeneration capacitance240is turned off (e.g., the capacitors of a capacitor bank are disconnected).
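As a simple illustration of how such switched banks translate switch settings into an equalization setting, the sketch below computes the effective degeneration resistance (the parallel combination of the enabled resistors) and the effective degeneration capacitance (the sum of the enabled capacitors). The bank element values and the example codes are assumptions made only for the sketch.

def effective_degeneration(r_bank, c_bank, r_enabled, c_enabled):
    # r_bank / c_bank: element values in the banks (ohms / farads)
    # r_enabled / c_enabled: booleans indicating which switches are closed
    r_on = [r for r, on in zip(r_bank, r_enabled) if on]
    c_on = [c for c, on in zip(c_bank, c_enabled) if on]
    r_eff = 1.0 / sum(1.0 / r for r in r_on) if r_on else float("inf")  # parallel resistors
    c_eff = sum(c_on)                                                   # parallel capacitors add
    return r_eff, c_eff

# Example: two of four resistors enabled, all capacitors disabled (a no-equalization setting).
r_eff, c_eff = effective_degeneration(
    r_bank=[400.0, 400.0, 800.0, 800.0],
    c_bank=[50e-15, 100e-15, 200e-15],
    r_enabled=[True, True, False, False],
    c_enabled=[False, False, False],
)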
As shown inFIG.2, in a manner similar to that of the equalizer shown inFIG.1A, the various components of the main stage210of the equalizer circuit200result in a parasitic capacitance, as represented by parasitic capacitors Cpar connected to the degeneration nodes of the first NMOS transistor201and the second NMOS transistor202. The output of the main stage210(the main stage output) of the equalizer circuit200is read from the drain electrodes of the first NMOS transistor201and the second NMOS transistor202of the main stage210, where the main stage output is connected to the output of the equalizer circuit200as a whole, represented as a pair of signals—a main stage negative output signal Vout− (or voutm) and a main stage positive output signal Vout+ (or voutp). In order to counteract the effect of the parasitic capacitance, the output of the replica gm cell or replica stage250(the replica stage output) is connected to the output of the equalizer circuit200as a whole (the equalizer output) in an opposite direction from the main stage output. In the example shown inFIG.2, a replica stage negative output Voutr− is connected to the main stage positive output Vout+ and a replica stage positive output Voutr+ is connected to the main stage negative output Vout− such that the replica stage output cancels out the unwanted peaking in the signal at the main stage output. The replica stage250includes a plurality of circuit elements that are substantially identical to corresponding circuit elements of the main stage210, including a replica stage differential pair including a first NMOS transistor251and a second NMOS transistor252, a first current source261, a second current source262, a replica stage degeneration resistance280, and a replica stage degeneration capacitance290. In some examples according to the present disclosure, the dimensions of the circuit elements, such as the lengths and widths of the NMOS transistors and transistors of the current sources, the number of resistors in the resistor bank, the dimensions of the capacitors in the capacitor bank, the dimensions of the switches, the lengths and shapes of the metal wires or other connections between circuit elements, and the like, all match or are substantially the same between the main stage210and the replica stage250, such that the parasitic capacitance present in the replica stage250is substantially the same as the parasitic capacitance present in the main stage210. One difference between the main stage210and the replica stage is that, in the replica stage250, the degeneration capacitance and resistance are both permanently disconnected from the circuit (e.g., the switches282of a resistor bank of the replica stage degeneration resistance280and the switches292of a capacitor bank of the replica stage degeneration capacitance290are permanently kept open or disconnected or where dummy connections are formed between the degeneration nodes of the first NMOS transistor251and the second NMOS transistor252and the replica stage degeneration resistance280and replica stage degeneration capacitance290, where the dummy connections of the switches282and292are permanently open and do not form a conductive path between the degeneration nodes through the degeneration resistance or the degeneration capacitance). As a result, the shape of the peak of the boost provided by the replica stage250represents only the influence of the parasitic capacitance, that is, the unwanted peaking at high frequency.
As noted above, the replica stage degeneration resistance280and the replica stage degeneration capacitance290, including their respective switches282and292, are both physically formed in the circuit, even though they are permanently disconnected (e.g., where the switches282and292are permanently open), such that their contributions to the parasitic capacitance also appear in the replica. As noted above, the outputs Voutr+ and Voutr− of the replica stage250(the replica stage output) are connected in reverse or in an opposite direction from the main stage210, where the replica stage positive output Voutr+ is connected to the main stage negative output Vout− and the replica stage negative output Voutr− is connected to the main stage positive output Vout+. The output of the equalizer circuit200(the equalizer output) is read from the drain electrodes of the transistors as a pair of signals—a negative output signal Vout− (or voutm) and a positive output signal Vout+ (or voutp). As shown inFIG.2, the drain electrode of the first NMOS transistor251supplied with the positive or non-inverting equalizer input Vin+ of the replica stage250is connected to the drain electrode of the second NMOS transistor202supplied with the negative or inverting equalizer input Vin− of the main stage210in order to generate the positive output signal Vout+ of the equalizer circuit200. Likewise, as shown inFIG.2, the drain electrode of the second NMOS transistor252supplied with the inverting equalizer input Vin− of the replica stage250is connected to the drain electrode of the first NMOS transistor201supplied with the non-inverting equalizer input Vin+ of the main stage210in order to generate the negative output signal Vout− of the equalizer circuit200. As a result, the high frequency current flowing in the replica is subtracted from the current of the main stage, thereby leading to high frequency cancellation. This high frequency current is a strong function of the parasitic capacitance in the degeneration node. In addition, because there is no degeneration resistance in the replica (e.g., the replica stage degeneration resistance280is permanently disconnected from the replica stage250), there is no DC gain cancellation (e.g., no reduction in the gain of the overall equalizer circuit200for low frequency signals). In more detail, the total output current I is the sum of a first output current I1flowing through the first load221and a second output current I2flowing through the second load222: I=I1+I2, where I1=Gmmain·vinp and I2=Gmreplica·vinm. Defining Vin=vinp=−vinm, then: I=Vin·(Gmmain−Gmreplica), where Gmmain=gm·(1+R·jωC)/(1+gm·R+R·jωC) and Gmreplica=gm·jωC/(gm+jωC), where j represents the imaginary unit √(−1), ω is the angular frequency of the signal, C is the parasitic capacitance, and gm is the transconductance of the NMOS transistors (assumed to be substantially the same by virtue of being a replica, e.g., where the NMOS transistors are formed with the same dimensions on the same semiconductor substrate). For the sake of simplicity, the capacitance contributed by the main stage degeneration capacitance240is omitted from the analysis herein. Accordingly, the output current I can be computed as follows: I=Vin·gm²/((1+gm·R+R·jωC)·(gm+jωC)). Therefore, the replica stage250reduces or removes the unwanted peak (or unwanted frequency dependence) arising from the parasitic capacitance.
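The cancellation can be checked numerically. The Python sketch below evaluates Gmmain, Gmreplica and their difference over frequency using the two expressions above; the element values are arbitrary assumptions. The difference shows no gain above its low-frequency value, i.e. the pole-zero peaking that the main stage alone exhibits is removed.

import numpy as np

gm = 20e-3     # transconductance (S), assumed
R = 100.0      # main stage degeneration resistance (ohms), assumed
C = 200e-15    # parasitic degeneration capacitance (F), assumed

w = 2 * np.pi * np.logspace(8, 11, 301)   # 100 MHz to 100 GHz

gm_main = gm * (1 + 1j * w * R * C) / (1 + gm * R + 1j * w * R * C)
gm_replica = gm * (1j * w * C) / (gm + 1j * w * C)
gm_total = gm_main - gm_replica           # main stage minus replica stage

boost_main_db = 20 * np.log10(np.abs(gm_main) / np.abs(gm_main[0]))
boost_total_db = 20 * np.log10(np.abs(gm_total) / np.abs(gm_total[0]))
print("main stage alone, max boost:   %.2f dB" % boost_main_db.max())   # unwanted peaking
print("main minus replica, max boost: %.2f dB" % boost_total_db.max())  # ~0 dB: flat response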
When the main stage degeneration capacitance240is connected, a desired frequency dependence or peak is introduced into the output current I in accordance with the capacitance value of the main stage degeneration capacitance240. FIG.3is a graph depicting the frequency-dependent boosts provided by an equalizer at a plurality of different boost settings according to one example of the present disclosure. As shown inFIG.3, the unwanted peaking of the main stage210arising from the parasitic capacitance is cancelled by the opposite peaking of the replica stage250to achieve a flat response while keeping direct current (DC) gain (e.g., low frequency gain) unchanged. For example, at the zero boost setting, shown by the lowest line inFIG.3, the boost remains flat through to about 20 GHz, in contrast to the lowest line ofFIG.1B, which shows an unwanted peak at about 20 GHz. In addition, for a high boost setting, when the main stage degeneration capacitance240is set to its maximum value, the replica stage degeneration capacitance290is still kept off (e.g., the switch of the capacitor bank of the replica stage degeneration capacitance290is disconnected), so that the equalizer circuit200continues to provide a high frequency boost from the main stage210via its main stage degeneration capacitance240. Therefore, the required boost (e.g., for long reach channels) remains intact, as shown by the other lines (e.g., the topmost curve) inFIG.3. Because the equalizer circuit200shown inFIG.2can achieve flat as well as boosted frequency responses, the same circuit can be programmed to provide a wider range of possible boost values than the equalizer shown inFIG.1A, including 0 dB boost over an entire working frequency range up to, for example, about 20 GHz, as shown inFIG.3. Therefore, an equalizer according to examples of the present disclosure can be programmed to work across a wide range of channels, including short reach channels that do not require equalization or that require only small amounts of equalization (e.g., small amounts of boost at high frequencies). As noted above, an equalizer circuit may be implemented at the transmitter or transmit side and/or at the receiver or receive side of a communication channel. For example, in the case of an equalizer circuit implemented on the transmit side of the communication channel, the equalizer output would be connected to a differential pair of the communication channel (e.g., possibly through additional analog amplifier stages) through which the transmitter transmits data signals. In the case of an equalizer implemented in an integrated circuit on the receive side of the communication channel, the equalizer input would be connected to a differential pair of the communication channel (e.g., possibly through additional analog amplifier stages) through a receive port of the integrated circuit through which the integrated circuit receives data signals. The integrated circuit may be a communications integrated circuit that includes additional functionality, such as analog amplifier stages, a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), an encoder, a decoder, and the like. WhileFIG.2depicts some examples of equalizer circuits200according to the present disclosure, embodiments are not limited thereto and may also include other arrangements. FIG.4is a circuit diagram of an equalizer circuit400according to one example of the present disclosure.
The equalizer circuit400is substantially similar to the equalizer circuit200but omits the capacitor bank from both the main stage410and the replica stage450. For example, the equalizer circuit400shown inFIG.4would consume less area in an integrated circuit than the equalizer circuit200shown inFIG.2, and may be useful in circumstances where the frequency of the peak may be set by the parasitic capacitance and/or by an additional, fixed degeneration capacitance in accordance with a known frequency-dependent distortion of a communication channel. The adaptability to set different levels of boost according to the length of the communication channel is retained by using a main stage degeneration resistance430including a resistor bank including a plurality of resistors431with a plurality of switches432(represented inFIG.4by a single resistor and a single switch), such that the resistance value of the main stage degeneration resistance430can be programmed or set by toggling the switches432on or off. In more detail, in the example of the present disclosure shown inFIG.4, the main stage410includes a first NMOS transistor401and a second NMOS transistor402that form a main stage differential pair. The gate electrodes of the first NMOS transistor401and the second NMOS transistor402are connected to respective equalizer inputs to the equalizer, labeled Vin+ and Vin− (or vinp and vinm), respectively, and correspond to the differential pair of wires that transmit the signal that is to be equalized by the equalizer circuit400. Each NMOS transistor is connected with a corresponding current source (first current source411and second current source412) at the source electrode of the NMOS transistor and a corresponding load (first load421and second load422) at the drain electrode of the NMOS transistor, where the load, the NMOS transistor, and the current source are connected in series between a voltage source (Vdd) and ground. A main stage degeneration resistance430(represented by a single resistor431with a switch432) is connected between the source electrodes of the NMOS transistors (resistive degeneration) to set the gain or boost of the equalizer circuit400. The main stage degeneration resistance430may include a resistor bank including, for example, a plurality of resistors connected in parallel, where the switches432may connect or disconnect particular resistors431to the source electrodes of the NMOS transistors, thereby allowing the resistance between the source electrodes of the NMOS transistors to be varied in accordance with the setting of the switches. As such, by selecting different combinations of resistors431, the equalizer circuit400can be programmed or configured to various equalization settings. As shown inFIG.4, in a manner similar to that of the equalizer shown inFIG.2, the various components of the main stage410of the equalizer circuit400result in a parasitic capacitance, as represented by parasitic capacitors Cpar connected to the degeneration nodes of the first NMOS transistor401and the second NMOS transistor402. The output of the equalizer circuit400(the equalizer output) is read from the drain electrodes of the main stage transistors as a pair of signals—a main stage negative output signal Vout− (or voutm) and a main stage positive output signal Vout+ (or voutp).
In order to counteract the effect of the parasitic capacitance, the output of the replica gm cell or replica stage450(the replica stage output) is connected to the output of the equalizer circuit400in an opposite direction from the output of the main stage410(the main stage output) such that the replica stage450cancels out the unwanted peaking in the signal from the main stage410. In the example shown inFIG.4, a replica stage positive output Voutr+ is connected to the main stage negative output Vout− and a replica stage negative output Voutr− is connected to the main stage positive output Vout+. The replica stage450includes a plurality of circuit elements that are substantially identical to corresponding circuit elements of the main stage410, including a replica stage differential pair including a first NMOS transistor451and a second NMOS transistor452, a first current source461, a second current source462, and a replica stage degeneration resistance480including resistors481and switches482, where the switches482are permanently disconnected (e.g., permanently open). For example, in a manner similar to that described above, the dimensions and shapes of the circuit elements of the replica stage are substantially the same as those of the corresponding circuit elements of the main stage410and in substantially the same relative positions such that the parasitic capacitance of the replica stage450is substantially the same as that of the main stage410. More specifically, as shown inFIG.4, the drain electrode of the first NMOS transistor451supplied with the non-inverting equalizer input Vin+ of the replica stage450is connected to the drain electrode of the second NMOS transistor402supplied with the inverting equalizer input Vin− of the main stage410in order to generate the positive output signal Vout+ of the equalizer circuit400. Likewise, as shown inFIG.4, the drain electrode of the second NMOS transistor452supplied with the inverting equalizer input Vin− of the replica stage450is connected to the drain electrode of the first NMOS transistor401supplied with the non-inverting equalizer input Vin+ of the main stage410in order to generate the negative output signal Vout− of the equalizer circuit400. As a result, the high frequency current flowing in the replica is subtracted from the current of the main stage, leading to high frequency cancellation. This high frequency current is a strong function of the parasitic capacitance in the degeneration node. In addition, because there is no degeneration resistance in the replica (e.g., the switch or switches482are permanently open such that the resistor or resistors481of the replica stage degeneration resistance480are permanently disconnected from the replica stage450), there is no DC gain cancellation (e.g., no reduction in the gain of the overall equalizer circuit400for low frequency signals). As noted above, an equalizer circuit may be implemented at the transmitter or transmit side and/or at the receiver or receive side of a communication channel. For example, in the case of an equalizer circuit implemented on the transmit side of the communication channel, the equalizer output would be connected to a differential pair of the communication channel (e.g., possibly through additional analog amplifier stages) through which the transmitter transmits data signals.
In the case of an equalizer implemented in an integrated circuit on the receive side of the communication channel, the equalizer input would be connected to a differential pair of the communication channel (e.g., possibly through additional analog amplifier stages) through a receive port of the integrated circuit through which the integrated circuit receives data signals. The integrated circuit may be a communications integrated circuit that includes additional functionality, such as analog amplifier stages, a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), an encoder, a decoder, and the like. FIG.5illustrates an equalizer circuit500according to one example of the present disclosure. In more detail, the main stage510and the replica stage550of the equalizer circuit500ofFIG.5have the form of a complementary metal oxide semiconductor (CMOS) differential amplifier. For example, compared to the equalizer circuit400shown inFIG.4where the non-inverting input Vin+ and the inverting input Vin− are supplied to gate electrodes of NMOS transistors401,402,451, and452, the inputs to the equalizer circuit500ofFIG.5, including the non-inverting input Vin+ and the inverting input Vin−, are supplied to the gate electrodes of both NMOS transistors and PMOS transistors. In more detail, in the example shown inFIG.5, the non-inverting input Vin+ is connected to both a first NMOS transistor511and a first PMOS transistor513of the main stage510, where the first PMOS transistor513is complementary to the first NMOS transistor511. Likewise, the inverting input Vin− is connected to both a second NMOS transistor512and a second PMOS transistor514of the main stage510, where the second PMOS transistor514is complementary to the second NMOS transistor512. The first NMOS transistor511and the second NMOS transistor512form a main stage differential pair (e.g., a first main stage differential pair) and the first PMOS transistor513and second PMOS transistor514form a second main stage differential pair, where the transistor parameters of the transistors of a differential pair are controlled to be substantially the same. A first current source521is connected to the source electrode of the first NMOS transistor511, a second current source522is connected to the source electrode of the second NMOS transistor512, a third current source523is connected to the source electrode of the first PMOS transistor513, and a fourth current source524is connected to the source electrode of the second PMOS transistor514. A first main stage degeneration resistance530(represented by a resistor531and a switch532) and a second main stage degeneration resistance540(represented by a resistor541and a switch542) control the gain of the main stage510(e.g., individual resistors of the resistor bank540can be connected or disconnected to program the resistor bank540to provide a particular resistance, which sets the gain of the main stage510of the equalizer circuit500). The main stage510exhibits parasitic capacitance, as illustrated by four parasitic capacitances Cpar. A first output of the main stage510is taken from a point between the drain electrode of the first NMOS transistor511and the drain electrode of the first PMOS transistor513and a second output of the main stage510is taken from a point between the drain electrode of the second NMOS transistor512and the drain electrode of the second PMOS transistor514. 
The parasitic capacitance of the main stage510results in unwanted peaking in the frequency response, such that the current outputs Iout+ and Iout− of the main stage510, acting alone, cannot be programmed to have a flat frequency response (e.g., higher frequencies will be boosted more than low frequencies). To address the unwanted peaking, the equalizer circuit500according to one example of the present disclosure includes a replica stage550that has its outputs connected to the main stage positive output Iout+ and the main stage negative output Iout− of the main stage510in a direction opposite that of the main stage510such that the replica stage550cancels out the unwanted peaking and the output of the equalizer circuit500as a whole (including both the main stage510and the replica stage550) can be configured to generate an output with the unwanted peaking arising from the parasitic capacitance Cpar canceled out or reduced. In particular, a replica stage negative output current Ioutr− is connected to the main stage positive output Iout+ and a replica stage positive output current Ioutr+ is connected to the main stage negative output Iout−. The replica stage550includes a plurality of circuit elements that are substantially identical to corresponding circuit elements of the main stage510, including a replica stage differential pair including a first NMOS transistor561and a second NMOS transistor562, a second replica stage differential pair including a first PMOS transistor563and a second PMOS transistor564, a first current source571, a second current source572, a third current source573, a fourth current source574, a permanently disconnected first replica stage degeneration resistance580(represented by a resistor581and a permanently open switch582), and a permanently disconnected second replica stage degeneration resistance590(represented by a resistor591and a permanently open switch592). For example, in a manner similar to that described above, the dimensions and shapes of the circuit elements of the replica stage are substantially the same as those of the corresponding circuit elements of the main stage510and in substantially the same relative positions such that the parasitic capacitance of the replica stage550is substantially the same as that of the main stage510. In more detail, in the example shown inFIG.5, the non-inverting input Vin+ is connected to both a first NMOS transistor561and a first PMOS transistor563of the replica stage550, where the first PMOS transistor563is complementary to the first NMOS transistor561. Likewise, the inverting input Vin− is connected to both a second NMOS transistor562and a second PMOS transistor564of the replica stage550, where the second PMOS transistor564is complementary to the second NMOS transistor562. A first current source571is connected to the source electrode of the first NMOS transistor561, a second current source572is connected to the source electrode of the second NMOS transistor562, a third current source573is connected to the source electrode of the first PMOS transistor563, and a fourth current source574is connected to the source electrode of the second PMOS transistor564. The first replica stage degeneration resistance580and the second replica stage degeneration resistance590, which would otherwise set the gain of the replica stage550, are permanently disconnected, such that the replica stage550does not cancel the DC gain of the main stage510. 
The replica stage550also exhibits parasitic capacitance, as illustrated by four parasitic capacitances Cpar and, in a manner similar to that described above, the first replica stage degeneration resistance580and the second replica stage degeneration resistance590are included in order for the parasitic capacitance arising from the resistor bank to appear in the replica stage550, such that the parasitic capacitance of the replica stage550matches (e.g., is substantially the same as) the parasitic capacitance of the main stage510. A node between the drain electrode of the first NMOS transistor561and the drain electrode of the first PMOS transistor563of the replica stage550is connected to a node between the drain electrode of the first NMOS transistor511and the drain electrode of the first PMOS transistor513of the main stage510such that the replica stage550cancels or reduced unwanted peaking in the first output Iout− of the equalizer circuit500. Similarly, a node between the drain electrode of the second NMOS transistor562and the drain electrode of the second PMOS transistor564of the replica stage550is connected to a node between the drain electrode of the second NMOS transistor512and the drain electrode of the second PMOS transistor514of the main stage510such that the replica stage550cancels or reduced unwanted peaking in the second output Iout+ of the equalizer circuit500. As noted above, an equalizer circuit may be implemented at the transmitter or transmit side and/or at the receiver or receive side of a communication channel. For example, in the case of an equalizer circuit implemented in an integrated circuit on the transmit side of the communication channel, the equalizer output would be connected to a differential pair of the communication channel (e.g., possibly through additional analog amplifier stages) through a transmit port of the integrated circuit through which the integrated circuit transmits data signals. In the case of an equalizer implemented in an integrated circuit on the receive side of the communication channel, the equalizer input would be connected to a differential pair of the communication channel (e.g., possibly through additional analog amplifier stages) through a receive port of the integrated circuit through which the integrated circuit receives data signals. The integrated circuit may be a communications integrated circuit that includes additional functionality, such as analog amplifier stages, a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), an encoder, a decoder, and the like. The equalizer circuit500ofFIG.5presents another example of an equalizer circuit according to the present disclosure that includes both a main stage and a replica stage, where the replica stage cancels or reduces unwanted peaking arising from parasitic capacitance in the main stage. Accordingly, aspects of the present disclosure relate to equalizer circuits that include replica stage circuits configured to cancel out or reduce unwanted peaking in the frequency-dependent boost or gain in a main stage circuit. In some circumstances, the unwanted peaking arises, at least in part, due to parasitic capacitance present in the main stage circuit. FIG.6illustrates an example set of processes600used during the design, verification, and fabrication of an article of manufacture such as an integrated circuit to transform and verify design data and instructions that represent the integrated circuit. 
Each of these processes can be structured and enabled as multiple modules or operations. The term ‘EDA’ signifies the term ‘Electronic Design Automation.’ These processes start with the creation of a product idea610with information supplied by a designer, information which is transformed using a set of EDA processes612to create an article of manufacture. When the design is finalized, the design is taped-out634, which is when artwork (e.g., geometric patterns) for the integrated circuit is sent to a fabrication facility to manufacture the mask set, which is then used to manufacture the integrated circuit. After tape-out, a semiconductor die is fabricated636and packaging and assembly processes638are performed to produce the finished integrated circuit640. Specifications for a circuit or electronic structure may range from low-level transistor material layouts to high-level description languages. A high level of representation may be used to design circuits and systems, using a hardware description language (‘HDL’) such as VHDL, Verilog, SystemVerilog, SystemC, MyHDL or OpenVera. The HDL description can be transformed to a logic-level register transfer level (‘RTL’) description, a gate-level description, a layout-level description, or a mask-level description. Each lower representation level that is a more detailed description adds more useful detail into the design description, for example, more details for the modules that the description includes. The lower levels of representation that are more detailed descriptions can be generated by a computer, derived from a design library, or created by another design automation process. An example of a specification language at a lower level of representation for specifying more detailed descriptions is SPICE, which is used for detailed descriptions of circuits with many analog components. Descriptions at each level of representation are enabled for use by the corresponding systems of that layer (e.g., a formal verification system). A design process may use a sequence depicted inFIG.6. The processes described may be enabled by EDA products (or EDA systems). During system design614, functionality of an integrated circuit to be manufactured is specified. The design may be optimized for desired characteristics such as power consumption, performance, area (physical and/or lines of code), and reduction of costs, etc. Partitioning of the design into different types of modules or components can occur at this stage. During logic design and functional verification616, modules or components in the circuit are specified in one or more description languages and the specification is checked for functional accuracy. For example, the components of the circuit may be verified to generate outputs that match the requirements of the specification of the circuit or system being designed. Functional verification may use simulators and other programs such as testbench generators, static HDL checkers, and formal verifiers. In some embodiments, special systems of components referred to as ‘emulators’ or ‘prototyping systems’ are used to speed up the functional verification. During synthesis and design for test618, HDL code is transformed to a netlist. In some embodiments, a netlist may be a graph structure where edges of the graph structure represent components of a circuit and where the nodes of the graph structure represent how the components are interconnected. 
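The netlist-as-graph idea described above can be made concrete with a toy sketch: electrical nets are graph nodes and two-terminal components are edges between them. The class, net names, and component names below are invented for illustration and do not reflect any real EDA data model or netlist format.

# Toy netlist represented as a graph: nets are nodes, two-terminal components are edges.
from collections import defaultdict

class Netlist:
    def __init__(self):
        self.edges = defaultdict(list)        # net name -> list of (component, other net)

    def add_component(self, name, net_a, net_b):
        """Add a two-terminal component (an edge) between two nets (nodes)."""
        self.edges[net_a].append((name, net_b))
        self.edges[net_b].append((name, net_a))

    def components_on(self, net):
        return [comp for comp, _ in self.edges[net]]

# A fragment loosely resembling one degenerated branch of an equalizer main stage;
# the names are hypothetical.
nl = Netlist()
nl.add_component("Rg_bank", "src1", "src2")    # degeneration resistor bank between sources
nl.add_component("I_tail1", "src1", "gnd")     # tail current source
nl.add_component("I_tail2", "src2", "gnd")
nl.add_component("C_par1", "src1", "gnd")      # parasitic capacitance at the degeneration node

print("components touching net 'src1':", nl.components_on("src1"))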
Both the HDL code and the netlist are hierarchical articles of manufacture that can be used by an EDA product to verify that the integrated circuit, when manufactured, performs according to the specified design. The netlist can be optimized for a target semiconductor manufacturing technology. Additionally, the finished integrated circuit may be tested to verify that the integrated circuit satisfies the requirements of the specification. During netlist verification620, the netlist is checked for compliance with timing constraints and for correspondence with the HDL code. During design planning622, an overall floor plan for the integrated circuit is constructed and analyzed for timing and top-level routing. During layout or physical implementation624, physical placement (positioning of circuit components such as transistors or capacitors) and routing (connection of the circuit components by multiple conductors) occurs, and the selection of cells from a library to enable specific logic functions can be performed. As used herein, the term ‘cell’ may specify a set of transistors, other components, and interconnections that provides a Boolean logic function (e.g., AND, OR, NOT, XOR) or a storage function (such as a flipflop or latch). As used herein, a circuit ‘block’ may refer to two or more cells. Both a cell and a circuit block can be referred to as a module or component and are enabled as both physical structures and in simulations. Parameters are specified for selected cells (based on ‘standard cells’) such as size and made accessible in a database for use by EDA products. A computer-readable design of an equalizer circuit according to the present disclosure may be included within a library of available pre-designed cells or circuit blocks or circuit portions stored on a computer-readable medium (e.g., in a digital representation of an equalizer circuit). This allows the design of an equalizer circuit according to the present disclosure to be placed as a circuit block within a design of an integrated circuit (e.g., a digital representation of the integrated circuit). For example, an equalizer circuit specified by the computer-readable design may be incorporated into the design of an analog or mixed-signal integrated circuit for communications (e.g., an integrated circuit for network communications such as Ethernet, having a plurality of ports corresponding to differential pairs, such as an input or receive differential pair and an output or transmit differential pair). During analysis and extraction626, the circuit function is verified at the layout level, which permits refinement of the layout design. During physical verification628, the layout design is checked to ensure that manufacturing constraints are correct, such as DRC constraints, electrical constraints, lithographic constraints, and that circuitry function matches the HDL design specification. During resolution enhancement630, the geometry of the layout is transformed to improve how the circuit design is manufactured. During tape-out, data is created to be used (after lithographic enhancements are applied if appropriate) for production of lithography masks. During mask data preparation632, the ‘tape-out’ data is used to produce lithography masks that are used to produce finished integrated circuits. 
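To make the cell-library and pre-designed-block discussion above concrete, the following sketch stores parameterized blocks, including a hypothetical equalizer block, in a small library and places instances of them into a design. All names, dimensions, port lists, and the data layout are invented for illustration and do not correspond to any real PDK or EDA database schema.

# Hedged sketch of a library of pre-designed cells/blocks and a design that instantiates them.
from dataclasses import dataclass, field

@dataclass
class LibraryBlock:
    name: str
    width_um: float
    height_um: float
    ports: list

@dataclass
class Design:
    name: str
    instances: list = field(default_factory=list)

    def place(self, block: LibraryBlock, inst_name: str, x_um: float, y_um: float):
        # Record an instance of a library block at a physical location.
        self.instances.append((inst_name, block.name, x_um, y_um))

library = {
    "NAND2_X1": LibraryBlock("NAND2_X1", 0.8, 1.2, ["A", "B", "Y"]),
    "EQ_REPLICA_CTLE": LibraryBlock("EQ_REPLICA_CTLE", 40.0, 25.0,
                                    ["Vin+", "Vin-", "Vout+", "Vout-"]),
}

top = Design("ethernet_phy_top")
top.place(library["EQ_REPLICA_CTLE"], "rx_eq0", x_um=120.0, y_um=80.0)
top.place(library["NAND2_X1"], "u_ctrl1", x_um=10.4, y_um=3.6)
print(top.instances)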
A storage subsystem of a computer system (such as computer system700ofFIG.7) may be used to store the programs and data structures that are used by some or all of the EDA products described herein, and products used for development of cells for the library and for physical and logical design that use the library. FIG.7illustrates an example machine of a computer system700within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative implementations, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine may operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system700includes a processing device702, a main memory704(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), a static memory706(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device718, which communicate with each other via a bus730. Processing device702represents one or more processors such as a microprocessor, a central processing unit, or the like. More particularly, the processing device may be complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device702may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device702may be configured to execute instructions726for performing the operations and steps described herein. The computer system700may further include a network interface device708to communicate over the network720. The computer system700also may include a video display unit710(e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device712(e.g., a keyboard), a cursor control device714(e.g., a mouse), a graphics processing unit722, a signal generation device716(e.g., a speaker), graphics processing unit722, video processing unit728, and audio processing unit732. The data storage device718may include a machine-readable storage medium724(also known as a non-transitory computer-readable medium) on which is stored one or more sets of instructions726or software embodying any one or more of the methodologies or functions described herein. 
The instructions726may also reside, completely or at least partially, within the main memory704and/or within the processing device702during execution thereof by the computer system700, the main memory704and the processing device702also constituting machine-readable storage media. In some implementations, the instructions726include instructions to implement functionality corresponding to the present disclosure. While the machine-readable storage medium724is shown in an example implementation to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine and the processing device702to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm may be a sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Such quantities may take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. Such signals may be referred to as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the present disclosure, it is appreciated that throughout the description, certain terms refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage devices. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. 
Various other systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein. The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc. In the foregoing disclosure, implementations of the disclosure have been described with reference to specific example implementations thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of implementations of the disclosure as set forth in the following claims. Where the disclosure refers to some elements in the singular tense, more than one element can be depicted in the figures and like elements are labeled with like numerals. The disclosure and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 59,920 |
11863171 | DETAILED DESCRIPTION According to one embodiment, electronic circuitry includes a semiconductor switching element; and a driving circuit configured to supply a current to a control terminal of the semiconductor switching element and to adjust a magnitude of the current supplied to the control terminal based on a voltage at the control terminal. Some embodiments of the present invention are described below with reference to drawings. In the drawings, the same components are denoted by the same reference numerals, and descriptions of the components are appropriately omitted. An embodiment of a power conversion device is described below with reference to the drawings. In the following, main components of electronic circuitry, an electronic system, and a driving device are mainly described; however, the electronic circuitry, the electronic system, and the driving device each may include components and functions that are not illustrated or described. The following description does not exclude the components and functions that not illustrated or described. First Embodiment FIG.1is a block diagram of electronic circuitry1according to a first embodiment. The electronic circuitry1includes a semiconductor switching element Q, a driving circuit110driving the semiconductor switching element Q, and a voltage supply circuit120supplying an operation voltage to the driving circuit110. The driving circuit110includes a first circuit150having an adjustable impedance, and the first circuit150is connected to a control terminal of the semiconductor switching element Q. In the following, overview of the electronic circuitry1is first described. The semiconductor switching element Q is usable as a semiconductor relay connected to a middle of a wire connecting, for example, a power supply and a load device (for example, DC-DC converter). The driving circuit110generates a current to be supplied to the control terminal (gate terminal G) of the semiconductor switching element Q based on the voltage supplied from the voltage supply circuit120, and supplies the generated current to the control terminal of the semiconductor switching element Q. The supplied current charges a parasitic capacitance Cgs between a gate and a source of the semiconductor switching element Q. As a result, the voltage at the control terminal of the semiconductor switching element Q increases. An increase rate (gradient) depends on a magnitude of the current supplied to the control terminal. The driving circuit110supplies a current (first current) having a low magnitude at start of operation. Therefore, the voltage at the control terminal increases at a low rate. After the supply of the current is started, when the voltage at the control terminal becomes a value (first reference value) less than a threshold voltage, namely, is close to the threshold voltage, the driving circuit110increases the current to be supplied, to a second current. This increases the voltage at the control terminal at a high rate, and the voltage at the control terminal rapidly increases in a short time. During the period, the control voltage reaches the threshold voltage, and the semiconductor switching element Q is turned on. When the voltage at the control terminal reaches a value (second reference value) greater than the threshold voltage, the driving circuit110reduces the current to a third current. The third current may have the magnitude same as the magnitude of the original current (first current). 
During a period when the control voltage is close to the threshold voltage, increasing the magnitude of the current to be supplied makes it possible to reduce the time during which the voltage at the control terminal is close to the threshold voltage. As a result, it is possible to prevent occurrence of chattering in the control voltage caused by mixing of a noise signal or the like at the time of turning-on operation. This makes it possible to prevent occurrence of erroneous operation in which the semiconductor switching element Q is repeatedly turned on and off at the control voltage close to the threshold voltage. Further, during a period other than the period when the control voltage is close to the threshold voltage, the control voltage increases at a low rate. This makes it possible to prevent a large current from flowing into the semiconductor switching element Q when the semiconductor switching element Q is turned on, and the like, and to safely start up the semiconductor switching element. As described above, the electronic circuitry according to the present embodiment prevents occurrence of chattering at the voltage close to the threshold voltage while increasing the control voltage at a low slew rate. In the following, the electronic circuitry1inFIG.1is described in more detail. The voltage supply circuit120inFIG.1supplies the operation voltage for the driving circuit110. The operation voltage to be supplied is a direct-current voltage. The voltage supply circuit120may rectify a voltage of an unillustrated alternating-current power supply and decrease or increase the rectified voltage, thereby generating a voltage to be supplied to the driving circuit110. Alternatively, the voltage supply circuit120may be a photocoupler generating a voltage (current) from a received optical signal. The semiconductor switching element Q is an MOS transistor such as a power MOSFET. Alternatively, the semiconductor switching element Q may be another type of semiconductor transistor, such as an IGBT. InFIG.1, an example in which the semiconductor switching element Q is an N-type power MOSFET is illustrated; however, the semiconductor switching element Q may be a P-type power MOSFET. The semiconductor switching element Q includes a parasitic diode E between a drain terminal D (second terminal) and a source terminal S (first terminal), a parasitic capacitance Cds between the drain terminal D and the source terminal S, the parasitic capacitance Cgs between the gate terminal G and the source terminal S, and a parasitic capacitance Cgd between the gate terminal G and the drain terminal D. As an example, the drain terminal D can be connected to a negative output terminal of the power supply, and the source terminal S can be connected to a negative input terminal of a load device (for example, DC-DC converter). The driving circuit110generates a current having a magnitude corresponding to the voltage (control voltage or gate voltage) at the gate terminal G of the semiconductor switching element Q and gate resistances Rg1and Rg2, based on the voltage supplied from the voltage supply circuit120. The driving circuit110supplies the generated current to the gate terminal G. The driving circuit110adjusts or switches the magnitude of the current to be supplied, based on a value of the gate voltage of the semiconductor switching element Q. The supplied current charges the parasitic capacitance Cgs, and the gate voltage increases. 
The driving circuit110supplies the first current during a period (first period) until the gate voltage reaches the first reference value less than the threshold voltage of the semiconductor switching element Q. When the gate voltage exceeds the first reference value, the driving circuit110changes the current to be supplied, to the second current that has a magnitude greater than the magnitude of the first current. The driving circuit110supplies the second current during a period (second period) until the gate voltage reaches the second reference value greater than the threshold voltage of the semiconductor switching element Q. When the gate voltage reaches the second reference value greater than the threshold voltage of the semiconductor switching element Q, the driving circuit110changes the current to be supplied, to the third current having a magnitude less than the magnitude of the second current. The third current may have the magnitude same as or different from the magnitude of the first current. A period when the third current is used after the second period corresponds to a third period. As an example, the third period may be a period until turning-off operation of the semiconductor switching element Q is started after the second period, or may be a period after a predetermined time elapses from end of the second period. As a result, the gate voltage increases at a high rate during the period (area) when the gate voltage is close to the threshold voltage, and the gate voltage increases at a low rate in the other area. This prevents occurrence of chattering in the gate voltage close to the threshold voltage. A specific configuration of the driving circuit110is described below. The driving circuit110includes a node PGDconnected to a positive terminal of the voltage supply circuit120, and a node NGDconnected to a negative terminal of the voltage supply circuit120. Resistances (divided resistances) Rd1and Rd2are connected in series between the nodes PGDand NGD. A connection node between the divided resistances Rd2and Rd1corresponds to a node N1. A capacitance Cg1and a capacitance Cg2are connected in series between the nodes PGDand NGD. A connection node between the capacitances Cg1and Cg2corresponds to a node N2. The capacitances Cg1and Cg2are respectively connected in parallel with the divided resistances Rd1and Rd2. The capacitance Cg1holds the first voltage, and the capacitance Cg2holds the second voltage. The first circuit150is connected between the nodes PGDand NGD. The first circuit150includes a switch Qg1(first switch), a resistive element Rg1(first resistive element), a switch Qg3(second switch), a switch Qg2(third switch), a resistive element Rg2(second resistive element), and a switch Qg4(fourth switch) that are connected in series. Controlling on/off of each of the switches Qg1to Qg4makes it possible to adjust the impedance (resistance) of the first circuit150. The switch Qg1, the resistive element Rg1, and the switch Qg3connected in series are connected in parallel with the capacitance Cg1. The switch Qg2, the resistive element Rg2, and the switch Qg4connected in series are connected in parallel with the capacitance Cg2. A connection node between the switch Qg3and the switch Qg2corresponds to a node N3. The switch Qg1and the resistive element Rg1are connected in series between a first terminal of the capacitance Cg1(or potential at first terminal of capacitance Cg1) and the gate terminal G. 
The switch Qg3is connected between a first terminal of the capacitance Cg2(or second potential at first terminal of capacitance Cg2) and the gate terminal G. The switch Qg2and the resistive element Rg2are connected in series between a second terminal of the capacitance Cg1and the source terminal S (first terminal) of the semiconductor switching element Q. The switch Qg4is connected between a second terminal of the capacitance Cg2and the source terminal S of the semiconductor switching element Q. The driving circuit110includes a node XGDconnected to the gate terminal G of the semiconductor switching element Q, and a node YGDconnected to the source terminal S of the semiconductor switching element Q. The driving circuit110adjusts the impedance (resistance) of the first circuit150by controlling on/off of each of the switches Qg1to Qg4, thereby controlling the magnitude of the current to be supplied to the gate terminal of the semiconductor switching element Q. FIG.2illustrates a path PT1of a current flowing when the switches Qg1and Qg2are turned on and the switches Qg3and Qg4are turned off. The path PT1is used in a case where the impedance of the first circuit150is increased and a small current is supplied. The path PT1is used during the above-described first period (period until gate voltage Vgs reaches first reference value after control signal to turn on switches Q1and Q2is input to each of switches Q1and Q2) and during the above-described third period (period after gate voltage Vgs reaches second reference value). A current having a magnitude corresponding to the impedance (resistance) of the path PT1is supplied to the gate terminal G based on the voltage (first voltage) held by the capacitance Cg1. The path PT1passes through the resistive elements Rg1and Rg2in an outward route and a return route of the current that is output from the capacitance Cg1and charges the parasitic capacitance Cgs, respectively. Since the path PT1passes through the resistive elements Rg1and Rg2, the path PT1has a high resistance value or a high impedance value, and the current supplied to the gate terminal G is reduced. Therefore, the parasitic capacitance Cgs of the semiconductor switching element Q is charged at a low speed. As a result, the gate voltage increases at a low rate. FIG.3illustrates a path PT2of a current flowing when the switches Qg3and Qg4are turned on and the switches Qg1and Qg2are turned off. The path PT2is used in a case where the impedance of the first circuit150is reduced and a large current is supplied. More specifically, the path PT2is used during the above-described second period (period until gate voltage Vgs reaches second reference value after gate voltage Vgs reaches first reference value). A current corresponding to an impedance (resistance) of the path PT2is supplied to the gate terminal G based on the voltage (second voltage) held by the capacitance Cg2. The path PT2does not pass through the resistive elements Rg1and Rg2. The path PT2has a resistance value or an impedance value lower than the resistance value or the impedance value of the path PT1, and the current supplied to the gate terminal G is increased. Therefore, the parasitic capacitance Cgs of the semiconductor switching element Q is charged at a high speed, and the gate voltage increases at a high rate. 
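The difference between the two paths can be put in rough numbers: the charging current is approximately the voltage held on the corresponding capacitance divided by the series resistance of the path. The resistor values, the switch on-resistance, and the held voltage in the sketch below are illustrative assumptions only (the voltages held by Cg1and Cg2are simply taken as equal here, which the embodiment permits but does not require).

# Rough comparison of the gate current delivered by the high-impedance path PT1 (through
# Rg1 and Rg2) and the low-impedance path PT2 (bypassing them). All values are assumed.
V_hold = 12.0       # voltage held by Cg1/Cg2 (V), assumed equal here
V_gs = 2.0          # present gate-source voltage (V)
r_switch = 10.0     # on-resistance of each series switch (ohms), illustrative
Rg1 = Rg2 = 1000.0  # gate resistors in path PT1 (ohms), illustrative

r_pt1 = Rg1 + Rg2 + 2 * r_switch   # Qg1 -> Rg1 -> gate ... source -> Rg2 -> Qg2
r_pt2 = 2 * r_switch               # Qg3 -> gate ... source -> Qg4

i_pt1 = (V_hold - V_gs) / r_pt1
i_pt2 = (V_hold - V_gs) / r_pt2
print(f"PT1 gate current ~ {i_pt1*1e3:.1f} mA (slow charge of Cgs)")
print(f"PT2 gate current ~ {i_pt2*1e3:.0f} mA (fast charge of Cgs)")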
Note that the voltage (potential difference) held by the capacitance Cg2and the voltage (potential difference) held by the capacitance Cg1may be equal to each other or different from each other as long as the magnitude of the current to be supplied to the gate terminal G is adjustable to a desired magnitude. The driving circuit110puts the switches Qg1to Qg4into the state illustrated inFIG.2during the first period and during the third period, and puts the switches Qg1to Qg4into the state illustrated inFIG.3during the second period. As a specific configuration example to control the switches Qg1to Qg4based on the control voltage, a control circuit that detects the gate voltage and controls the driving circuit110based on the detected gate voltage may be provided. The control circuit compares the gate voltage with the first reference value and the second reference value, and controls the switches Qg1to Qg4based on a result of the comparison.FIG.4illustrates a configuration example in this case. FIG.4illustrates an example of electronic circuitry1A including a control circuit130. When receiving a startup signal from an external circuit, the control circuit130turns on the switches Qg1and Qg2and turns off the switches Qg3and Qg4(seeFIG.2). When the gate voltage reaches the first reference value (value less than threshold voltage), the control circuit130turns off the switches Qg1and Qg2, and turns on the switches Qg3and Qg4(seeFIG.3). When the gate voltage reaches the second reference value (value greater than threshold voltage), the control circuit130again turns on the switches Qg1and Qg2and turns off the switches Qg3and Qg4(seeFIG.2). As a specific example of the control circuit130, the control circuit130may include a voltage detection circuit detecting the gate voltage, and may further include a first comparator circuit that compares the first reference value and the gate voltage and generates control signals for the switches Qg1and Qg4based on a result of the comparison. The control circuit130may further include a second comparator circuit that compares the second reference value and the gate voltage and generates control signals for the switches Qg1to Qg4based on a result of the comparison. As another configuration example to control the switches Qg1to Qg4, lengths of the first period and the second period may be previously set, and the driving circuit110may control the switches Qg1to Qg4based on an elapsed time from start of the operation. For example, a first timer detecting lapse of the first period and a second timer detecting lapse of the second period are provided in the electronic circuitry1. In response to input of the startup signal, the electronic circuitry1turns on the switches Qg1and Qg2and turns off the switches Qg3and Qg4, as well as starts up the first timer and the second timer. The length of the first period is set to the first timer, and a total length of the first period and the second period is set to the second timer. When the first timer times out, a timeout signal is output, and the driving circuit110turns off the switches Qg1and Qg2and turns on the switches Qg3and Qg4in response to the timeout signal. When the second timer times out, a timeout signal is output, and the driving circuit110turns on the switches Qg1and Qg2and turns off the switches Qg3and Qg4in response to the timeout signal. Likewise, for the third period as well, a timer detecting lapse of the third period may be provided. 
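The comparator-based control described above reduces to a small piece of decision logic, sketched below purely as a behavioral illustration. The latching of the third period once the second reference value has been reached is an assumption about the intended behavior, not a literal description of the control circuit130.

# Behavioral sketch of comparator-based switch control: compare Vgs against Vr1 and Vr2
# and drive Qg1-Qg4 accordingly. "started" models the externally supplied startup signal.
def drive_switches(v_gs, v_r1, v_r2, started, passed_vr2):
    """Return (Qg1, Qg2, Qg3, Qg4, passed_vr2) on/off states as booleans."""
    if not started:
        return (False, False, False, False, passed_vr2)
    if v_gs >= v_r2:
        passed_vr2 = True                                  # second period has ended
    if not passed_vr2 and v_gs >= v_r1:
        return (False, False, True, True, passed_vr2)      # second period: fast path PT2
    return (True, True, False, False, passed_vr2)          # first/third period: slow path PT1

state = False
for v in (0.0, 2.0, 3.6, 4.2, 4.6, 5.5):
    qg1, qg2, qg3, qg4, state = drive_switches(v, 3.5, 4.5, True, state)
    print(f"Vgs={v:>4.1f} V -> Qg1/Qg2 {'on' if qg1 else 'off'}, Qg3/Qg4 {'on' if qg3 else 'off'}")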
A specific configuration to control the switches Qg1to Qg4may be realized by a method other than the above-described method. FIG.5is a timing chart of the electronic circuitry1inFIG.1. More specifically,FIG.5is as follows. FIG.5Ais a timing chart of the control signal (on/off signal) for the switches Qg1and Qg2. FIG.5Bis a timing chart of the control signal for the switches Qg3and Qg4. FIG.5Cis a timing chart of the gate voltage (voltage Vgs) of the semiconductor switching element Q. FIG.5Dis a timing chart of the current supplied to the gate terminal G of the semiconductor switching element Q. FIG.5Eis a timing chart of a drain-source voltage Vds of the semiconductor switching element Q. During the first period until the gate voltage reaches a first reference value Vr1less than a threshold Vth after start of operation, the parasitic capacitance Cgs of the semiconductor switching element Q is charged at a low speed through the resistive elements Rg1and Rg2by turning on the switches Qg1and Qg2and turning off the switches Qg3and Qg4. In other words, the gate voltage is gently increased (at low rate). During the first period, a drain current Id does not flow through the semiconductor switching element Q, and the drain-source voltage Vds is maintained at a high value. When the gate voltage reaches the first reference value Vr1, the switches Qg1and Qg2are turned off and the switches Qg3and Qg4are turned on, and the parasitic capacitance Cgs of the semiconductor switching element Q is charged at a high speed for a short time (during second period). The gate voltage is increased at a high rate, and exceeds the threshold Vth in a short time. At this time, the drain current Id is increased with a large gradient, and the drain-source voltage Vds is accordingly reduced with a large gradient. After lapse of the second period, namely, after the gate voltage reaches the second reference value Vr2greater than the threshold Vth, the switches Qg1and Qg2are turned on and the switches Qg3and Qg4are turned off as in the first period. As a result, the parasitic capacitance Cgs of the semiconductor switching element Q is charged at a low speed. In other words, the gate voltage is gently increased (at low rate). At this time, the gradient of the drain current Id is reduced, and the reduction gradient of the drain-source voltage Vds is accordingly reduced. Thereafter, the magnitude of the current supplied to the gate terminal G converges and the gate voltage Vgs converges to a predetermined value based on a charging amount of the parasitic capacitance Cgs of the semiconductor switching element Q. FIG.6is a block diagram illustrating an example of an electronic system2using the electronic circuitry1inFIG.1. In the example inFIG.6, a semiconductor relay (semiconductor switching element Q) is provided as a switch between a rectifier320that rectifies an alternating-current voltage supplied from a commercial power supply310(distribution board or the like) and a multicell converter330(DC-DC converter) as a load device. At this time, the driving circuit110inFIG.1is used to control the semiconductor switching element Q. The multicell converter330reduces or increases the direct-current voltage rectified by the rectifier320, and outputs the reduced or increased voltage to a device on a post stage. The multicell converter330may output a voltage equivalent to the input voltage. In place of the commercial power supply310and the rectifier320, a direct-current power supply such as a storage battery may be used. 
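The turn-on behavior charted inFIG.5above can be reproduced very roughly as a constant-current charge of Cgs, with the current switched at the two reference values. The following sketch uses invented values for Cgs, the three currents, the threshold, the reference voltages, and the final gate voltage, and treats Cgs as constant (ignoring the Miller plateau of a real MOSFET); it reports how briefly the gate voltage dwells in the chatter-prone window around the threshold.

# Crude numerical sketch of the three-period gate drive: slow ramp, fast sprint through the
# threshold region between Vr1 and Vr2, then slow ramp again. All values are illustrative.
C_gs = 10e-9            # gate-source capacitance (F)
V_th = 4.0              # threshold voltage (V)
V_r1, V_r2 = 3.5, 4.5   # first/second reference values around V_th
I1, I2, I3 = 1e-3, 20e-3, 1e-3   # first/second/third period currents (A)

dt = 1e-7
v_gs, t = 0.0, 0.0
log = []
while v_gs < 8.0 and t < 0.01:   # 8.0 V taken as the assumed fully-on gate voltage
    if v_gs < V_r1:
        i = I1                   # first period: slow ramp, limited inrush
    elif v_gs < V_r2:
        i = I2                   # second period: sprint through the region near V_th
    else:
        i = I3                   # third period: slow ramp again
    v_gs += i * dt / C_gs
    t += dt
    log.append((t, v_gs, i))

cross = next(s for s in log if s[1] >= V_th)
in_window = [s for s in log if V_r1 <= s[1] <= V_r2]
print(f"time to reach V_th: {cross[0]*1e3:.3f} ms")
print(f"time spent between V_r1 and V_r2: {len(in_window)*dt*1e6:.2f} us")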
The configuration of the multicell converter330illustrated inFIG.6is illustrative, and the configuration is not particularly limited. The example of the multicell converter330illustrated inFIG.6includes a plurality of cell blocks each including a plurality of cells in which input terminals are connected in series and output terminals are connected in parallel. The input terminals of the plurality of cell blocks are connected in parallel between terminals of the rectifier320, and output terminals of the plurality of cell blocks are connected in series between output terminals of the multicell converter330. A control power supply340generates an operation voltage for the driving circuit110in the electronic circuitry1by using the alternating-current voltage supplied from the commercial power supply310. The control power supply340may include at least one of the voltage supply circuit120inFIG.1and the control circuit130inFIG.5. The control power supply340provides the generated operation voltage to the driving circuit110. Further, the control power supply340controls each of the cells in the multicell converter330. The control power supply340may supply a startup signal controlling startup of the driving circuit110, to the driving circuit110or the electronic circuitry1. The drain terminal D (second terminal) of the semiconductor switching element Q is electrically connected to a negative output terminal NO of the commercial power supply310or a negative output terminal of the rectifier320. The source terminal S of the semiconductor switching element Q is connected to a negative input terminal NI of the multicell converter330. At startup of the electronic system inFIG.6, it is necessary to turn on the semiconductor switching element Q. The driving circuit110starts turning-on operation of the semiconductor switching element Q under the control of the control power supply340(seeFIG.5). At this time, a noise signal is generated from the multicell converter330or an unillustrated peripheral device, and the generated noise signal may be input to the gate terminal G of the semiconductor switching element Q or the like. Even in a case where the noise signal is input to the gate terminal G or the like, the gate voltage is increased at a high speed in a short time during the period when the gate voltage is close to the threshold voltage. Therefore, generation of chattering caused by the noise signal is prevented. During a period other than the short period when the gate voltage is increased at a high speed, the gate voltage is increased at a low speed. Therefore, the semiconductor switching element Q can be safely turned on, and the electronic system2can be safely started up. As described above, according to the present embodiment, at the timing when the gate voltage of the semiconductor switching element is close to the threshold voltage, the gate voltage is increased at a high speed in a short time so as to exceed the threshold voltage. This makes it possible to avoid occurrence of chattering caused by mixing of the noise signal when the gate voltage is close to the threshold voltage. (Modification 1) The path PT1inFIG.2includes the two resistive elements Rg1and Rg2; however, the resistive element Rg2may be removed. In place of the resistive element Rg2or together with the resistive element Rg2, the resistive element Rg1may be removed. 
In a case where at least one of the resistive elements Rg1and Rg2is removed, the internal resistance of at least one of the switches Qg1and Qg2may be adjusted to set the impedance to a desired value, and the current having the desired magnitude may be supplied to the gate terminal G. This makes it possible to reduce the number of elements and to reduce a circuit area. (Modification 2) InFIG.1, the two capacitances Cg1and Cg2may be removed. In this case, the voltage (first voltage) generated by the divided resistance Rd1and the voltage (second voltage) generated by the divided resistance Rd2may be directly provided to the first circuit150. This makes it possible to reduce the number of elements and to reduce a circuit area. (Modification 3) InFIG.1, the two divided resistances Rd1and Rd2may be removed, and the first voltage and the second voltage may be respectively directly charged to the capacitance Cg1and the capacitance Cg2. This makes it possible to reduce the number of elements and to reduce a circuit area. Second Embodiment FIG.7is a block diagram of electronic circuitry1B according to a second embodiment. A driving circuit110B is provided with resistive elements Rg3and Rg4. The resistive elements Rg3and Rg4are disposed along the path PT2illustrated inFIG.3described above. In other words, the resistive elements Rg3and Rg4are included in a path of a current when the switches Qg3and Qg4are turned on during the second period. The resistive element Rg3and the switch Qg3are connected in series between the first terminal of the capacitance Cg2and the gate terminal G. The resistive element Rg4and the switch Qg4are connected in series between the second terminal of the capacitance Cg2and the source terminal S (first terminal). The resistive element Rg3is connected between the switch Qg3and the switch Qg2. The resistive element Rg3is connected between the node N3and the switch Qg3. The switch Qg1, the resistive element Rg1, the switch Qg3, and the resistive element Rg3connected in series are connected in parallel with the capacitance Cg1. The resistive element Rg4is connected to one end of the switch Qg4, and the other end of the switch Qg4is connected to the resistive element Rg2. In addition, the switch Qg2, the resistive element Rg2, the switch Qg4, and the resistive element Rg4connected in series are connected in parallel with the capacitance Cg2. When the resistive elements Rg3and Rg4are added along the path PT2, the resistance (impedance) of the path PT2is increased. This makes it possible to adjust (suppress) the increase rate of the gate voltage (magnitude of current supplied to gate terminal) during the second period, to a desired value. Variable resistive elements may be used as the resistive elements Rg3and Rg4, and resistance values of the resistive elements Rg3and Rg4may be adjusted. This makes it possible to more flexibly adjust the increase rate of the gate voltage. InFIG.7, the two resistive elements Rg3and Rg4are added; however, only at least one of the resistive elements may be added. Further, the impedance of the first circuit150may be adjusted by adjusting internal resistance values of the switches Qg3and Qg4in place of addition of the two resistive elements. This also makes it possible to adjust the increase rate of the gate voltage (magnitude of current supplied to gate terminal) during the second period. 
As described above, according to the second embodiment, the increase rate of the gate voltage during the second period can be adjusted (suppressed) to the desired value by adding the resistive elements to the path PT2. Third Embodiment FIG.8is a block diagram of electronic circuitry1C according to a third embodiment. A driving circuit110C includes a plurality of switches Qg1, a plurality of resistive elements Rg1, a plurality of switches Qg3, a plurality of resistive elements Rg3, a plurality of switches Qg2, a plurality of resistive elements Rg2, a plurality of switches Qg4, and a plurality of resistive elements Rg4. Each of the switches Qg1to Qg4are turned on or off independent of turning-on or off of the other switches Qg1to Qg4of the same type. In the driving circuit110C, a plurality of series connections each including one switch Qg1, one resistive element Rg1, one switch Qg3, and one resistive element Rg3are connected in parallel with the capacitance Cg1. Likewise, a plurality of series connections each including one switch Qg2, one resistive element Rg2, one switch Qg4, and one resistive element Rg4are connected in parallel with the capacitance Cg2. In other words, a plurality of series connections each including one switch Qg1and one resistive element Rg1are connected in parallel between the first terminal of the capacitance Cg1and the gate terminal G. A plurality of series connections each including one switch Qg3and one resistive element Rg3are connected in parallel between the first terminal of the capacitance Cg2and the gate terminal G. A plurality of series connections each including one switch Qg2and one resistive element Rg2are connected in parallel between the second terminal of the capacitance Cg1and the source terminal S. A plurality of series connections each including one switch Qg4and one resistive element Rg4are connected in parallel between the second terminal of the capacitance Cg2and the source terminal S. The other configurations are similar to the configurations of the electronic circuitry1A inFIG.7according to the second embodiment. With the configuration inFIG.8, it is possible to more finely adjust the resistance (impedance) of the path PT1of the current during the first period and during the third period and the resistance (impedance) of the path PT2of the current during the second period. For example, to reduce the increase rate (gradient) of the gate voltage during the second period, it is sufficient to reduce the number of switches to be turned on in at least one group out of the plurality of switches Qg3and the plurality of switches Qg4. In contrast, to increase the increase rate (gradient) of the gate voltage during the second period, it is sufficient to increase the number of switches to be turned on in at least one group out of the plurality of switches Qg3and the plurality of switches Qg4. Likewise, to reduce the increase rate (gradient) of the gate voltage during the first period or during the third period, it is sufficient to reduce the number of switches to be turned on in at least one group out of the plurality of switches Qg1and the plurality of switches Qg2. In contrast, to increase the increase rate (gradient) of the gate voltage during the first period or during the third period, it is sufficient to increase the number of switches to be turned on in at least one group out of the plurality of switches Qg1and the plurality of switches Qg2. 
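The effect of enabling more or fewer of the parallel branches in the third embodiment can be estimated directly: identical enabled branches combine in parallel, and the resulting gate current sets the initial rate of rise of the gate voltage. The branch resistance, drive voltage, and gate capacitance below are illustrative assumptions only.

# Illustrative calculation: more enabled switch+resistor branches -> lower effective path
# resistance -> larger gate current -> steeper dVgs/dt.
R_branch = 400.0    # resistance of one enabled branch (ohms), illustrative
V_drive = 10.0      # voltage available to charge the gate (V), illustrative
C_gs = 10e-9        # gate-source capacitance (F), illustrative

for n_on in (1, 2, 4):
    r_eff = R_branch / n_on                 # identical branches in parallel
    i_gate = V_drive / r_eff
    slope = i_gate / C_gs                   # dVgs/dt right after the switches close
    print(f"{n_on} branch(es) on: R_eff={r_eff:6.1f} ohm, "
          f"I_gate={i_gate*1e3:6.1f} mA, dVgs/dt={slope/1e6:6.2f} V/us")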
As described above, according to the third embodiment, it is possible to more flexibly adjust the increase rate of the gate voltage during the first period and during the third period and the increase rate of the gate voltage during the second period, to the desired values. Fourth Embodiment FIG.9is a block diagram of electronic circuitry1D according to a fourth embodiment. Differences with the electronic circuitry1C inFIG.8according to the third embodiment are described. In a driving circuit110D, the divided resistance Rd2and the capacitance Cg2are not provided. Further, the switches Qg4and the resistive elements Rg4are not provided. The switches Qg2and the resistive elements Rg2are used not only during the first period and during the third period but also during the second period. As an example, to reduce the increase rate of the gate voltage during the first period and during the third period, the number of switches to be turned on among the plurality of switches Qg2is reduced. At this time, the increase rate of the gate voltage during the first period and during the third period may be adjusted by also adjusting the number of switches to be turned on among the plurality of switches Qg1. The increase rate of the gate voltage during the first period and during the third period may be more flexibly adjusted by adjusting the number of switches to be turned on among the plurality of switches Qg1and among the plurality of switches Qg2. The resistance values of the resistive elements Rg1and the resistance values of the resistive elements Rg3may be equal to or different from each other. To make the increase rate of the gate voltage during the second period greater than the increase rate during the first period and during the third period, it is sufficient to make the number of switches to be turned on among the plurality of switches Qg2during the second period greater than the number of switches to be turned on during the first period and during the third period. At this time, the increase rate of the gate voltage during the second period may be adjusted by also adjusting the number of switches to be turned on among the switches Qg3. The increase rate of the gate voltage during the second period may be more flexibly adjusted by adjusting the number of switches to be turned on among the switches Qg2and among the switches Qg3. The resistance values of the resistive elements Rg2and the resistance values of the resistive elements Rg3may be equal to or different from each other. As described above, according to the fourth embodiment, the divided resistance Rd2, the capacitance Cg2, the switches Qg4, and the resistive elements Rg4are removed, and the switches Qg2and the resistive elements Rg2are shared by the first period to the third period. This makes it possible to reduce the number of elements and to reduce a size of the electronic circuitry or a size of the driving circuit. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions. | 34,556 |
11863172 | DETAILED DESCRIPTION OF THE EMBODIMENTS The preferred embodiments of the present invention will be described in detail below with reference to the drawings, but the present invention is not only limited to these embodiments. Any alternatives, modifications, equivalent methods and solutions made within the spirit and scope of the present invention shall fall within the scope of protection of the present invention. In order to enable the public to have a thorough understanding of the present invention, specific details are described in the following preferred embodiments of the present invention, but those skilled in the art can fully understand the present invention without the depiction of these details. In the following paragraphs, the present invention is described in detail by way of examples with reference to the drawings. It should be noted that the drawings all adopt a relatively simplified form and all use non-precise proportions, which are only aimed to conveniently and clearly assist in explaining the embodiments of the present invention. FIGURE is a circuit block diagram of a single live line switch circuit according to the present invention. In an embodiment, the single live line switch circuit includes two switch circuits between an input end of a single live line and an output end of the single live line, such as the switch circuit1(denoted as “panel1”) and the switch circuit2(denoted as “panel2”) in FIGURE. Each switch circuit includes a single live line connecting end, a first switch unit, two wire channels, an on-state power obtaining circuit, an off-state power obtaining circuit, and an energy storage element. For example, in FIGURE, the single live line connecting end in panel1is point A, and the single live line connecting end in panel2is point B. In an embodiment, the on-state power obtaining circuit is connected to the single live line connecting end. The first switch unit includes a fixed connecting end and a movable connecting end. The fixed connecting end is connected to the on-state power obtaining circuit. The two wire channels are provided with a first connecting end and a second connecting end, respectively. The movable connecting end of the first switch unit is in contact with the first connecting end or the second connecting end. The off-state power obtaining circuit is connected to nodes on the two wire channels, respectively. Herein, the first switch unit can be realized by an intelligently optimized relay or a double-throw switch. For example, the intelligently optimized relay is a movable connecting end with two contact points, and the two contact points are connected to the nodes on the two wire channels. When the single live line switch circuit controls a load to be turned on, the on-state power obtaining circuit is configured to store energy for the energy storage element. When the single live line switch circuit controls the load to be turned off, the off-state power obtaining circuit is configured to store energy for the energy storage element. The single live line connecting ends of the two switch circuits are connected to the input end (such as point A) of the single live line and the output end (such as point B) of the single live line, respectively. 
The two wire channels of one of the two switch circuits are correspondingly connected to the two wire channels of the other one of the two switch circuits, and the two wire channels are such as the channel1and the channel2in FIGURE, so that the two switch circuits are connected between the input end of the single live line and the output end of the single live line. The switching action of the first switch unit controls the load to be turned on or off. In an embodiment, the on-state power obtaining circuit in each switch circuit includes the second switch Q11, the on-state power obtaining and controlling circuit1, and a first conducting element such as the diode D11. Herein, the circuit in the panel1is taken as an example, the followings are the same. The first end of the second switch is connected to the single live line connecting end point A, and the second end of the second switch is connected to the fixed connecting end of the first switch unit. The on-state power obtaining and controlling circuit is connected to the energy storage element C1. The on-state power obtaining and controlling circuit is connected to the control end of the second switch Q11to control a switching state of the second switch to further control the stabilization of the voltage of the energy storage element C1, so as to provide a stable power supply voltage to a control chip. The anode of the first diode is connected to the second end of the second switch, and the cathode of the first diode is connected to the energy storage element. Herein, the second switch Q11includes a field effect transistor, or the second switch includes a bipolar transistor and a diode. Herein, the on-state power obtaining circuit has a conventional circuit structure, which includes a comparator, an error amplifier, a switch tube and other components. The second switch includes the field effect transistor, or the second switch includes the bipolar transistor and the diode. In an embodiment, the off-state power obtaining circuit in each switch circuit includes a second conducting element such as the diode D12, a third conducting element such as the diode D13, and the off-state power obtaining and controlling circuit1. The anode of the second diode D12and the anode of the third diode D13are respectively connected to the nodes on the two wire channels, such as points C and D in FIGURE. The cathode of the second diode D12and the cathode of the third diode D13are both connected to the off-state power obtaining and controlling circuit1. The output end of the off-state power obtaining and controlling circuit1is connected to the energy storage element C1. The off-state power obtaining and controlling circuit1is configured to control the stabilization of the voltage of the energy storage element C1, so as to provide the stable power supply voltage to the control chip. Herein, the off-state power obtaining circuit has a conventional circuit structure, which includes a comparator, an error amplifier, a switch tube and other components. In an embodiment, each single live line switch circuit further includes a driver circuit and a control chip, such as the driver circuit1and the control chip1in FIGURE. The energy storage element C1provides a working voltage to the control chip, and the control chip generates a switch control signal and transmits the switch control signal to the driver circuit to control a switching state of the first switch unit. 
Herein, the control chip may include a single chip microcomputer, or a wireless control chip, or an microprogrammed control unit (MCU). In an embodiment, the single live line switch circuit further includes a switch panel. The switch panel may be a panel fixed on the switch circuit, and may also be a wireless control panel. The switch panel is provided with a panel indication mark corresponding to the first switch unit, and the indication mark is communicated with the control chip. The user controls the load to be turned on or off through the indication mark. Herein, the panel indication mark may be a button, a touch button, a wireless button, a wireless touch button, and the like. According to the structure of the above switch circuits, the basic principle of the present invention to control the load and obtain power is as follows. The movable connecting end of the first switch unit in the panel1is connected to either the channel1(denoted as “S11is turned on”) or the channel2(denoted as “S12is turned on”), and the movable connecting end of the first switch unit in the panel2is connected to either the channel1(denoted as “S21is turned on”) or the channel2(denoted as “S22is turned on”). The switching-on conditions and power obtaining situations in the four cases are analyzed below. In the case where S11in the panel1is turned on, if S21in the panel2is turned on, then the load is turned on, and both the panel1and the panel2obtain power through the on-state power obtaining circuit; and if S22in the panel2is turned on, then the load is turned off, and both the panel1and the panel2obtain power through the off-state power obtaining circuit. For example, during the positive half cycle of the AC input power, the switch Q21in the panel2is turned on, and the panel2obtains power through a pathway constituted by a body diode of the switch Q11in the panel1and the switch Q21in the panel2; during the negative half cycle of the AC input power, the switch Q11in the panel1is turned on, and the panel1obtains power through a pathway constituted by a body diode of the switch Q21in the panel2and the switch Q11in the panel1. After that, the energy storage elements C1and C2are charged by a single live line electric energy to supply for the control chip or the driver circuit in the panel to use. In the case where S12in the panel1is turned on, if S21in the panel2is turned on, then the load is turned off, and both the panel1and the panel2obtain power through the off-state power obtaining circuit; and if S22in the panel2is turned on, then the load is turned on, and both the panel1and the panel2obtain power through the on-state power obtaining circuit in the same way as above. Similarly, in the case where S21or S22in the panel2is turned on, both the panel1and the panel2can obtain power independently through the on-state power obtaining circuit or the off-state power obtaining circuit. In this way, it can be realized that the panel1and the panel2are controlled to obtain power independently, and that the lamp load is controlled to be turned on or off independently by the switch panels at different places. The single live line switch circuit of the present invention improves and optimizes the intelligent switch without changing the structure of the original single live line and is effectively applied in intelligent home applications. The above mentioned embodiments do not constitute a limitation on the scope of protection of the technical solution of the present invention. 
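As a compact illustration of the four cases analyzed above (an editorial sketch, not part of the original disclosure), the following C program enumerates the positions of the first switch units in panel1and panel2and prints the resulting load state and the power obtaining circuit used by the panels: the load is on exactly when both panels route the live line through the same channel. The variable names are illustrative only.

    #include <stdio.h>

    int main(void)
    {
        /* 1 means the movable end rests on channel1 (S11 or S21 turned on);
         * 2 means it rests on channel2 (S12 or S22 turned on). */
        for (int panel1 = 1; panel1 <= 2; panel1++) {
            for (int panel2 = 1; panel2 <= 2; panel2++) {
                int load_on = (panel1 == panel2); /* load conducts only when both panels select the same channel */
                printf("panel1 on channel%d, panel2 on channel%d -> load %s, panels use the %s power obtaining circuit\n",
                       panel1, panel2, load_on ? "on" : "off", load_on ? "on-state" : "off-state");
            }
        }
        return 0;
    }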
Any modification, equivalent replacement, improvement and others made within the spirit and principle of the above embodiments shall fall within the scope of protection of the technical solution of the present invention. | 10,435 |
11863173 | DETAILED DESCRIPTION As explained above, it can be difficult to directly attach a contoured capacitive touch surface to a PCB. The capacitive sensing electrode of the capacitance that is varying and is to be sensed may not be readily placed on the PCB itself. Further, manufacturing of a system (e.g., a tool) in which a sensing electrode is integrated into the system's housing and electrically connected to a sensing circuit can be difficult if connectors to the sensing electrode are needed. Long cables may be required, thereby making it difficult or impossible to have defined and constant conditions as the parasitic capacitance of such connections are dependent on the placement of the cables and may vary from system to system and be susceptible to vibration or displacement over time due to mechanical stress (e.g., vibration, acceleration, etc.). The capacitive sensing electrode(s) in the examples disclosed herein is not directly, electrically (i.e., galvanically) connected to the integrated circuit (IC) that senses the variable capacitance. Instead, capacitive coupling between the IC and the sensing electrode is employed to assess capacitance. FIG.1shows an example of a cross-section of the housing100of a tool (machine, device, etc.) for which operation of the tool is controlled by sensing the variable capacitance between a user's hand130and the tool. The example housing100includes an outer surface98and an inner surface99. The housing100may be constructed from plastic or other type of non-conductive material. The housing100is shown in this example as circular in cross-section, but the cross-section of the housing100may have other shapes as well. One or more sensing electrodes are embedded within the housing100between the outer and inner surfaces98,99. The example ofFIG.1includes four sensing electrodes121,122,123and124, although the number of sensing electrodes can be other than four (1, 2, 3, etc.). Each electrode121-124comprises a conductive member such as a wire, a conductive trace on a circuit board, a conductive plate, etc. The housing100includes regions of non-conductive material disposed between the sensing electrodes121-124and the outer surface98, which form dielectric regions of capacitors described below. The example ofFIG.1also shows four connecting electrodes101,102,103, and104.FIG.1includes four connecting electrodes101-104, although the number of connecting electrodes can be other than four (1, 2, 3, etc.). Each connecting electrode101-104is attached to the inner surface99of the housing100. Adhesive, screws, mechanical pressure, or other types of attachment mechanisms may be used to attach the connecting electrodes101-104to inner surface99. Each connecting electrode101-104is radially and longitudinally aligned with a respective one of the sensing electrodes121-124and separated by a region of non-conductive material of the housing100so that a capacitor is formed between the electrode and the respective sensing electrode (e.g., electrode101and sensing electrode121). In that regard, the sensing electrodes121-124are each capacitively coupled to a respective connecting electrode of electrodes101-104without being directly electrically coupled. The inner surface99defines a volume97. A capacitive sensing circuit110is provided in this example within the volume97defined by the inner surface. The capacitive sensing circuit110may be fabricated as an IC and mounted on a PCB. 
In this example, the capacitive sensing circuit110galvanically connects to the connecting electrodes101-104, but not to the sensing electrodes121-124. Capacitive sensing circuit110has sense ports117-120. Each sense port is electrically connected to a corresponding connecting electrode. For example, sense port117is connected to connecting electrode101by way of conductor111. Sense port118of capacitive sensing circuit110is connected to connecting electrode102by way of conductor112. Sense port119of capacitive sensing circuit110is connected to connecting electrode103by way of conductor113. Sense port120of capacitive sensing circuit110is connected to connecting electrode104by way of conductor114. Conductors111-114may comprise conductive wires, conductive springs, or other types of conductive mechanisms. Capacitor C1represents the capacitance between sensing electrode121and connecting electrode101. Capacitor C2represents the capacitance between sensing electrode122and connecting electrode102. Capacitor C3represents the capacitance between sensing electrode123and connecting electrode103. Capacitor C4represents the capacitance between sensing electrode124and connecting electrode104. Each of the capacitances C1-C4is a fixed value, that is, the capacitances of C1-C4do not vary. The magnitude of the capacitance of C1-C4is a function of the type of dielectric material comprising the housing100between the electrodes of each capacitor, the distance between the corresponding electrodes, the overlapping area of the corresponding electrodes, etc. In one example, the capacitances of C1-C4generally all have the same capacitance value but can be different from each other in other implementations. FIG.1illustrates a user's hand130near or in contact with the outer surface98of housing100. Capacitance is created between the user's hand130and one or more of the sensing electrodes121-124depending on where the user's hand is in relation to the housing and the sensing electrodes. Further, the capacitance formed between the user's hand and the sensing electrode(s) varies based on how the user grips the housing, the amount of grip pressure applied, etc. In the example shown, variable capacitances C5, C6, C7, and C8are created between sensing electrodes121,122,123, and124and the user's hand. Because the user's hand130in this example is on the side of the housing opposite sensing electrode124, the capacitance value of variable capacitance C8is significantly lower than that of C5-C7. The capacitive sensing circuit110detects and/or measures the variable capacitances C5-C8. In more detail, the capacitive sensing circuit110measures the capacitance at each of the sense ports117-120. The measured capacitance at a port may include contributions from the respective variable capacitance, the respective fixed capacitance, the conductor coupling the fixed capacitance to the sense port, and/or other sources. For example, the measured capacitance at sense port117may include contributions from variable capacitance C5, capacitance C1, and conductor111. In one example, the capacitive sensing circuit110outputs a signal indicating whether the user is gripping the housing based on the measurements at the sense ports117-120and, in turn, based on the variable capacitances C5-C8. The capacitive sensing circuit's output signal may be provided to electronics that controls the operation of the tool. As noted above, the capacitive sensing circuit110is not galvanically connected to the sensing electrodes.
Instead, the capacitive sensing circuit110is electrically (galvanically) connected to the connecting electrodes, which are capacitively, but not directly electrically/galvanically, coupled to the sensing electrodes. If the capacitive sensing circuit110were electrically connected directly to the sensing electrodes121-124, the wiring through the housing itself may be complicated during production and introduce varying amounts of stray capacitance as explained above. By not having a galvanic connection between the sensing electrodes and the capacitive sensing circuit, such problems are alleviated. Furthermore, where the housing100includes deformable materials such as rubberized plastic, if conductive connections extended through deformable portions into rigid portions, they may experience shear stress at the interface when the housing100deforms. Some examples avoid this by capacitively coupling the sensing electrodes121-124to the connecting electrodes101-104instead of coupling them using conductive connections. In such examples, deformable materials are safely and reliably used throughout the housing100including between the sensing electrodes121-124and the connecting electrodes101-104and between the sensing electrodes121-124and the outer surface98. FIG.2illustrates a portion of housing100with one of the sensing electrodes (sensing electrode121) and one of the connecting electrodes (connecting electrode101). Connecting electrode101is electrically connected to the capacitive sensing circuit110via conductor111. The capacitive sensing circuit110is shown in this example as an IC mounted on a PCB202.FIG.2also illustrates that a capacitance Cb is formed between the user's body and the same ground potential used by the capacitive sensing circuit110. FIG.3illustrates an equivalent electrical circuit model of the various capacitances. Fixed capacitance C1between sensing electrode121and connecting electrode101is in series with variable capacitance C5between sensing electrode121and the user's hand130. Similarly, fixed capacitance C2between sensing electrode122and connecting electrode102is in series with variable capacitance C6. Fixed capacitance C3between sensing electrode123and connecting electrode103is in series with variable capacitance C7. Fixed capacitance C4between sensing electrode124and connecting electrode104is in series with variable capacitance C8. The variable capacitances C5-C8represent the capacitance formed between the sensing electrodes and the user's hand130, and depending on how the user grips the housing, the four variable capacitances C5-C8will have varying capacitance values. For example, in FIG.1, the three variable capacitances C5-C7will have a substantially higher capacitance value than variable capacitance C8due to the position of the user's hand130. The use of multiple sensing electrodes121-124permits the capacitive sensing circuit110to discriminate between a person gripping the tool and the tool simply being placed on a conductive surface such as a table top. Placing the tool on such a surface may cause a capacitance C5or C6or C7or C8to be created but generally not two or more of such capacitances. For example, if housing100inFIG.1were placed on a table top where the user's thumb is otherwise located, capacitance C7may be created between the table top and sensing electrode123. As will be explained below, capacitive sensing circuit110measures the equivalent capacitance between each of its sense ports117-120and ground.
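The series model of FIG.3can be made concrete with a short sketch (an editorial illustration; the component values are hypothetical and not taken from the disclosure): the capacitance seen at a sense port is approximately the series combination of the fixed coupling capacitance (e.g., C1) and the variable hand capacitance (e.g., C5), so changes in the hand capacitance remain observable through the fixed capacitor.

    #include <stdio.h>

    static double series_cap(double c_fixed, double c_var)
    {
        return (c_fixed * c_var) / (c_fixed + c_var); /* two capacitors in series */
    }

    int main(void)
    {
        const double c1 = 10e-12;                      /* assumed fixed coupling capacitance, 10 pF */
        const double c5[] = { 0.5e-12, 2e-12, 5e-12 }; /* assumed hand capacitance: none, light touch, firm grip */

        for (int i = 0; i < 3; i++) {
            printf("C5 = %.1f pF -> capacitance seen at the port = %.2f pF\n",
                   c5[i] * 1e12, series_cap(c1, c5[i]) * 1e12);
        }
        return 0;
    }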
Responsive to the capacitive sensing circuit110determining the measured capacitance associated with a sense port to be greater than a threshold, the capacitive sensing circuit determines that a user has gripped the tool at an area of the housing near the corresponding sensing electrode. If the tool is simply placed on a table top, the measured capacitance for the two or more (e.g., three) sensing electrodes not adjacent the table top will be less than the measured capacitance for the sensing electrode adjacent the table top, and thus the capacitive sensing circuit110will not output a signal indicating a user has gripped the tool. FIG.4illustrates one of the many possible connecting electrodes (e.g., connecting electrode101) connected to an example capacitive sensing circuit110. Fixed capacitance C1, variable capacitance C5, and capacitance Cb are shown in series in this example connected to the capacitive sensing circuit110, and the other fixed and variable capacitances of the examples above have been omitted for the sake of clarity. In this example, the capacitive sensing circuit110comprises a control circuit420, a charge transfer capacitor Ctrans, and switches S1and S2. The control circuit420implements a charge transfer technique to determine the effective capacitance between each of ports117-120(only port117is illustrated inFIG.4) and ground. The effective capacitance includes variable capacitance C5. In some examples, control circuit420is a finite state machine. Control circuit420asserts control signals421and422to control the open/closed (on/off) state of switches S1and S2, respectively. When switch S1is closed and switch S2is open, the capacitance at port117is charged using a reference voltage (VREF). During a discharge phase, switch S1is opened and switch S2is closed thereby causing the charged capacitance at port117to discharge current through the control circuit420. The charge from the capacitance at port117is used to charge the charge transfer capacitor Ctrans. Control circuit420calculates the amount of charge transferred from the capacitance at port117to the charge transfer capacitor Ctrans. In one example, the number of charge transfer cycles (e.g., using a counter to measure) needed for the voltage on the capacitor Ctrans to reach a predetermined voltage threshold determines the capacitance. In another example, a predetermined/fixed number of charge transfer cycles is performed and the resulting voltage on the capacitor Ctrans is measured (e.g., via an analog-to-digital converter) and mapped to a capacitance value. Other techniques besides charge transfer can be implemented as well to determine the capacitance. Control circuit420then closes switch S1and opens switch S2to again charge the capacitance at port117. Control circuit420operates the switches S1and S2to repeatedly charge the capacitance at port117, and then transfer the charge onto the charge transfer capacitor Ctrans while determining the amount of charge transferred in each cycle. The amount of charge transferred from the capacitance at port117is a function of the effective capacitance Ceff of the capacitance at port117, which in turn is a function of the capacitance of variable capacitance C5. Each charge/discharge cycle takes a fraction of a second (e.g., hundreds or thousands of charge/discharge cycles each second). A predetermined number of charge/discharge cycles (e.g., 100) may be implemented by control circuit420to determine the effective capacitance of the capacitance at port117.
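One possible software model of the cycle-counting variant described above is sketched below (an editorial illustration built on assumptions, not the patent's own algorithm): each cycle charges the port capacitance to VREF and then lets it share charge with Ctrans, and the number of cycles needed for the Ctrans voltage to reach a threshold falls as the effective port capacitance rises. The charge-sharing formula and all numeric values are assumptions made for illustration.

    #include <stdio.h>

    /* Each cycle: S1 closed charges the port capacitance to VREF, then S2 closed
     * lets that charge redistribute onto Ctrans. Returns the number of cycles
     * needed for the Ctrans voltage to reach the threshold. */
    static int cycles_to_threshold(double c_port, double c_trans, double vref, double v_th)
    {
        double v_trans = 0.0;
        int cycles = 0;
        while (v_trans < v_th && cycles < 100000) {
            v_trans = (c_trans * v_trans + c_port * vref) / (c_trans + c_port);
            cycles++;
        }
        return cycles;
    }

    int main(void)
    {
        const double c_trans = 10e-9;  /* assumed charge transfer capacitor, 10 nF */
        const double vref = 3.3, v_th = 1.0;

        /* A gripped electrode presents a larger effective port capacitance,
         * so fewer cycles are needed to reach the threshold. */
        printf("no touch: %d cycles\n", cycles_to_threshold(5e-12, c_trans, vref, v_th));
        printf("gripped:  %d cycles\n", cycles_to_threshold(9e-12, c_trans, vref, v_th));
        return 0;
    }

A real implementation would also reset Ctrans between measurements and map the cycle count to a capacitance value through calibration; those details are omitted here.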
In some examples, the capacitive sensing circuit110first determines a reference capacitance at the port117at a time when the user is known to be beyond the detection range. The reference capacitance may include contributions from the fixed capacitance C1, the conductor111, and/or other sources and may include a baseline contribution from variable capacitance C5. Additionally, or in the alternative, the capacitive sensing circuit110may be provided a reference capacitance at manufacturing or validation that is stored in control circuit420. In operation, the capacitive sensing circuit110subsequently measures a capacitance at the port117when the user's hand130may be in proximity to the tool as described above. The subsequent capacitance measurement may be compared to the baseline contribution to determine the change in the variable capacitance C5by factoring out (e.g., subtracting) the contributions from the fixed capacitance C1, the conductor111, and other sources. FIG.5shows an example of a tool in the form of a hand-held drill500. The drill500includes a housing510. The drill500has a motor controller520that controls the speed of a motor521. The motor521turns a drill bit505. A user's hand is shown gripping the handle515of the drill. The capacitive sensing circuit110is connected to connecting electrodes121and123(and there may be additional connecting electrodes). The capacitive sensing circuit110detects that a user has gripped the tool and outputs a signal525to the motor controller520. In one example, the motor controller520will prevent the motor from turning the drill bit505if the output signal525from the capacitive sensing circuit110indicates that the capacitive sensing circuit110is not detecting a person gripping the drill500. The capacitive sensing circuit110in this example functions as a safety mechanism to prevent the drill bit from turning unless a person is actively holding the drill's handle515. While a drill is illustrated inFIG.5, other types of tools can employ the capacitive sensing technique described herein, such as joy stick operation of a crane, door handle of a car, etc. Detecting that a person is touching (e.g., gripping) the joy stick, door handle, etc. will permit the operation of the corresponding functional hardware (crane, car door, etc.). Further, the capacitive sensing technique described herein can be applied to other types of sensing applications besides grip detection, such as buttons, wheels, sliders, etc. The term "couple" is used throughout the specification. The term may cover connections, communications, or signal paths that enable a functional relationship consistent with the description of the present disclosure. For example, if device A generates a signal to control device B to perform an action, in a first example device A is coupled to device B, or in a second example device A is coupled to device B through intervening component C if intervening component C does not substantially alter the functional relationship between device A and device B such that device B is controlled by device A via the control signal generated by device A. | 17,133 |
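As a closing illustration of the grip decision described above (an editorial sketch only), the following C fragment subtracts a stored reference from each port measurement and reports a grip only when at least two ports exceed a threshold, which is how a tool resting on a conductive table top, raising only one port, can be rejected. The port count, units, thresholds and measurement values are hypothetical.

    #include <stdio.h>

    #define NUM_PORTS 4

    static int grip_detected(const double measured_pF[NUM_PORTS],
                             const double reference_pF[NUM_PORTS],
                             double delta_threshold_pF)
    {
        int ports_above = 0;
        for (int i = 0; i < NUM_PORTS; i++) {
            /* Subtracting the stored reference factors out the fixed capacitance,
             * the conductor and other static contributions. */
            if (measured_pF[i] - reference_pF[i] > delta_threshold_pF)
                ports_above++;
        }
        return ports_above >= 2; /* a hand generally raises two or more ports, a table top only one */
    }

    int main(void)
    {
        const double reference[NUM_PORTS] = { 9.5, 9.4, 9.6, 9.5 }; /* stored when no user is in range */
        const double hand[NUM_PORTS]      = { 11.8, 12.1, 11.6, 9.6 };
        const double table[NUM_PORTS]     = { 9.5, 9.4, 11.9, 9.5 };

        printf("hand on handle: %s\n", grip_detected(hand, reference, 1.0) ? "grip" : "no grip");
        printf("on a table top: %s\n", grip_detected(table, reference, 1.0) ? "grip" : "no grip");
        return 0;
    }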
11863174 | DETAILED DESCRIPTION OF THE EMBODIMENT It should be noted that, wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. The touch detection circuit of the present disclosure is applied to, for example, a capacitive switch capable of cancelling the noise interference. The touch detection circuit is especially suitable to an application operated under large fluctuation of environmental parameter and high external noises. By cancelling the baseline voltage, the detection accuracy is improved. Referring toFIG.2, it is a schematic block diagram of a touch detection circuit200according to one embodiment of the present disclosure. The touch detection circuit200includes a detection capacitor20, a charging circuit21c, a discharging circuit21d, a comparing circuit23, a counter25and a processor27, wherein the processor27is, for example, a digital signal processor (DSP) or application specific integrated circuit (ASIC) that performs the function thereof using hardware and/or firmware. In one aspect, the charging circuit21c, the discharging circuit21d, the comparing circuit23, the counter25and the processor27together form a detection chip electrically connected to the detection capacitor20. The detection capacitor20is generally in a form of electrode, and has capacitance Csef to form a capacitor voltage Vc cross terminals thereof when receiving electricity. The detection capacitor20is arranged on a component, such as, but not limited to, a door handle, an appliance switch or a lamp switch, for detecting a conductor (e.g., a hand). When the conductor approaches or touches the detection capacitor20, the capacitance Csef is changed and such capacitance change is used as the detecting mechanism of a sensitive switch. As shown inFIG.2, one end of the detection capacitor20is connected to a ground voltage, and the other end thereof is connected to the charging circuit21c, the discharging circuit21dand the comparing circuit23. The capacitor voltage Vc changes (shown as ΔVc inFIG.4) with the charging and discharging process. When the capacitance Csef is changed, charging and discharging times are also changed and such time change is used as the mechanism of identifying a touch event. The charging circuit21cincludes a variable current source21c1and a switching element21c3cascaded together, wherein the switching element21c3is, for example, a transistor switch. In one non-limiting aspect, the variable current source21c1includes multiple current sources31and multiple current switches33to form a current bank as shown inFIG.3, wherein the current switches33are, for example, transistor switches. Please referring toFIG.4together, it is a schematic diagram of charging and discharging of a touch detection circuit200according to one embodiment of the present disclosure. The charging circuit21cis used to charge the detection capacitor20within a first charging interval t1using a first charging current Ic1, and charge the detection capacitor20within a second charging interval t2using a second charging current Ic2smaller than the first charging current Ic1. For example, the first charging interval t1and the second charging interval t2form one complete charging interval. Within the complete charging interval, the charging circuit21ccharges the detection capacitor20using the first charging current Ic1at first, and then charges the detection capacitor20using the second charging current Ic2. 
InFIGS.2and4, the charging and discharging current is shown as Icd. When the detection capacitor20is being charged, Icd is shown to have positive values; whereas, when detection capacitor20is being discharged, Icd is shown to have negative values, or vice versa. The positive and negative values herein are only intended to illustrate a current flow direction, but not to limit the present disclosure. Charging/discharging the detection capacitor20using a large current can shorten the charging interval but can have lower sensitivity and noise immunity; whereas, charging/discharging the detection capacitor20using a small current can have higher sensitivity, but a longer charging interval is required, which extends the scanning time. The present disclosure takes features of both, and considers the first charging interval t1as a charging reference time that has a substantially constant value even when the capacitance Csef is changed by an external conductor. The processor27controls the variable current source21c1(i.e. controlling the first charging current Ic1and the second charging current Ic2) and the switching element21c3of the charging circuit21cto cause the second charging interval t2to be longer than the first charging interval t1. That is, the processor27controls the charging reference time (i.e. t1) to be shorter than a half of the charging interval. Preferably, under the circuit limitation, the first charging interval t1is set as short as possible, and the second charging interval t2is set as long as possible. In the aspect shown inFIG.3, the processor27alters the first charging current Ic1and the second charging current Ic2by changing conducting or connecting states between the multiple current switches33and the multiple current sources31, e.g., the more current switches33are conducted, the higher the charging current generated. The discharging circuit21dincludes a variable current source21d1and a switching element21d3cascaded together, wherein the switching element21d3is, for example, a transistor switch. Similarly, in one non-limiting aspect the variable current source21d1includes multiple current sources31and multiple current switches33as shown inFIG.3. Referring toFIG.4again, the discharging circuit21dis used to discharge the detection capacitor20within a first discharging interval t3using a first discharging current Id1, and discharge the detection capacitor20within a second discharging interval t4using a second discharging current Id2smaller than the first discharging current Id1. For example, the first discharging interval t3and the second discharging interval t4form one complete discharging interval. Within the complete discharging interval, the discharging circuit21ddischarges the detection capacitor20using the first discharging current Id1at first, and then discharges the detection capacitor20using the second discharging current Id2. In the present disclosure, the first discharging interval t3is considered as a discharging reference time that has a substantially constant value even when the capacitance Csef is changed by an external conductor. The processor27controls the variable current source21d1(i.e. controlling the first discharging current Id1and the second discharging current Id2) and the switching element21d3of the discharging circuit21dto cause the second discharging interval t4to be longer than the first discharging interval t3. That is, the processor27controls the discharging reference time (i.e. t3) to be shorter than a half of the discharging interval.
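Under the stated behavior (a fixed first interval t1followed by charging at the smaller current Ic2until the capacitor voltage reaches VH), the second charging interval follows from charge balance as t2 = (Csef×(VH−VL) − Ic1×t1)/Ic2, so a given change in Csef produces a change in t2 that grows as Ic2 is reduced. The short C sketch below works through hypothetical numbers; the formula and values are editorial assumptions consistent with, but not quoted from, the disclosure.

    #include <stdio.h>

    /* t2 = (Csef*(VH - VL) - Ic1*t1) / Ic2, assuming t1 is fixed by the processor
     * and the charging phase ends when the capacitor voltage reaches VH. */
    static double second_interval(double csef, double vh, double vl,
                                  double ic1, double t1, double ic2)
    {
        return (csef * (vh - vl) - ic1 * t1) / ic2;
    }

    int main(void)
    {
        const double vh = 2.0, vl = 1.0;        /* hypothetical reference voltages, volts */
        const double ic1 = 100e-6, t1 = 0.5e-6; /* 100 uA for a fixed 0.5 us reference time */
        const double ic2 = 5e-6;                /* 5 uA second-stage charging current */

        double t2_idle  = second_interval(100e-12, vh, vl, ic1, t1, ic2); /* 100 pF, no conductor */
        double t2_touch = second_interval(110e-12, vh, vl, ic1, t1, ic2); /* 110 pF, conductor nearby */

        printf("t2 without touch: %.2f us\n", t2_idle * 1e6);
        printf("t2 with touch   : %.2f us\n", t2_touch * 1e6);
        printf("change in t2    : %.2f us\n", (t2_touch - t2_idle) * 1e6);
        return 0;
    }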
Preferably, under the circuit limitation, the first discharging interval t3is set as short as possible, and the second discharging interval t4is set as long as possible. In the aspect shown inFIG.3, the processor27alters the first discharging current Id1and the second discharging current Id2by changing conducting or connecting states between the multiple current switches33and the multiple current sources31, e.g., the more current switches33are conducted, the higher the discharging current generated. The comparing circuit23compares the capacitor voltage Vc with a first reference voltage VHand a second reference voltage VL(e.g., smaller than the first reference voltage VH) to conduct/connect the charging circuit21cto the detection capacitor20or conduct/connect the discharging circuit21dto the detection capacitor20. For example, when the charging circuit21ccharges the detection capacitor20to cause the capacitor voltage Vc to reach the first reference voltage VH, the output signal of the comparing circuit23dis-conducts the switching element21c3and conducts the switching element21d3to cause the discharging circuit21dto discharge the detection capacitor20; whereas, when the capacitor voltage Vc is discharged to reach the second reference voltage VL, the output signal of the comparing circuit23dis-conducts the switching element21d3and conducts the switching element21c3to cause the charging circuit21cto charge the detection capacitor20; and the detection capacitor20is charged and discharged repeatedly in this way. In one non-limiting aspect, the comparing circuit23includes two comparators respectively taking the first reference voltage VHand the second reference voltage VLas an input signal of one of two input terminals, and the other input terminal of the two comparators is coupled to the capacitor voltage Vc. The output of one of the two comparators is used to control ON/OFF of the switching element21c3, and the output of the other one of the two comparators is used to control ON/OFF of the switching element21d3. In another non-limiting aspect, the comparing circuit23includes one comparator and one multiplexer. One input terminal of the comparator receives the first reference voltage VHor the second reference voltage VLvia the multiplexer, and the other input terminal of the comparator is coupled to the capacitor voltage Vc. The output of the comparator is used to control ON/OFF of the switching elements21c3and21d3. It should be mentioned that a structure of the comparing circuit23is not limited to those mentioned herein as long as it is able to compare the capacitor voltage Vc with the first reference voltage VHand the second reference voltage VLto accordingly control charging or discharging by controlling ON/OFF of the switching elements21c3and21d3. InFIG.2, an inverter in the discharging circuit21dis used to indicate that the switching elements21c3and21d3are not turned on/off together, but not to limit the present disclosure. For example, the inverter may be arranged in the charging circuit21c, or the comparing circuit23sends out opposite signals to respectively control the switching elements21c3and21d3without using an inverter in the charging or discharging circuit. The counter25(also called a timer) sequentially counts/times lengths of the first charging interval t1, the second charging interval t2, the first discharging interval t3and the second discharging interval t4, and a summation of t1to t4is used as a detection cycle.
In one aspect, the processor27identifies a touch event according to the second charging interval t2and the second discharging interval t4, and not according to the first charging interval t1and the first discharging interval t3. As mentioned above, the charging reference time (i.e. t1) and the discharging reference time (i.e. t3) do not change with the approaching of a conductor, and thus they are considered as baseline time that reflects the baseline voltage of the detection capacitor20. Accordingly, although the counter25is counting the whole detection cycle (t1+t2+t3+t4), the processor27subtracts the charging reference time t1and the discharging reference time t3from the detection cycle (t1+t2+t3+t4) to generate a time of interest (TOI), i.e. a summation of the second charging interval and the second discharging interval (t2+t4). The processor27identifies whether a touch event occurs according to a variation of TOI (t2+t4) between successive detection cycles. For example, when the variation of TOI (t2+t4) is larger than a variation threshold, the processor27confirms the occurrence of a touch event and then sends a control signal Sc to open a door or turn on/off an appliance or lamp according to different applications; otherwise, it means that no conductor is approaching. For example,FIG.4shows that when a touch event occurs, the variation of capacitor voltage ΔVc is changed from the solid line to the dashed line to cause the TOI (t2+t4) to be extended to (t2′+t4′). Accordingly, when a value of (t2′+t4′)−(t2+t4) exceeds the variation threshold, the processor27confirms the occurrence of a touch event. In addition, the processor27may identify whether a touch event occurs according to a comparison result of comparing the variation of detection cycle (t1+t2+t3+t4) and a predetermined threshold, i.e. calculating (t1+t2′+t3+t4′)−(t1+t2+t3+t4). In addition, when the variation of a single TOI or a single detection cycle caused by the change of capacitance Csef is too small, the processor27further identifies whether a touch event occurs according to the variation of multiple TOI, i.e. N×(t2+t4) or multiple detection cycles N×(t1+t2+t3+t4). In one aspect, when identifying that the detection cycle (t1+t2+t3+t4) is equal to or close to a noise cycle (or detection frequency equal to or close to noise frequency), the processor27further changes the first charging current Ic1and the first discharging current Id1(or also changes the second charging current Ic2and the second discharging current Id2) to alter the detection cycle such that the noise frequency band is avoided to improve the detection accuracy. For example, the processor27is further embedded with a time domain-frequency domain conversion algorithm for calculating the noise frequency. The method of calculating the noise frequency or cycle is known in the art, and thus not described herein.
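The decision rule above can be summarized in a short sketch (an editorial illustration; the counter units, cycle counts and threshold are hypothetical): the counter value for each detection cycle is reduced by the fixed reference times t1and t3to obtain TOI = t2+t4, the TOI is accumulated over N cycles, and a touch event is reported when the accumulated TOI exceeds a stored no-touch value by more than the variation threshold.

    #include <stdio.h>

    #define N_CYCLES 8

    /* The counter reports each whole detection cycle (t1+t2+t3+t4); subtracting the
     * fixed reference times t1 and t3 leaves the time of interest t2+t4 per cycle. */
    static double accumulated_toi(const double cycle_us[N_CYCLES], double t1_us, double t3_us)
    {
        double sum = 0.0;
        for (int i = 0; i < N_CYCLES; i++)
            sum += cycle_us[i] - t1_us - t3_us;
        return sum;
    }

    int main(void)
    {
        const double t1_us = 0.5, t3_us = 0.5;     /* hypothetical reference times */
        const double variation_threshold_us = 8.0; /* hypothetical threshold */

        const double idle[N_CYCLES]  = { 21.0, 21.1, 20.9, 21.0, 21.0, 21.1, 20.9, 21.0 };
        const double touch[N_CYCLES] = { 23.1, 23.0, 23.2, 23.1, 23.0, 23.1, 23.2, 23.0 };

        double toi_ref = accumulated_toi(idle, t1_us, t3_us);
        double toi_now = accumulated_toi(touch, t1_us, t3_us);

        if (toi_now - toi_ref > variation_threshold_us)
            printf("touch event confirmed: send control signal Sc\n");
        else
            printf("no touch event\n");
        return 0;
    }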
Referring toFIG.5, it is a flow chart of an operating method of a touch detection circuit200according to one embodiment of the present disclosure, including the steps of: charging, using a charging circuit21c, a detection capacitor20using a first charging current Ic1, and counting a first charging interval t1using a counter25(Step S51); charging, using the charging circuit21c, the detection capacitor20using a second charging current Ic2, smaller than the first charging current Ic1, and counting a second charging interval t2using the counter25(Step S52); discharging, using a discharging circuit21d, the detection capacitor20using a first discharging current Id1, and counting a first discharging interval t3using the counter25(Step S53); discharging, using the discharging circuit21d, the detection capacitor20using a second discharging current Id2, smaller than the first discharging current Id1, and counting a second discharging interval t4using the counter25(Step S54); and identifying a touch event according to a time variation of multiple summations of the second charging interval t2and the second discharging interval t4. The time variation of multiple summations of the second charging interval t2and the second discharging interval t4is N×(t2′+t4′)−N×(t2+t4). As mentioned above, using multiple charging and discharging intervals is to avoid the scenario that the variation of a single charging and discharging interval is smaller than detection sensitivity. In the present disclosure, N is larger than or equal to 1. Details of this operating method have been illustrated above, and thus are not repeated herein. As mentioned above, the processor27may identify a touch event according to the variation of a summation (t2+t4) of the second charging interval t2and the second discharging interval t4between successive detection cycles, and the first charging interval t1and the first discharging interval t3are used as the baseline time but not for identifying the touch event. The operating method of this embodiment further includes the step of: comparing, using a comparing circuit23, a capacitor voltage Vc of the detection capacitor20with a first reference voltage VHand a second reference voltage VLto determine whether to charge or discharge the detection capacitor20. In some aspects, the comparing circuit23further includes a flip-flop to provide a “1” or “0” level for being counted by the counter25according to the output of the comparator included in the comparing circuit23. It should be mentioned that the value in the above embodiment, e.g., a length of charging and discharging shown inFIG.4, is only intended to illustrate but not to limit the present disclosure. It should be mentioned that although the above embodiments are illustrated in a way that two different currents are used to charge the detection capacitor20within a charging interval and two different currents are used to discharge the detection capacitor20within a discharging interval, the present disclosure is not limited thereto. In other aspects, more than two different currents are used to charge the detection capacitor20within the charging interval and more than two different currents are used to discharge the detection capacitor20within the discharging interval. The processor27identifies a touch event according to charging and discharging intervals corresponding to the minimum charging current and the minimum discharging current. 
It should be mentioned that although the above embodiments are illustrated in a way that the touch detection circuit200includes a single self-capacitive electrode (e.g., forming the detection capacitor20), the present disclosure is not limited thereto. In other aspects, the touch detection circuit200includes multiple parallel self-capacitive electrodes each being connected to the respective charging circuit, discharging circuit, comparing circuit and counter as shown inFIG.2. The operation of each self-capacitive electrode is identical to the descriptions mentioned above. The counting results of multiple counters are sent to the same processor27. When identifying that the counted time variation associated with at least one self-capacitive electrode or with a predetermined number of self-capacitive electrodes exceeds a variation threshold, the occurrence of a touch event is confirmed. It should be mentioned that although the present disclosure is illustrated using the touch detection circuit, the touch detection circuit is not only used to detect a touch. When a conductor approaches the detection capacitor20(to influence the detection capacitor), even though the conductor is not actually in contact with the detection capacitor20(or the component arranged with the detection capacitor20), the touch detection circuit still detects an approaching conductor as long as the variation of charging and discharging interval (i.e. indicating variation of capacitance) exceeds a threshold, wherein a detectable distance is determined according to the threshold being set. That is, a touch event detected by the touch detection circuit200of the present disclosure includes the object touch and the object proximity. As mentioned above, the conventional capacitive switch is easily affected by environmental change and noises to degrade the detection accuracy. Accordingly, the present disclosure further provides a touch detection circuit (e.g.,FIG.2) and an operating method thereof (e.g.,FIG.5) that charge and discharge a detection capacitor using a large current and a small current. The charging and discharging interval associated with the larger current is considered as baseline time and cancelled in identifying the touch event. Furthermore, when a frequency of charging and discharging the detection capacitor is close to the noise frequency, the frequency of charging and discharging is changed by changing the charging and discharging currents to avoid the noise frequency band. Although the disclosure has been explained in relation to its preferred embodiment, it is not used to limit the disclosure. It is to be understood that many other possible modifications and variations can be made by those skilled in the art without departing from the spirit and scope of the disclosure as hereinafter claimed. | 19,689 |
11863175 | DETAILED DESCRIPTION Embodiments described below in the context of the apparatus are analogously valid for the respective methods, and vice versa. Furthermore, it will be understood that the embodiments described below may be combined, for example, a part of one embodiment may be combined with a part of another embodiment. It should be understood that the terms “on”, “over”, “top”, “bottom”, “down”, “side”, “back”, “left”, “right”, “front”, “lateral”, “side”, “up”, “down” etc., when used in the following description are used for convenience and to aid understanding of relative positions or directions, and not intended to limit the orientation of any device, or structure or any part of any device or structure. In addition, the singular terms “a”, “an”, and “the” include plural references unless context clearly indicates otherwise. Similarly, the word “or” is intended to include “and” unless the context clearly indicates otherwise. As described herein, a processor (or a processing unit or a host processing unit or a host processor etc) may be understood as any kind of a logic implementing entity, which may be special purpose circuitry or a processor executing software stored in a memory, firmware, or any combination thereof. Thus, the processor may be a hard-wired logic circuit or a programmable logic circuit such as a programmable processor (e.g. Programmable Logic Controller (PLC)), e.g. a microprocessor (e.g. a Complex Instruction Set Computer (CISC) processor or a Reduced Instruction Set Computer (RISC) processor). The processor may also be a processor executing software, e.g. any kind of computer program, e.g. a computer program using a virtual machine code such as e.g. Java. Various embodiments generally relate to an analog input device. In particular, various embodiments generally relate to a force-sensitive analog input device or a pressure-sensitive analog input device. According to various embodiments, the input device may include, but not limited to, a controller, a keypad, a keyboard, a mouse, a joystick, or a steering wheel. According to various embodiments, the input device may include a matrix of analog buttons or keys, each may be configured to vary an input signal based on an amount of depressing force or pressure applied by an user on the respective analog button or key. Accordingly, the respective analog button or key may provide a variable force-sensitive or pressure-sensitive analog input depending on the force or pressure applied to the analog button or key. According to various embodiments, varying an amount of depressing force or pressure applied on the respective analog button or key may vary an extent or degree of depression experienced by the analog button or key. Accordingly, varying the extent or degree of depression of the respective analog button or key may vary an analog output signal of the respective analog button or key from the analog input device. According to various embodiments, the analog output signal may be processed by a processor to generate a corresponding application event in an application. Various embodiments generally relate to a computing system for receiving and processing analog input and a method of processing analog input for the computer system. The computing system may refer to an information handling system or a functional system capable of performing substantial computation. The computing system may include a processing unit, random-access memory, disk storage, input and output devices etc. 
According to various embodiments, the computing system may include a host processor and the analog input device of the various embodiments. According to various embodiments, the user may provide an analog input via the analog input device such that an analog input signal may be sent via the analog input device to the host processor and the host processor may process the analog input signal to generate a corresponding application event in response to the analogy input signal and provide a corresponding output. The corresponding output may include, but not limited to, a text and/or graphic display, sound, lightings, or haptic feedback. The following examples pertain to various embodiments. Example 1 is an analog input device including:at least one mounting panel;a matrix of analog push button assemblies mounted to the at least one mounting panel, each analog push button assembly including an analog pressure sensor, wherein the analog pressure sensor includesa pressure reception arrangement having an optical sensing sub-arrangement configured to measure an amount of light varied according to a pressure sensed at the pressure reception arrangement and an output terminal for outputting an analog signal corresponding to the amount of light measured, anda plunger element configured to exert the pressure on the pressure reception arrangement when the analog push button assembly is pressed by a user's finger;a multiplexer including an input side and an output side, wherein the input side is coupled to the output terminals of the matrix of analog push button assemblies,an analog-to-digital converter which is coupled to the output side of the multiplexer;a processor which is coupled to the analog-to-digital converter and which is configured to output a data packet including a button identity (ID) of the push button assembly pressed by the user's finger and a digital-step-value corresponding to the analog signal from the push button assembly; anda communication interface configured to transmit the data packet to a host computing device. In Example 2, the subject matter of Example 1 may optionally include an analog filter coupled in an electrical connection between the pressure sensor and the analog-to-digital converter. In Example 3, the subject matter of Example 1 or 2 may optionally include a lighting arrangement including at least one light source controlled by the processor. In Example 4, the subject matter of any one of Examples 1 to 3 may optionally include that the analog signal is an analog voltage. In Example 5, the subject matter of any one of Examples 1 to 4 may optionally include that the pressure reception arrangement may include a biasing element which is arranged between the plunger element and the at least one mounting panel and which bias the plunger element away from the at least one mounting panel in a biasing direction. 
In Example 6, the subject matter of Example 5 may optionally include that the optical sensing sub-arrangement may includea light emitter which is disposed at an intermediate level between the plunger element and the at least one mounting panel and which is oriented to emit light along a light path perpendicular to the biasing direction of the biasing element;a light sensor which is disposed in the light path and which is configured to generate the analog signal based on the amount of light sensed by the light sensor for outputting via the output terminal, anda light blocking element which is associated with the plunger element in a manner so as to be movable together with the plunger element along a movement direction parallel to the biasing direction and which is extending towards the mounting panel to intersect the light path between the light emitter and the light sensor, wherein the light blocking element includes a cut-out profile which varies the amount of light passing through the light blocking element as the light blocking element moves transversely across the light path when the plunger element is moved towards the at least one mounting panel. Example 7 is a computing system for receiving and processing analog input, the computing system includinga host processor; andthe input device according to any one of Examples 1 to 6 connected to the host processor via the communication interface,wherein the host processor is configured to receive the data packet from the input device, to determine an amount of depression of a respective push button assembly based on the digital-step-value corresponding to the analog signal from the push button assembly, and to generate a corresponding predetermined application event in an application based on the determined amount of depression of the respective push button assembly and an input setting for the application. In Example 8, the subject matter of Example 7 may optionally include that the host processor may be further configured to transform the determined amount of depression onto a non-linear scale prior to generating the corresponding predetermined application event. In Example 9, the subject matter of Example 7 or 8 may optionally include that the corresponding predetermined application event may include a continuous variable action, and wherein the host processor is configured to generate a state of the continuous variable action according to the determined amount of depression. In Example 10, the subject matter of Example 7 or 8 may optionally include that the corresponding predetermined application event may include a discrete action, and wherein the host processor is configured to generate the discrete action when the determined amount of depression is equal to or greater than a pre-set depression level. In Example 11, the subject matter of Example 7 or 8 may optionally include that the corresponding predetermined application event may include a first discrete action and a second discrete action, and wherein the host processor is configured to generate the first discrete action when the determined amount of depression is equal to a first pre-set depression level or between the first pre-set depression level and a second pre-set depression level, and to generate the second discrete action when the determined amount of depression is equal to or greater than the second pre-set depression level. 
In Example 12, the subject matter of any one of Examples 7 to 11 may optionally include that the host processor may be configured to toggle between a first input setting and a second input setting for the application based on a user input command via a physical modifier key or a virtual modifier key, and wherein a first corresponding predetermined application event associated with the first input setting is different from a second corresponding predetermined application event associated with the second input setting. Example 13 is a method of processing analog input for a computing system according to claim7, the method including:generating, via the pressure reception arrangement, the analog signal corresponding to the amount of light measured as a measure of the pressure exerted on the pressure reception arrangement when the push button assembly is pressed by a user's finger;digitizing, via the analog-to-digital converter, the analog signal into the corresponding digital-step-value;outputting, via the processor, the data packet including button identity (ID) of the push button assembly pressed by the user's finger and a digital-step-value corresponding to the analog signal from the push button assembly;transmitting, via the communication interface, the data packet from the processor of the input device to the host processor of the computing system;determining, via the host processor, the amount of depression of the respective push button assembly based on the corresponding digital-step-value from the data packet received; andgenerating, via the host processor, the corresponding predetermined application event in the application based on the determined amount of depression of the respective push button assembly and the input setting for the application. In Example 14, the subject matter of Example 13 may optionally include transforming the determined amount of depression onto a non-linear scale prior to generating the corresponding predetermined application event. In Example 15, the subject matter of Example 13 or 14 may optionally include that the corresponding predetermined application event may include a continuous variable action, and wherein generating the corresponding predetermined event may include generating a state of the continuous variable action according to the determined amount of depression. In Example 16, the subject matter of Example 13 or 14 may optionally include that the corresponding predetermined application event may include a discrete action, and wherein generating the corresponding predetermined event may include generating the discrete action when the determined amount of depression is equal to or greater than a pre-set depression level. In Example 17, the subject matter of Example 13 or 14 may optionally include that the corresponding predetermined application event may include a first discrete action and a second discrete action, and wherein generating the corresponding predetermined event may include generating the first discrete action when the determined amount of depression is equal to a first pre-set depression level or between the first pre-set depression level and a second pre-set depression level, and to generate the second discrete action when the determined amount of depression is equal to or greater than the second pre-set depression level. 
In Example 18, the subject matter of any one of Examples 13 to 17 may optionally include toggling between a first input setting and a second input setting for the application based on a user input command via a physical modifier key or a virtual modifier key, and wherein a first corresponding predetermined application event associated with the first input setting is different from a second corresponding predetermined application event associated with the second input setting. FIG.1shows a schematic diagram of an analog input device100according to various embodiments. According to various embodiments, the analog input device100may include at least one mounting panel110. According to various embodiments, the at least one mounting panel110may be part of an internal support structure of the analog input device100. According to various embodiments, the at least one mounting panel110may also be an internal printed circuit board (PCB) of the analog input device100. According to various embodiments, the analog input device100may include a matrix of analog push button assemblies120mounted to the at least one mounting panel110. According to various embodiments, the analog input device100may include two or more or a plurality of analog input button assemblies120. For example, when the analog input device100is a mouse, the analog input device100may include two or more analog click buttons. When the analog input device100is a keypad having 15-25 keys, the analog input device100may include up to 15-25 analog keys. When the analog input device100is a gaming controller having four or more buttons, the analog input device100may include two or three or four or more analog buttons. When the analog input device100is a keyboard, the analog input device100may include a plurality of analog keys. According to various embodiments, each analog push button assembly120may include a pressure sensor121having a plunger element122and a pressure reception arrangement124. The plunger element122may interact with the pressure reception arrangement124in a manner so as to exert a pressure or a force on the pressure reception arrangement124when the analog push button assembly120is being pressed by a user's finger. According to various embodiments, the pressure sensor121may be a swappable single unitary key-switch or may be a non-separable integrated built-in arrangement of the analog input device100. According to various embodiments, each analog push button assembly120may include a button cap123removably coupled or fixedly coupled to the plunger element122of the pressure sensor121. According to various embodiments, the button cap123may be a thin shell having an input surface for receiving a finger tip of the user. Accordingly, the button cap123may be ergonomically shaped for receiving the finger tip. According to various embodiments, the pressure reception arrangement124of each analog push button assembly120may include an optical sensing sub-arrangement125configured to measure an amount of light varied according to a pressure or force sensed at the pressure reception arrangement124. Accordingly, depressing the push button assembly120may exert a corresponding pressure or force on the pressure reception arrangement124which may vary the amount of light sensed by the optical sensing sub-arrangement125. 
According to various embodiments, the amount of light may be varied via proportionally varying an extent of blockage of the light with a light blocking element134with respect to the pressure or force on the pressure reception arrangement124. According to various embodiments, the pressure reception arrangement124may include an output terminal133for outputting an analog signal corresponding to the amount of light measured. Accordingly, the amount of light measured may be output as the analog signal. According to various embodiments, the amount of light may be an intensity of light. According to various embodiments, the pressure reception arrangement124may include a biasing element126. The biasing element126may be arranged between the plunger element122and the at least one mounting panel110. The biasing element126may bias the plunger element122away from the at least one mounting panel110in a biasing direction. Accordingly, the biasing element126may provide resistance against the pressure or force depressing the analog push button assembly120. According to various embodiments, the biasing element126may include a spring or a resilient membrane structure or other suitable elements, structures or configurations which may return the plunger element122to an original or initial position after being depressed. According to various embodiments, the biasing element126may be directly or indirectly connected in between the plunger element122and the at least one mounting panel110. According to various embodiments, the pressure reception arrangement124may include a housing mounted to the at least one mounting panel110and the plunger element122may be slidable through a ceiling of the housing. The button cap123may be coupled to the plunger element122so as to be movable relative to the housing. The biasing element126may bias the plunger element122away from the floor of the housing in the biasing direction so as to indirectly bias the button cap123away from the at least one mounting panel110. According to various embodiments, the optical sensing sub-arrangement125of the pressure reception arrangement124may include a light emitter130. The light emitter130may be disposed at an intermediate level between the plunger element122and the at least one mounting panel110. The light emitter130may be oriented to emit light along a light path131perpendicular to the biasing direction of the biasing element126. Accordingly, the light path131of the light emitted from the light emitter130may be substantially perpendicular to a direction of depression of the at least one analog input button assembly120by the user. According to various embodiments, the light emitter130may be a laser light emitter or a collimated light emitter. According to various embodiments, the intermediate level between the plunger element122and the at least one mounting panel110may be a position along a height between the mounting panel110and a maximum depression of the plunger element122. According to various embodiments, the optical sensing sub-arrangement125of the pressure reception arrangement124may include a light sensor132. The light sensor132may be disposed in the light path131and may be configured to output an analog signal based on the amount of light sensed by the light sensor132for outputting via the output terminal133. Accordingly, the light sensor132may be placed in a position that is directly facing the light emitter130. 
Hence, the light emitter130and the light sensor132may be arranged in an opposing manner such that the light from the light emitter130is directly projected straight towards the light sensor132. According to various embodiments, the light sensor132may detect an intensity of light incident on the light sensor132and output an analog signal according to the intensity of light detected. According to various embodiments, the light sensor132may include, but not limited to, a phototransistor-type light sensor or a photoresistor-type light sensor or a photodiode-type light sensor. According to various embodiments, the analog signal from the light sensor132may be an analog voltage or an analogue current. According to various embodiments, the optical sensing sub-arrangement125of the pressure reception arrangement124may include a light blocking element134. The light blocking element134may be associated with the plunger element122in a manner so as to be movable together with the plunger element122along a movement direction parallel to the biasing direction. The light blocking element134may be extending towards the mounting panel110to intersect the light path131between the light emitter130and the light sensor132. According to various embodiments, the light blocking element134may be directly or indirectly coupled to the plunger element122. According to various embodiments, the light blocking element134may be of an elongate shape and may be extending downwards towards the mounting panel110. According to various embodiments, the light blocking element134may be positioned in a manner such that a movement path of the light blocking element134, due to depressing of the analog push button assembly120by the user, may intersect the light path131between the light emitter130and the light sensor132. According to various embodiments, when the pressure reception arrangement124includes the housing and the plunger element122is slidable through the ceiling of the housing with the button cap123coupled to the plunger element122, the light blocking element134may be coupled to the plunger element122so as to be movable together with the button cap123. According to various embodiments, the light blocking element134may include a cut-out profile.FIG.3AtoFIG.3Dshow various examples of the cut-out profile136of the light blocking element134. The cut-out profile136of the light blocking element134may vary the amount of light passing through the light blocking element134as the light blocking element134moves transversely across the light path131when a pressure or force is applied to push the button cap123towards the at least one mounting panel110. Accordingly, the cut-out profile136of the light blocking element134may vary the extent of blockage of the light path131according to the movement of the plunger element122as a result of the pressure or force on the pressure reception arrangement124. According to various embodiments, the cut-out profile136of the light blocking element134may include, but not limited to, a triangular shape (seeFIG.3A), or a frusto-conical shape (seeFIG.3B), or a trumpet shape (seeFIG.3C), or arc shape (seeFIG.3D) or any other suitable shape which may vary the amount of light passing through as the light blocking element134moves to intersect the light path131. 
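By way of a non-limiting illustration, the short Python sketch below models how a cut-out profile of the kind just described could translate key travel into the analog signal: as the light blocking element moves across the light path, the open area of the cut-out shrinks, less light reaches the light sensor, and the output voltage falls. The 4 mm travel, 3.3 V output swing and the specific profile functions are assumptions made only for this sketch and are not taken from the embodiments.

```python
# Simplified model of the optical sensing sub-arrangement (assumed values only):
# key travel moves the light blocking element across the light path, the cut-out
# profile sets how much light still reaches the light sensor, and the analog
# output voltage is taken as proportional to the received light.

TOTAL_TRAVEL_MM = 4.0   # assumed full key travel
V_SUPPLY = 3.3          # assumed output swing of the light sensor circuit

def transmitted_fraction(travel_mm, profile="triangular"):
    """Fraction of the emitted light that still reaches the light sensor."""
    x = min(max(travel_mm / TOTAL_TRAVEL_MM, 0.0), 1.0)   # normalised travel 0..1
    if profile == "triangular":        # open width shrinks linearly with travel
        return 1.0 - x
    if profile == "arc":               # example of a slower initial roll-off
        return (1.0 - x) ** 0.5
    raise ValueError(f"unknown profile: {profile}")

def sensor_voltage(travel_mm, profile="triangular"):
    """Analog signal at the output terminal, assumed proportional to the light."""
    return V_SUPPLY * transmitted_fraction(travel_mm, profile)

if __name__ == "__main__":
    for mm in (0.0, 1.0, 2.0, 3.0, 4.0):
        print(f"travel {mm:.1f} mm -> {sensor_voltage(mm):.2f} V")
```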
According to various embodiments, the light blocking element134may include an elongate plate with the cut-out profile136and may be disposed so as to move longitudinally to intersect the light path131as the user applies a pressure or force to push the button cap123. According to various embodiments, the analog input device100may include a multiplexer138having an output side and an input side. According to various embodiments, the input side of the multiplexer138may be coupled to the output terminals133of the matrix of analog push button assemblies120. Accordingly, the multiplexer138may accept multiple analog signals from the matrix of analog push button assemblies120and provides a single output. According to various embodiments, the analog input button assemblies120of the analog input device100may be coupled to the multiplexer138via a matrix connection. According to various embodiments, the analog input device100may include an analog-to-digital converter (ADC)140. The ADC140may be coupled to the output side of the multiplexer138. Accordingly, the ADC140may receive the analog signal from the multiplexer138and may be configured to discretize the analog signal into a corresponding digital-step-value. According to various embodiments, the multiplexer138may be electrically coupled to the ADC140such that the analog signal output from the multiplexer138may be sent to the ADC140for converting into readable data. According to various embodiments, the ADC140may convert the continuous-time and continuous-amplitude analog signal from the multiplexer138into a discrete-time and discrete-amplitude digital-step-value. According to various embodiments, the ADC140may perform the conversion at a predetermined sampling interval. According to various embodiments, the total number of discrete digital-step-values for the range of analog signal from the light sensor132may be based on a resolution of the ADC140. According to various embodiments, the digital-step-value may be an integer number from 0 to N, whereby N is one less than a power of two. Accordingly, each integer number of the digital-step-value may represent a corresponding magnitude of the analog signal from the light sensor132. According to various embodiments, the analog input device100may include a processor142. The processor142may be coupled to the ADC140in a manner so as to receive the digital-step-value. The processor142may be configured to output a data packet including a button identity (ID) of the push button assembly120pressed by the user's finger and the digital-step-value corresponding to the analog signal from the push button assembly120. According to various embodiments, the processor142and the ADC140may be in digital communication with each other. Accordingly, the digital-step-value converted by the ADC140may be digitally communicated to the processor142from the ADC140. According to various embodiments, the processor142may receive various information data from the ADC140and/or the pressure reception arrangement124and/or the pressure sensor121, and may arrange, compile and/or format the various information data, including button identity (ID) and the digital-step-value, into a string of formatted data for transmission. According to various embodiments, the string of formatted data may be in the form of a USB (Universal Serial Bus) HID (Human Interface Device) Vendor report. According to various embodiments, the analog input device100may include a communication interface144. The communication interface may be wired or wireless. 
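As a hedged illustration of the digitisation and packet formatting described above, the sketch below quantises the analog signal into a digital-step-value between 0 and N (N being one less than a power of two) and packs it together with a button identity (ID) into a small packet. The 12-bit resolution, reference voltage and 3-byte layout are assumptions of this sketch; an actual device might instead format the data as a USB HID vendor report as mentioned above.

```python
# Quantisation of the analog signal into a digital-step-value (0..N, N = 2**bits - 1)
# and packing of a data packet holding the button identity (ID) and that value.
# Resolution, reference voltage and the 3-byte layout are assumptions of this sketch.
import struct

ADC_BITS = 12
N_MAX = (1 << ADC_BITS) - 1        # digital-step-value range 0..N_MAX
V_REF = 3.3                        # assumed ADC reference voltage

def to_digital_step_value(analog_volts):
    """Discretise the analog signal into an integer digital-step-value."""
    code = round(analog_volts / V_REF * N_MAX)
    return min(max(code, 0), N_MAX)

def make_data_packet(button_id, digital_step_value):
    """Pack button ID (1 byte) and digital-step-value (2 bytes, little-endian)."""
    return struct.pack("<BH", button_id & 0xFF, digital_step_value & 0xFFFF)

if __name__ == "__main__":
    value = to_digital_step_value(1.1)
    print(value, make_data_packet(button_id=7, digital_step_value=value).hex())
```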
The communication interface144may be connectable to a host computing device. The communication interface144may be configured to transmit the data packet from the processor142to the host computing device. According to various embodiments, the wired communication interface144may include USB connector or multi-pin electrical connectors. According to various embodiments, the wireless communication interface144may include infrared (IR) communication interface, radio frequency (RF) communication interface, Bluetooth communication interface, or Wi-Fi communication interface. According to various embodiments, the host computing device may be a computer or a programmable machine or programmable electronic device to which peripherals such as the input device100may be connected to and which directs the operation of the peripherals, including drivers for input/output devices connected to the host computing device. According to various embodiments, the ADC140and the processor142may be separate elements of the analog input device100. According to various embodiments, the ADC140and the processor142may be integrated as a single microcontroller150. FIG.2shows a schematic diagram of an analog input device200according to various embodiments. According to various embodiments, the analog input device200ofFIG.2includes all the features of the analog input device100ofFIG.1. Accordingly, all features, changes, modifications, and variations that are applicable to the analog input device100ofFIG.1may also be applicable to the analog input device200ofFIG.2. According to various embodiments, the analog input device200ofFIG.2may differ from the analog input device100ofFIG.1in that the analog input device200ofFIG.2may further include the following additional features and/or limitations. According to various embodiments, the analog input device200ofFIG.2may further include a filter260. The filter260may be coupled in an electrical connection between the pressure sensor121and the analog-to-digital converter140. The filter260may be configured to reduce noise in the analog signal from the pressure sensor121. According to various embodiments, the filter260may include a low-pass filter. According to various embodiments, the analog input device200ofFIG.2may further include a storage element270. The storage element270may be coupled to the processor142and may store instructions for execution by the processor142. According to various embodiments, the storage element270may be a memory. According to various embodiments, the memory may include, but not limited to, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a flash memory. According to various embodiments, the analog input device200ofFIG.2may further include a lighting arrangement280including at least one light source controlled by the processor142. According to various embodiments, the lighting arrangement280may include backlighting for the at least one analog input button assembly120, and/or underglow lighting for the analog input device200. According to various embodiments, the processor142may be configured to control the lighting arrangement280based on a lighting sequence and/or pattern stored in the storage element270. According to various embodiments, the processor142may receive instructions from the host computing device which the analog input device200is connected to. 
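The filter described above is an analog low-pass placed ahead of the analog-to-digital converter. As an illustration of the same noise-reduction idea, the sketch below computes the corner frequency of a simple first-order RC network and applies an equivalent first-order digital smoother to already-digitised samples; the component values and smoothing factor are assumptions, and the digital smoother is only a stand-in used to show the effect, not the filter of the embodiments.

```python
# Corner frequency of a first-order RC low-pass (f_c = 1 / (2*pi*R*C)) and a
# first-order digital smoother used here only to visualise the noise-reduction
# effect on digitised samples. Component values and alpha are assumptions.
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """-3 dB corner frequency of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def smooth(samples, alpha=0.2):
    """Exponential (single-pole IIR) smoothing of a sample sequence."""
    out, y = [], float(samples[0])
    for s in samples:
        y += alpha * (s - y)
        out.append(round(y, 1))
    return out

if __name__ == "__main__":
    print(f"{rc_cutoff_hz(10_000, 100e-9):.0f} Hz")    # 10 kohm, 100 nF -> ~159 Hz
    print(smooth([0, 0, 4095, 4095, 4095]))            # an abrupt step is smoothed
```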
According to various embodiments, the processor142may prioritise the control of the lighting arrangement280to be based on the instructions received from the host computing device over the lighting sequence and/or pattern stored in the storage element270. According to various embodiments, the processor142may overwrite or replace the lighting sequence and/or pattern stored in the storage element270with a new lighting sequence and/or pattern based on instructions received from the host computing device. FIG.4shows a schematic diagram of a computing system401for receiving and processing analog input according to various embodiments. According to various embodiments, the computing system401may include a host processor402and the analog input device100,200(ofFIG.1and/orFIG.2) connected to the host processor402via the communication interface144. According to various embodiments, the host processor402may be a central processing unit of the host computing device404. According to various embodiments, the host processor402may receive the data packet from the input device100,200. According to various embodiments, the host processor402may interpret the input data packet from the input device100,200and execute programmed instructions based on the interpreted input data packet. According to various embodiments, the host processor402may determine an amount of depression of the button cap123of a respective push button assembly120based on the digital-step-value corresponding to the analog signal from the push button assembly120. According to various embodiments, the host processor402may determine an amount of depression of the button cap123of the respective push button assembly120via performing calculation or mathematical processing or mapping or table look-up operation or other suitable processing technique. According to various embodiments, the host processor402may generate a corresponding predetermined application event in an application program based on the determined amount of depression of the button cap123of the respective push button assembly120and an input setting for the application program. According to various embodiments, the corresponding predetermined application event may be a programmed action or occurrence triggered by the application program in response to or in recognition of the determined amount of depression of the button cap123of the respective push button assembly120. According to various embodiments, the input setting for the application program may be a mapping of predetermined application events to the matrix of analog push button assemblies120and respective amount of depression. According to various embodiments, the input setting may be a pre-defined setting in the application program. According to various embodiments, the input setting may be a user definable or configurable setting which the user may alter or change in the application program accordingly based on user preference and usage. According to various embodiments, the host processor402may be further configured to transform the determined amount of depression of the button cap123of the respective push button assembly120onto a non-linear scale prior to generating the corresponding predetermined application event. According to various embodiments, the non-linear scale may include a logarithmic scale or a variable scale. 
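A possible host-side mapping is sketched below: the digital-step-value is converted to an amount of depression and optionally re-mapped onto a logarithmic scale before any application event is generated. The resolution, travel range, direction of the mapping (more light taken to mean less depression) and the shape of the curve are assumptions for illustration only.

```python
# Host-side mapping from digital-step-value to an amount of depression, with an
# optional logarithmic re-mapping applied before the application event is raised.
# Resolution, travel, inversion of the mapping and the curve shape are assumptions.
import math

N_MAX = 4095            # assumed digital-step-value range 0..N_MAX
FULL_TRAVEL_MM = 4.0    # assumed full depression of the button cap

def depression_mm(digital_step_value, inverted=True):
    """Amount of depression; 'inverted' assumes more light means less depression."""
    frac = digital_step_value / N_MAX
    if inverted:
        frac = 1.0 - frac
    return frac * FULL_TRAVEL_MM

def to_nonlinear(depression, full=FULL_TRAVEL_MM, gain=9.0):
    """Logarithmic scale: more responsive in the lower depression range."""
    return full * math.log1p(gain * depression / full) / math.log1p(gain)

if __name__ == "__main__":
    for code in (4095, 3000, 1500, 0):
        d = depression_mm(code)
        print(f"code {code:4d} -> {d:.2f} mm, non-linear {to_nonlinear(d):.2f} mm")
```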
According to various embodiments, with the translation of the determined amount of depression of the button cap123of the respective push button assembly120onto the non-linear scale, the analog input device100may be configured to be more responsive in the lower or middle or higher depression range. According to various embodiments, translation to a non-linear scale may allow the user to customise their own setting to suit their personal usage based on desired responsiveness of the analog input device100. According to various embodiments, the corresponding predetermined application event may include a continuous variable action. For example, in gaming, the continuous variable action may include a magnitude for a character's speed, directions, movements, actions, etc. According to various embodiments, the host processor may be configured to generate a state of the continuous variable action according to the determined amount of depression of the button cap123of the respective push button assembly120. According to various embodiments, the corresponding predetermined application event may include a discrete action. For example, the discrete action may be a binary action such as on or off. According to various embodiments, the host processor may be configured to generate the discrete action when the determined amount of depression of the respective push button assembly120is equal to or greater than a pre-set depression level of the respective push button assembly120. Accordingly, the input device may serve as a normal binary input device such as a type-writing keyboard or a numeral keypad etc. According to various embodiments, with the pre-set depression level of the button cap123of the respective push button assembly120, an actuation point or trigger point of the respective push button assembly120may be configured or programmed. Accordingly, the respective push button assembly120may be configured or programmed to generate the discrete action at a desired amount of depression. Thus, the respective push button assembly120may trigger the discrete action without requiring full depression of the button cap123of the push button assembly120. According to various embodiments, the pre-set depression level of the button cap123of the respective push button assembly120may be a user defined input. Accordingly, the host processor402may be configured to receive and store the user defined input as the pre-set depression level of the button cap123of the respective push button assembly120. According to various embodiments, the corresponding predetermined application event may include a first discrete action and a second discrete action. According to various embodiments, the host processor may be configured to generate the first discrete action when the determined amount of depression of the button cap123of the respective push button assembly120is equal to a first pre-set depression level of the button cap123of the respective push button assembly120or between the first pre-set depression level of the button cap123of the respective push button assembly120and a second pre-set depression level of the button cap123of the respective push button assembly120. According to various embodiments, the host processor may be configured to generate the second discrete action when the determined amount of depression of the button cap123of the respective push button assembly120is equal to or greater than the second pre-set depression level of the button cap123of the respective push button assembly120. 
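The event-generation rules just described can be summarised in a short sketch: a continuous variable action scales with the determined amount of depression, a discrete action fires once a single pre-set depression level is reached, and two pre-set levels allow one key to produce a first and a second discrete action. The numeric thresholds below stand in for user-defined settings and are assumptions.

```python
# Event generation from the determined amount of depression: a continuous variable
# action, a discrete action with one pre-set depression level, and a two-level key
# producing a first and a second discrete action. Thresholds are assumed settings.

def continuous_action(depression_mm, full_travel=4.0):
    """State of a continuous variable action (e.g. 0..1 speed magnitude)."""
    return min(max(depression_mm / full_travel, 0.0), 1.0)

def discrete_action(depression_mm, actuation_point=1.5):
    """Binary action fires when depression reaches the pre-set level."""
    return depression_mm >= actuation_point

def two_level_action(depression_mm, first_level=1.0, second_level=3.0):
    """First action between the two levels, second action at or beyond the second."""
    if depression_mm >= second_level:
        return "second discrete action"
    if depression_mm >= first_level:
        return "first discrete action"
    return None

if __name__ == "__main__":
    for d in (0.5, 1.2, 2.0, 3.5):
        print(d, continuous_action(d), discrete_action(d), two_level_action(d))
```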
Accordingly, a single push button assembly120may be configured to trigger to two or more different discrete actions by pre-setting the two or more different depression ranges for triggering the respective discrete actions. According to various embodiments, each of the first and second pre-set depression levels of the button cap123of the respective push button assembly120may be a respective user defined input. Accordingly, the host processor402may be configured to receive and store the user defined inputs as the first and second pre-set depression levels of the button cap123of the respective push button assembly120. According to various embodiments, the host processor may be configured to toggle between a first input setting and a second input setting for the application based on a user input command via a physical modifier key or a virtual modifier key. According to various embodiments, a first corresponding predetermined application event associated with the first input setting may be different from a second corresponding predetermined application event associated with the second input setting. For example, in gaming, the first input setting may be a first mapping of fighting related predetermined application events to the matrix of analog push button assemblies120and the second input setting may be a second mapping of driving related predetermined application events to the matrix of analog push button assemblies120. In the following, a gaming keypad is described as an example of the analog input device100according to the various embodiments. A gaming keypad combines the benefits of keyboards with the compact and ergonomic size of a controller. Typical gaming keypads contain 15-25 keys designed to be controlled by fingers of the users. These keys are laid out in a way similar to that of a keyboard's number pad, to realize various functions such as directions and navigations (up, down, left, and right), changing a weapon, jumping, or shooting. However, the keys in the conventional keypad are coupled to digital switches, which only output a binary signal, limiting the achievable functions of the keypad and possible user intents. Example embodiments solve these problems by employing analog switches that output analog signals and a corresponding algorithm that process the signals sent from individual keys of the keypad at the host computer that is connected with the keypad wirelessly or through a USB. Analog keypads provide an enhanced input method with more granularity which gives mechanical keyboards the precision control normally found in devices such as gaming controllers, steering wheels, and aviation joysticks. By processing the signals from each key at the host computer, the latency is reduced, and a quicker response is achieved. Further, upgrading the processing solution by handling it in software offers the user flexibility in key mapping. FIG.5shows a computing system501having an analog keypad500, as an analog input device, in accordance with an example embodiment. By way of example, the analog keypad500may include a plurality of analog switches510(or analog pressure sensor) each disposed under a key540, a microcontroller520that includes a processor and a memory, and an analog-to-digital converter (ADC)530. The analog switches510may be based on opto-mechanical switch technology and output different analog signals according to the pressure or force applied to the keys or the displacement of the keys compared with the un-pressed position. 
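Before the host-side processing is described, the following sketch illustrates one way such a keypad's firmware might poll the analog switches: each sensor is selected in turn through the multiplexer, digitised, and reported when its value changes. The channel-selection and ADC-read functions are placeholders assumed for this sketch, since real firmware would access the hardware peripherals directly.

```python
# Firmware-style scan loop: select each key's sensor through the multiplexer, read
# the ADC, and report keys whose digital-step-value changed since the previous scan.
# The two hardware-access functions are placeholders assumed for this sketch.
import time

NUM_KEYS = 20                       # e.g. a gaming keypad with 15-25 keys

def select_mux_channel(channel):
    """Placeholder: drive the multiplexer select lines for the given key."""
    pass

def read_adc():
    """Placeholder: return the digital-step-value for the selected channel."""
    return 0

def scan_once(previous):
    """Return {key_index: digital_step_value} for keys that changed."""
    changed = {}
    for key in range(NUM_KEYS):
        select_mux_channel(key)
        value = read_adc()
        if previous.get(key) != value:
            changed[key] = value
            previous[key] = value
    return changed

if __name__ == "__main__":
    state = {}
    for _ in range(3):              # a real device would scan many times per second
        print(len(scan_once(state)), "keys changed")
        time.sleep(0.001)
```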
The ADC530may convert the analog signals into digital signals and send the digital signals to the microcontroller to preprocess the data. The analog keypad500may be connected with a host computer504wirelessly or through a USB. The host computer may receive the preprocessed data from the microcontroller and perform calculations to determine the pressure or force applied to one or more keys and proceed with actions to be taken. Firmware may be held in the memory530(or storage element) such as ROM, EPROM, or flash memory, in order to provide control for the switches and translate the analog signals sent from each analog switch to the hosting computer, such that the hosting computer may further process the signals or data to realize the corresponding functions. In one example embodiment, when the user presses a key (or a button assembly) to a specific distance, the firmware may register this event and the event will be read by the microcontroller. Different events may be registered for different distances the keys are pressed down. This microcontroller may constantly be monitoring the keys on the keypad via scanning, which may occur many times per second. The firmware may register when the key is pressed and to what distance the key is pressed, and rapidly perform the process of translating the keypresses from physical contact into electrical signals and then outputting them to the host computer. By way of example, one analog switch (or pressure sensor) may reside underneath each key. The analog switch may include a light emitter, a chopper (or a light blocking element) disposed in the light path and a light receiver (or light sensor). The path that the light travels along may be substantially in parallel to the surface of the keycap (or button cap). The amount of light that may be detected by the light receiver may be affected by the location of the chopper, which may be further determined by how far the key is pressed down as a function of pressure or force applied to the keycap. As the displacement of the key is proportional to the pressure or force applied, and the amount of light passing through the chopper may be related to the displacement of the key, the amount of the light detected by the light receiver may also be related to the pressure or force applied to the key. As such, the amount of light indicates the pressure or force applied on the key. For example, a spring (or a biasing element) under the key may be configured to allow a displacement of the keycap from the top end position to a bottom end position, which is proportional to the pressure or force applied to the key. The light receiver outputs analog data based on the intensity of the light detected and the analog data is further processed by the microcontroller before being sent to the host computer. In one example embodiment, when two directional keys representing x and y directions, respectively, are pressed down simultaneously, the analog switch underneath the first key may output a first analog signal having a first magnitude, and the analog switch underneath the second key may output a second analog signal having a second magnitude. The ADC converts the first and second magnitudes, which are analog signals, into digital signals based on a calibration set with a predetermined range of numbers with a minimum value and a maximum value, and sends the digital signals to the microcontroller. 
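The two-directional-key example above can be made concrete with a small host-side sketch that normalises each key's digital code against an assumed calibration range (minimum and maximum values) and combines the pair into a movement direction and magnitude; the calibration limits and the particular vector combination are illustrative assumptions rather than the calculation prescribed by the embodiments.

```python
# Host-side combination of two directional keys into a movement vector: each key's
# digital code is normalised against an assumed calibration range (minimum and
# maximum values) and the pair is turned into a direction and magnitude.
import math

CAL_MIN, CAL_MAX = 200, 3900        # assumed calibration range of the digital codes

def normalise(code):
    """Map a calibrated code onto 0..1."""
    return min(max((code - CAL_MIN) / (CAL_MAX - CAL_MIN), 0.0), 1.0)

def movement(x_code, y_code):
    """Return (direction in degrees, magnitude 0..1) from the two key codes."""
    x, y = normalise(x_code), normalise(y_code)
    return math.degrees(math.atan2(y, x)), min(math.hypot(x, y), 1.0)

if __name__ == "__main__":
    direction, magnitude = movement(x_code=3900, y_code=2050)
    print(f"{direction:.1f} degrees at {magnitude:.2f} of full speed")
```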
In one example embodiment, one or more filters filter the analog signals based on the predetermined range of numbers and send the filtered analog signals to the ADC, in order to reduce noise in one or more of the analog signals from the force sensitive keys (or pressure sensitive keys). The microcontroller may further process the digital signals sent by the ADC and converts the digital signals to codes (or format) that the host computer can understand. For example, when the analog keypad is connected with the host computer through USB, the converted code is a USB code. The conversion is usually done using a lookup table. This table is also where the keyboard layout is defined. The host computer receives the codes of each key from the analog keypad and calculates the addition of the codes, for example, and then determine the directions of the movement. The host computer may further adjust the actuation point of the switches. In addition, the host computer may change the function of the key based on the amount of pressure or force applied on the keycap. FIG.6andFIG.7show schematic diagrams of other computing systems601,701with analog input device according to various embodiments. InFIG.6, ‘analog output switches’610represent switching hardware that can generate different signal level according to different pressure or force, ‘ADC’630represents analog-digital-conversion module, a module which converts analog signal to digital data, and ‘analog data compression’650is the compression of the received data into an encrypted/small packet to send back to the host CPU604for analog data translation. According to various embodiments, ‘peripheral hardware’ will only collect raw analog data, compress it and send back to the host PC through USB-Interface for data-decompression and conversion. All analog conversion for gaming applications may be done in host PC. The above-mentioned analog keypad500is shown as one example embodiment of an input device. Other input devices such as mice, keyboards or controller with analog switches also apply. One or more features of the input device other than computer games may advantageously be incorporated for many other applications in translating user intent to a form interpretable by any type of computing device, including, but not limited to, personal computers, entertainment systems, industrial computing systems, stenography devices, medical computing systems, and other computing devices. According to various embodiments, there is provided an input device for providing inputs to a computing device. The input device may include at least one input key or button. The at least one input key or button may include an input surface to receive a depressing force or pressure applied by a user. The at least one input key or button may include a switch interaction component to interact with an analog switch. The at least one input key or button may include the analog switch to receive an interaction with the interaction component, whereby the analog switch detects a property that is a function of the amount of depressing force or pressure applied by the user. According to various embodiments, the input device may be a keyboard having a plurality of input keys, or a keypad having a plurality of input keys, or a mouse having two or more click buttons, or a game controller having a plurality of input keys or buttons. 
According to various embodiments, the input surface may be a keycap on the top surface of the key and the switch interaction component may be attached to the bottom surface of the key. According to various embodiments, the analog switch may be located underneath the key and may interface with the switch interaction component that is attached to the bottom surface of the key. According to various embodiments, the switch may include a light emitter, light receiver, and a chopper, whereby the light emitter may emit light in a light path substantially in parallel to the surface of the keycap which is received by the light receiver, wherein the chopper may be disposed between the light emitter and light receiver. According to various embodiments, the interaction component may be configured to interface with the chopper to cause the chopper to move within the light path and affect the amount of light that passes through the chopper and that is received by the light receiver, and wherein the property detected by the analog switch is an amount of light received by the light receiver. According to various embodiments, the amount of movement of the chopper within the light path may be a function of the amount of depressing force or pressure applied by the user. Various embodiments have provided an analog input device which may provide more granularity of the input in an effective and simple manner. Various embodiments have also provided an analog input device whereby the data processing will be done by the host computing device while the analog input device will just send analog data. In other words, major data crunching is performed by the host computing device while the analog input device only performs minimal pre-processing of the analog signal for sending to the host computing device. Accordingly, manufacturing costs of the analog input device may be significantly reduced and the analog data processing performance may increase. Various embodiments have provided an analog input device which redefines the conventional input device. According to various embodiments, the analog input device may give more options to the end user. According to various embodiments, the analog input device of the computing system may provide a definable trigger point (or configurable actuation point), more than one trigger point which allows multi-functions with a single key (or multiple actuation points for multiple events with a single key/button), and/or joystick/flight stick/driving wheel/game controller functions mapping. While the invention has been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes, modifications, and variations in form and detail may be made therein without departing from the scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced. | 50,398 
11863176 | DESCRIPTION OF EMBODIMENTS In the following, an operation panel2according to an embodiment of the present invention and an instrument panel1serving as a vehicle interior part to which the operation panel2is applied will be described with reference to the drawings. The instrument panel1will be described with reference toFIG.1.FIG.1is a perspective view showing a configuration of the instrument panel1. As shown inFIG.1, the instrument panel1has the operation panel2. The instrument panel1is provided inside a vehicle cabin of a vehicle. The instrument panel1is provided at a front part in the vehicle cabin including the front side of a driver's seat. Meters (not shown) indicating information of an automobile are arranged in the instrument panel1. Next, the operation panel2will be described with reference toFIGS.2to6. FIG.2is an exploded perspective view of the operation panel2.FIG.3is a perspective view of a panel member3viewed from the back surface side.FIG.4is a sectional view of a part of the panel member3and a main body portion8where projected portions31are provided.FIG.5is a sectional view for explaining configurations of touch position sensors6and a load sensor7.FIG.6is a perspective view of the operation panel2viewed from the back surface side. As shown inFIG.2, the operation panel2is provided with the panel member3, a sensor module5, and the main body portion8. The panel member3is formed to have a free-form surface shape having at least a curved part. The panel member3is exposed to an interior of the vehicle cabin of the vehicle. The panel member3has switches4each serving as an operation part. The switches4are provided as parts of the panel member3. The switches4are pressed and operated by a user. The switches4have a first switch4ato a tenth switch4jfor operating an air-conditioning device (an air conditioner). The first switch4a, a second switch4b, a ninth switch4i, and the tenth switch4jare switches for adjusting a temperature setting of the air-conditioning device. A third switch4cis a switch for switching ON/OFF of a rear defogger. A fourth switch4dis a switch for switching ON/OFF of a front defroster. A fifth switch4eand a sixth switch4fare switches for adjusting an wind level of the air-conditioning device. A seventh switch4gis a switch for switching ON/OFF of an AUTO mode. An eighth switch4his a switch for switching an external air/internal air circulation. As shown inFIG.3, the panel member3has the projected portions31and attachment parts32. The projected portions31project out from a back surface of the panel member3. The projected portions31are each formed of divided projected portions311that are provided by being divided into a plurality of parts. With such a configuration, because each of the divided projected portions311can be formed to have a small size, when the panel member3is formed by an injection molding of a resin material, it is possible to suppress formation of a sink mark on a surface of the panel member3compared with a case in which a large projected portion is formed on the back surface. As shown inFIG.4, each of the projected portions31is formed in a flat surface portion3aof the panel member3. In other words, even if the surface of the panel member3has the curved shape, the flat surface portion3ais formed in the back surface on which the projected portions31are formed. With such a configuration, it is possible to equalize the projected heights of respective divided projected portions311in each of the projected portions31. 
The projected portions31have a first projected portion31a, a second projected portion31b, and a third projected portion31c. The first projected portion31afaces a first load sensor7a, which will be described below. The second projected portion31bfaces a second load sensor7b, which will be described below. The third projected portion31cfaces a third load sensor7c, which will be described below. The projected height of the second projected portion31bis higher than the projected height of the first projected portion31a. The projected height of the third projected portion31cis higher than the projected height of the second projected portion31b. The projected heights of the first to third projected portions31ato31care set in accordance with the shape of the panel member3. As shown inFIG.5, tip end portions311aof the projected portions31face the load sensor7. The divided projected portions311are each formed to have the projected height such that the distance between the panel member3and the load sensor7becomes uniform. In other words, even if the surface of the panel member3has the curved shape, the tip end portions311aof the divided projected portions311coming into contact with the load sensor7are arranged on a flat plane that extends in parallel with a surface of the load sensor7. With such a configuration, it is possible to cause all of the divided projected portions311to come into contact with the load sensor7, rather than causing only a part of the divided projected portions311to come into contact with the load sensor7. As shown inFIG.3, the attachment parts32are provided at positions away from the projected portions31in the longitudinal direction. With such a configuration, deformation of the panel member3is not interfered when the switches4are operated by the user. As shown inFIG.2, the sensor module5has a sensor sheet5a, the touch position sensors6, and the load sensor7. The sensor sheet5ais connected to a substrate part11. The sensor sheet5aelectrically connects the touch position sensors6and the load sensor7with the substrate part11. The touch position sensors6are provided on the sensor sheet5aso as to face the back surface of the panel member3. The touch position sensors6are respectively provided so as to correspond to the switches4. The touch position sensors6detect that a finger of the user has touched each of the switches4. In other words, the first to tenth touch position sensors6ato6jare respectively provided at the positions corresponding to the first to tenth switches4ato4j. As shown inFIG.5, the touch position sensors6are provided on the back surface of the panel member3so as to respectively correspond to the switches4. The touch position sensors6are each a capacitive proximity sensor. The touch position sensors6each has a plate-shaped electrode62that is arranged on the sensor sheet5a. The touch position sensors6measure an electrostatic capacity value at a period of 10 [ms], for example. As the finger of the user touches the switch4, the electrostatic capacity value to be measured by the touch position sensor6is changed. In accordance with this change in the electrostatic capacity value, the touch position sensor6detects which of the switches4has been touched by the finger of the user. As shown inFIG.2, the load sensor7is provided on the sensor sheet5aso as to face the back surface of the panel member3. The load sensor7detects a load caused when the panel member3is displaced as the switch4is operated. 
In other words, the load sensor7detects the load applied by the user to each of the switches4. On the basis of the detected load, the load sensor7detects that the switch4has been operated. The load sensor7is provided with the first load sensor7a, the second load sensor7b, and the third load sensor7c. The first load sensor7ais provided between the third switch4cand the fourth switch4dunder the third switch4cand the fourth switch4d. The second load sensor7bis provided between the fifth switch4eand the sixth switch4funder the fifth switch4eand the sixth switch4f. The third load sensor7cis provided between the seventh switch4gand the eighth switch4hunder the seventh switch4gand the eighth switch4h. As shown inFIG.5, the load sensor7has a plate-shaped first electrode71, a plate-shaped second electrode72, and a spacer73. The first electrode71is provided on the sensor sheet5aso as to face the main body portion8. The second electrode72is provided on the sensor sheet5aso as to face the back surface of the panel member3. The spacer73is provided between the panel member3and the sensor sheet5aand is arranged such that the first electrode71and the second electrode72respectively face with the spacer73with a predetermined gap therebetween. The spacer73is provided so as to be elastically deformable and is compressed and deformed when the switch4is operated by the user. The load sensor7is a capacitive position sensor that is wrapped with a projected piece70serving as a ground part by folding the projected piece70that is formed on an end portion of the sensor sheet5aholding the first electrode71and the second electrode72each serving as an electrode. With such a configuration, because a portion where the projected piece70is folded becomes a closed state, it is possible to prevent a leakage of electric charge. Therefore, it is possible to suppress a misdetection by the load sensor7. The load sensor7measures the electrostatic capacity value at a period of 10 [ms], for example. As the user presses the switch4down by the finger, the panel member3is deformed so as to be dented about the position of the switch4. As the panel member3is dented, the distance between the first electrode71and the second electrode72is decreased. Therefore, the electrostatic capacity value between the first electrode71and the second electrode72is changed. On the basis of this change in the electrostatic capacity value, the load sensor7detects the level of the load (load detection level) acting on the panel member3. As shown inFIG.5, an upper end portion36of the panel member3is a fixed end that is attached to the vehicle body, and a lower end portion37thereof is a free end that is not attached to other member. The load sensor7is provided at the position closer to the lower end portion37than to the switches4. In other words, the load sensor7is provided at the position where, when the switch4is operated, the displacement of the panel member3is greater than the displaced amount of the switch4. With such a configuration, the load sensor7can detect the load that is larger than the load acting on the switch4by the operation performed by the user. Therefore, it is possible to improve the detection accuracy of the load sensor7for the operation of the switches4by the user. As shown inFIG.2, the main body portion8has a base portion9, illumination parts10, a the substrate part11, a case part12, and a pair of electric solenoids13each serving as a vibration generating device. The base portion9is attached to the vehicle body. 
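The load-detection principle can be illustrated with a short sketch based on the parallel-plate relation C = ε0·εr·A/d: as the spacer compresses and the gap between the first and second electrodes narrows, the capacitance rises, and an operation of the switch can be registered once the change from the resting value exceeds a threshold. The electrode area, gap, spacer permittivity and threshold used below are assumptions made only for this illustration.

```python
# Parallel-plate approximation of the load sensor: capacitance rises as the spacer
# compresses and the electrode gap narrows; an operation is registered once the
# change from the resting value exceeds a threshold. All values are assumptions.
EPS0 = 8.854e-12            # vacuum permittivity, F/m
EPS_R = 3.0                 # assumed relative permittivity of the spacer
AREA = 1e-4                 # assumed electrode area, 1 cm^2
GAP_REST = 0.5e-3           # assumed resting gap between the electrodes, m

def capacitance(gap_m):
    """C = eps0 * eps_r * A / d."""
    return EPS0 * EPS_R * AREA / gap_m

def switch_operated(gap_m, threshold_pf=2.0):
    """True once the capacitance increase over the resting value exceeds the threshold."""
    delta_pf = (capacitance(gap_m) - capacitance(GAP_REST)) * 1e12
    return delta_pf >= threshold_pf

if __name__ == "__main__":
    # Values sampled e.g. every 10 ms while the panel member is pressed.
    for gap_um in (500, 450, 400, 350):
        print(f"gap {gap_um} um -> operated: {switch_operated(gap_um * 1e-6)}")
```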
A plurality of through holes for embedding the illumination parts10are formed in the base portion9. The illumination parts10are each a transparent member that allows the light to pass therethrough. A plurality of illumination parts10are provided so as to respectively correspond to the first to tenth switches4ato4j. The illumination parts10allows the light emitted to the first to tenth switches4ato4jfrom the back surface side to pass therethrough. The substrate part11is provided between the base portion9and the case part12. Electric signals from the touch position sensors6and the load sensor7are input to the substrate part11. The substrate part11outputs an electric signal to a controller (not shown) of the vehicle in a manner corresponding to the input electric signal. A plurality of light emitting parts (not shown) for respectively illuminating the illumination parts10are installed on the substrate part11. The light emitting parts are each formed of, for example, an LED (Light Emitting Diode). The case part12is inserted on the back side of the base portion9and attached to the vehicle body. The case part12holds one ends of the electric solenoids13. As shown inFIG.6, the main body portion8has a plurality of attachment parts8ato be attached to the vehicle body. With such a configuration, the main body portion8is attached to the vehicle body and the rigidity is ensured. In contrast, the panel member3is attached to the vehicle body at the position away from the main body portion8in the longitudinal direction. Thus, when the switch4is operated by being pressed by the user, the panel member3is deformed with respect to the main body portion8whose rigidity is ensured by being attached to the vehicle body. Therefore, the pressing pressure is allowed to act on the load sensor7when the user operates the switch4by pressing it, and so, escape of the pressing pressure to other parts such as the vehicle body, for example, is suppressed. As shown inFIG.2, the electric solenoids13are arranged on the back surface side of the panel member3. The electric solenoids13generate a touching feeling for the finger of the user by causing vibration to the panel member3when the switches4are operated. The electric solenoids13each has a coil (not shown) and a moving core (not shown). In the electric solenoids13, as the coil is energized, the moving core is displaced towards the panel member3. On the other hand, in the electric solenoids13, as the coil is de-energized, the moving core is moved away from the panel member3. With such a configuration, the vibration is generated to the panel member3by the electric solenoids13. The one ends of the electric solenoids13are held by the case part12. Thus, it is possible to transmit the vibration generated by the displacement of the moving core to the panel member3with reliability. According to the embodiment mentioned above, the advantages described below are afforded. 
The operation panel2provided on the vehicle is provided with: the panel member3formed to have at least a curved part, the panel member3being exposed to the interior of the vehicle cabin of the vehicle; the switch4provided on the panel member3, the switch4being configured to be pressed and operated by the user; and the load sensor7provided so as to face the back surface of the panel member3, the load sensor7being configured to detect that the switch4has been operated based on the load caused by the displacement of the panel member3, wherein the load sensor7is provided at the position where the displacement of the panel member3is greater than the displaced amount of the switch4when the switch4is operated. With this configuration, the load sensor7is provided at the position where, when the switch4is operated, the displacement of the panel member3is greater than the displaced amount of the switch4. Thus, the load sensor7detects the load that is larger than the load acting on the switch4by the operation performed by the user. Therefore, it is possible to improve the detection accuracy of the load sensor7for the operation of the switches4by the user. In addition, the panel member3has the projected portions31projecting out from the back surface such that the tip end portions311aface the load sensor7. According to this configuration, because the projected portions31are formed to project out such that the tip end portions311aface the load sensor7, even if the panel member3has the curved shape, it is possible to transmit the load caused when the switch4is pressed and operated by the user to the load sensor7. In addition, the projected portions31are each formed of the divided projected portions311that are provided by being divided into a plurality of parts. According to this configuration, with such a configuration, because each of the divided projected portions311can be formed to have a small size, when the panel member3is formed by the injection molding of the resin material, it is possible to suppress the formation of the sink mark on the surface of the panel member3compared with a case in which a large projected portion is formed on the back surface. In addition, the divided projected portions311are each formed to have the projected height such that the distance between the panel member3and the load sensor7becomes uniform. According to this configuration, even if the surface of the panel member3has the curved shape, the tip end portions311aof the divided projected portions311coming into contact with the load sensor7are arranged on the flat plane that extends in parallel with the surface of the load sensor7. With such a configuration, it is possible to cause all of the divided projected portions311to come into contact with the load sensor7, rather than causing only a part of the divided projected portions311to come into contact with the load sensor7. In addition, the projected portions31are formed in the flat surface portion3aof the panel member. According to this configuration, even if the surface of the panel member3has the curved shape, the flat surface portion3ais formed in the back surface on which the projected portions31are formed, and therefore, it is possible to equalize the projected heights of the respective divided projected portions311in each of the projected portions31. 
The load sensor7is a capacitive sensor that is wrapped with the projected piece70serving as the ground part by folding the projected piece70that is formed on an end portion of the sensor sheet5aholding the first electrode71and the second electrode72. According to this configuration, because a portion where the projected piece70is folded becomes the closed state, it is possible to prevent a leakage of electric charge. Therefore, it is possible to suppress a misdetection by the load sensor7. Although the embodiment of the present invention has been described in the above, the above-mentioned embodiment merely illustrates a part of application examples of the present invention, and the technical scope of the present invention is not intended to be limited to the specific configurations of the above-described embodiment. For example, in the above-mentioned embodiment, an example in which the load sensor7is of a capacitive type is illustrated. However, as the load sensor7, a load sensor of other types such as a resistive type, a diffusive type, a film type, and so forth may be used. In addition, in the above-mentioned embodiment, an example in which the switches4are used as switches for operating the air-conditioning device has been shown. However, the switches4may be switches for operating a car audio system or switches for other operations. In addition, in the above-mentioned embodiment, an example in which three load sensors7are provided has been shown. However, one, two, or four or more load sensors7may be employed. In the above-mentioned embodiment, an example in which ten switches4are provided has been shown. However, the number of the switches4is not limited to this aspect. In the above-mentioned embodiment, an example in which the present invention is applied to the operation panel2provided on the instrument panel1has been shown. However, the present invention may also be applied to input devices that are provided on a console or an armrest. In addition, the present invention may also be applied to input devices provided in furniture, an electrical appliance, or the like. The present application claims priority based on Japanese Patent Application No. 2019-170331 filed with the Japan Patent Office on Sep. 19, 2019, the entire content of which is incorporated into this specification by reference. | 19,188 |
11863177 | The same reference numbers are used in the drawings to designate the same or similar (structurally and/or functionally) features. DETAILED DESCRIPTION Specific examples are described in detail below with reference to the accompanying figures. These examples are not intended to be limiting. In the drawings, corresponding numerals and symbols generally refer to corresponding parts unless otherwise indicated. The objects depicted in the drawings are not necessarily drawn to scale, and graphs are approximate representations. In examples, structure and/or functionality is provided to reduce or eliminate differential reverse leakage and/or discharge capacitance current in H-bridge drivers and similarly constructed components. In examples, such structure and/or functionality is enabled during driver disable or mode change in an H-bridge driver half-duplex configuration. In examples, the polarity flip of the output voltage, during disable or mode change, is mitigated or eliminated to improve communication between an H-bridge driver and downstream components, e.g., a microcontroller. FIG.1is a circuit diagram of an example H-bridge driver100with a common mode load102. Common mode load102may include a common mode voltage, e.g., −12 V, at a voltage terminal104, and a resistive network that includes resistors106and108, each of which is coupled to voltage terminal104, and a resistor110. Resistors106and108are also coupled to bus output nodes Y and Z, respectively, which output nodes are bridged by resistor110. Resistors106and108may each be approximately 375Ω, and resistor110may be approximately 54Ω. Coupled between the Y bus output node and ground (GND) is a first pair of transient-voltage-suppression (TVS) diodes112. A second pair of TVS diodes114is coupled between the Z output node and ground. Each TVS diode pair112,114is comprised of two diodes coupled back-to-back. The node between the two diodes of pair112is denoted N1, and the node between the two diodes of pair114is denoted N2. First and second current switches116and118are coupled to bus output nodes Y and Z, respectively. Each of switches116and118may be comprised of an n-type metal-oxide-semiconductor field-effect transistor (MOSFET) and a diode coupled between the drain and source of the n-type MOSFET. The source of first current switch116(MY_NDiode) is coupled to the Y bus output node, and the source of second current switch118(MZ_NDiode) is coupled to the Z bus output node. The drain of current switch116is coupled to an N stack, Y side current source122(INSTACK_Y), and the drain of current switch118is coupled to an N stack, Z side current source124(INSTACK_Z). Each of current sources122and124is also coupled to ground. The control terminals (e.g., gates) of first and second current switches116and118are controlled by first and second control switches126and128, respectively. Each of control switches126and128may be comprised of a p-type MOSFET and a diode coupled between the drain and source of the p-type MOSFET. The drain of control switch126is coupled to the control terminal of current switch116, and the drain of control switch128is coupled to the control terminal of current switch118. The sources of control switches126and128are coupled to a power supply terminal (Vcc). The gate of each of control switches126and128is controlled by an input signal that is the inverse of the driver enable (DE) signal.
A resistor132is coupled between the control terminal (e.g., gate) and source of current switch116, and a resistor134is coupled between the control terminal (e.g., gate) and source of current switch118. Each of resistors132and134may be approximately 5 kΩ. H-bridge driver100also includes P stack current sources. On the Y bus output node side, there are two such current sources142and144, which are configured to deliver currents IPSTACK_1_Y and IPSTACK_2_Y, respectively. Each of current sources142and144is coupled between the power supply terminal (e.g., Vcc) and the Y bus output node. A Z side, P stack current source146, configured to deliver current IPSTACK_Z, is coupled between Vcc and the Z bus output node. Coupled in parallel with current sources142and144is a Y side, compensation current source148, which is configured to deliver current ICOMP_Y. Compensation current source148is coupled between Vcc and the Y bus output node. Another compensation current source152on the Z side is configured to deliver current ICOMP_Z and is coupled between Vcc and the Z bus output node. A pre-charge current source154, configured to deliver current Ipre-charge, is coupled in parallel with current sources146and152. H-bridge driver100further includes a pair of pull-down switches156and158. Each of pull-down switches156and158may be comprised of an n-type MOSFET and a diode coupled between the drain and source of the n-type MOSFET. The drain of pull-down switch156, disposed on the Y side, is coupled to the control terminal of current switch116, and the source of pull-down switch156is coupled to ground (GND). Pull-down switch158is similarly disposed on the Z side. That is, the drain of pull-down switch158is coupled to the control terminal of current switch118, and the source of pull-down switch158is coupled to ground. The control terminals (e.g., gates) of pull-down switches156and158are configured to receive a pre-charge pulse to activate them and rapidly discharge voltages of current switches116and118, as described below. In an example, when the DE signal is asserted (DE=1) and applied to control switches126and128for a period of time, which is designated by the hatched portion inFIG.2, H-bridge driver100operates in the enabled state. During this time period, based on a driver input signal (DIN), P stack current sources142and144, as well as N stack current source124, are ON. As a result, current flows from P stack current sources142and144to the Y bus output node, into common mode load102(across resistor110and also through resistors106and108toward voltage terminal104). Current also flows into the Z bus output node, then through current switch118, and is then discharged to ground through N stack current source124. In an example operation during this time period (DE=1), control switches126and128are turned ON by the inverted DE signal, which results in a voltage signal, e.g., a 5 V signal, being applied to the control terminals (e.g., gates) of current switches116and118. As a result, the gate-to-source voltage (VGS) of current switch118is greater than the VGS of current switch116. In this example, the VGS of current switch118is approximately 5 V, while the VGS of current switch116is approximately 1 V. Also, the voltage at node N1in TVS diode pair112is greater than the voltage at node N2in TVS diode pair114.
Thus, as shown inFIGS.3and5, after DE transitions (e.g., to 0) and H-bridge driver100enters a disabled state, a pre-charge pulse is applied to pull-down switches156and158for a pre-charge monopulse time period (td), which is within but less than a driver disable time period (tpz), where tpz represents a time during which H-bridge driver100is disabled. Driver disable time period (tpz) may be set in accordance with the RS-485 standard (incorporated by reference in its entirety), which is based on the maximum data rate supported by the driver. For example, for a 10 Mbps data rate driver, driver disable time period (tpz) is 75 ns (max). Pre-charge monopulse time period (td) may be, for example, less than 50 ns across all supply, temperature and technology process corners. The transition of DE to the disabled level (e.g., to 0) also disables one of the Y side, P stack current sources, e.g., current source142, which is turned OFF. During the duration of the pre-charge pulse, the other Y side, P stack current source, e.g., current source144remains ON, continuing to deliver current IPSTACK_2_Y, and pre-charge current source154is enabled to deliver current Ipre-charge. In an example, current sources144and154are operated during the pre-charge monopulse time period (td), such that the current delivered by the Y side, P stack current source that remains ON, e.g., IPSTACK_2_Y from current source144, is greater than the Ipre-charge current (that is, IPSTACK_2_Y>Ipre-charge). As a result, the voltage at each of the bus output nodes Y and Z is pulled to a value higher than a threshold turn-on voltage VTN of n-type MOSFET switches116and118. Also, during the pre-charge monopulse time period (td), pull-down switches156and158are enabled via application of the discharge signal, to rapidly discharge VGS of each of current switches116and118to less than 0 V, as shown inFIG.5. Thus, there is no appreciable reverse leakage current through TVS diode pairs112and114. That is, IDiode_Y and IDiode_Z, which represent the reverse leakage currents through TVS diode pairs112and114, respectively, are each at or near zero, as shown inFIG.5, and thus so is the differential reverse leakage current. As a result, as shown inFIG.5, the differential output voltage (difference between the voltage at Y and Z, denoted VOD) remains positive; the polarity of VOD does not flip. To maintain VOD greater than 0 V, current compensation is applied during a current compensation time period (tcomp), which occurs after the driver disable time period (tpz) and the pre-charge monopulse time period (td) within tpz. Compensation time period (tcomp) may be set in the range of 500 ns-600 ns. At the start of the current compensation time period (tcomp), the P stack current source that was ON in the pre-charge monopulse time period (td), e.g., current source144, is disabled, as is pre-charge current source154. With these current sources now disabled, voltages of the TVS diode pairs112and114discharge toward common mode load102, decreasing the bus output node voltages, i.e., voltages at Y and Z, and generating capacitance-based discharge currents IDiode_Y and IDiode_Z, which are typically of different values. Thus, a compensation current source is enabled during tcomp to offset or compensate for the differential capacitance-based discharge current. Based on the value of DIN, either compensation current source148is enabled or compensation current source152is enabled.
When compensation current source148is enabled, the current (ICOMP_Y) it delivers is greater than the difference IDiode_Z−IDiode_Y (i.e., ICOMP_Y>IDiode_Z−IDiode_Y). When compensation current source152is enabled, the current (ICOMP_Z) it delivers is greater than the difference IDiode_Y−IDiode_Z (i.e., ICOMP_Z>IDiode_Y−IDiode_Z). As shown inFIG.6, in an example in which compensation current source148is enabled to deliver current ICOMP_Y, VOD remains above 0 V, thus avoiding a polarity flip. Current compensation time period (tcomp) may be set based on the expected maximum capacitance of TVS diode pairs112,114. For compatibility with the RS-485 standard, ICOMP (from either compensation current source) should not exceed 10% of the short-circuit output current (Ios). That is, ICOMP<0.1×Ios. FIG.7is a flow diagram700of an example method of operating an example H-bridge driver. Operation702includes disabling, for a first time period (e.g., tpz), a first current source (e.g., current source142) coupled to a first current switch (e.g., current switch116) at a first output node (e.g., bus output node Y) of a driver circuit (e.g., H-bridge driver100). In operation704, a second current source (e.g., current source124) coupled to a second current switch (e.g., current switch118) at a ground terminal is also disabled for the first time period. During a second time period (td), which is within but less than the first time period, operations706,708,710and712are performed. In operation706, a third current source (e.g., current source154) coupled to the second current switch at a second output node (e.g., bus output node Z) of the driver circuit is enabled for td. In operation708, a fourth current source (e.g., current source144) coupled to the first current switch at the first output node continues to operate for td. In operation710, a first pull-down switch (e.g., pull-down switch156) coupled between the first current switch and the ground terminal is enabled, and in operation712, a second pull-down switch (e.g., pull-down switch158) coupled between the second current switch and the ground terminal is enabled. Both pull-down switches156and158are enabled for td. In operation714, after the first time period, during a third time period (tcomp), the third and fourth current sources may be disabled and a compensation current source (e.g., current source148) is enabled. In operation716, after the third time period (tcomp), the compensation current source is disabled. FIG.7depicts one possible order of operations. Not all operations need necessarily be performed in the order described. Some operations may be combined into a single operation, which may be based on the time period in which they occur. For example, operations702and704may be considered a single operation. Similarly, operations706,708,710and712may be considered a single operation, or grouped based on components, e.g., enabling of current sources and enabling of pull-down switches. Additional operations may be performed as well. As the foregoing demonstrates, various examples of structure and/or functionality are provided to reduce or eliminate differential reverse leakage and/or discharge capacitance current in H-bridge drivers and similarly constructed components.
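Before turning to the structural summary below, the timing and current relationships just described can be collected into a short check. The following Python sketch is only an illustration of those inequalities (td within tpz, IPSTACK_2_Y greater than Ipre-charge during td, ICOMP above the differential discharge current but below 10% of Ios); the function name and the numeric values used in the example call are assumptions, not figures taken from any particular driver.

```python
# Minimal sketch: sanity-check the disable-sequence constraints described above.
# All names and example values are illustrative assumptions, not datasheet figures.

def check_disable_sequence(t_d, t_pz, t_comp,
                           i_pstack_2_y, i_precharge,
                           i_comp, i_diode_y, i_diode_z, i_os):
    """Return a list of violated constraints (an empty list means all checks pass)."""
    problems = []
    # The pre-charge monopulse must fit inside the driver disable window (e.g., 75 ns at 10 Mbps).
    if not (0 < t_d < t_pz):
        problems.append("t_d must be within, and less than, t_pz")
    # Example compensation window from the description: roughly 500 ns to 600 ns.
    if not (500e-9 <= t_comp <= 600e-9):
        problems.append("t_comp outside the 500-600 ns example range")
    # During t_d the remaining Y-side P stack current must exceed the pre-charge current.
    if not (i_pstack_2_y > i_precharge):
        problems.append("I_PSTACK_2_Y must exceed I_pre-charge during t_d")
    # During t_comp the enabled compensation current must exceed the differential
    # TVS discharge current, yet stay below 10% of the short-circuit output current.
    if not (i_comp > abs(i_diode_z - i_diode_y)):
        problems.append("I_COMP must exceed the differential discharge current")
    if not (i_comp < 0.1 * i_os):
        problems.append("I_COMP must stay below 10% of I_OS")
    return problems

# Example: a 10 Mbps driver with t_pz = 75 ns and t_d = 40 ns (illustrative numbers).
print(check_disable_sequence(t_d=40e-9, t_pz=75e-9, t_comp=550e-9,
                             i_pstack_2_y=2e-3, i_precharge=1e-3,
                             i_comp=5e-3, i_diode_y=1e-3, i_diode_z=3e-3,
                             i_os=0.25))
```

The remainder of this description summarizes how these constraints are realized structurally.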
For example, an additional P stack current source on one output side that remains operable after driver disable, a pre-charge current source on the other output side that is enabled during a pre-charge monopulse time period within the driver disable time period, and a pair of pull-down switches cooperate with each other and other driver components to reduce differential leakage current. In another aspect, compensation current is provided using an enabled compensation current source to offset or compensate for differential TVS diode capacitance-based discharge current. In examples, structure and/or functionality is provided to mitigate or eliminate polarity flip of the output voltage during driver disable or mode change to improve communication between an H-bridge driver and downstream components, e.g., a microcontroller. The term “couple” is used throughout the specification. The term may cover connections, communications, or signal paths that enable a functional relationship consistent with this description. For example, if device A provides a signal to control device B to perform an action, in a first example device A is coupled to device B, or in a second example device A is coupled to device B through intervening component C if intervening component C does not substantially alter the functional relationship between device A and device B such that device B is controlled by device A via the control signal provided by device A. A device that is “configured to” perform a task or function may be configured (e.g., programmed and/or hardwired) at a time of manufacturing by a manufacturer to perform the function and/or may be configurable (or re-configurable) by a user after manufacturing to perform the function and/or other additional or alternative functions. The configuring may be through firmware and/or software programming of the device, through a construction and/or layout of hardware components and interconnections of the device, or a combination thereof. As used herein, the terms “terminal”, “node”, “interconnection”, “pin” and “lead” are used interchangeably. Unless specifically stated to the contrary, these terms are generally used to mean an interconnection between or a terminus of a device element, a circuit element, an integrated circuit, a device or other electronic or semiconductor component. Also, as used herein, the term “pre-charge” is relative to operation(s) that occur at a later period of time, i.e., the current compensation time period. A circuit or device that is described herein as including certain components may instead be adapted to be coupled to those components to form the described circuitry or device. For example, a structure described as including one or more semiconductor elements (such as transistors), one or more passive elements (such as resistors, capacitors, and/or inductors), and/or one or more sources (such as voltage and/or current sources) may instead include only the semiconductor elements within a single physical device (e.g., a semiconductor die and/or integrated circuit (IC) package) and may be adapted to be coupled to at least some of the passive elements and/or the sources to form the described structure either at a time of manufacture or after a time of manufacture, for example, by an end-user and/or a third-party. While the use of particular transistors is described herein, other transistors (or equivalent devices) may be used instead.
For example, a p-type MOSFET may be used in place of an n-type MOSFET, and vice versa, with little or no changes to the circuit. Furthermore, other types of transistors may be used (such as bipolar junction transistors (BJTs)). Circuits described herein are reconfigurable to include the replaced components to provide functionality at least partially similar to functionality available prior to the component replacement. Components shown as resistors, unless otherwise stated, are generally representative of any one or more elements coupled in series and/or parallel to provide an amount of impedance represented by the shown resistor. For example, a resistor or capacitor shown and described herein as a single component may instead be multiple resistors or capacitors, respectively, coupled in parallel between the same nodes. For example, a resistor or capacitor shown and described herein as a single component may instead be multiple resistors or capacitors, respectively, coupled in series between the same two nodes as the single resistor or capacitor. Uses of the phrase “ground” in the foregoing description include a chassis ground, an Earth ground, a floating ground, a virtual ground, a signal ground, a digital ground, a common ground, and/or any other form of ground connection applicable to, or suitable for, the teachings of this description. Unless otherwise stated, “about,” “approximately,” and/or “substantially” preceding a value means +/−10 percent of the stated value. Modifications of the described examples are possible, as are other examples, within the scope of the claims. For example, in an arrangement in which only one Y side, P stack current source is employed, i.e., the functionality of current sources142and144is combined into a single current source, that current source may be controlled to be partially enabled during the pre-charge monopulse time period (td) and then disabled during the current compensation time period (tcomp). Moreover, features described herein may be applied in other environments and applications consistent with the teachings provided. | 18,991 |
11863178 | DETAILED DESCRIPTION OF THE EMBODIMENTS A core of the present invention is to provide a method for detecting properness of a PG pin power-on timing sequence, to effectively determine whether a power-on timing sequence of PG pins in a VR chip is proper, to avoid an incorrect action of a subsequent circuit. To make those skilled in the art better understand the solution of the present invention, the present invention will be further described in detail below with reference to the accompanying drawings and specific implementations. Apparently, the described embodiments are merely a part but not all of the embodiments of the present invention. Based on the embodiments in the present invention, all the other embodiments obtained by those of ordinary skill in the art without any creative effort shall all fall within the protection scope of the present invention. In view of an abnormal power-on timing sequence of PG pins in a VR chip, the applicant firstly verifies whether the abnormality is caused by external interference. Specifically, circuit board line cut processing is adopted, that is, a connection between a PG signal and a subsequent circuit is disconnected. However, after a test, the abnormality still exists. In this case, impact of the external interference may be excluded, and then it may be determined that the problem lies in a voltage conversion circuit itself. The applicant further analyzes a rising trend of a waveform of the PG signal, and considers that the rising trend of the waveform is basically consistent with a trend of a pull-up level P3V3. It should be noted that, althoughFIG.3does not show a rising trend of a waveform of the pull-up level P3V3, the rising trend of the waveform of the pull-up level P3V3 is basically the same as a rising trend of a waveform of VOUT. Therefore, the applicant considers that there is voltage division when a PG pin is at a low level. That is, there is an equivalent resistance to ground when the PG signal is at the low level. When the equivalent resistance to ground is relatively large, or a resistance of a pull-up resistor is relatively low, due to voltage division of the equivalent resistance to ground to the pull-up level, when the PG pin is at the low level, the output VOUT of the PG pin may be relatively high, for example, reach 1.2 V as shown inFIG.3, which in turn may disrupt a correct power-on timing sequence and cause an incorrect action of the subsequent circuit. Referring toFIG.4,FIG.4is a flowchart of implementation of a method for detecting properness of a PG pin power-on timing sequence according to the present invention, and may include the following steps: Step S101: Obtain a pull-up level of a PG pin of a VR chip. Different pull-up levels may be adopted for different VR chips. Usually, the pull-up level of the PG pin may be obtained by reading a parameter list of the VR chip, and certainly, the pull-up level may alternatively be input by a relevant staff through an input device, so that a system for detecting properness of the PG pin power-on timing sequence may obtain the pull-up level of the PG pin. Step S102: Determine a value of a pull-up resistor of the PG pin, as a first resistance, when a current injected into the VR chip by using the pull-up level is equal to a maximum withstand current of the VR chip. 
It may be understood that the value of the pull-up resistor affects the current injected into the VR chip: a larger value of the pull-up resistor results in a lower injected current, and conversely, a lower value of the pull-up resistor results in a higher injected current. For a specific VR chip, the VR chip has a specified maximum withstand current. Therefore, a pull-up resistance, that is, the value of the pull-up resistor, cannot be specified excessively low. It is necessary to ensure that the current injected into the VR chip by using the pull-up level is less than or equal to the maximum withstand current of the VR chip. The minimum value of the pull-up resistor that satisfies this indicator is the first resistance described in this application. For example, when the pull-up level is 3.3 V, if the resistance of the pull-up resistor is 400Ω, the current injected into the VR chip in this case is equal to the maximum withstand current of the VR chip. In this case, 400Ω is the first resistance described in this application. When subsequent selection and adjustment of the resistance of the pull-up resistor are performed, the resistance needs to be set to at least 400 ohms. It should be additionally noted that, since a value range of the pull-up resistor is usually defined by considering the maximum withstand current of the VR chip in a conventional solution, step S102may refer to the relevant prior art to calculate the critical value of the pull-up resistor that satisfies the withstand current indicator, namely, the first resistance. Step S103: Obtain an equivalent resistance to ground when the PG pin is at a low level, and calculate, based on the equivalent resistance to ground, a value of the pull-up resistor of the PG pin, as a second resistance, when an output voltage of the PG pin is equal to a preset interference voltage limit value. The equivalent resistance to ground may be obtained in a plurality of manners. For example, data in a VR chip parameter table may be obtained. For example, it is obtained that when PG=0.5 V, a sink current of the PG pin ranges from 0.5 mA to 1 mA. In this case, it may be determined that the equivalent resistance to ground when the PG pin is at the low level ranges from 500Ω to 1000Ω, and then, for example, an intermediate value of 750 ohms may be taken as the equivalent resistance to ground. For another example, it is considered that an error occurs in the PG pin power-on timing sequence due to voltage division of the equivalent resistance to ground. In this case, a maximum value of 1000 ohms may be taken as the equivalent resistance to ground, so that a subsequently calculated divided voltage of the equivalent resistance to ground is not lower than an actual divided voltage thereof. For another example, in another implementation, an actual power-on timing sequence of the VR chip may be obtained, and further the equivalent resistance to ground when the PG pin is at the low level is determined. Specifically, for example, in the embodiment inFIG.3, the pull-up level of the VR chip is 3.3 V, a divided voltage when the PG pin is at the low level is 1.2 V, and an actual resistance of the pull-up resistor is 1000Ω. In this case, the equivalent resistance to ground when the PG pin is at the low level=1000*1.2/(3.3−1.2)=572Ω.
That is, the equivalent resistance to ground when the PG pin is at the low level/the pull-up resistance=an output voltage when the PG pin is at the low level/(the pull-up level−the output voltage when the PG pin is at the low level). After the equivalent resistance to ground when the PG pin is at the low level is obtained, the value of the pull-up resistor of the PG pin when the output voltage of the PG pin is equal to a preset interference voltage limit value may be calculated based on the equivalent resistance to ground, and the value is denoted as the second resistance in this application. The interference voltage limit value refers to a maximum voltage value allowed to be output by the PG pin when the PG pin is at the low level, for example, is usually 200 mV. In other words, when the PG pin is at the low level, the output voltage is lower than 200 mV. This does not cause an incorrect action of a subsequent circuit. Certainly, another specific value may be used for the interference voltage limit value in another implementation. An example in which the equivalent resistance to ground is 572Ω and the pull-up level is 3.3 V is still used. When the interference voltage limit value is 200 mV, 572/the value of the pull-up resistor=0.2/3.3. That is, in the specific embodiment, the value of the pull-up resistor=572*3.3/0.2=9438Ω. That is, the second resistance is 9438Ω. This represents that a voltage of an output terminal of the PG pin when the PG pin is at the low level is lower than the interference voltage limit value of 200 mV only when the value of the pull-up resistor is greater than or equal to 9438Ω, to avoid an incorrect action of the subsequent circuit. It should be additionally noted that, when the PG pin is at the low level, the VR chip usually controls a related switch circuit in the inside of the chip to be turned on, that is, to enable the PG pin to be grounded. In addition, the grounded PG pin has a relatively large equivalent resistance. This is because, usually, during actual application, the switch circuit does not necessarily consist of a single switch tube, for example, a single MOS, and a related circuit may function as a switch, resulting in a relatively large equivalent resistance to ground when the PG pin is at the low level. Certainly, in another specific occasion, due to another type of reason, there may still be a relatively large equivalent resistance to ground when the PG pin is grounded. The relatively large equivalent resistance to ground makes it possible to cause an incorrect action of the subsequent circuit when the PG pin is at the low level. This is the reason why the PG pin power-on timing sequence described in this application is improper. Step S104: Output, when it is determined that an actual resistance of the pull-up resistor is lower than the first resistance or the second resistance, first prompt information for indicating that a resistance of the pull-up resistor is improper and that the PG pin power-on timing sequence has a hidden danger. When it is determined that the actual resistance of the pull-up resistor is lower than the first resistance, it indicates that the current flowing into the VR chip by using the pull-up level may exceed a withstand current of the VR chip. When the actual resistance of the pull-up resistor is lower than the second resistance, an incorrect action of the subsequent circuit may also be caused when the PG pin is at the low level. 
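A minimal numeric sketch of Steps S102 to S104 follows, using the example figures given above (a 3.3 V pull-up level, a 1.2 V low-level output with a 1000 Ω pull-up resistor, and a 200 mV interference voltage limit value). The assumption that the injected current can be estimated as the pull-up level divided by the pull-up resistance is mine for illustration; the text only states that 400 Ω corresponds to the maximum withstand current at 3.3 V.

```python
# Sketch of the first/second-resistance checks (Steps S102-S104).
# The injected-current model V_pullup / R_pullup is an illustrative assumption.

def first_resistance(v_pullup, i_max_withstand):
    """Smallest pull-up resistance keeping the injected current at or below the maximum withstand current."""
    return v_pullup / i_max_withstand

def equivalent_ground_resistance(r_pullup_actual, v_pullup, v_low_measured):
    """R_gnd derived from the measured low-level output, using the voltage-divider relation."""
    return r_pullup_actual * v_low_measured / (v_pullup - v_low_measured)

def second_resistance(r_gnd, v_pullup, v_limit):
    """Pull-up resistance at which the low-level output equals the interference limit.
    Uses the slightly conservative form R_gnd * V_pullup / V_limit from the text."""
    return r_gnd * v_pullup / v_limit

v_pullup = 3.3
i_max = 3.3 / 400                                   # withstand current implied by the 3.3 V / 400 ohm example
r1 = first_resistance(v_pullup, i_max)              # 400 ohms
r_gnd = equivalent_ground_resistance(1000, v_pullup, 1.2)   # about 572 ohms
r2 = second_resistance(r_gnd, v_pullup, v_limit=0.2)        # about 9430 ohms

r_actual = 1000
if r_actual < r1 or r_actual < r2:
    print("first prompt: pull-up resistance improper, timing sequence has a hidden danger")
```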
In other words, the actual resistance of the pull-up resistor being higher than the first resistance and higher than the second resistance is a proper resistance of the pull-up resistor, so that the first prompt information is not output. Certainly, whether the current of the VR chip exceeds the withstand current of the VR chip, or the incorrect action of the subsequent circuit is caused when the PG pin is at the low level, it can be determined that the PG pin power-on timing sequence has a hidden danger. Therefore, the first prompt information is used to represent that the resistance of the pull-up resistor is improper and that the PG pin power-on timing sequence has a hidden danger, to provide a prompt for indicating a relevant staff to notice this case, so that the resistance of the pull-up resistor can be adjusted in time. Certainly, the adjustment described herein may be adjustment performed in a circuit design stage, or may be adjustment of an actually produced hardware circuit, but is usually adjustment performed in the design stage. The applicant analyzes a rising trend of a waveform of a PG signal, and considers that the rising trend of the waveform is basically consistent with a trend of a pull-up level P3V3. Therefore, it is considered that there is voltage division when the PG pin is at a low level, that is, there is an equivalent resistance to ground when the PG signal is at the low level. When the equivalent resistance to ground is relatively large, or a resistance of the pull-up resistor is relatively low, due to voltage division of the equivalent resistance to ground to the pull-up level, when the PG pin is at the low level, the output VOUT of the PG pin may be relatively high, which in turn may disrupt a correct power-on timing sequence and cause an incorrect action of the subsequent circuit. Therefore, when the resistance of the pull-up resistor is specified and selected, it should be considered whether the resistance of the pull-up resistor is excessively low, thus causing improperness of the PG pin power-on timing sequence. Specifically, in this application, the value of the pull-up resistor of the PG pin when the current injected into the VR chip by using the pull-up level is equal to the maximum withstand current of the VR chip is determined, as the first resistance. The first resistance represents a minimum value of the pull-up resistor when it is ensured that the current injected into the VR chip by using the pull-up level is less than or equal to the maximum withstand current of the VR chip. In addition, in this application, the equivalent resistance to ground when the PG pin is at the low level is obtained, and the value of the pull-up resistor of the PG pin when the output voltage of the PG pin is equal to the preset interference voltage limit value is calculated based on the equivalent resistance to ground, as the second resistance. The second resistance represents a minimum value of the pull-up resistor when the incorrect action of the subsequent circuit is avoided. Therefore, the first prompt information is output when it is determined that the actual resistance of the pull-up resistor is lower than the first resistance or the second resistance. Therefore, according to the solutions of this application, it can be effectively determined whether a power-on timing sequence of PG pins in a VR chip is proper, to avoid an incorrect action of a subsequent circuit. 
In a specific implementation of the present invention, the method may further include: obtaining a value of the pull-up resistor, as a third resistance, when an edge rate of the VR chip reaches a preset maximum edge rate; obtaining a value of the pull-up resistor, as a fourth resistance, when an edge rate of the VR chip reaches a preset minimum edge rate; and outputting second prompt information when it is determined that the actual resistance of the pull-up resistor is lower than the third resistance or higher than the fourth resistance. In the foregoing implementation, two indicators are considered for the resistance of the pull-up resistor. One is to ensure that the current flowing into the VR chip is within a withstand range of the VR chip, and the other is to avoid the incorrect action of the subsequent circuit because of overvoltage caused when the PG pin is at the low level. In this implementation, it is further considered that the resistance of the pull-up resistor affects an edge rate of a signal. Specifically, a higher resistance of the pull-up resistor indicates a lower edge rate of the signal. Correspondingly, a lower resistance of the pull-up resistor indicates a higher edge rate of the signal. In addition, a maximum edge rate and a minimum edge rate are usually preset for the VR chip. Therefore, the third resistance and the fourth resistance are calculated in this implementation of this application. That is, when the actual resistance of the pull-up resistor is greater than or equal to the third resistance and is less than or equal to the fourth resistance, it indicates that this edge rate indicator is satisfied. Correspondingly, when it is determined that the actual resistance of the pull-up resistor is lower than the third resistance or higher than the fourth resistance, the second prompt information may be output, to prompt relevant staff to notice this case, so that the resistance of the pull-up resistor can be adjusted in time. In a specific implementation of the present invention, the method further includes: obtaining a value of the pull-up resistor, as a fifth resistance, when a power loss of the pull-up resistor reaches a preset loss threshold; and outputting third prompt information when it is determined that the actual resistance of the pull-up resistor is lower than the fifth resistance. In addition to the three indicators described in the foregoing embodiment, the power loss of the pull-up resistor is also considered in this implementation. Specifically, a lower resistance of the pull-up resistor indicates a higher power loss of the pull-up resistor. When it is determined that the actual resistance of the pull-up resistor is lower than the fifth resistance, it indicates that a power loss of the pull-up resistor is higher than a loss threshold, and therefore the third prompt information is output, to prompt relevant staff to notice that the power loss is excessively high.
Further, in a specific implementation, the method may further include: determining a resistance selection range by using the first resistance, the second resistance, the third resistance, the fourth resistance, and the fifth resistance and displaying the resistance selection range, where, for any value in the resistance selection range, the value is greater than or equal to the first resistance, is greater than or equal to the second resistance, is greater than or equal to the third resistance, is greater than or equal to the fifth resistance, and is less than or equal to the fourth resistance. In other words, in this implementation, a resistance selection range that satisfies the four indicators described in the foregoing embodiment is determined, and the pull-up resistance may be selected from the resistance selection range according to an actual need. In addition, in this application, the resistance selection range is displayed, to help relevant staff intuitively see the range and further specify and adjust the resistance of the pull-up resistor. For example, in the implementation inFIG.1, the resistance of the pull-up resistor is adjusted from 1000 ohms to 10000 ohms. For a timing waveform after the adjustment, refer toFIG.5. It can be learned that a step of the PG signal changes to 168 mV. When the output voltage reaches a critical value of 90%, after about 1 ms, the PG signal rises starting from 168 mV. This meets a requirement of the PG pin power-on timing sequence. It should be additionally noted that the resistance selection range that satisfies the four indicators can usually be determined. On rare occasions, it may not be possible to simultaneously satisfy the four indicators. In this case, prompt information may also be output, so that the staff can weigh the importance of each indicator to select a preferred resistance of the pull-up resistor. For example, it should be preferentially ensured that the value of the pull-up resistor is greater than or equal to the first resistance and greater than or equal to the second resistance. Second, it is considered that the value is greater than or equal to the third resistance and is less than or equal to the fourth resistance. Finally, the condition that the value is greater than or equal to the fifth resistance is taken into consideration. Corresponding to the foregoing method embodiments, an embodiment of the present invention further provides a system for detecting properness of a PG pin power-on timing sequence, which can be cross-referenced with the above.
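Before turning to the system embodiment, the range determination described above can be summarized as an intersection of the five constraints. The sketch below is one plausible reading of that logic, including the prioritized fallback mentioned for the rare case in which the indicators cannot all be met; the function names and the placeholder values for the third, fourth and fifth resistances are assumptions (only the first and second resistances are given numerically in the text).

```python
# Sketch of the resistance selection range: lower bound from R1, R2, R3, R5; upper bound from R4.

def selection_range(r1, r2, r3, r4, r5):
    """Return (low, high) if a value satisfying all indicators exists, otherwise None."""
    low = max(r1, r2, r3, r5)
    high = r4
    return (low, high) if low <= high else None

# Example loosely based on the figures in the text; R3, R4 and R5 are made-up placeholders.
r1, r2, r3, r4, r5 = 400.0, 9438.0, 2000.0, 50000.0, 1000.0
rng = selection_range(r1, r2, r3, r4, r5)
if rng is None:
    # Prioritize as described: withstand current and interference limit first,
    # then the edge-rate window, and finally the power-loss bound.
    print("prompt: indicators cannot all be met; choose by priority R1/R2, then R3/R4, then R5")
else:
    print(f"pull-up resistance may be chosen between {rng[0]:.0f} and {rng[1]:.0f} ohms")
```

With these placeholder values the lower bound is set by the second resistance, consistent with the adjustment from 1000 ohms to 10000 ohms described above.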
FIG.6is a schematic structural diagram of a system for detecting properness of a PG pin power-on timing sequence according to the present invention, including: a pull-up level obtaining module601, configured to obtain a pull-up level of a PG pin of a VR chip; a first resistance determining module602, configured to determine a value of a pull-up resistor of the PG pin, as a first resistance, when a current injected into the VR chip by using the pull-up level is equal to a maximum withstand current of the VR chip; a second resistance determining module603, configured to: obtain an equivalent resistance to ground when the PG pin is at a low level, and calculate, based on the equivalent resistance to ground, a value of the pull-up resistor of the PG pin, as a second resistance, when an output voltage of the PG pin is equal to a preset interference voltage limit value; and a first prompt information output module604, configured to output, when it is determined that an actual resistance of the pull-up resistor is lower than the first resistance or the second resistance, first prompt information for indicating that a resistance of the pull-up resistor is improper and that the PG pin power-on timing sequence has a hidden danger. In a specific implementation of the present invention, the system further includes: a third resistance determining module, configured to obtain a value of the pull-up resistor, as a third resistance, when an edge rate of the VR chip reaches a preset maximum edge rate; a fourth resistance determining module, configured to obtain a value of the pull-up resistor, as a fourth resistance, when an edge rate of the VR chip reaches a preset minimum edge rate; and a second prompt information output module, configured to output second prompt information when it is determined that the actual resistance of the pull-up resistor is lower than the third resistance or higher than the fourth resistance. In a specific implementation of the present invention, the system further includes: a fifth resistance determining module, configured to obtain a value of the pull-up resistor, as a fifth resistance, when a power loss of the pull-up resistor reaches a preset loss threshold; and a third prompt information output module, configured to output third prompt information when it is determined that the actual resistance of the pull-up resistor is lower than the fifth resistance. In a specific implementation of the present invention, the system further includes: a resistance selection range display module, configured to: determine a resistance selection range by using the first resistance, the second resistance, the third resistance, the fourth resistance, and the fifth resistance and display the resistance selection range, where, for any value in the resistance selection range, the value is greater than or equal to the first resistance, is greater than or equal to the second resistance, is greater than or equal to the third resistance, is greater than or equal to the fifth resistance, and is less than or equal to the fourth resistance. Corresponding to the foregoing method and system embodiments, an embodiment of the present invention further provides a device for detecting properness of a PG pin power-on timing sequence and a computer-readable storage medium, which can be cross-referenced with the above.
FIG.7is a schematic structural diagram of a device for detecting properness of a PG pin power-on timing sequence, including: a memory701, configured to store a computer program; and a processor702, configured to execute the computer program to implement the steps of the method for detecting properness of the PG pin power-on timing sequence according to any one of the foregoing embodiments. A computer-readable storage medium storing a computer program is provided. The computer program implements the steps of the method for detecting properness of a PG pin power-on timing sequence according to any one of the foregoing embodiments when executed by a processor. The computer-readable storage medium mentioned herein includes a random access memory (RAM), an internal memory, a read-only memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field. Those skilled in the art may further realize that the units and algorithmic steps of each example described in combination with the embodiments disclosed herein are capable of being implemented in electronic hardware, computer software, or a combination of the two, and the composition and steps of each example have been described generally by function in the above description for the purpose of clearly illustrating the interchangeability of hardware and software. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered as going beyond the scope of the present invention. Specific examples have been applied herein to illustrate the principles and implementation of the present invention, and the above descriptions of the embodiments are merely used to help understand the technical solution of the present invention and its core ideas. It should be noted that, for those skilled in the art, several improvements and modifications can be made to the present invention without departing from the principles of the present invention, and such improvements and modifications shall also fall within the protection scope of the claims of the present invention. | 24,963 |
11863179 | DETAILED DESCRIPTION Exemplary implementations will now be described more fully with reference to the accompanying drawings. Exemplary implementations, however, can be embodied in various forms and should not be construed as being limited to the examples set forth herein; rather, these implementations are provided so that the present disclosure will be thorough and complete, and will fully convey the concept of exemplary implementations to a person skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more implementations. In the following description, numerous specific details are provided in order to give a thorough understanding of the implementations of the present disclosure. However, a person skilled in the art would appreciate that the technical solutions of the present disclosure may be practiced without one or more of the specific details, or other methods, components, apparatuses, steps, etc. may be employed. In other instances, well-known solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure. In addition, the accompanying drawings are merely schematic illustrations of the present disclosure, and the same reference numerals in the drawings denote the same or similar parts, and thus their repeated descriptions will be omitted. Some of the block diagrams illustrated in the figures are functional entities that do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor apparatuses and/or microcontroller apparatuses. The exemplary implementations of the present disclosure will be described in detail below with reference to the accompanying drawings. FIG.1illustrates a schematic diagram of a voltage conversion circuit in an exemplary embodiment of the present disclosure. With reference toFIG.1, the voltage conversion circuit100may include a first input module11and a second input module12. The first input module11is connected to a first voltage VCC_High and has a first input terminal111which is configured to receive an input signal Si and output a conversion signal Sc; a high level of the input signal Si is a second voltage VCC_Low, which is less than the first voltage VCC_High. The second input module12is connected to the first input module11and has a second input terminal121and an output terminal122, the second input terminal121is configured to receive a sampling signal Ss, and the second input module12samples the conversion signal Sc according to the sampling signal Ss to output an output signal So via the output terminal122. In an embodiment of the present disclosure, the first input module11operates at the first voltage VCC_High, and therefore, the conversion signal Sc generated according to the input signal Si is a signal at the first voltage VCC_High; when the conversion signal Sc is sampled by the second input module12, an output signal at the first voltage VCC_High can be output, so as to realize boosting of the voltage of the input signal Si.
In an embodiment of the present disclosure, the enable level of the sampling signal Ss may appear within a preset time period during which the level of the input signal Si changes, that is, the sampling signal Ss may be triggered by the polarity variation of the input signal Si, so that the timing of the output signal So is made to be consistent with timing of the input signal Si. In another embodiment of the present disclosure, the sampling signal Ss may also be a pulse signal with a predetermined cycle, so that the aperiodic input signal Si is converted into a periodic output signal So, so that the timing of the signal is easier to control. The input signals having different timings can be processed to form a signal group having the same timing through the processing of the sampling signal, facilitating use by subsequent circuits. In another embodiment of the present disclosure, the sampling signal Ss may also be used to sample the input signal Si according to a control signal. With a simple structure, the voltage conversion circuit100provided by the embodiments of the present disclosure can be implemented with fewer components, which greatly reduces the occupied area and the power consumption of the components having the voltage conversion function. FIG.2illustrates a circuit diagram of a voltage conversion circuit in an embodiment of the present disclosure. With reference toFIG.2, in the voltage conversion circuit200, the first input module11may include a first P-type transistor M1and a first N-type transistor M4. The gate of the first P-type transistor M1is connected to the first input terminal111, and the source of the first P-type transistor M1is connected to the first voltage VCC_High. The gate of the first N-type transistor M4is connected to the first input terminal111, and the source of the first N-type transistor M4is grounded. The second input module12may include a second P-type transistor M2and a second N-type transistor M3. The gate of the second P-type transistor M2is electrically connected to the second input terminal121, the source of the second P-type transistor M2is connected to the drain of the first P-type transistor M1, and the drain of the second P-type transistor M2is connected to the output terminal122. The gate of the second N-type transistor M3is electrically connected to the second input terminal121, the source of the second N-type transistor M3is connected to the drain of the first N-type transistor M4, and the drain of the second N-type transistor M3is connected to the output terminal122. In an embodiment illustrated inFIG.2, the second input module12further includes a first inverter OP1. In an embodiment, the enable level of the sampling signal Ss is a high level; in this case, the gate of the second P-type transistor M2is connected to the second input terminal121(as illustrated inFIG.2) via the first inverter, and the gate of the second N-type transistor M3is connected to the second input terminal121. In another embodiment, the enable level of the sampling signal Ss is a low level; in this case, the gate of the second P-type transistor M2is connected to the second input terminal121(not illustrated), and the gate of the second N-type transistor M3is connected to the second input terminal121via the first inverter. The operating principle of the voltage conversion circuit of the embodiment illustrated inFIG.2will be described below with reference to the control timing. FIG.3illustrates a timing control diagram in the embodiment illustrated inFIG.2.
In the embodiment illustrated inFIG.3, the input signal Si is a periodic signal, and the sampling signal Ss is also a periodic signal. The sampling signal Ss appears in the preset time period during which the input signal Si undergoes a level conversion, and the level of the output signal So changes with the occurrence of the sampling signal Ss. It can be seen that the level variation range of the input signal Si is the second voltage VCC_Low, and the level variation ranges of the sampling signal Ss, of the inverted sampling signal obtained after the sampling signal Ss passes through the first inverter OP1, and of the output signal So are all the first voltage VCC_High. FIGS.4A to4Dillustrate schematic diagrams of equivalent circuits of the voltage conversion circuit illustrated inFIG.2under the control of the timing illustrated inFIG.3. With reference toFIG.4A, when the input signal Si is at a low level and the sampling signal Ss is at a high level, the first P-type transistor M1, the second P-type transistor M2, and the second N-type transistor M3are conducted, and the first N-type transistor M4is turned off. At this time, the first voltage VCC_High is output to the output terminal122through the first P-type transistor M1and the second P-type transistor M2to generate the current I1, so that the output signal So is VCC_High, that is, a high level. With reference toFIG.4B, when the input signal Si is at a high level and the sampling signal Ss is at a high level, the second P-type transistor M2, the second N-type transistor M3, and the first N-type transistor M4are conducted, and the first P-type transistor M1is turned off (or at least the pull-up capability of the first P-type transistor M1is less than the pull-down capability of the first N-type transistor M4). At this time, the output terminal122is grounded through the second N-type transistor M3and the first N-type transistor M4, and the discharging current I2is generated, so that the output signal So is 0, that is, the output signal So is at a low level. It can be understood that the high level of the input signal Si is VCC_Low, and VCC_Low is smaller than VCC_High, for example, VCC_Low is 0.9V and VCC_High is 1.1V, and therefore, the first P-type transistor M1may not be completely turned off, and there will be a certain charging current I3. The charging current I3flows from VCC_High to the output terminal122, and the discharging current I2flows from the output terminal122to the ground terminal. In order to make the charging current I3much smaller than the discharging current I2, so that the output signal So of the output terminal122reaches 0V, in an embodiment of the present disclosure, the value of the second voltage VCC_Low needs to make the pull-up capability of the first P-type transistor M1smaller than the pull-down capability of the first N-type transistor M4. In an exemplary embodiment of the present disclosure, the difference between the first voltage VCC_High and the second voltage VCC_Low may be, for example, less than or equal to the threshold voltage Vp1of the first P-type transistor M1, and the value of Vp1may be, for example, 0.5V. It can be seen from the embodiments illustrated inFIGS.4A and4B that the voltage conversion circuit200can convert the input signal Si with a lower voltage variation range (VCC_Low to 0) into an output signal with a higher voltage variation range (VCC_High to 0), and the phase of the output signal So is opposite to that of the input signal Si.
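Before turning to the output phase, note that the design condition stated above can be checked directly with the example values given (VCC_Low of 0.9 V, VCC_High of 1.1 V, and a threshold Vp1 of about 0.5 V). The small helper below is only an illustration of that inequality; the function name is mine.

```python
# Check that VCC_High - VCC_Low does not exceed the threshold voltage of the first
# P-type transistor M1, so that its pull-up is weak enough for the pull-down path
# to dominate when the input signal Si is at its high level (VCC_Low).

def m1_weak_enough(vcc_high, vcc_low, vp1):
    # |VGS| of M1 when Si is high equals VCC_High - VCC_Low.
    return (vcc_high - vcc_low) <= vp1

print(m1_weak_enough(vcc_high=1.1, vcc_low=0.9, vp1=0.5))  # True for the example values
```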
If the phase of the output signal So needs to be set to be the same as the phase of the input signal Si, an inverter may be connected to the output terminal122, which will not be repeated here in the present disclosure. With reference toFIG.4CandFIG.4D, when the sampling signal Ss is at a low level, no matter what kind of signal the input signal Si is, no signal variation will be generated at the output terminal122. In order to make the output signal So of the output terminal122have the same phase as the input signal Si, one way is to control the sampling signal Ss to be always at a high level, and the second input module12can also be removed at this time. However, in this way, when the input signal Si is at a high level, the charging current I3to the ground will continue to be generated, resulting in an increase in circuit power consumption. Another way is to connect a latch after the output terminal122, so that when the sampling signal Ss is at a low level, the output signal of the entire circuit can still remain unchanged. FIG.5illustrates a circuit diagram of a voltage conversion circuit in another embodiment of the present disclosure. With reference toFIG.5, in an exemplary embodiment of the present disclosure, the voltage conversion circuit500further includes a latch module13. An input terminal of the latch module13is connected to the output terminal122, and the latch module13is configured to latch the output signal So. In the embodiment illustrated inFIG.5, the latch module13includes a second inverter OP2and a feedback inverter OP3. An input terminal of the second inverter OP2is connected to the output terminal122of the voltage conversion circuit, and an output terminal of the second inverter may be connected to an output terminal131of the latch module13; and an input terminal of the feedback inverter OP3may be connected to the output terminal131of the latch module13, and an output terminal of the feedback inverter OP3may be connected to the output terminal122of the voltage conversion circuit. Both the second inverter OP2and the feedback inverter OP3are connected to the first voltage VCC_High, which is not illustrated in the figure. FIG.6illustrates a timing control diagram in the embodiment illustrated inFIG.5. With reference toFIG.6, compared withFIG.3, the timing diagram of the output signal Output of the output terminal131of the latch module13is added inFIG.6. The level variation of the output signal Output is later than the level variation of the output signal So due to the delay of the latch. FIGS.7A to7Dillustrate schematic diagrams of equivalent circuits of the voltage conversion circuit of the embodiment illustrated inFIG.5under the timing illustrated inFIG.6. With reference toFIG.7A, when the input signal Si is at a low level and the sampling signal Ss is at a high level, the first P-type transistor M1, the second P-type transistor M2, and the second N-type transistor M3are conducted, and the first N-type transistor M4is turned off (or at least the pull-down capability of the first N-type transistor M4is less than the pull-up capability of the first P-type transistor M1). At this time, the first voltage VCC_High is output to the output terminal122through the first P-type transistor M1and the second P-type transistor M2to generate the current I1, so that the output signal So is VCC_High, that is, the output signal So is at a high level.
The output signal So enters the second inverter OP2and outputs a low-level output signal Output through the output terminal131, and the phase of the output signal Output is synchronized with the phase of the sampling signal Ss. The output signal Output returns to the output terminal122via the feedback inverter OP3, and is still at a high level. Since the feedback inverter OP3does not need to operate to maintain the voltage of the output terminal122at this time, in order to reduce power consumption, the feedback inverter OP3can be controlled to be in a turned-off state in such a case. With reference toFIG.7B, when the input signal Si is at a high level and the sampling signal Ss is at a high level, the second P-type transistor M2, the first N-type transistor M3, and the second N-type transistor M4are conducted, and the first P-type transistor M1is turned off. At this time, the output terminal122is grounded through the first N-type transistor M3and the second N-type transistor M4, and the discharging current I2is generated, so that the output signal So is 0, that is, the output signal So is at a low level. It can be understood that, since the high level of the input signal Si is VCC_Low, the first P-type transistor M1may not be completely turned off, and there will be certain charging current I3. The output signal So passes through the second inverter OP2and an output signal Output having a high level is output through the output terminal131, and the phase of the output signal Output is synchronized with the phase of the sampling signal Ss. The output signal Output returns to the output terminal122via the feedback inverter OP3, and is still at a low level. Since the feedback inverter OP3does not need to operate to maintain the voltage of the output terminal122at this time, in order to reduce power consumption, the feedback inverter OP3can be controlled to be in a turned-off state in such a case. That is, when the sampling signal Ss is in an enabled state, the feedback inverter OP3may be set in a turned-off state. With reference toFIG.7C, when the sampling signal Ss is at a low level, no matter what kind of signal the input signal Si is, no signal variation will be generated at the output terminal122. Before this state is formed, that is, before the sampling signal Ss is controlled to be at a low level, the feedback inverter OP3can be enabled to form a feedback path between the output terminal122and the output terminal131. Since the second inverter OP2and the feedback inverter OP3are all active devices, the latch module13can maintain the voltage of the output terminal122when operating, and thus maintain the voltage of the output terminal131unchanged. The same applies to the embodiment illustrated inFIG.7D, and details will not be repeated herein in the present disclosure. Since the sampling signal Ss is in the enabled state and the input signal Si is at a high level, the charging current I3will be generated, which increases power consumption, and when the latch module13is used to maintain the output signal unchanged, the time period during which the enabled state of the sampling signal Ss is maintained can be reduced as much as possible to reduce circuit power consumption. 
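Adding the latch module 13 makes the circuit behave like a sampled level shifter: the output only changes while the sampling signal Ss is enabled and is held otherwise. The sketch below is an illustrative behavioral model only (it ignores the latch delay noted for FIG. 6), and the short calculation at the end uses arbitrary example numbers to show why a small enable duty ratio of Ss reduces the average contribution of the charging current I3.

class SampledLevelShifter:
    # Illustrative model of the FIG. 5 behavior: the output terminal 122 only
    # changes while the sampling signal Ss is enabled; the latch module 13
    # holds it otherwise.  Rail value is an example, not a required value.
    def __init__(self, vcc_high=1.1):
        self.vcc_high = vcc_high
        self.so = 0.0              # voltage at output terminal 122
    def evaluate(self, si_high, ss_high):
        if ss_high:
            self.so = 0.0 if si_high else self.vcc_high
        # With ss_high False the latch holds self.so unchanged.
        # The latch output "Output" at terminal 131 is the inverse of So.
        return self.vcc_high - self.so

# Rough power argument (arbitrary example numbers, not from the disclosure):
# the charging current I3 only flows while Ss is enabled and Si is high, so
# its average contribution scales with the enable duty ratio of Ss.
i3 = 1.0e-6                        # assumed charging current while enabled, A
for duty in (0.5, 0.1):
    print(f"enable duty ratio {duty}: average I3 contribution ~ {i3 * duty:.1e} A")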
In an embodiment of the present disclosure, when the enable level of the sampling signal Ss occurs within a preset time period during which the level of the input signal Si changes, the time period during which the enable level of the sampling signal Ss is maintained may be, for example, less than a half of the time period of the high level of the input signal Si. In another embodiment of the present disclosure, when the sampling signal Ss is a pulse signal with a predetermined cycle, the duty ratio of the enable level in the sampling signal Ss may be, for example, less than ½. To further reduce power consumption, the duty cycle of the sampling signal Ss may be lower, for example, the duty cycle of the sampling signal Ss may be smaller than ¼, smaller than 1/10, or smaller than 1/20. The above numerical values are only examples. In practical applications, a person skilled in the art can set the enable level maintaining time of the sampling signal Ss by himself/herself according to the actual situation. Furthermore, by reducing the size of the components in the second input module12, the charging current I3can also be reduced. The high level of the sampling signal Ss is VCC_High, and the low level is 0. The sampling signal Ss can be generated by a control circuit powered by VCC_High. In the above embodiments of the present disclosure, by using a lower-voltage input signal Si to drive a transistor operating at a higher voltage, the lower-voltage input signal Si can be converted into a higher-voltage output signal So. By using the sampling signal to cooperate with the latch to realize the sampling and maintenance of the output signal So, the enable level maintaining time of the sampling signal can be greatly reduced, the generation of leakage current can be reduced, and furthermore, the power consumption of the entire circuit can be reduced. Compared with the prior art, the voltage conversion circuit provided by the embodiments of the present disclosure has fewer components, a small occupied area of the components, and low power consumption, and can be widely used in a circuit with a plurality of signal voltage conversion requirements. In the embodiments of the present disclosure, the first input module is used to convert a lower-voltage input signal into a higher-voltage signal, and the second input module is used to sample the higher-voltage signal and then output it, such that a lower-voltage input signal can be converted to a higher-voltage output signal by using fewer components, thereby reducing the occupied area and power consumption of components in the process of voltage conversion. It is to be noted that although several modules or units of the equipment for action performance are mentioned in the above detailed description, this division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied. A person skilled in the art would easily conceive of other implementations of the present disclosure upon consideration of the specification and practice of the present disclosure disclosed herein. 
The present disclosure is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles of the present disclosure and include common sense or commonly-used techniques in the technical field not disclosed by the present disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the claims. | 20,690 |
11863180 | DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS FIG.1is a schematic of a circuit100, according to an embodiment. The circuit includes a switch104, a voltage generation circuit120, a level shifter130and a logic unit140. The voltage generation circuit120is coupled to the switch104. The level shifter130is coupled to voltage generation circuit120. The logic unit140is coupled to the voltage generation circuit120, the level shifter130and the switch104. The switch104includes first transistors and second transistors. The first transistors include one or more n-channel field effect transistors (NFETs), and the second transistors include one or more p-channel field effect transistors (PFETs). The first transistors include a first NFET Q0112, and the second transistors include a first PFET Q1114and a second PFET Q2116. A drain terminal of the first NFET Q0112: (a) is coupled to a source terminal of the first PFET Q1114; and (b) receives an input signal Vin108. A gate terminal of the first NFET Q0112receives a control signal CNTRL106. A bulk terminal of the first NFET Q0112, in one example, is coupled to a supply voltage Vdd124. In another example, the bulk terminal is coupled to one of the source, drain or gate terminal of the first NFET Q0112based on requirement of the circuit100. The source terminal of the first PFET Q1114receives the input signal Vin108, and a gate terminal of the first PFET Q1114is coupled to the logic unit140. A drain terminal of the first PFET Q1114is coupled to a drain terminal of the second PFET Q2116at a first node N1118. A gate terminal of the second PFET Q2116is coupled to the logic unit140, and a source terminal of the second PFET Q2116is coupled to a source terminal of the first NFET Q0112. The voltage generation circuit120is coupled to the second transistors in the switch104. The voltage generation circuit120includes a diode D1126and a third PFET Q3128. The diode D1126receives the supply voltage Vdd124and is coupled to the level shifter130and the logic unit140. The third PFET Q3128is coupled to the second transistors in the switch104. Both the diode D1126and the third PFET Q3128are coupled to the level shifter130and the logic unit140at a third node N3122. A source terminal of the third PFET Q3128is coupled to drain terminals of the first PFET Q1114and the second PFET Q2116. A gate terminal of the third PFET Q3128receives the control signal CNTRL106. A drain terminal of the third PFET Q3128is coupled to the level shifter130and the logic unit140. The logic unit140includes a primary PFET Q4134and a secondary NFET Q5136. The primary PFET Q4134and the secondary NFET Q5136are coupled to the level shifter130and the switch104. A source terminal of the primary PFET Q4134is coupled to both the diode D1126and the third PFET Q3128in the voltage generation circuit120. A gate terminal of the primary PFET Q4134is coupled to the level shifter130. A drain terminal of the primary PFET Q4134is coupled to a drain terminal of the secondary NFET Q5136at a second node N2132. The drain terminals of the primary PFET Q4134and the secondary NFET Q5136are coupled to the gate terminals of the first PFET Q1114and the second PFET Q2116. A gate terminal of the secondary NFET Q5136is coupled to the level shifter130, and a source terminal of the secondary NFET Q5136is coupled to a ground terminal. The circuit100may include one or more conventional components that are not described herein for brevity. 
Each component of the circuit100may also be coupled to other components or blocks inFIG.1, but those connections are not described herein for brevity. Also, each block or component ofFIG.1may be coupled to conventional components of a system using the circuit100, which are also not shown inFIG.1for brevity. In operation, the switch104receives the input signal Vin108and generates an output signal Vout110. The voltage generation circuit120receives the supply voltage Vdd124, the control signal CNTRL106, and a voltage at the first node N1118. The voltage generation circuit120generates a first signal at the third node N3122. The level shifter130and the logic unit140receive the first signal from the voltage generation circuit120. The level shifter130: (a) receives the first signal and the control signal CNTRL106; and (b) generates a primary signal. The primary signal, in one version, is a level shifted version of the control signal CNTRL106. The logic unit140receives the first signal from the voltage generation circuit120and the primary signal from the level shifter130. The logic unit140generates a secondary signal, which is provided to the second node N2132. The gate terminals of the first PFET Q1114and the second PFET Q2116receive the secondary signal. The voltage generation circuit120operates as a multiplexer that provides a maximum one of a first voltage or a second voltage (i.e., whichever one is higher). The first voltage is (or is proportional to) the voltage at the first node N1118, and the second voltage is (or is proportional to) a difference between a threshold voltage of the diode D1126and the supply voltage Vdd124. In one example, the threshold voltage is proportional to a voltage drop across the diode D1126. In another example, the functionality of the diode D1126can be implemented using one or more of, or a combination of, diode, PN junction diode, Schottky diode, Zener diode and transistors, which ensure unidirectional flow of current from the supply voltage Vdd124towards the third node N3122. The voltage generation circuit120provides a maximum one of the first or second voltages (i.e., whichever one is higher) as the first signal at the third node N3122. In one version, the voltage at the third node N3122keep the level shifter130operational. The combination of the diode D1126and the third PFET Q3128is one way of implementing voltage generation circuit120, and the voltage generation circuit120may be implemented with hardware elements (and/or operations in a different order) in configurations different than those described herein. When the control signal CNTRL106has a logic low state (logic ‘0’), the first NFET Q0112has OFF state (i.e., is switched OFF, or is OPENED, to stop conducting current), and the third PFET Q3128has ON state (i.e., is switched ON, or is CLOSED, to conduct current). The voltage at the third node N3122is equal to the first voltage (i.e., the voltage at the first node N1118). The primary signal generated by the level shifter130results in switching ON the primary PFET Q4134and switching OFF the secondary NFET Q5136. As a result, a voltage at the second node N2132is equal to the voltage at the third node N3122. This results in switching OFF of the second PFET Q2116. Thus, the switch104stops generating the output signal Vout110responsive to the input signal Vin108when the control signal CNTRL106has a logic low state. When the control signal CNTRL106has a logic high state (logic ‘1’), the first NFET Q0112has ON state. 
When the control signal CNTRL106has a logic high state and is greater than the voltage at the first node N1118, the third PFET Q3128has OFF state. When the control signal CNTRL106has a logic high state and is less than the voltage at the first node N1118, the third PFET Q3128has ON state. The voltage at the third node N3122is a maximum one of the first voltage or the second voltage (i.e., whichever one is higher). The primary signal generated by the level shifter130results in switching OFF the primary PFET Q4134and switching ON the secondary NFET Q5136. As a result, the second node N2132is coupled to the ground terminal through the secondary NFET Q5136. This results in switching ON of the first PFET Q1114and the second PFET Q2116. Thus, the switch104generates the output signal Vout110responsive to the input signal Vin108when the control signal CNTRL106has a logic high state. Thus, circuit100provides the switch104that operates at high-speed and has low impedance when the output signal Vout110is generated. The switch104is useful in applications requiring high-speed ADCs. Also, when the control signal CNTRL106has a logic low state (logic ‘0’), the circuit100enables failsafe operation even when the input signal Vin108is higher than the supply voltage Vdd124or even when the input signal Vin108is as low as 0 volt. When the input signal Vin108is higher than the supply voltage Vdd124, the circuit100provides for complete switching OFF of the second PFET Q2116, so output signal Vout110is not generated. This is because the voltages at the first node N1118and the second node N2132are equal. When the input signal Vin108is low, for example 0 volt, the level shifter130provides for complete switching OFF of the second PFET Q2116. This is because the level shifter130is driven by the supply voltage Vdd124. The voltage at the first node N1118acts as a failsafe input signal, which drives the gate terminals of the first PFET Q1114and the second PFET Q2116. Thus, the circuit100solves the failsafe issue without drawing static current either from the supply voltage Vdd124or from the input signal Vin108. This enables the switch104to operate as a high-speed switch with no quiescent current. The voltage generation circuit120ensures that a maximum one of the first voltage or the second voltage (i.e., whichever one is higher) is provided to the level shifter130, which drives the first PFET Q1114and the second PFET Q2116. In conventional circuits, the switches are implemented using multiple resistors and capacitors which results in large RC time constants, and hence cannot operate as high-speed switches. The switch104in the circuit100operate as a high-speed switch because no resistors and capacitors are used, and also the transistors present in switch104undergo a fast transition when there is a change in state of control signal CNTRL106. In some other conventional circuits, the switch is implemented using an ideal diode. The ideal diode is implemented using an NFET pass-gate ideal diode and/or a PFET pass-gate ideal diode. However, an ideal diode requires a high voltage charge pump which increases the power consumption of the circuit. The diode D1126in circuit100is not required to be an ideal diode, and accordingly the circuit100may consume less power than conventional circuits. The combination of the voltage generation circuit120, the level shifter130and the logic unit140act as a zero quiescent current gate driver for the first PFET Q1114and the second PFET Q2116. 
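The gate-drive behavior of circuit 100 described above can be summarized behaviorally: with CNTRL low, node N2 is driven to the node N1 voltage, so the second PFET Q2 sees roughly zero source-gate voltage and the switch stays off regardless of Vin; with CNTRL high, N2 is pulled to ground and the switch conducts. The sketch below is only an illustrative model under assumptions stated in the comments; the threshold values and the idealized max() behavior are assumptions, not values from the disclosure.

def n3_voltage(vn1, vdd, cntrl_high, v_th=0.7):
    # Voltage generation circuit 120 per the description: with CNTRL low, Q3
    # conducts and node N3 follows node N1; with CNTRL high, N3 is the higher
    # of VN1 and (Vdd - threshold).  v_th is an assumed example value only.
    return vn1 if not cntrl_high else max(vn1, vdd - v_th)

def switch_state(vin, vdd, cntrl_high):
    vn1 = vin  # N1 roughly tracks Vin through the first PFET Q1 (simplified)
    if cntrl_high:
        vn2 = 0.0                            # secondary NFET Q5 grounds N2
    else:
        vn2 = n3_voltage(vn1, vdd, cntrl_high)   # primary PFET Q4 passes N3
    # Q2 conducts only if its gate (N2) sits well below its source (N1).
    switch_on = (vn1 - vn2) > 0.5            # assumed PFET threshold
    return "Vout follows Vin" if switch_on else "switch off (no output)"

print(switch_state(vin=3.6, vdd=3.3, cntrl_high=False))  # failsafe case: off
print(switch_state(vin=1.0, vdd=3.3, cntrl_high=True))   # on: Vout ~ Vin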
The voltage generation circuit120provides for automatic switching between the first voltage and the second voltage. Thus, circuit100does not require a separate digital switch to switch between the supply voltage Vdd124or the voltage at the first node N1118. Another advantage of voltage generation circuit120is that the circuit100does not require a sub-regulator to generate a maximum one of the first voltage or the second voltage (i.e., whichever one is higher). The voltage generation circuit120provides a smooth switchover between the first voltage and the second voltage without having blips in the output signal Vout110. All these features enable the circuit100to be used as a failsafe switch operable at high frequencies in multi-domain systems. FIG.2is a schematic of a voltage generation circuit200, according to an embodiment. The voltage generation circuit200is another implementation of the voltage generation circuit120ofFIG.1. The voltage generation circuit200includes a fourth PFET Q4212, a fifth PFET Q5214and a sixth PFET Q6218. The fourth PFET Q4212receives a supply voltage Vdd202. The fifth PFET Q5214is coupled to the fourth PFET Q4212. The sixth PFET Q6218is coupled to the fifth PFET Q5214. A source terminal of the fourth PFET Q4212receives the supply voltage Vdd202. A drain terminal of the fourth PFET Q4212is coupled to a source terminal of the fifth PFET Q5214. A gate terminal of the fourth PFET Q4212is coupled to a gate terminal of the fifth PFET Q5214and also to a third node N3222. A source terminal of the sixth PFET Q6218is coupled to second transistors in a switch (similar to the second transistors in the switch104inFIG.1). The source terminal of the sixth PFET Q6218(similar to the third PFET Q3128inFIG.1) is coupled to the second transistors in the switch (such as switch104ofFIG.1). The source terminal of the sixth PFET Q6218receives a voltage VN1204. The voltage VN1204is generated at a first node (similar to the first node N1118ofFIG.1). In one version, the first node N1 similar to the first node N1118is formed by coupling a drain terminal of a first PFET and a source terminal of a second PFET, the first PFET and the second PFET are part of the second transistors. A gate terminal of the sixth PFET Q6218receives a control signal CNTRL224. The control signal CNTRL224is similar to the control signal CNTRL106ofFIG.1. A drain terminal of the sixth PFET Q6218is coupled to the third node N3222. InFIG.1, the third node N3122is coupled to the level shifter130and the logic unit140. Similarly, the third node N3222is coupled to a level shifter and a logic unit, which are not shown for brevity. Thus, the gate terminal of the fourth PFET Q4212, the gate terminal of the fifth PFET Q5214and the drain terminal of the sixth PFET Q6218are all coupled to the level shifter (similar to the level shifter130ofFIG.1). In operation, the voltage generation circuit200receives the supply voltage Vdd124, the control signal CNTRL224and the voltage VN1204, which is generated at the first node N1 (similar to the first node N1118inFIG.1). The voltage generation circuit120generates a first signal V1230. The first signal V1230is generated at the third node N3222. The level shifter and the logic unit receives the first signal V1230from the voltage generation circuit120. The voltage generation circuit200operates as a multiplexer that provides a maximum one of the voltage VN1204or the second voltage (i.e., whichever one is higher). 
The second voltage is (or is proportional to) a difference between the supply voltage Vdd202and a threshold voltage of the fourth PFET Q4212(and of the fifth PFET Q5214). The voltage generation circuit200provides a maximum one of the voltage VN1204or the second voltage (i.e., whichever one is higher) at the third node N3222. In one version, the voltage at the third node N3222keep the level shifter operational. The combination of the fourth PFET Q4212, the fifth PFET Q5214and the sixth PFET Q6218is one way of implemented voltage generation circuit200, and the voltage generation circuit200may be implemented with hardware elements (and/or operations in a different order) in configurations different than those described herein. When the control signal CNTRL106has a logic low state (logic ‘0’), the sixth PFET Q6218has ON state. The voltage at the third node N3222is equal to the voltage VN1204(i.e., the voltage at the first node N1, such as the first node N1118ofFIG.1). When the control signal CNTRL106has a logic high state (logic ‘1’), the voltage at the third node N3222is a maximum one of the first voltage or the second voltage (i.e., whichever one is higher). When the control signal CNTRL224has a logic high state and is greater than VN1204, the sixth PFET Q6218has OFF state. When the control signal CNTRL224has a logic high state and is less than VN1204, the sixth PFET Q6218has ON state. The voltage generation circuit200ensures that a maximum one of the voltage VN1204or the second voltage (i.e., whichever one is higher) is provided to the level shifter, which drives the second transistors (such as the first PFET Q1114and the second PFET Q2116inFIG.1) in the switch. This enables the switch to turn ON and OFF at high-speed. The voltage generation circuit200provides for automatic switching between the voltage VN1204and the second voltage. Thus, circuit100does not require a separate digital switch to switch between the supply voltage Vdd124or the voltage VN1204. Another advantage of voltage generation circuit200is that a sub-regulator is not required to generate a maximum one of the voltage VN1204or the second voltage (i.e., whichever one is higher). The voltage generation circuit200provides a smooth switchover between the voltage VN1204and the second voltage. For example, when the voltage generation circuit200is used in a CMOS switch, it enables the switch to be used as a failsafe switch operable at high frequencies in multi-domain systems. FIG.3is a timing diagram of operation of the circuit ofFIG.1, according to an embodiment. The timing diagram is described in connection with the circuit100ofFIG.1. The timing diagram shows the supply voltage Vdd124, the input signal Vin108, the output signal Vout110, the control signal CNTRL106, the first voltage VN1 at the first node N1118, and a voltage VN2 at the second node N2132. In the timing diagram, as an example, the supply voltage Vdd124and the input signal Vin108are fixed at constant voltage values. When the control signal CNTRL106has a logic low state (logic ‘0’), the first NFET Q0112has OFF state, and the third PFET Q3128has ON state. The voltage at the third node N3122is equal to the first voltage VN1 (i.e., the voltage at the first node N1118). The primary signal generated by the level shifter130results in switching ON the primary PFET Q4134and switching OFF the secondary NFET Q5136. As a result, the voltage VN2 at the second node N2132is equal to the voltage at the third node N3122. Thus, the voltage VN2 is equal to the first voltage VN1. 
This results in switching OFF of the second PFET Q2116. Thus, the switch104stops generating the output signal Vout110responsive to the input signal Vin108when the control signal CNTRL106has a logic low state. When the control signal CNTRL106has a logic high state (logic ‘1’), the first NFET Q0112has ON state. When the control signal CNTRL106has a logic high state and is greater than the voltage at the first node N1118, the third PFET Q3128has OFF state. When the control signal CNTRL106has a logic high state and is less than the voltage at the first node N1118, the third PFET Q3128has ON state. The voltage at the third node N3122is a maximum one of the first voltage or the second voltage (i.e., whichever one is higher). The primary signal generated by the level shifter130results in switching OFF the primary PFET Q4134and switching ON the secondary NFET Q5136. As a result, the second node N2132is coupled to the ground terminal through the secondary NFET Q5136. Thus, the voltage VN2 at the second node N2132has a logic low state. This results in switching ON of the first PFET Q1114and the second PFET Q2116. Thus, the switch104generates the output signal Vout110responsive to the input signal Vin108when the control signal CNTRL106has a logic high state. Thus, circuit100provides a switch104that operates at high-speed and has low impedance when the output signal Vout110is generated. The switch104is useful in applications requiring high-speed ADCs. Also, when the control signal CNTRL106has a logic low state (logic ‘0’), the circuit100enables failsafe operation even when the input signal Vin108is higher than the supply voltage Vdd124or even when the input signal Vin108is as low as 0 volt. When the input signal Vin108is higher than the supply voltage Vdd124, the circuit100provides for complete switching OFF of the second PFET Q2116, so output signal Vout110is not generated. This is because the voltages at the first node N1118and the second node N2132are equal. When the input signal Vin108is low, for example 0 volt, the level shifter130provides for complete switching OFF of the second PFET Q2116. This is because the level shifter130is driven by the supply voltage Vdd124. The first voltage VN1 at the first node N1118acts as a failsafe input signal, which drives the gate terminals of the first PFET Q1114and the second PFET Q2116. Thus, the circuit100solves the failsafe issue without drawing static current either from the supply voltage Vdd124or from the input signal Vin108. This enables the switch104to operate as a high-speed switch with no quiescent current. FIG.4is a waveform diagram of operation of the circuit ofFIG.1, according to an embodiment. The waveform diagram is explained in connection with the circuit100ofFIG.1. The waveform diagram shows the supply voltage Vdd124, the input signal Vin108, the output signal Vout110, the first voltage VN1 at the first node N1118, a voltage VN2 at the second node N2132, and a voltage VN3 at the third node N3122. The waveform diagram also shows an output signal VoutC402in a conventional circuit. In the waveform diagram, as an example, the supply voltage Vdd124is fixed at a constant voltage value while the input signal Vin108is linearly increasing. When the control signal CNTRL106has a logic low state (logic ‘0’), the first NFET Q0112has OFF state, and the third PFET Q3128has ON state. A voltage at the third node N3122is VN3. As shown in the waveform diagram, the input signal Vin108is linearly increasing.
The voltage VN1 at the first node N1118remains constant when the input signal Vin108is less than a threshold voltage of the first PFET Q1114. When the input signal Vin108is greater than the threshold voltage of the first PFET Q1114, the voltage VN1 at the first node N1118follows the input signal Vin108. The voltage VN3 at the third node N3122remains constant when the voltage VN1 at the first node N1118is less than a threshold voltage of the third PFET Q3128. When the voltage VN1 at the first node N1118is greater than the threshold voltage of the third PFET Q3128, the voltage VN3 at the third node N3122follows the voltage VN1 at the first node N1118. Thus, the voltage VN3 at the third node N3122is equal to the first voltage VN1 (i.e., the voltage at the first node N1118). The primary signal generated by the level shifter130results in switching ON the primary PFET Q4134and switching OFF the secondary NFET Q5136. As a result, the voltage VN2 at the second node N2132follows the voltage at the third node N3122. Thus, the voltage VN2 is equal to the first voltage VN1. This results in switching OFF of the second PFET Q2116. Thus, the switch104stops generating the output signal Vout110responsive to the input signal Vin108when the control signal CNTRL106has a logic low state. In contrast, in a conventional circuit, a switch used in the circuit100is not completely switched OFF which results in generation of the output signal VoutC402responsive to the input signal Vin108. Thus, a conventional circuit does not provide a reliable switch that is completely turned OFF when the control signal CNTRL106has a logic low state and the input signal Vin108is higher than the supply voltage Vdd124. However, the switch104of circuit100provides for complete switching OFF of the second PFET Q2116. The circuit100ensure complete switching OFF of the second PFET Q2116even when the input signal Vin108is as low as 0 volt or the input signal Vin108is higher than the supply voltage Vdd124. This is ensured as the circuit100provides that the voltage VN2 follows the first voltage VN1. Thus, circuit100provides the switch104that operates at high-speed and has low impedance when the output signal Vout110is generated. The switch104is useful in multi-domain applications requiring high-speed ADCs. FIG.5is a flowchart500of a method of operation of a circuit, according to an embodiment. The flowchart500is described in connection with the circuit100ofFIG.1. The flowchart starts at step502and ends at step508. At step502, a control signal is provided to first transistors in a switch. In circuit100, for example, the switch104includes first transistors and second transistors. The first transistors include a first NFET Q0112, and a gate terminal of the first NFET Q0112receives the control signal CNTRL106. At step504, a secondary signal is provided to second transistors in the switch. The second transistors include a first PFET. In circuit100, the second transistors include a first PFET Q1114and a second PFET Q2116. The gate terminals of the first PFET Q1114and the second PFET Q2116receive the secondary signal. At step506, an output signal is generated by the switch responsive to an input signal when the control signal has a logic high state. In circuit100, the switch104generates the output signal Vout110responsive to the input signal Vin108when the control signal CNTRL106has a logic high state. 
At step508, the first transistors and the second transistors in the switch are inactivated (i.e., switched OFF) when the control signal has a logic low state and the secondary signal is proportional to a voltage at a drain terminal of the first PFET. When the control signal CNTRL106has a logic low state (logic ‘0’), the first transistors (which include the first NFET Q0112), and the second transistors (which include the first PFET Q1114and the second PFET Q2116) have OFF state. The secondary signal generated by the logic unit140is proportional to a voltage at a drain terminal of the first PFET Q1114. A first signal is generated responsive to a supply voltage and the control signal. The circuit100includes a voltage generation circuit120. The voltage generation circuit120includes a diode D1126and a third PFET Q3128. The diode D1126receives the supply voltage Vdd124. The third PFET Q3128is coupled to the second transistors in the switch104. Both the diode D1126and the third PFET Q3128are coupled to the level shifter130and the logic unit140at a third node N3122. A source terminal of the third PFET Q3128is coupled to drain terminals of the first PFET Q1114and the second PFET Q2116at a first node N1118. A gate terminal of the third PFET Q3128receives the control signal CNTRL106. A drain terminal of the third PFET Q3128is coupled to the level shifter130and the logic unit140. The voltage generation circuit120receives the supply voltage Vdd124, the control signal CNTRL106and a voltage at the first node N1118. The voltage generation circuit120generates the first signal. The first signal is generated at the third node N3122. The voltage generation circuit120operates as a multiplexer that provides a maximum one of a first voltage or a second voltage (i.e., whichever one is higher). The first voltage is (or is proportional to) the voltage at the first node N1118, and the second voltage is (or is proportional to) a difference between a threshold voltage of the diode D1126and the supply voltage Vdd124. In one example, the threshold voltage is proportional to a voltage drop across the diode D1126. In another example, the functionality of the diode D1126can be implemented using one or more of, or a combination of, diode, PN junction diode, Schottky diode, Zener diode and transistors, which ensure unidirectional flow of current from the supply voltage Vdd124towards the third node N3122. The voltage generation circuit120provides a maximum one of the first or second voltages (i.e., whichever one is higher) as the first signal at the third node N3122. A primary signal is generated responsive to the control signal and the first signal. The circuit100includes a level shifter130. The level shifter130is coupled to voltage generation circuit120. The level shifter130receives the first signal (from the voltage generation circuit120) and the control signal CNTRL106, and generates a primary signal. The secondary signal is generated responsive to the first signal and the primary signal. The circuit100includes a logic unit140. The logic unit140includes a primary PFET Q4134and a secondary NFET Q5136. The primary PFET Q4134and the secondary NFET Q5136are coupled to the level shifter130and the switch104. A source terminal of the primary PFET Q4134is coupled to both the diode D1126and the third PFET Q3128in the voltage generation circuit120. A gate terminal of the primary PFET Q4134is coupled to the level shifter130. 
A drain terminal of the primary PFET Q4134is coupled to a drain terminal of the secondary NFET Q5136at a second node N2132. The drain terminals of the primary PFET Q4134and the secondary NFET Q5136are coupled to the gate terminals of the first PFET Q1114and the second PFET Q2116. A gate terminal of the secondary NFET Q5136is coupled to the level shifter130, and a source terminal of the secondary NFET Q5136is coupled to a ground terminal. The logic unit140receives the first signal from the voltage generation circuit120and the primary signal from the level shifter130. The logic unit140generates a secondary signal, which is provided to the second node N2132. The gate terminals of the first PFET Q1114and the second PFET Q2116receive the secondary signal. The switch104in the circuit100generates the output signal Vout110responsive to the input signal Vin108when the control signal CNTRL106has a logic high state. When the control signal CNTRL106has a logic high state (logic ‘1’), the first NFET Q0112is activated (i.e., switched ON). The secondary NFET Q5136, the first PFET Q1114and the second PFET Q2116are activated. The primary PFET Q4134is inactivated. The first signal is equal to a maximum one of the first voltage or the second voltage (i.e., whichever one is higher). The switch104in the circuit100stops generating the output signal Vout110responsive to the input signal Vin108when the control signal CNTRL106has a logic low state. When the control signal CNTRL106has a logic low state (logic ‘0’), the first NFET Q0112is inactivated, and the third PFET Q3128is activated. The secondary NFET Q5136, and the second PFET Q2116are inactivated. The primary PFET Q4134is activated. The first signal is equal to the first voltage. Thus, the method shown in the flowchart500enables a switch in a circuit, similar to the circuit100, to operates at high-speed and with low impedance. This switch is useful in multi-domain systems applications requiring high-speed ADCs. A circuit, enabled by flowchart500, provides complete switching of the transistors in the switch even when the input signal is as low as 0 volt. This enables the switch to operate as a high-speed switch with no quiescent current. FIG.6is a block diagram of an example device600in which several aspects of example embodiments can be implemented. The device600is, or in incorporated into or is part of a server farm, a vehicle, a communication device, a transceiver, a personal computer, a gaming platform, a computing device, any other type of electronic system, or a portable device such as a battery powered handheld measurement device. The device600may include one or more conventional components that are not described herein for brevity. The device600includes a voltage reference circuit602, a circuit604and an analog to digital converter (ADC)606. The voltage reference circuit602provides an input signal Vin612. The circuit604receives the input signal Vin612and generates an output signal Vout614. The circuit604is similar, in connection and operation, to the circuit100ofFIG.1. The ADC606converts the output signal Vout614to a digital signal Dout620. The circuit604includes a switch, a voltage generation circuit, a level shifter and a logic unit. The voltage generation circuit is coupled to the switch. The level shifter is coupled to voltage generation circuit. The logic unit is coupled to the voltage generation circuit, the level shifter and the switch. The switch includes first transistors and second transistors. 
The first transistors include one or more NFETs, and the second transistors include one or more PFETs. The first transistors and the level shifter receive a control signal. The logic unit generates a secondary signal. The second transistors in the switch receive the secondary signal. The first transistors include a first NFET, and a gate terminal of the first NFET receives the control signal. The second transistors include a first PFET and a second PFET. The gate terminals of the first PFET and the second PFET receive a secondary signal. The voltage generation circuit includes a diode and a third PFET. A source terminal of the third PFET is coupled to drain terminals of the first PFET and the second PFET at a first node N1. Both the diode and the third PFET are coupled to the level shifter and the logic unit. The voltage generation circuit receives the supply voltage, the control signal and a voltage at the first node N1. The voltage generation circuit generates the first signal. The level shifter receives the first signal (from the voltage generation circuit) and the control signal, and generates a primary signal. The logic unit receives the first signal from the voltage generation circuit and the primary signal from the level shifter. The logic unit generates the secondary signal. The circuit604generates the output signal Vout614responsive to the input signal Vin612when the control signal has a logic high state. When the control signal has a logic high state (logic ‘1’), the first NFET is activated. The secondary signal generated by the logic unit activates the first PFET and the second PFET. The first signal generated by the voltage generation circuit is proportional to a maximum one of the first voltage or the second voltage (i.e., whichever one is higher). The circuit604stops generating the output signal Vout614responsive to the input signal Vin612when the control signal has a logic low state. When the control signal has a logic low state (logic ‘0’), the first NFET is inactivated, and the third PFET is activated. The secondary signal generated by the logic unit inactivates the second PFET. The first signal generated by the voltage generation circuit is proportional to a voltage at the first node N1. The switch enables the circuit604to operate at high-speed and with low impedance. The circuit604supports the ADC606, which might operate at high speeds of the order of GSPS. The circuit604provides complete switching of the transistors in the switch even when the input signal Vin612is as low as 0 volt. This enables the circuit604to operate as a high-speed switch with no quiescent current. FIG.7is a block diagram of an example device700in which several aspects of example embodiments can be implemented. The device700is, or is incorporated into or is part of, a server farm, a vehicle, a communication device, a transceiver, a personal computer, a gaming platform, a computing device, or any other type of electronic system. The device700may include one or more conventional components that are not described herein for brevity. In one example, the device700includes a processor702and a memory706. The processor702can be a CISC-type CPU (complex instruction set computer), RISC-type CPU (reduced instruction set computer), a digital signal processor (DSP), a microcontroller, a CPLD (complex programmable logic device) or an FPGA (field programmable gate array).
The memory706(which can be memory such as RAM, flash memory, or disk storage) stores one or more software applications (e.g., embedded applications) that, when executed by the processor702, performs any suitable function associated with the device700. The processor702may include memory and logic, which store information frequently accessed from the memory706. The device700includes a circuit710. In one example, the processor702may be placed on the same PCB or module as the circuit710. In another example, the processor702is external to the device700. The circuit710can function as a switch. The circuit710may include additional analog circuitry, digital circuitry, memory and/or software. The circuit710may include circuitry that is similar, in connection and operation, to the circuit100ofFIG.1. The circuit710includes a switch, a voltage generation circuit, a level shifter and a logic unit. The voltage generation circuit is coupled to the switch. The level shifter is coupled to voltage generation circuit. The logic unit is coupled to the voltage generation circuit, the level shifter and the switch. The switch includes first transistors and second transistors. The first transistors include one or more NFETs, and the second transistors include one or more PFETs. The first transistors and the level shifter receive a control signal. The logic unit generates a secondary signal. The second transistors in the switch receive the secondary signal. The first transistors include a first NFET, and a gate terminal of the first NFET receives the control signal. The second transistors include a first PFET and a second PFET. The gate terminals of the first PFET and the second PFET receive a secondary signal. The voltage generation circuit includes a diode and a third PFET. A source terminal of the third PFET is coupled to drain terminals of the first PFET and the second PFET at a first node N1. Both the diode and the third PFET are coupled to the level shifter and the logic unit. The voltage generation circuit receives the supply voltage, the control signal and a voltage at the first node N1. The voltage generation circuit generates the first signal. The level shifter receives the first signal (from the voltage generation circuit) and the control signal, and generates a primary signal. The logic unit receives the first signal from the voltage generation circuit and the primary signal from the level shifter. The logic unit generates the secondary signal. The circuit710generates an output signal responsive to an input signal when the control signal has a logic high state. When the control signal has a logic high state (logic ‘1’), the first NFET is activated. The secondary signal generated by the logic unit activates the first PFET and the second PFET. The first signal generated by the voltage generation circuit is proportional to a maximum one of the first voltage or the second voltage (i.e., whichever one is higher). The circuit710stops generating the output signal responsive to the input signal when the control signal has a logic low state. When the control signal has a logic low state (logic ‘0’), the first NFET is inactivated, and the third PFET is activated. The secondary signal generated by the logic unit inactivates the second PFET. The first signal generated by the voltage generation circuit is proportional to a voltage at the first node N1. The switch enables the circuit710to operates at high-speed and with low impedance. The circuit710support ADCs which operate at high-speeds of the order of GSPS. 
The circuit710provides complete switching of the transistors in the switch even when the input signal is as low as 0 volt. This enables the circuit710to operate as a high-speed switch with no quiescent current. In this description, unless otherwise stated, “about,” “approximately” or “substantially” preceding a parameter means being within +/−10 percent of that parameter. Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims. | 38,836 |
11863181 | While the disclosure is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that other embodiments, beyond the particular embodiments described, are possible as well. All modifications, equivalents, and alternative embodiments falling within the spirit and scope of the appended claims are covered as well. DETAILED DESCRIPTION Level-shifting, logic-conversion, signal-interfacing, etc. all refer to ensuring electrical compatibility between one or more electrical circuits, modules, or systems. In the discussion herein, the phrase level-shifting will be substantially used; however, the concepts and examples discussed apply to a wide variety of such electrical interfacing. Just one example application employing such level-shifters is now presented for Universal Serial Bus (USB) interfaces. USB (e.g. v2.0) has been one of the most successful wired interfaces in the past 20 years, and almost all SoCs today are equipped with a USB 2.0 interface. USB standards evolution kept the original 3.3-V I/O USB 1.0 interface intact for backward compatibility, helping enable wider adoption and a larger ecosystem while also preserving device interoperability. However, as process nodes approach more advanced nodes (e.g. 5 nm), the manufacturing cost to maintain USB 2.0 3.3V I/O signaling has grown exponentially. Embedded USB2 (eUSB2) is a supplement specification to the USB 2.0 specification that addresses issues related to interface controller integration with advanced system-on-chip (SoC) process nodes by enabling USB 2.0 interfaces to operate at I/O voltages of 1V or 1.2V instead of 3.3V. eUSB2 can enable smaller, more power-efficient SoCs, in turn enabling process nodes to continue to scale while increasing performance in applications such as smartphones, tablets and notebooks. In some examples, designers integrate the eUSB2 interface at a device level while leveraging and reusing the USB 2.0 interface at a system level. eUSB2 can support onboard inter-device connectivity through direct connections as well as exposed connector interfaces through an eUSB2-to-USB 2.0 repeater for performing level shifting. The following Table presents some differences between USB 2.0 and eUSB2:

Feature                USB 2.0                                            eUSB2
Signal interface       D+, D−                                             eD+, eD−
I/O voltage            3.3 V (Low-speed/full-speed), <1 V (High-speed)    1 V or 1.2 V
Supported data rate    Low speed: 1.5 Mbps                                Low speed: 1.5 Mbps
                       Full speed: 12 Mbps                                Full speed: 12 Mbps
                       High speed: 480 Mbps                               High speed: 480 Mbps

FIGS.1A and1Brepresent examples100of two eUSB to USB configurations102,104requiring level-shifting. The first configuration102includes a system on a chip (SoC) having two eUSB embedded interfaces (as shown). The chip106is configured to be coupled to an external eUSB device108and to a legacy USB2 device110. An eUSB2 repeater112is necessary to convert a differential eUSB signal (eD+/eD−) to a differential USB signal (D+/D−). The eUSB2 repeater112in some examples is on a same PC board as the chip106, while the eUSB108and USB110devices are coupled via cabling. The second configuration104is substantially similar to the first configuration102, except now an SoC114includes two USB2 embedded interfaces (as shown). In some example applications, the eUSB2 repeater112is needed to perform CML (current mode logic) to CMOS logic conversions. In such applications, low jitter is a key requirement for level-shifters in the eUSB2 repeater112.
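As a rough illustration of the level-domain translation the repeater 112 performs between the two signal interfaces listed in the table above, the sketch below maps a low-swing eUSB2 differential pair onto a full-swing USB 2.0 low/full-speed pair. It is only an illustrative level-mapping picture under assumed rail values; the constants and the function name are placeholders and the real repeater also handles timing, termination and the different speed modes.

EUSB_VDD = 1.2   # assumed eUSB2 I/O rail (1 V or 1.2 V per the table above)
USB_VDD = 3.3    # legacy USB 2.0 low/full-speed I/O rail

def eusb_to_usb2(ed_plus, ed_minus):
    # Decide the logic value from the low-swing eUSB2 pair, then re-drive it
    # as a full-swing 3.3 V USB 2.0 pair.
    bit = ed_plus > ed_minus
    return (USB_VDD, 0.0) if bit else (0.0, USB_VDD)

print(eusb_to_usb2(EUSB_VDD, 0.0))   # -> (3.3, 0.0)
print(eusb_to_usb2(0.0, EUSB_VDD))   # -> (0.0, 3.3)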
Many CML-to-CMOS converter level-shifters suffer from either elevated jitter problems or power/current penalties across different process, voltage and temperature (PVT) corners. For example, some CML-to-CMOS converter level-shifters suffer severe power penalties since their output stages have wide variation over PVT. Non-differential applications can have even worse jitter performance as they essentially rely on a single inverter stage for the level-shifting/conversion. Other example CML-to-CMOS converter level-shifters include diode connected MOS devices as loads in a pre-driver stage, which also degrades the jitter performance. FIG.2represents an example bi-directional eUSB repeater200. This example repeater follows the first configuration102example inFIG.1A, but in another example embodiment could follow the second configuration104inFIG.1B. The repeater200includes a transmit datapath202, a receive datapath204, an eUSB2 port206, a datapath switch matrix208, a USB2 port210, and a controller212. The repeater200is configured to be coupled to differential eUSB signals (eD+/eD−)214in a low voltage domain, and differential USB signals (D+/D−)216in a high voltage domain. Power supplies VDD 1.8V, VDD 3.3 V and a mode control218signal are also shown. The transmit and receive datapaths202,204are substantially similar and include: a slicer220, a level-shifter222, a datapath switch224, and a line-driver226. The datapaths202,204in various embodiments also include (not shown) a continuous time linear equalizer (CTLE), a feed forward equalizer (FFE) for removing most intersymbol interference (ISI), and input and termination resistors (RT). RT can be different for different standards (e.g. for a USB2 to an eUSB repeater, input RT=40Ω, output RT=45Ω). The slicer220makes a (non-linear) hard decision and makes the data signal either high or low, which avoids propagation of amplitude noise and allows regeneration of pre-emphasis, but turns residual intersymbol interference (ISI) into timing jitter. Since the data signal after the slicer220is in either the lower voltage domain (e.g. 1.8V) or the high voltage domain (e.g. 3V), depending upon the datapath202,204, the level-shifter222either steps-up or steps-down the signal voltage as required before the line driver226. FIG.3represents a first example level-shifter300. The level-shifter300in some example embodiments can be used as the level-shifter222in either datapath202,204inFIG.2. The level-shifter300includes: a pre-driver stage302, a first output stage304, a second output stage306, differential input308, output buffer310, differential output312, supply voltages (VDD)314, and ground potentials316. The pre-driver stage302includes MP0, MP1, MP2, M0, R1 and R2. The pre-driver stage302receives the differential input308and a bias signal318and generates differential pre-driver outputs320,322. The pre-driver stage302has two resistor loads and a diode connected MOS device (M0) configured to set a minimum single-ended output voltage of the pre-driver stage302and a common mode voltage for the output stages304,306. The first output stage304includes M1, M2, R3, MP3, and MP4. The first output stage304receives the differential pre-driver outputs320,322and generates a single-ended first stage output324. The second output stage306includes M3, M4, R4, MP5, and MP6. The second output stage306also receives the differential pre-driver outputs320,322but then generates a single-ended second stage output326.
The output buffer310receives the first stage output324and the second stage output326and generates the differential output312. The output buffer310inverters are sized for driving additional circuits coupled to the differential output312. The output buffer310can be removed if such additional circuits would not heavily load the level-shifter300. Various example embodiments of the level shifter300can be tuned to handle many PVT (Process, Voltage, Temperature) dependent current variations, as well as tuned for specific power consumption and overall jitter performance. In some example embodiments, to minimize current spread over PVT, resistors are set as R1=R2=2*R3=2*R4 and MOS device widths are set as 0.5*M0=M1=M2=M3=M4. Using such ratio settings, the output stages304,306each individually consume about the same amount of current as the pre-driver stage302does, and current consumption of the output stages304,306will not vary significantly over PVT, as the current consumption of stage302is a scaled version of a bias current. Current consumption can be controlled by adjusting ratios between the MOS devices and resistors in the pre-driver stage302, the first output stage304, and the second output stage306. Current consumption of the pre-driver stage302is typically an integer multiple of a bias current that, together with a bias generation circuit, sets the bias signal318of the MP0 transistor. In various example embodiments, different resistance and width ratios can be set so that the output stages304,306consume more or less current than the pre-driver stage302. MP0, MP1, MP2, R1, and R2 in the pre-driver stage302in various example embodiments work as a normal current-mode logic (CML) circuit, which preserves the low jitter feature. The pre-driver stage302avoids diode connected MOS device loads to achieve good jitter performance. In some example embodiments, the level-shifter300tuning can achieve a few pico-seconds jitter in a USB2/eUSB communication system having a 480 Mbps data rate. The controller212(seeFIG.2) in some example embodiments is configured to place the level-shifter300into either an off-mode, an idle-mode, or an active-mode. In the off-mode, the bias signal318at MP0 is pulled high to completely turn off the pre-driver stage302and the output stages304,306are turned off using disable/enable signals (such as shown inFIGS.4and5). In the idle-mode, the pre-driver stage302is turned on by setting the bias signal318at MP0 to provide a bias current. In the idle-mode the output stages304,306are still turned off. In the idle-mode, the level-shifter300is partially on so as to be ready for a fast transition from the idle-mode to the active-mode. In the active-mode, the pre-driver stage302is turned on and the output stages304,306are turned on using the disable/enable signals. While the level-shifter300is shown as implemented with particular PMOS and NMOS transistors, in other example embodiments, implementations can be easily revised by exchanging the NMOS and PMOS transistors. FIG.4represents a second example level-shifter400. The second example level-shifter400is substantially similar to the example level-shifter300except that the second example level-shifter400further includes an enable/disable circuit402. The enable/disable circuit402includes MP7, MP8, MP9 and MP10 and is controlled by an enable/disable signal404.
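The three operating modes of the level-shifter 300 described above (off, idle, active) differ only in which stages are enabled. The sketch below restates that mapping in code form; it is only an illustrative summary, and the function and mode names are placeholders rather than part of the disclosed controller 212 interface.

def level_shifter_mode(mode):
    # Returns (pre_driver_on, output_stages_on) per the description above.
    if mode == "off":
        return (False, False)   # bias pulled high, output stages disabled
    if mode == "idle":
        return (True, False)    # pre-driver biased, ready for a fast wake-up
    if mode == "active":
        return (True, True)     # full signal path enabled
    raise ValueError("unknown mode: " + mode)

for m in ("off", "idle", "active"):
    print(m, level_shifter_mode(m))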
In some example embodiments, when the level-shifter300is placed in the active mode, hot carrier degradation can be a problem for MP4 and MP6 as these MOS devices would then see none-zero current when their source-gate voltage is close to half the supply voltage (VDD)314and their source-drain voltage is close to the supply voltage (VDD)314. In some example embodiments, since M1, M2, M3 and M4 may see significantly different current in an idle mode due to mismatches, switch devices MP7, MP8, MP9, and MP10 are added to cut off current from the supply voltage (VDD)314. MP7, MP8, MP9, and MP10 also significantly reduce the source-drain voltages for MP4 and MP6 in the active-mode. Hot carrier degradation, as a result, can be substantially improved. FIG.5represents a third example level-shifter500. The third example level-shifter500is substantially similar to the example level-shifter300except that the third example level-shifter500further includes a first enable/disable circuit502responsive to pull-up signals504and a second enable/disable circuit506responsive to pull-down signals508. The third example level-shifter500is most applicable when there is not significant hot carrier degradation of the transistors in the output stages304,306. If MP4 and MP6 do not have significant hot carrier degradation then the pull-up signals504and pull-down signals508can be used to cut off current paths to M0-M4 to avoid asymmetric stress and minimize current consumption in the idle mode. Note that the level-shifters300,400,500in various example embodiments can be substituted for theFIG.2level-shifter222in either datapath202,204. Various instructions and/or operational steps discussed in the above Figures can be executed in any order, unless a specific order is explicitly stated. Also, those skilled in the art will recognize that while some example sets of instructions/steps have been discussed, the material in this specification can be combined in a variety of ways to yield other examples as well, and are to be understood within a context provided by this detailed description. In some example embodiments these instructions/steps are implemented as functional and software instructions. In other embodiments, the instructions can be implemented either using logic gates, application specific chips, firmware, as well as other hardware forms. When the instructions are embodied as a set of executable instructions in a non-transitory computer-readable or computer-usable media which are effected on a computer or machine programmed with and controlled by said executable instructions. Said instructions are loaded for execution on a processor (such as one or more CPUs). Said processor includes microprocessors, microcontrollers, processor modules or subsystems (including one or more microprocessors or microcontrollers), or other control or computing devices. A processor can refer to a single component or to plural components. Said computer-readable or computer-usable storage medium or media is (are) considered to be part of an article (or article of manufacture). An article or article of manufacture can refer to any manufactured single component or multiple components. The non-transitory machine or computer-usable media or mediums as defined herein excludes signals, but such media or mediums may be capable of receiving and processing information from signals and/or other transitory mediums. 
It will be readily understood that the components of the embodiments as generally described herein and illustrated in the appended figures could be arranged and designed in a wide variety of different configurations. Thus, the detailed description of various embodiments, as represented in the figures, is not intended to limit the scope of the present disclosure, but is merely representative of various embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by this detailed description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope. Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussions of the features and advantages, and similar language, throughout this specification may, but do not necessarily, refer to the same embodiment. Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, in light of the description herein, that the invention can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention. Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment of the present invention. Thus, the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment. | 16,471 |
11863182 | The features and advantages of embodiments will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number. DETAILED DESCRIPTION I. Introduction The present specification and accompanying drawings disclose one or more embodiments that incorporate the features of the present invention. The scope of the present invention is not limited to the disclosed embodiments. The disclosed embodiments merely exemplify the present invention, and modified versions of the disclosed embodiments are also encompassed by the present invention. Embodiments of the present invention are defined by the claims appended hereto. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. In the discussion, unless otherwise stated, adjectives such as “substantially” and “about” modifying a condition or relationship characteristic of a feature or features of an embodiment of the disclosure, are understood to mean that the condition or characteristic is defined to within tolerances that are acceptable for operation of the embodiment for an application for which it is intended. Numerous exemplary embodiments are described as follows. It is noted that any section/subsection headings provided herein are not intended to be limiting. Embodiments are described throughout this document, and any type of embodiment may be included under any section/subsection. Furthermore, embodiments disclosed in any section/subsection may be combined with any other embodiments described in the same section/subsection and/or a different section/subsection in any manner. II. Example Embodiments A state machine is a device that may be implemented in electrical circuitry and/or program code executing in a processor, which at any particular time can be in one of a set number of stable conditions depending on its previous condition and on the present values of its inputs. The performance of state machines is typically related to the dependence of each input on the previous state. For table-based state machines, this performance at least in part depends on a table lookup for each iteration. This dependence makes it difficult to parallelize across inputs. Embodiments disclosed herein provide a high-performance table-based state machine that enables extensive pipelining to hide latencies, and permit high clock frequency, and thus overcomes the deficiencies described above. Such embodiments may be implemented in various ways. In this section, we present an overview of a table-based state machine as depicted inFIG.1. 
FIG.1depicts an example table-based state machine100, according to an embodiment. Table-based state machine100includes a state table circuit102. State table circuit102may be implemented in hardware (e.g., an electrical circuit including transistors, logic gates, electrical components, etc.) and is configured to receive/accept an input104and a current state106(also referred to herein as “current operating state”) as inputs. State table circuit102is configured to thereafter generate an output108and a new state110(also referred to herein as “new operating state”). A table-based state machine includes hardware that implements a programmable state machine by encoding state transitions and outputs through a lookup table. For example, consider the following example state transition table, State Transition Table A:

TABLE A
State Transition

Input 104    Current State 106    Output 108    New state 110
0            0                    0             0
0            1                    1             1
1            0                    1             1
1            1                    0             0

State table circuit102may be implemented in hardware in various different ways, as known in the art to persons skilled in the relevant art(s), to provide a lookup-table mechanism whereby upon being provided with input104and current state106as inputs, state table circuit102generates output108and new state110as outputs with the values of such outputs dictated by its state transition table (e.g., State Transition Table A). For example, state table circuit102may be implemented in the form of transistors, logic gates, an Application Specific Integrated Circuit (ASIC), a configurable circuit such as in a Field Programmable Gate Array (FPGA), a Complex Programmable Logic Device (CPLD), and/or fabricated directly on silicon or other semiconductor materials using photolithographic techniques as known in the relevant art(s). As such, state table circuit102may be implemented using microprogramming techniques as also known in the relevant art(s), and one or more state transition tables associated with state table circuit102may likewise include microprogramming instructions, microcodes, one or more addresses for in-memory microcode subroutines, and the like. Thus, state table circuit102is not merely an arrangement of data, but instead exposes lookup-table functionality whereby outputs and new states may be retrieved from a corresponding state transition table stored in memory. For example, and with reference to State Transition Table A shown herein above, when input104is 1 and current state106is 1 and are provided to state table circuit102as inputs, state table circuit102will provide 0 for output108, and 0 for new state110as outputs. The performance of table-based state machine100is limited in at least two aspects. First, each input is processed sequentially because the new state depends on the previous input. Second, the processing time of each iteration of the state machine (i.e., the “iteration interval”) is bottlenecked by the relatively slow memory access required to look up an entry in the state table circuit, and to output new state110. That is, no matter how fast the next input104is provided to the state machine, generation of the next output108is blocked until such time that new state110is generated and fed back to become current state106. Moreover, an increase in the time it takes to calculate output108likewise need not affect iteration time because calculating output108may be performed in parallel with beginning to process the next input104.
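For illustration only, the lookup behaviour of state table circuit102can be sketched in software using State Transition Table A. This is a minimal sketch, not the hardware circuit; the dict representation and function name are assumptions made for the example.

```python
# Software sketch of the table-based state machine of FIG. 1 using Table A.
TABLE_A = {
    # (input, current_state): (output, new_state)
    (0, 0): (0, 0),
    (0, 1): (1, 1),
    (1, 0): (1, 1),
    (1, 1): (0, 0),
}

def step(table, inp, state):
    """One iteration: look up the output and new state from the input and current state."""
    return table[(inp, state)]

# The worked example from the text: input = 1, current state = 1 -> output 0, new state 0.
assert step(TABLE_A, 1, 1) == (0, 0)

# Sequential operation: each iteration must wait for the new state of the previous one.
state = 0
for inp in [1, 1, 0]:
    out, state = step(TABLE_A, inp, state)
```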
From these facts, it may be appreciated that the iteration interval of table-based state machine100is governed by the amount of time required to look up and output new state110. Based on these observations, we now turn to description of embodiments that are configured to reduce the iteration interval by removing any dependence on the current state or previous inputs from the state table lookup. FIG.2depicts an example table-based state machine200with a reduced iteration interval, according to an embodiment. Table-based state machine200includes state table circuits202A-202N and multiplexer (“MUX”)204. Each of state table circuits202A-202N corresponds to a sub-table of a complete state transition table (such as, e.g., State Transition Table A shown above), wherein each of the sub-tables only includes information for one possible current state value. For example, state table circuit202A includes state transition information that corresponds only to a current state of 0, state table circuit202B includes state transition information that corresponds only to a current state of 1, and so forth. When a new value for input104is received and provided to each of state table circuits202A-202N, each such table looks up and provides the state and output that correspond to input104as input to MUX204. Note, for the sake of clarity inFIG.2, the state and output from each of state table circuits202A-202N is depicted as signal lines denoted as Outputs206A-206N. It should be understood, however, that Outputs206A-206N each include two independent signals (i.e., the state and output looked up in the table based on the fixed state corresponding to a particular state table circuit202A-202N, and input104). Moreover, although each of state table circuits202A-202N is depicted as an independent circuit, it should be understood that embodiments are not so limited. For example, state table circuits202A-202N may comprise one static random-access memory (“SRAM”) that contains all the state transition tables for each of state table circuits202A-202N. As discussed above, MUX204receives outputs206A-206N from the state table circuits202A-202N, respectively, and is further configured to receive current state106as input. MUX204(or multiplexor204) may also be referred to as a data selector, and is a device formed of electrical circuits (e.g., transistors, logic gates (such as AND gates, NAND gates, OR gates, XOR gates), etc.) that selects between several analog or digital input signals and forwards the selected input to a single output line. Moreover, although MUX204is depicted as an N-input MUX, due to each of outputs206A-206N including two signals, MUX204may also be configured as two independent N-input MUXes. One such MUX is used for selecting the correct output206A-output206N to route to output108based on current state106, and the other MUX is used for selecting the correct input to route to new state110. It should be noted that current state106will be updated to reflect new state110after every iteration. Table-based state machine200as depicted inFIG.2removes the state table lookup/read from the dependence path inasmuch as each table lookup depends only on the current input, and the memory reads corresponding to the state table reads may be pipelined. Accordingly, the iteration interval for table-based state machine200depends only on the time it takes for MUX204to route the correct output and state from the state tables to output108and new state110, respectively.
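A minimal software sketch of the FIG.2arrangement follows: one sub-table per possible current state is looked up using the input alone, and the current state then only selects among the pre-computed results, mimicking MUX204. The names are assumptions for the example, and the sub-tables reuse State Transition Table A.

```python
# Sketch of table-based state machine 200: per-state sub-tables plus a final selection.
SUB_TABLES = {
    0: {0: (0, 0), 1: (1, 1)},  # sub-table for current state 0: input -> (output, new_state)
    1: {0: (1, 1), 1: (0, 0)},  # sub-table for current state 1
}

def step_mux(inp, current_state):
    # All sub-table lookups depend only on the input, so they can be pipelined.
    candidates = {s: tbl[inp] for s, tbl in SUB_TABLES.items()}
    # The only state-dependent work is the final selection (the MUX).
    return candidates[current_state]

out, new_state = step_mux(1, 1)
assert (out, new_state) == (0, 0)   # matches Table A
```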
In embodiments, and depending on the size of the state machine (i.e., the number of possible states of the machine), operation of MUX204may be substantially faster than the table reads which, when combined with pipelining of such reads, enables embodiments to be switched at very high frequencies. Embodiments may implement further performance enhancement techniques as described herein below. As discussed above, table-based state machine100suffers from a further drawback that each input is processed sequentially because the new state depends on the previous input. That is, because each input depends on the new state produced by the previous input, there is a dependence that prevents full parallelization. The embodiments described above reduce the dependence on knowing the current state for the most expensive part of the processing (i.e., the state table lookups), but do not necessarily achieve such state independence when used as a parallel input state machine. Consider, for example,FIG.3which depicts a naïve implementation of a state machine300configured to process inputs in parallel, according to an embodiment. State machine300includes four instances of table-based state machine200, which each include respective instances of state table circuits202A-202N. Each instance of table-based state machine200receives a corresponding one of Inputs302-308(denoted as Input[0]-Input[3], respectively) and generates a corresponding one of Outputs310-316(denoted as Output[0]-Output[3], respectively) and one of New States318-324. Furthermore, each of New States318-322is fed forward to MUX204of the next instance of table-based state machine200, with New State324ultimately being fed back and becoming Current State326which controls MUX204of the first instance of table-based state machine200. It is apparent inFIG.3that the naïve implementation of state machine300includes a state dependence chain as denoted by the bold dashed lines of New States318-322that are fed forward, as well as New State324that is fed back. That is, each of Output[1]312, Output[2]314and Output[3]316is not generated until the New State of the prior stage is computed (e.g., each of New State318-322, respectively) because such New States drive the corresponding MUXes204at each stage. Said another way, each of Outputs310-316does not become valid until after the delay imposed by its corresponding MUX204, and thus obtaining the four outputs that comprise Outputs310-316takes a minimum of four delays of MUX204. Unfortunately, such MUX operations cannot be pipelined because of this dependence. In embodiments, this dependence is removed to enable full pipelining that produces multiple outputs every clock. For example, considerFIG.4which depicts an example state machine400configured to process inputs in parallel, and that improves state machine300ofFIG.3by eliminating its state dependence, according to an embodiment. State machine400includes a table lookup stage428, a state propagation stage430, and an output selection stage432. Table lookup stage428includes state table circuits414-420. State propagation stage430includes state propagators408-412, and output selection stage432includes MUXes420-426. Each of state table circuits414-420corresponds to state table circuits202A-202N of table-based state machine200as depicted inFIG.3. Each of MUX420-426likewise corresponds to an instance of MUX204of table-based state machine200as depicted inFIG.3.
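The serial dependence in the naïve arrangement of FIG.3can be sketched as follows. This is an illustrative sketch only; the function name is an assumption, and `step` stands for any single-input iteration such as the step_mux sketch above.

```python
# Sketch of state machine 300: four stages run on four inputs, but each stage's
# final selection still waits for the previous stage's new state, so the MUX
# selections form a serial chain rather than completing on one clock.
def naive_parallel(step, inputs, current_state):
    outputs = []
    state = current_state
    for inp in inputs:            # dependence chain: stage i needs the state from stage i-1
        out, state = step(inp, state)
        outputs.append(out)
    return outputs, state         # the last new state is fed back as the current state

# Example usage (with the step_mux sketch above):
# outs, final_state = naive_parallel(step_mux, [1, 0, 1, 1], 0)
```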
Other structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the following discussion regarding state machine400ofFIG.4. As shown inFIG.4, state table circuits414-420each receive a corresponding one of inputs [0]-[3]302-308, and generate corresponding output states. State table circuit414generates first output states402received by state propagator408and MUX420. Each of state table circuits416-420generates respective output states received by a corresponding one of state propagators408-412. State propagator408generates output states404received by MUX422and state propagator410. State propagator410generates output states406received by MUX424and state propagator412. State propagator412generates output states434received by MUX426. MUX420receives output states402and current state326and generates output[0]310and new state318. MUX422receives output states404and current state326and generates output[1]312and new state320. MUX424receives output states406and current state326and generates output[2]314and new state322. MUX426receives output states434and current state326and generates output[3]316and new state324. State machine400removes the state dependence inherent to state machine300as depicted inFIG.3by using the state transitions output from each set of state table circuits to determine the state transition table for later inputs. This process of determining later state transitions may be described as state tracing that may be understood by way of the following example. Suppose input[0]302and input[1]304equal arbitrary values x and y, respectively, and that the state table lookups for those values and for each possible current state are shown as follows in Table B:

TABLE B

current state    input[0]    input[1]
0                1           2
1                3           3
2                2           0
3                0           1

Each table entry shown in Table B indicates the new state dictated by, for example, state table circuits414for the respective values of input[0]302and input[1]304for each possible current state. The idea behind state tracing is to determine what the current state corresponding to input[1]304will be (i.e., the new state generated in response to input[0]) given a particular current state corresponding to input[0]302. One may find the new state for input[1]304, if the current state corresponding to input[0] is 0, by tracing through Table B. For example, suppose that the current state corresponding to input[0] is 0. The table above shows that for the current input at input[0] and a current state of 0, the new state will be 1. Next, given a current state of 1 (i.e., the new state resulting from input[0]302and its current state of 0) corresponding to input[1]304, the table illustrates that the new state for input[1] is 3. The following table shows the tracing and the resulting state transition table for each possible current state given the particular values of input[0]302and input[1]304.

TABLE C
Traced State Transition

current state    input[0]    input[1]
0                1           0→1→3 = 3
1                3           1→3→1 = 1
2                2           2→2→0 = 0
3                0           3→0→2 = 2

Table C above illustrates, for example, that for the current values in input[0]302and input[1]304and a current state of 2, the appropriate state corresponding to input[1]304is 0. More detailed operation of state machine400and state propagator408will now be described with reference to the above described state propagation table. State machine400is very similar to state machine300, but differs in a few key aspects.
First, and with reference toFIGS.2and3and as described above, each of outputs206A-206N includes both the possible states, and the possible outputs that will subsequently be selected by the corresponding MUX204. For the sake of clarity, state machine400is depicted inFIG.4with only the states provided by a given set of state table circuits, and the outputs are omitted. It should be understood, however, that state table circuits414-420provide both states and outputs to MUXes420-426as described herein above with respect to table-based state machine200. Second, state machine400includes state propagators408-412, the operation of which will be described in further detail below. Third, each of MUXes420-426corresponds to an instance of MUX204depicted inFIG.3, and all are configured to be controlled by the *same* signal. That is, each of MUXes420-426generates the output for its respective input at the same time according to current state326. Thus, each of outputs310-316becomes valid at the same time (i.e., on the same clock cycle), and the aforementioned state dependency present in state machine300ofFIG.3is removed. Removal of the state dependency is accomplished through the use of state propagators408-412. Although there is a dependence chain through the state propagation logic (e.g., a series of logic gates, flip-flops, transistors, etc.), the logic itself is not dependent on anything other than the current state and inputs and may thereby be fully pipelined. For example, and as depicted in state machine400ofFIG.4, state table circuits414-420operate in parallel to simultaneously perform their respective table lookups in a pipeline stage denoted as table lookup stage428. Likewise, state propagation as described below is performed in the pipeline stage denoted as state propagation stage430. Finally, the final output and state that correspond to each input are selected simultaneously by respective ones of MUXes420-426in output selection stage432. Moreover, although the layout area and pipeline depth of state machine400are both increased, the additional time to fill the pipeline is negligible and thereafter state machine400may deliver multiple outputs per clock with a clock period that is less than the sum of the computation delays at each stage (i.e., the clock period need be only as long as required to complete the slowest stage). FIG.5depicts an excerpted portion500of state machine400ofFIG.4illustrating aspects of state propagator408operation, according to an embodiment. Portion500includes state table circuits414, state table circuits416, MUX420, MUX422and state propagator408. State table circuits414and416as shown inFIG.5each include corresponding instances of state table circuits202A-202D that correspond to each possible state (as further described above). Other structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the following discussion regarding portion500of state machine400as illustrated inFIG.5. FIG.5depicts state machine400in the context of the state tracing example, and Table B and Traced State Transition Table C corresponding thereto, as described above. In particular, for illustrative purposes, it is assumed that input[0]302and input[1]304have values x and y, respectively. Further, the states that are output from state table circuits414correspond to the values given input[0] 302=x for each of the possible states. Likewise, the states that are output from state table circuits416correspond to the values given input[1] 304=y for each of the possible states.
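The state tracing that produces Traced State Transition Table C from Table B is simply a composition of next-state maps, which can be sketched as follows. This is an illustrative sketch only; the variable and function names are assumptions.

```python
# State tracing: compose the per-state next-state columns of Table B so that,
# for every possible starting state, we obtain the state that applies to input[1].
NEXT_AFTER_INPUT0 = {0: 1, 1: 3, 2: 2, 3: 0}   # Table B, input[0] column
NEXT_AFTER_INPUT1 = {0: 2, 1: 3, 2: 0, 3: 1}   # Table B, input[1] column

def trace(first, second):
    """Compose two next-state maps: state -> second[first[state]]."""
    return {s: second[first[s]] for s in first}

TRACED = trace(NEXT_AFTER_INPUT0, NEXT_AFTER_INPUT1)
assert TRACED == {0: 3, 1: 1, 2: 0, 3: 2}       # matches Traced Table C
```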
State propagator408is configured to re-map the state outputs of a given set of state table circuits to account for the traced states as reflected in the Traced State Transition Table C shown above. Suppose, for example, that current state326=0. Traced State Transition Table C indicates that next state320is expected to have a value of 3. Thus, state propagator408is configured to route the correct Next State output from State table circuits416to input 0 of MUX422(input 0 is appropriate because current state326=0). More specifically, and per the example state table circuits described herein above in the context of this example, state table circuits416indicate the state transitions indicated in Table B above, which dictates that for a value of y on input[1]304, the next states are 2, 3, 0 and 1 for each of current states 0, 1, 2 and 3, respectively. Recalling that MUX422selects the signal line having a 3 on input 0 (because the current state is 0), state propagator408selects the next state corresponding to Current State=1 from state table circuits416because that next state equals 3. Continuing with this example, suppose that current state326is 3. Per the Traced State Transition Table C shown above, next state320selected by MUX422is 2 when current state326is 3. Accordingly, state propagator408selects the next state corresponding to current state=0 in state table circuits416because that next state equals 2. By a similar process, one may see that in this example state propagator408routes the outputs corresponding to current states 2 and 3 of state table circuits416to inputs 2 and 1, respectively, of MUX422. With these aspects in mind, it may be appreciated that next states402as output from state table circuits414provide the exact mapping described herein above. That is, in this example, next states402=[1, 3, 2, 0] may be used directly by state propagator408to route the correct next state to the correct input of MUX422because the values of next states402correspond one-to-one with inputs 0 through 3 on MUX422, and such values dictate which of outputs 0 to 3 of state table circuits416should be chosen. That is, the state values of Next States402may be used to index the Next State outputs of State table circuits416. For example, the value of the first element of next states402(i.e., the value 1) corresponds to input 0 of MUX422and dictates that the output of current state 1 of state table circuits416be routed to that input by state propagator408. By that same token, the second element of Next States402(i.e., the value 3) corresponds to input 1 of MUX422, which will receive the value from the state table circuit for current state=3 from state table circuits416, and so on. Put more simply, the values of next states402dictate to state propagator408which of the next states output from state table circuits416to route to each of inputs 0 through 3 of MUX422. State propagator408may accomplish such routing in a number of ways. For example, state propagator408may be implemented as a crossbar switch, or with MUXes as depicted inFIG.6as will now be described. FIG.6depicts a block diagram of an example system600for state propagator408, according to an embodiment. As shown inFIG.6, system600includes state propagator408and MUX422. Furthermore, state propagator408includes first through fourth MUXes602-608, each of which receives the new states from state table circuits416as the input to be multiplexed.
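The re-mapping just described amounts to using the next states produced for input[0] as indices into the per-state next states produced for input[1]. The following is a minimal sketch under that reading; the variable names are assumptions tied to the FIG.5example.

```python
# Sketch of state propagator 408's routing for the worked example above.
next_states_402 = [1, 3, 2, 0]   # from state table circuits 414 (input[0] = x), indexed by current state
next_states_416 = [2, 3, 0, 1]   # from state table circuits 416 (input[1] = y), indexed by current state

# Route the entry selected by next_states_402[i] to input i of the downstream MUX,
# so input i already carries the traced next state for starting state i.
propagated = [next_states_416[s] for s in next_states_402]
assert propagated == [3, 1, 0, 2]          # the traced input[1] column of Table C

current_state_326 = 0
new_state_320 = propagated[current_state_326]   # MUX 422 selects input 0 -> 3
assert new_state_320 == 3
```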
Each of MUXs602-608also receives one of the new states output from state table circuits414and is configured to switch the multiplexed input from state table circuits416to the MUX output. Such outputs are subsequently delivered to, for example, MUX422for final selection and output of next state320according to the current state as described herein above. Although the figures and corresponding description herein above illustrate and describe embodiments in terms of state outputs, one of ordinary skill will appreciate that the principles are equally applicable to routing and selection of appropriate output values according to the state table circuits and state propagation logic. For instance, although state propagator408and MUX422are shown inFIG.6and described above, such illustration and description are applicable to other state propagators and MUXs disclosed herein. In embodiments, state table circuits202A-202N ofFIG.2and state machine400ofFIG.4may be used in various ways to process inputs in parallel. For instance,FIG.7depicts flowchart700of a method for a state machine to process inputs in parallel, according to an embodiment. Flowchart700is described with continued reference toFIGS.2and4. However, other structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the following discussion regarding flowchart700. Flowchart700begins at step702. At step702, first state table circuit outputs are generated based on a first input and a corresponding predetermined state of a set of predetermined states, wherein the first state table circuit outputs correspond to a first set of state table circuits and collectively comprise a first set of state table circuit outputs. For example, and with continued reference to state machine400ofFIG.4, state table circuits414-420each comprise a set of state tables wherein each state table corresponds to a state within which state machine400may be operating. For example, state table circuits202A-202N ofFIG.2correspond to states of 0-N (i.e., the predetermined states), respectively, for state machine400. As described above, the output and next state of state machine400depend on the input and the current state of the state machine. By maintaining a lookup table for each of the N-possible states of state machine400(i.e., in the form of state table circuits202A-202N, instances of which are incorporated into each of state table circuits414-420), the table lookup is no longer dependent on the current state because all possible outputs and next state values are looked up simultaneously. The outputs of instances of state table circuits202A-202N comprise the “first state table circuit outputs[.]” Although state machine400ofFIG.4is depicted as generating states402-406and434, and as described above, it should be understood that state table circuits414-420output not only the possible next states, but likewise the possible outputs for subsequent selection by, for example, MUX420ofFIG.4. Flowchart700ofFIG.7continues at step704. In step704, second state table circuit outputs are generated based on a second input and a corresponding predetermined state of the set of predetermined states, wherein the second state table circuit outputs correspond to a second set of state table circuits and collectively comprise a second set of state table circuit outputs.
For example, and with continued reference to state machine400ofFIG.4, the second state table circuit outputs are, for example, those generated by any of state table circuits416,418or420. For example, suppose that the second input comprises input[1]304. In such an instance, second state table circuit outputs are those outputs generated by state table circuits416. Flowchart700ofFIG.7continues at step706. In step706, a first state machine output is selected from among the first set of state table circuit outputs based on a current state of the state machine. For example, and with continued reference to state machine400ofFIG.4, MUX420receives the possible next states and possible outputs from state table circuits414, and selects the appropriate state and output based on current state326, wherein the first state machine output corresponds to output[0]310. Flowchart700ofFIG.7continues at step708. In step708, respective outputs of the second set of state table circuit outputs are selected to route as a set of state propagator outputs based on the first set of state table circuit outputs. For example, and with continued reference to state machine400ofFIG.4, state propagator408is configured to receive the outputs of state table circuits416, and to select the appropriate set of state propagator outputs to route to MUX422based on states402. For example, and with reference toFIG.6, state propagator408may comprise MUXes602-608ofFIG.6, each MUX configured to receive a respective one of possible states/outputs from state table circuits414, and thereafter use such to select the appropriate state of New States[0-3] from state table circuits416to route to MUX422, all as depicted inFIG.6. This operation of state propagator408serves to perform the state tracing operation described in greater detail herein above, thereby removing any dependence state machine400would otherwise have on the current state of state machine400. Flowchart700ofFIG.7concludes at step710. In step710, a second state machine output is selected from among the set of state propagator outputs based on the current state. For example, and with continued reference to state machine400ofFIG.4, the outputs of state propagator408that were selected therein from among the outputs of state table circuits416are received by MUX422, and thereafter the appropriate one of the respective outputs and new state corresponding to that input is selected by MUX422as dictated by the selection signal provided thereto (i.e., current state326of state machine400which comprises the fed-back new state324as output by MUX426). In the foregoing discussion of steps702-710of flowchart700, it should be understood that at times, such steps may be performed in a different order or even contemporaneously with other steps. For example, the selection of step706may be performed after the selections of steps708and/or710, or may be performed at least partially in parallel. It should likewise be understood that although flowchart700describes a method of operating state machine400in a manner that processes only two inputs (i.e., the first and second inputs) in parallel, it may be appreciated that the method described in flowchart700ofFIG.7may be extended to process any number of inputs. For example, and as shown inFIG.4, state machine400may be configured to process four inputs (i.e., input[0]-input[3]302-308) in parallel. Other operational embodiments will be apparent to persons skilled in the relevant art(s).
Note also that the foregoing general description of the operation of state machine400is provided for illustration only, and embodiments of state machine400may comprise different hardware and/or software, and may operate in manners different than described above. One may likewise appreciate that the maximum clock frequency is limited by how fast the MUXes may operate, which in turn is limited by the size of the MUX. As described herein above, the MUX size is dictated one-for-one by the number of states in the machine. However, embodiments herein may employ further optimization techniques to keep the size of the MUXes manageable for state machines with a large possible number of states. More specifically, embodiments may use sparsified state transition tables where the number of unique transitions for a given input is small. That is, embodiments may store state transitions in a sparse manner thereby limiting the size of MUX required. Consider, for example, the following partial state transition table, Non-sparse State Transition Table D:

TABLE D
Non-sparse State Transition

Input    Current State    Next State
A        0                0
A        1                0
A        2                0
A        3                1
B        0                1
B        1                1
B        2                2
B        3                1

In the case of Non-sparse State Transition Table D, while there are 4 possible current states, there are only two unique state transitions. We can simplify this table to a Sparse State Transition Table E:

TABLE E
Sparse State Transition

Input    Current State    Next State
A        3                1
A        Default          0
B        2                2
B        Default          1

Sparse State Transition Table E shown above multiplexes between two possible next states for the shown inputs. In this example, the overall reduction is small compared to the original shown above. However, it should be understood that these tables are merely exemplary, and the reduction may be more significant for larger tables. In this example, the output of a state table circuit such as, for example, state table circuits414as depicted inFIG.4is now pairs of (current state, next state). In order to multiplex among the next states, the actual current state has to be compared against these possible current states to determine which next state is to be used. If it matches none of the specified current states, then the default transition is used. Where multiple inputs are being processed in parallel, such as with state machine400as depicted inFIG.4, it is sometimes possible to further sparsify the number of valid state transitions because for certain inputs considered together, only certain states may be possible. For example, using the Non-sparse State Transition Table D shown above, consider an input of A followed by B. The input of A can only result in next states of 1 or 0. Thus, the next state for the input of B can only be 1. In the above examples, we have assumed that all possible input states for an input are valid. However, this is not always the case. For some state machines, some input and current state combinations are not valid. For example, consider this new transition table, State Transition Table with Invalid Inputs E:

State Transition Table with Invalid Inputs E

Input    Current State    Next State
A        0                Error
A        1                0
A        2                Error
A        3                1
B        0                1
B        1                Error
B        2                Error
B        3                2

State Transition Table with Invalid Inputs E may be similarly sparsified yielding the following table, Sparsified State Transition Table with Invalid Inputs F:

Sparsified State Transition Table with Invalid Inputs F

Input    Current State    Next State
A        1                0
A        3                1
B        0                1
B        3                2

In the case of Sparsified State Transition Table with Invalid Inputs F, one may also use dynamic information to reduce the number of possible starting states to a sequence of inputs.
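A sparsified lookup in the spirit of Sparse State Transition Table E can be sketched as follows, and the continuing discussion of the invalid-input case resumes after the sketch. The data structure and names are illustrative assumptions.

```python
# Sketch of a sparse state transition lookup: only the non-default transitions
# are stored per input, together with a default next state.
SPARSE = {
    "A": {"exceptions": {3: 1}, "default": 0},
    "B": {"exceptions": {2: 2}, "default": 1},
}

def next_state_sparse(inp, current_state):
    entry = SPARSE[inp]
    # Compare the actual current state against the stored exception states;
    # fall back to the default transition when none matches.
    return entry["exceptions"].get(current_state, entry["default"])

assert next_state_sparse("A", 3) == 1
assert next_state_sparse("A", 0) == 0   # default transition
assert next_state_sparse("B", 2) == 2
```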
Again considering inputs of A followed by B, the only possible starting states for an input of B are 0 and 3. If we look at the possible next states for A, the only valid one is 0, which corresponds to a current state of 1. In such a situation, the state transition for input A corresponding to state transition (3,1) may be eliminated from the state transition table. At runtime, embodiments may use the above two techniques to reduce the number of sparse transitions. Further improvement is possible by profiling the state machine while operating a typical workload to determine the choice of hardware MUX size based on the average or typical number of transitions after sparsification. Optimizing MUX size in this manner permits further increase in the clock frequency due to reduced delay of the MUX. Of course, a slower fallback path will be needed to handle any situation where the number of transitions exceeds the optimized MUX size. III. Example Computer System Implementation Each of state table circuit102, state table circuits202A-202N, MUX204, state table circuits414-420, state propagators408-412, MUXes420-426and/or MUXes602-608, and flowchart700may be implemented in hardware, or hardware combined with software and/or firmware. For example, state table circuit102, state table circuits202A-202N, MUX204, state table circuits414-420, state propagators408-412, MUXes420-426and/or MUXes602-608, and flowchart700may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer readable storage medium. Alternatively, state table circuit102, state table circuits202A-202N, MUX204, state table circuits414-420, state propagators408-412, MUXes420-426and/or MUXes602-608, and flowchart700may be implemented as hardware logic/electrical circuitry. For instance, in an embodiment, one or more, in any combination, of state table circuit102, state table circuits202A-202N, MUX204, state table circuits414-420, state propagators408-412, MUXes420-426and/or MUXes602-608, and flowchart700may be implemented together in a SoC. The SoC may include an integrated circuit chip that includes one or more of a processor (e.g., a central processing unit (CPU), microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or further circuits, and may optionally execute received program code and/or include embedded firmware to perform functions. FIG.8depicts an exemplary implementation of a computing device800in which embodiments may be implemented. For example, in an embodiment, one or more, in any combination, of state table circuit102, state table circuits202A-202N, MUX204, state table circuits414-420, state propagators408-412, MUXes420-426and/or MUXes602-608, and flowchart700may be implemented in one or more computing devices similar to computing device800in stationary or mobile computer embodiments, including one or more features of computing device800and/or alternative features. The description of computing device800provided herein is provided for purposes of illustration and is not intended to be limiting. Embodiments may be implemented in further types of computer systems, as would be known to persons skilled in the relevant art(s). As shown inFIG.8, computing device800includes one or more processors, referred to as processor circuit802, a system memory804, and a bus806that couples various system components including system memory804to processor circuit802.
Processor circuit802is an electrical and/or optical circuit implemented in one or more physical hardware electrical circuit device elements and/or integrated circuit devices (semiconductor material chips or dies) as a central processing unit (CPU), a microcontroller, a microprocessor, and/or other physical hardware processor circuit. Processor circuit802may execute program code stored in a computer readable medium, such as program code of operating system830, application programs832, other programs834, etc. Bus806represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. System memory804includes read only memory (ROM)808and random access memory (RAM)810. A basic input/output system812(BIOS) is stored in ROM808. Computing device800also has one or more of the following drives: a hard disk drive814for reading from and writing to a hard disk, a magnetic disk drive816for reading from or writing to a removable magnetic disk818, and an optical disk drive820for reading from or writing to a removable optical disk822such as a CD ROM, DVD ROM, or other optical media. Hard disk drive814, magnetic disk drive816, and optical disk drive820are connected to bus806by a hard disk drive interface824, a magnetic disk drive interface826, and an optical drive interface828, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computer. Although a hard disk, a removable magnetic disk and a removable optical disk are described, other types of hardware-based computer-readable storage media can be used to store data, such as flash memory cards, digital video disks, RAMs, ROMs, and other hardware storage media. A number of program modules may be stored on the hard disk, magnetic disk, optical disk, ROM, or RAM. These programs include operating system830, one or more application programs832, other programs834, and program data836. Application programs832or other programs834may include, for example, computer program logic (e.g., computer program code or instructions) for implementing of state table circuit102, state table circuits202A-202N, MUX204, state table circuits414-420, state propagators408-412, MUXes420-426and/or MUXes602-608, and flowchart700and/or further embodiments described herein. A user may enter commands and information into the computing device800through input devices such as keyboard838and pointing device840. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, a touch screen and/or touch pad, a voice recognition system to receive voice input, a gesture recognition system to receive gesture input, or the like. These and other input devices are often connected to processor circuit802through a serial port interface842that is coupled to bus806, but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB). A display screen844is also connected to bus806via an interface, such as a video adapter846. Display screen844may be external to, or incorporated in computing device800. Display screen844may display information, as well as being a user interface for receiving user commands and/or other information (e.g., by touch, finger gestures, virtual keyboard, etc.). 
In addition to display screen844, computing device800may include other peripheral output devices (not shown) such as speakers and printers. Computing device800is connected to a network848(e.g., the Internet) through an adaptor or network interface850, a modem852, or other means for establishing communications over the network. Modem852, which may be internal or external, may be connected to bus806via serial port interface842, as shown inFIG.8, or may be connected to bus806using another interface type, including a parallel interface. As used herein, the terms “computer program medium,” “computer-readable medium,” and “computer-readable storage medium” are used to refer to physical hardware media such as the hard disk associated with hard disk drive814, removable magnetic disk818, removable optical disk822, other physical hardware media such as RAMs, ROMs, flash memory cards, digital video disks, zip disks, MEMs, nanotechnology-based storage devices, and further types of physical/tangible hardware storage media. Such computer-readable storage media are distinguished from and non-overlapping with communication media (do not include communication media). Communication media embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wireless media such as acoustic, RF, infrared and other wireless media, as well as wired media. Embodiments are also directed to such communication media that are separate and non-overlapping with embodiments directed to computer-readable storage media. As noted above, computer programs and modules (including application programs832and other programs834) may be stored on the hard disk, magnetic disk, optical disk, ROM, RAM, or other hardware storage medium. Such computer programs may also be received via network interface850, serial port interface842, or any other interface type. Such computer programs, when executed or loaded by an application, enable computing device800to implement features of embodiments described herein. Accordingly, such computer programs represent controllers of the computing device800. Embodiments are also directed to computer program products comprising computer code or instructions stored on any computer-readable medium. Such computer program products include hard disk drives, optical disk drives, memory device packages, portable memory sticks, memory cards, and other types of physical storage hardware. IV. Additional Example Embodiments A state machine configured to process multiple inputs of a stream of inputs in parallel is provided herein. 
The state machine comprises: a first set of state table circuits, each state table circuit of the first set corresponding to a predetermined state of a set of predetermined states and configured to generate first state table circuit outputs based on a first input and the corresponding predetermined state, wherein the first state table circuit outputs corresponding to each state table circuit of the first set of state table circuits collectively comprise a first set of state table circuit outputs; a second set of state table circuits, each state table circuit of the second set corresponding to a predetermined state of the set of predetermined states and configured to generate second state table circuit outputs based on a second input and the corresponding predetermined state, wherein the second state table circuit outputs corresponding to each state table circuit of the second set of state table circuits comprise a second set of state table circuit outputs; a first output multiplexer (MUX) configured to receive the first set of state table circuit outputs and a current state of the state machine, and to select a first state machine output from among the first set of state table circuit outputs based on the current state; a state propagator configured to receive the first and second sets of state table circuit outputs and to select which of respective outputs of the second set of state table circuit outputs to route to respective outputs of the state propagator based on the first set of state table circuit outputs, said respective outputs comprising a set of state propagator outputs; and a second output MUX configured to receive the set of state propagator outputs and the current state of the state machine, and to select a second state machine output from among the set of state propagator outputs based on the current state. In an embodiment of the foregoing state machine, each state table circuit output of the first and second sets of state table circuit outputs comprises an output value and a state value, and wherein the first and second state machine outputs comprise output values. In an embodiment of the foregoing state machine, the state propagator is further configured to select which of respective ones of the second set of state table circuit outputs to route to respective outputs of the state propagator based on the state values corresponding to each state table circuit output of the first set of state table circuit outputs. In an embodiment of the foregoing state machine, the first output MUX is further configured to select the first state table output from among the output values that correspond to each state table circuit output of the first set of state table circuit outputs. In an embodiment of the foregoing state machine, the state machine further comprises: a state MUX configured to receive the current state of the state machine and the state values corresponding to the second set of state table circuit outputs, and to select a next state of the state machine. 
In an embodiment of the foregoing state machine, the state machine further comprises: one or more additional input stages, each of the one or more additional input stages configured to receive one or more additional inputs, respectively, the one or more additional inputs being temporally between the first and second inputs, the one or more additional input stages including: an additional set of state table circuits, each state table circuit corresponding to a predetermined state of the set of predetermined states and is configured to generate a state table circuit output based on a respective one of the one or more additional inputs and the corresponding predetermined state, wherein the state table circuit outputs corresponding to each state table circuit of the additional set of state table circuits collectively comprise an additional set of state table circuit outputs; an additional state propagator configured to receive the additional set of state table circuit outputs and to route each of the outputs of the additional set of state table circuit outputs to respective outputs of the additional state propagator based on the state values corresponding to the state table circuit outputs received from a state table circuit that corresponds to a temporally next input of the stream of inputs; and an additional output MUX configured to receive the additional set of state table circuit outputs and the current state of the state machine, and to select an additional state table output from among the additional set of state table circuit outputs based on the current state of the state machine. In an embodiment of the foregoing state machine, each additional set of state table circuits, additional state propagator and additional output mux corresponding to one of the one or more additional input stages is operated in a pipeline such that each of the first, second and additional state table output are valid on the same clock cycle. A method for a state machine configured to process inputs in parallel is provided herein. The method comprising: generating first state table circuit outputs based on a first input and a corresponding predetermined state of a set of predetermined states, wherein the first state table circuit outputs correspond to a first set of state table circuits and collectively comprise a first set of state table circuit outputs; generating second state table circuit outputs based on a second input and a corresponding predetermined state of the set of predetermined states, wherein the second state table circuit outputs correspond to a second set of state table circuits and collectively comprise a second set of state table circuit outputs; selecting a first state machine output from among the first set of state table circuit outputs based on a current state of the state machine; selecting which of respective outputs of the second set of state table circuit outputs to route as a set of state propagator outputs based on the first set of state table circuit outputs; and selecting a second state machine output from among the set of state propagator outputs based on the current state. In an embodiment of the foregoing method, each state table circuit output of the first and second sets of state table circuit outputs comprises an output value and a state value, and wherein the first and second state machine outputs comprise output values. 
In an embodiment of the foregoing method, selecting the respective outputs of the second set of state table circuit outputs further comprises selecting the respective ones of the second set of state table circuit outputs to route to respective outputs of the state propagator based on the state values corresponding to each state table circuit output of the first set of state table circuit outputs. In an embodiment of the foregoing method, selecting the first state machine output further comprises selecting an output value from among the output values that correspond to each state table circuit output of the first set of state table circuit outputs. In an embodiment of the foregoing method, the method further comprises selecting a next state of the state machine from among the state values corresponding to the second set of state table circuit outputs based on the current state of the state machine. In an embodiment of the foregoing method, the method further comprises: generating one or more additional state machine outputs, each of the one or more additional state machine outputs corresponding to a respective of one or more additional inputs, the one or more additional inputs being temporally between the first and second inputs, said generating comprising for each of the one or more additional inputs: generating an additional set of state table circuit outputs based on the respective one of the one or more additional inputs and a corresponding predetermined state of a set of predetermined states; selecting which of respective outputs of the additional set of state table circuit outputs to route as an additional set of state propagator outputs based on the values of a set of state table circuit outputs that correspond to a temporally next input of the stream of inputs; and selecting the respective one of the one or more additional state machine outputs from among the additional set of state propagator outputs based on the current state. In an embodiment of the foregoing method, the method further comprises operating the state machine by pipelining the steps of generating an additional set of state table circuit outputs, selecting which of respective outputs of the additional set of state table circuit outputs and selecting the respective one of the one or more additional state machine outputs. A computer program product comprising a computer-readable memory device having computer program logic recorded thereon that when executed by at least one processor of a computing device causes the at least one processor to perform operations implementing a state machine configured to process inputs in parallel is provided herein. 
The operations comprise: generating first state table circuit outputs based on a first input and a corresponding predetermined state of a set of predetermined states, wherein the first state table circuit outputs correspond to a first set of state table circuits and collectively comprise a first set of state table circuit outputs; generating second state table circuit outputs based on a second input and a corresponding predetermined state of the set of predetermined states, wherein the second state table circuit outputs correspond to a second set of state table circuits and collectively comprise a second set of state table circuit outputs; selecting a first state machine output from among the first set of state table circuit outputs based on a current state of the state machine; selecting which of respective outputs of the second set of state table circuit outputs to route as a set of state propagator outputs based on the values of the first set of state table circuit outputs; and selecting a second state machine output from among the set of state propagator outputs based on the current state. In an embodiment of the foregoing computer program product, each state table circuit output of the first and second sets of state table circuit outputs comprises an output value and a state value, and wherein the first and second state machine outputs comprise output values. In an embodiment of the foregoing computer program product, selecting the first state machine output comprises selecting an output value from among the output values that correspond to each state table circuit output of the first set of state table circuit outputs. In an embodiment of the foregoing computer program product, the operations further comprise selecting a next state of the state machine from among the state values corresponding to the second set of state table circuit outputs based on the current state of the state machine. In an embodiment of the foregoing computer program product, the operations further comprise: generating one or more additional state machine outputs, each of the one or more additional state machine outputs corresponding to a respective of one or more additional inputs, the one or more additional inputs being temporally between the first and second inputs, said generating comprising for each of the one or more additional inputs: generating an additional set of state table circuit outputs based on the respective one of the one or more additional inputs and a corresponding predetermined state of a set of predetermined states; selecting which of respective outputs of the additional set of state table circuit outputs to route as an additional set of state propagator outputs based on the values of a set of state table circuit outputs that correspond to a temporally next input of the stream of inputs; and selecting the respective one of the one or more additional state machine outputs from among the additional set of state propagator outputs based on the current state. In an embodiment of the foregoing computer program product, the operations further comprise pipelining the steps of generating an additional set of state table circuit outputs, selecting which of respective outputs of the additional set of state table circuit outputs and selecting the respective one of the one or more additional state machine outputs. V. 
Conclusion While various embodiments of the disclosed subject matter have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be understood by those skilled in the relevant art(s) that various changes in form and details may be made therein without departing from the spirit and scope of the embodiments as defined in the appended claims. Accordingly, the breadth and scope of the disclosed subject matter should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. | 57,639 |
11863183 | DETAILED DESCRIPTION In some embodiments, a multiplier cell is derived from a 1-bit full adder and an AND gate. In various embodiments, the 1-bit full adder is derived from first and second majority gates. A full adder adds binary numbers. A one-bit full adder adds three one-bit numbers, A, B, and Cin, where A and B are the operands, and Cin is a carry-in bit which is carried in from a previous less-significant stage. A full adder is usually a component in a cascade of adders. These adders add, for example, 8, 16, 32, etc. bit binary numbers. A 1-bit full adder circuit produces a 2-bit output. One of the output bits is a carry output and the other output bit is a sum. The carry is typically represented by signal Cout while the sum is typically represented by signal S, where the result equals 2Cout+S. Implementing a 1-bit adder requires many logic gates such as AND logic gates, OR logic gates, inverters, and sometimes state elements such as flip-flops. Some embodiments describe a new class of logic gates that use non-linear polar material. This new class of logic gates becomes the basis of a 1-bit full adder. The logic gates include multi-input majority gates and threshold gates. Input signals in the form of digital signals, analog signals, or a combination of them are driven to first terminals of non-ferroelectric capacitors. The second terminals of the non-ferroelectric capacitors are coupled to form a majority node. The majority function of the input signals occurs on this node. The majority node is then coupled to a first terminal of a capacitor comprising non-linear polar material. The second terminal of the capacitor provides the output of the logic gate, which can be driven by any suitable logic gate such as a buffer, inverter, NAND gate, NOR gate, etc. Any suitable logic or analog circuit can drive the output and inputs of the majority logic gate. As such, the majority gate of various embodiments can be combined with existing transistor technologies such as complementary metal oxide semiconductor (CMOS), tunneling field effect transistor (TFET), GaAs based transistors, bipolar junction transistors (BJTs), Bi-CMOS transistors, etc. In some embodiments, a 1-bit adder is implemented using a 3-input majority gate and a 5-input majority gate. An output from the 3-input majority gate is inverted and input two times to the 5-input majority gate. Other inputs to the 5-input majority gate are the same as those of the 3-input majority gate. The output of the 5-input majority gate is a sum while the output of the 3-input majority gate is the carry. In some embodiments, an additional fixed or programmable input is coupled to the majority node via a capacitor. This additional fixed or programmable input can be a positive or negative bias. The bias behaves as a threshold or offset added or subtracted to or from the voltage (or current) on the majority node and determines the final logic value of the logic gate. Depending on the polarity or voltage value of the bias, AND gate or OR logic gate functions are realized, in accordance with various embodiments. In some embodiments, the multiplier cell is implemented with a combination of two majority gates with majority and AND functions integrated in each of them. These two majority gates are a first majority logic gate with integrated majority and AND logic functions, and a second majority logic gate with integrated majority and AND logic functions. The two majority gates are coupled.
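The adder composition summarized above (a 3-input majority gate producing the carry and a 5-input majority gate, fed the inverted carry twice, producing the sum) can be checked with a short behavioral model. The sketch below is a minimal Python model written for this description; the function names and the 0/1 encoding are assumptions for illustration only, not part of the embodiments.

```python
# Behavioral check of the majority-gate adder composition described above:
# Cout = Majority3(A, B, Cin); Sum = Majority5(A, B, Cin, ~Cout, ~Cout).
def majority(*bits):
    return int(sum(bits) > len(bits) // 2)

def full_adder(a, b, cin):
    carry = majority(a, b, cin)                # 3-input majority gate
    carry_b = 1 - carry                        # inverted carry, applied twice
    s = majority(a, b, cin, carry_b, carry_b)  # 5-input majority gate
    return s, carry

# Exhaustive check against the arithmetic definition A + B + Cin = 2*Cout + S.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert a + b + cin == 2 * cout + s
```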
Each of the first and second majority logic gates comprise a capacitor with non-linear polar material. In some embodiments, the first and second majority gates receive the two inputs A and B that are to be multiplied. Other inputs received by the first and second majority gates are carry-in input, a sum-in input, and a bias voltage. In various embodiments, the bias voltage is a negative voltage which produces an integrated AND function in conjunction with a majority function. As such, the majority gates are threshold gates. The second majority gate receives additional inputs, which are inverted output of the first majority gate. The multiplier cell of various embodiments can be an analog multiplier or digital multiplier. In an analog multiplier, the inputs that are multiplied as analog signals, and the output is a product of those analog signals. In a digital multiplier, the inputs are digital signals, and the output is a product of those digital signals. In some embodiments, the multiplier cell can receive both analog and digital signals that are multiplied with one another. There are many technical effects of the various embodiments. For example, extremely compact basic logic gates are formed using the non-ferroelectric capacitors and a capacitor with non-linear polar material. The non-linear polar material can be ferroelectric material, para-electric material, or non-linear dielectric. The logic gates become the basis of adders, multipliers, sequential circuits, and other complex circuits etc. The majority gate and threshold gate of various embodiments lower the power consumption because they do not use switching transistors and the interconnect routings are much fewer than the interconnect routings used in transitional CMOS logic gates. For example, 10× fewer interconnect length is used by the majority gate and threshold gate of various embodiments than traditional CMOS circuits for the same function and performance. The capacitor with non-linear polar material provides non-volatility that allows for intermittent operation. For example, a processor having such logic gates can enter and exit various types of low power states without having to worry about losing data. Since the capacitor with non-linear polar material can store charge from low energy devices, the entire processor can operate at much lower voltage level from the power supply, which reduces overall power of the processor. Further, very low voltage switching (e.g., 100 mV) of the non-linear polar material state allows for low swing signal switching, which in turn results in low power. The capacitor with non-linear polar material are used with any type of transistor. For example, the capacitor with non-linear polar material of various embodiments are used with planar or non-planar transistors. The transistors are formed in the frontend or backend of a die. The capacitors with non-linear polar material are formed in the frontend or backend of the die. As such, the logic gates are packed with high density compared to traditional logic gates. Adders and multipliers are basic building blocks in processors. The majority gate based multipliers of various embodiments are orders of magnitude smaller than a typical CMOS multiplier. This allows for implementing N×N multipliers to multiply very large numbers at very low power and with small area. The non-volatility of the outputs also makes the multipliers of various embodiments ideal for low power applications. 
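The multiplier cell outlined above can likewise be modeled at the Boolean level. In the sketch below the AND of the two multiplicand bits is simply computed in software, whereas in the embodiments it is absorbed into the threshold gates through the negative bias; the cell name and signal names are assumptions made for this illustration.

```python
# Behavioral model of one multiplier cell: the partial product A AND B is added
# to the incoming sum and carry, producing a new sum and carry (as in an array
# multiplier column).
def majority(*bits):
    return int(sum(bits) > len(bits) // 2)

def multiplier_cell(a, b, sum_in, carry_in):
    p = a & b                                        # AND integrated via the bias in hardware
    carry_out = majority(p, sum_in, carry_in)        # first majority/threshold gate
    nc = 1 - carry_out                               # inverted first-gate output, applied twice
    sum_out = majority(p, sum_in, carry_in, nc, nc)  # second majority/threshold gate
    return sum_out, carry_out

# Exhaustive check: the cell implements (a AND b) + sum_in + carry_in = 2*carry_out + sum_out.
for a in (0, 1):
    for b in (0, 1):
        for s in (0, 1):
            for c in (0, 1):
                so, co = multiplier_cell(a, b, s, c)
                assert (a & b) + s + c == 2 * co + so
```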
Other technical effects will be evident from the various embodiments and figures. In the following description, numerous details are discussed to provide a more thorough explanation of embodiments of the present disclosure. It will be apparent, however, to one skilled in the art, that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring embodiments of the present disclosure. Note that in the corresponding drawings of the embodiments, signals are represented with lines. Some lines may be thicker, to indicate more constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. Such indications are not intended to be limiting. Rather, the lines are used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit or a logical unit. Any represented signal, as dictated by design needs or preferences, may actually comprise one or more signals that may travel in either direction and may be implemented with any suitable type of signal scheme. The term “device” may generally refer to an apparatus according to the context of the usage of that term. For example, a device may refer to a stack of layers or structures, a single structure or layer, a connection of various structures having active and/or passive elements, etc. Generally, a device is a three-dimensional structure with a plane along the x-y direction and a height along the z direction of an x-y-z Cartesian coordinate system. The plane of the device may also be the plane of an apparatus, which comprises the device. Throughout the specification, and in the claims, the term “connected” means a direct connection, such as electrical, mechanical, or magnetic connection between the things that are connected, without any intermediary devices. The term “coupled” means a direct or indirect connection, such as a direct electrical, mechanical, or magnetic connection between the things that are connected or an indirect connection, through one or more passive or active intermediary devices. The term “adjacent” here generally refers to a position of a thing being next to (e.g., immediately next to or close to with one or more things between them) or adjoining another thing (e.g., abutting it). The term “circuit” or “module” may refer to one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function. The term “signal” may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meaning of “a,” “an,” and “the” include plural references. The meaning of “in” includes “in” and “on.” Here, the term “analog signal” generally refers to any continuous signal for which the time varying feature (variable) of the signal is a representation of some other time varying quantity, i.e., analogous to another time varying signal. Here, the term “digital signal” generally refers to a physical signal that is a representation of a sequence of discrete values (a quantified discrete-time signal), for example of an arbitrary bit stream, or of a digitized (sampled and analog-to-digital converted) analog signal. The term “scaling” generally refers to converting a design (schematic and layout) from one process technology to another process technology and subsequently being reduced in layout area. 
The term “scaling” generally also refers to downsizing layout and devices within the same technology node. The term “scaling” may also refer to adjusting (e.g., slowing down or speeding up—i.e. scaling down, or scaling up respectively) of a signal frequency relative to another parameter, for example, power supply level. The terms “substantially,” “close,” “approximately,” “near,” and “about,” generally refer to being within +/−10% of a target value. For example, unless otherwise specified in the explicit context of their use, the terms “substantially equal,” “about equal” and “approximately equal” mean that there is no more than incidental variation between among things so described. In the art, such variation is typically no more than +/−10% of a predetermined target value. Unless otherwise specified the use of the ordinal adjectives “first,” “second,” and “third,” etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking or in any other manner. For the purposes of the present disclosure, phrases “A and/or B” and “A or B” mean (A), (B), or (A and B). For the purposes of the present disclosure, the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The terms “left,” “right,” “front,” “back,” “top,” “bottom,” “over,” “under,” and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. For example, the terms “over,” “under,” “front side,” “back side,” “top,” “bottom,” “over,” “under,” and “on” as used herein refer to a relative position of one component, structure, or material with respect to other referenced components, structures or materials within a device, where such physical relationships are noteworthy. These terms are employed herein for descriptive purposes only and predominantly within the context of a device z-axis and therefore may be relative to an orientation of a device. Hence, a first material “over” a second material in the context of a figure provided herein may also be “under” the second material if the device is oriented upside-down relative to the context of the figure provided. In the context of materials, one material disposed over or under another may be directly in contact or may have one or more intervening materials. Moreover, one material disposed between two materials may be directly in contact with the two layers or may have one or more intervening layers. In contrast, a first material “on” a second material is in direct contact with that second material. Similar distinctions are to be made in the context of component assemblies. The term “between” may be employed in the context of the z-axis, x-axis or y-axis of a device. A material that is between two other materials may be in contact with one or both of those materials, or it may be separated from both of the other two materials by one or more intervening materials. A material “between” two other materials may therefore be in contact with either of the other two materials, or it may be coupled to the other two materials through an intervening material. A device that is between two other devices may be directly connected to one or both of those devices, or it may be separated from both of the other two devices by one or more intervening devices. 
Here, multiple non-silicon semiconductor material layers may be stacked within a single fin structure. The multiple non-silicon semiconductor material layers may include one or more “P-type” layers that are suitable (e.g., offer higher hole mobility than silicon) for P-type transistors. The multiple non-silicon semiconductor material layers may further include one or more “N-type” layers that are suitable (e.g., offer higher electron mobility than silicon) for N-type transistors. The multiple non-silicon semiconductor material layers may further include one or more intervening layers separating the N-type from the P-type layers. The intervening layers may be at least partially sacrificial, for example to allow one or more of a gate, source, or drain to wrap completely around a channel region of one or more of the N-type and P-type transistors. The multiple non-silicon semiconductor material layers may be fabricated, at least in part, with self-aligned techniques such that a stacked CMOS device may include both a high-mobility N-type and P-type transistor with a footprint of a single FET (field effect transistor). Here, the term “backend” generally refers to a section of a die which is opposite of a “frontend” and where an IC (integrated circuit) package couples to IC die bumps. For example, high-level metal layers (e.g., metal layer6and above in a ten-metal stack die) and corresponding vias that are closer to a die package are considered part of the backend of the die. Conversely, the term “frontend” generally refers to a section of the die that includes the active region (e.g., where transistors are fabricated) and low-level metal layers and corresponding vias that are closer to the active region (e.g., metal layer5and below in the ten-metal stack die example). It is pointed out that those elements of the figures having the same reference numbers (or names) as the elements of any other figure can operate or function in any manner similar to that described, but are not limited to such. FIG.1Aillustrates logic gate100with a 3-input majority gate, in accordance with some embodiments. Logic Gate100comprises first, second, and third drivers101,102, and103, respectively. These drivers can be analog drivers generating analog signals or digital drivers generating signals that toggle between ground and the power supply rail, or a combination of analog or digital drivers. For example, driver101is a CMOS driver such as a buffer, inverter, a NAND gate, NOR gate, etc., while driver102is an amplifier generating a bias signal. The drivers provide input signals Vin1(and current I1), Vin2(and current I2), and Vin3(and current I3) to the three inputs of 3-input majority gate104. In various embodiments, 3-input majority gate104comprises three input nodes Vin1, Vin2, and Vin3. Here, signal names and node names are interchangeably used. For example, Vin1refers to node Vin1or signal Vin1depending on the context of the sentence. 3-input majority gate104further comprises capacitors C1, C2, and C3. Here, resistors R1, R2, and R3are interconnect parasitic resistances coupled to capacitors C1, C2, and C3respectively. In various embodiments, capacitors C1, C2, and C3are non-ferroelectric capacitors. In some embodiments, the non-ferroelectric capacitor includes one of: dielectric capacitor, para-electric capacitor, or non-linear dielectric capacitor. A dielectric capacitor comprises first and second metal plates with a dielectric between them. 
Examples of such dielectrics are: HfO, ABO3 perovskites, nitrides, oxy-fluorides, oxides, etc. A para-electric capacitor comprises first and second metal plates with a para-electric material between them. In some embodiments, f-orbital materials (e.g., lanthanides) are doped to the ferroelectric materials to make paraelectric material. Examples of room temperature paraelectric material include: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, La-substituted PbTiO3, PMN-PT based relaxor ferroelectrics. A non-linear dielectric capacitor comprises first and second metal plates with a non-linear dielectric between them. The range for dielectric constant is 1.2 to 10000. The capacitors C1, C2, and C3 can be implemented as MIM (metal-insulator-metal) capacitor technology, transistor gate capacitor, or a hybrid of metal capacitors or transistor capacitor. One terminal of the capacitors C1, C2, and C3 is coupled to a common node cn. This common node is coupled to node n1, which is coupled to a first terminal of a non-linear polar capacitor 105. The majority function is performed at the common node cn, and the resulting voltage is projected on to capacitor 105. For example, the majority function of the currents (I1, I2, and I3) on node cn results in a resultant current that charges capacitor 105. Table 1 illustrates the majority function f(Majority Vin1, Vin2, Vin3).

TABLE 1
Vin1  Vin2  Vin3  cn (f(Majority Vin1, Vin2, Vin3))
0     0     0     0
0     0     1     0
0     1     0     0
0     1     1     1
1     0     0     0
1     0     1     1
1     1     0     1
1     1     1     1

A capacitor with FE material (also referred to as a FEC) is a non-linear capacitor with its potential VF(QF) as a cubic function of its charge. FIG. 1C illustrates plot 130 showing characteristics of a FEC. Plot 130 is a charge-voltage (Q-V) plot for a block of Pb(Zr0.5Ti0.5)O3 of area (100 nm)² and thickness 20 nm (nanometer). The plot shows local extrema at +/−Vo indicated by the dashed lines. Here, the term Vc is the coercive voltage. In applying a potential V across the FEC, its charge can be unambiguously determined only for |V|>Vo. Otherwise, the charge of the FEC is subject to hysteresis effects. Referring back to FIG. 1A, in some embodiments, an odd number N of capacitors are coupled to a single FEC to form a majority gate. In this case, N=3. The measured charge on the FEC (QF) is the output of the majority gate. Solving for a steady-state solution, the parasitic resistors are ignored and the input potentials Vi (or Vin) are assumed to be constant. In this case, the charge across each linear capacitor (C1, C2, C3) is:

Qi = Ci·(Vi − VF)   (1)

The charge summed at node cn and across FEC 105 is expressed as:

QF = Σi Qi   (2)

QF = Σi Ci·Vi − Σi Ci·VF   (3)

QF = Σi Ci·Vi − C·VF(QF)   (4)

VF(QF) = Σi (Ci/C)·Vi − QF/C   (5)

Here, C = Σi Ci is the sum of the capacitances. In the limit C→∞, the following is achieved:

VF(QF) = Σi (Ci/C)·Vi = V̄   (6)

The potential across FEC 105 is the average of all the input potentials weighted by the capacitances (e.g., C1, C2, and C3). When Ci=C/N are all equal, VF is just a simple mean. To ensure that

QF = VF⁻¹(V̄)   (7)

is well defined, all possible values of V̄ have magnitudes greater than Vc, the coercive potential. Assuming binary inputs of +/−Vs, the potential with the smallest magnitude is:

V̄ = Vs/N   (8)

This occurs when (N+1)/2 of the inputs are +Vs and (N−1)/2 are −Vs.
Then,

Vs > N·Vc   (9)

The output of the majority gate at node n1 is expressed by FIG. 1D. FIG. 1D illustrates plot 140 showing the output of a 3-input majority gate, in accordance with some embodiments. As an example, for N=3, the possible values of V̄ are:

V̄ ∈ {−(3/3)Vs, −(1/3)Vs, +(1/3)Vs, +(3/3)Vs}   (10)

Referring back to FIG. 1A, since capacitor 105 is a non-linear polar capacitor, both terminals of the capacitor are pre-discharged to ground or to a known predetermined voltage via n-type pull-down transistors MN1 and MN2, and p-type pull-up transistors. The predetermined voltage can be programmable. The pre-determined voltage can be positive or negative. In some embodiments, n-type transistor MN1 is coupled to node Vout_int1 (internal Vout node) and is controllable by clock or reset signal Clk1. In some embodiments, n-type transistor MN2 is coupled to node Vout_int2 (internal Vout node) and is controllable by clock or reset signal Clk2. In some embodiments, p-type transistor MP1 is coupled to node Vout_int2, and is controllable by Clk3b. In some embodiments, the n-type transistors MN1 and MN2 are replaced with p-type transistors to pre-charge both terminals (Vout_int1 and Vout_int2) of capacitor 105 to a supply voltage or another predetermined voltage, while the p-type transistor MP1 is replaced with an n-type transistor coupled to ground or a negative supply rail. The predetermined voltage can be programmable. The pre-determined voltage can be positive or negative. In some embodiments, the pre-charge or pre-discharge of the terminals of capacitor 105 (or nodes cn and n1) is done periodically by clock signals Clk1, Clk2, and Clk3b. The controls can be a non-clock signal that is generated by a control logic (not shown). For example, the control can be issued every predetermined or programmable time. In some embodiments, clock signals Clk1, Clk2, and Clk3b are issued in a reset phase, which is followed by an evaluation phase where inputs Vin1, Vin2, and Vin3 are received and the majority function is performed on them. FIG. 1E illustrates timing diagram 150 for resetting the ferroelectric capacitor for the majority gates of FIGS. 1A-B, in accordance with some embodiments. Clk1 has a pulse width larger than the pulse widths of Clk2 and Clk3b. Clk3b is an inverse of Clk3 (not shown). In some embodiments, Clk1 is first asserted, which begins to discharge node Vout_int1. While node Vout_int1 is being discharged, Clk2 is asserted. Clk2 may have a pulse width which is substantially half of the pulse width of Clk1. When Clk2 is asserted, node Vout_int2 is discharged. This sequence assures that both terminals of the non-linear polar material of capacitor 105 are discharged sequentially. In various embodiments, before discharging node Vout_int2, Clk3b is de-asserted, which turns on transistor MP1, causing Vout_int2 to be charged to a predetermined value (e.g., supply level). The pulse width of Clk3b is smaller than the pulse width of Clk1 to ensure the Clk3b pulsing happens within the Clk1 pulse window. This is useful to ensure non-linear polar capacitor 105 is initialized to a known programmed state along with the other capacitors (e.g., C1, C2, C3), which are initialized to 0 V across them. The pulsing on Vout_int2 creates the correct field across the non-linear polar capacitor 105 in conjunction with Vout_int1 to put it in the correct state, such that during operating mode, if Vout_int1 goes higher than the Vc value (coercive voltage value), it triggers the switching for non-linear polar capacitor 105, thereby resulting in a voltage build-up on Vout_int2.
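A small numerical sketch of the capacitive averaging relation in equations (6) and (8) above may help make the sizing condition of equation (9) concrete. The code below is an idealized model written for this description (ideal capacitors, FE charge term ignored); the numeric values are arbitrary assumptions.

```python
# V_F is approximately the capacitance-weighted average of the input potentials,
# per equation (6); equation (9) asks that even the smallest average, Vs/N from
# equation (8), still exceeds the coercive voltage Vc.
def majority_node_voltage(caps, vins):
    return sum(c * v for c, v in zip(caps, vins)) / sum(caps)

Vs = 0.10                      # assumed +/-100 mV input swing
caps = [1.0, 1.0, 1.0]         # equal capacitances C1 = C2 = C3 (arbitrary units)
v_min = majority_node_voltage(caps, [+Vs, +Vs, -Vs])
print(v_min)                   # Vs/3: the smallest-magnitude case for N = 3
assert abs(v_min - Vs / 3) < 1e-12
```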
In some embodiments, load capacitor CL is added to node Vout_int2. In some embodiments, load capacitor CL is a regular capacitor (e.g., a non-ferroelectric capacitor). The capacitance value of CL on Vout_int2is useful to ensure that the FE switching charge (of FE capacitor105) provides the right voltage level. For a given FE size (area A), with polarization switching density (dP) and desired voltage swing of Vdd (supply voltage), the capacitance of CL should be approximately CL=dP*A/Vdd. There is slight deviation from the above CL value as there is charge sharing on Vout_int2due to dielectric component of FE capacitor105. The charge sharing responds relative to voltage on Vout_int1, and capacitor divider ratio between the dielectric component of the FE capacitor105, and load capacitor (CL). Note, the capacitance of CL can be aggregate of all the capacitances (e.g., parasitic routing capacitance on the node, gate capacitance of the output stage106, and drain or source capacitance of the reset devices (e.g., MN2, MP1) on the Vout_int2node. In some embodiments, for a given size of non-linear polar capacitor105, CL requirement can be met by just the load capacitance of Non-FE logic106, and parasitic component itself, and may not need to have it as a separate linear capacitor. Referring back toFIG.1A, in some embodiments, the non-linear polar material of capacitor105includes one of: ferroelectric (FE) material, para-electric material, relaxor ferroelectric, or non-linear dielectric. In various embodiments, para-electric material is same as FE material but with chemical doping of the active ferroelectric ion by an ion with no polar distortion. In some cases, the non-polar ions are non-s orbital ions formed with p, d, f external orbitals. In some embodiments, non-linear dielectric materials are same as para-electric materials, relaxors, and dipolar glasses. In some embodiments, f-orbital materials (e.g., lanthanides) are doped to the ferroelectric material to make paraelectric material. Examples of room temperature paraelectric material include: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, La-substituted PbTiO3, PMN-PT based relaxor ferroelectrics. In various embodiments, the FE material can be any suitable low voltage FE material that allows the FE material to switch its state by a low voltage (e.g., 100 mV). In some embodiments, the FE material comprises a perovskite of the type ABO3, where ‘A’ and ‘B’ are two cations of different sizes, and ‘O’ is oxygen which is an anion that bonds to both the cations. Generally, the size of atoms of A is larger than the size of B atoms. In some embodiments, the perovskite can be doped (e.g., by La or Lanthanides). Perovskites can be suitably doped to achieve a spontaneous distortion in a range of 0.3 to 2%. For example, for chemically substituted lead titanate such as Zr in Ti site; La, Nb in Ti site, the concentration of these substitutes is such that it achieves the spontaneous distortion in the range of 0.3 to 2%. For chemically substituted BiFeO3, BiCrO3, BiCoO3 class of materials, La or rare earth substitution into the Bi site can tune the spontaneous distortion. Threshold in the FE material has a highly non-linear transfer function in the polarization vs. voltage response. The threshold is related a) non-linearity of switching transfer function, and b) to the squareness of the FE switching. The non-linearity of switching transfer function is the width of the derivative of the polarization vs. voltage plot. 
The squareness is defined by the ratio of the remnant polarization to the saturation polarization; perfect squareness will show a value of 1. The squareness of the FE switching can be suitably manipulated with chemical substitution. For example, in PbTiO3 a P-E (polarization-electric field) square loop can be modified by La or Nb substitution to create a S-shaped loop. The shape can be systematically tuned to ultimately yield a non-linear dielectric. The squareness of the FE switching can also be changed by the granularity of the FE layer. A perfectly epitaxial, single crystalline FE layer will show higher squareness (e.g., ratio is closer to 1) compared to a poly crystalline FE. This perfectly epitaxial can be accomplished by the use of lattice matched bottom and top electrodes. In one example, BiFeO (BFO) can be epitaxially synthesized using a lattice matched SrRuO3 bottom electrode yielding P-E loops that are square. Progressive doping with La will reduce the squareness. In some embodiments, the FE material is contacted with a conductive metal oxide that includes one of the conducting perovskite metallic oxides exemplified by: La—Sr—CoO3, SrRuO3, La—Sr—MnO3, YBa2Cu3O7, Bi2Sr2CaCu2O8, LaNiO3, and ReO3. In some embodiments, the FE material comprises a stack of layers including low voltage FE material between (or sandwiched between) conductive oxides. In various embodiments, when FE material is a perovskite, the conductive oxides are of the type AA′BB′O3. A′ is a dopant for atomic site A, it can be an element from the Lanthanides series. B′ is a dopant for atomic site B, it can be an element from the transition metal elements especially Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn. A′ may have the same valency of site A, with a different ferroelectric polarizability. In some embodiments, the FE material comprises hexagonal ferroelectrics of the type h-RMnO3, where R is a rare earth element such as cerium (Ce), dysprosium (Dy), erbium (Er), europium (Eu), gadolinium (Gd), holmium (Ho), lanthanum (La), lutetium (Lu), neodymium (Nd), praseodymium (Pr), promethium (Pm), samarium (Sm), scandium (Sc), terbium (Tb), thulium (Tm), ytterbium (Yb), and yttrium (Y). The ferroelectric phase is characterized by a buckling of the layered MnO5 polyhedra, accompanied by displacements of the Y ions, which lead to a net electric polarization. In some embodiments, hexagonal FE includes one of: YMnO3 or LuFeO3. In various embodiments, when the FE material comprises hexagonal ferroelectrics, the conductive oxides adjacent to the FE material are of A2O3 (e.g., In2O3, Fe2O3) and ABO3 type, where ‘A’ is a rare earth element and B is Mn. In some embodiments, the FE material comprises improper FE material. An improper ferroelectric is a ferroelectric where the primary order parameter is an order mechanism such as strain or buckling of the atomic order. Examples of improper FE material are LuFeO3 class of materials or super lattice of ferroelectric and paraelectric materials PbTiO3 (PTO) and SnTiO3 (STO), respectively, and LaAlO3 (LAO) and STO, respectively. For example, a super lattice of [PTO/STO]n or [LAO/STO]n, where ‘n’ is between 1 to 100. While various embodiments here are described with reference to ferroelectric material for storing the charge state, the embodiments are also applicable for paraelectric material. For example, the capacitor of various embodiments can be formed using paraelectric material instead of ferroelectric material. 
In some embodiments, the FE material includes one of: Hafnium (Hf), Zirconium (Zr), Aluminum (Al), Silicon (Si), their oxides or their alloyed oxides. In some embodiments, FE material includes one of: Al(1-x)Sc(x)N, Ga(1-x)Sc(x)N, Al(1-x)Y(x)N or Al(1-x-y)Mg(x)Nb(y)N, y doped HfO2, where x includes one of: Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y, wherein ‘x’ is a fraction. In some embodiments, the FE material includes Bismuth ferrite (BFO), lead zirconate titanate (PZT), BFO with doping material, or PZT with doping material, wherein the doping material is one of Nb or; and relaxor ferroelectrics such as PMN-PT. In some embodiments, the FE material includes Bismuth ferrite (BFO), BFO with a doping material where in the doping material is one of Lanthanum, or any element from the lanthanide series of the periodic table. In some embodiments, the FE material105includes lead zirconium titanate (PZT), or PZT with a doping material, wherein the doping material is one of La, Nb. In some embodiments, the FE material includes a relaxor ferroelectric includes one of lead magnesium niobate (PMN), lead magnesium niobate-lead titanate (PMN-PT), lead lanthanum zirconate titanate (PLZT), lead scandium niobate (PSN), Barium Titanium-Bismuth Zinc Niobium Tantalum (BT-BZNT), or Barium Titanium-Barium Strontium Titanium (BT-BST). In some embodiments, the FE material includes Hafnium oxides of the form, Hfl-x Ex Oy where E can be Al, Ca, Ce, Dy, er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y. In some embodiments, FE material105includes Niobate type compounds LiNbO3, LiTaO3, Lithium iron Tantalum Oxy Fluoride, Barium Strontium Niobate, Sodium Barium Niobate, or Potassium strontium niobate. In some embodiments, the FE material comprises multiple layers. For example, alternating layers of [Bi2O2]2+, and pseudo-perovskite blocks (Bi4Ti3O12 and related Aurivillius phases), with perovskite layers that are n octahedral layers in thickness can be used. In some embodiments, the FE material comprises organic material. For example, Polyvinylidene fluoride or polyvinylidene difluoride (PVDF). The FE material is between two electrodes. These electrodes are conducting electrodes. In some embodiments, the electrodes are perovskite templated conductors. In such a templated structure, a thin layer (e.g., approximately 10 nm) of a perovskite conductor (such as SrRuO3) is coated on top of IrO2, RuO2, PdO2, or PtO2 (which have a non-perovskite structure but higher conductivity) to provide a seed or template for the growth of pure perovskite ferroelectric at low temperatures. In some embodiments, when the ferroelectric comprises hexagonal ferroelectric material, the electrodes can have hexagonal metals, spinels, or cubic metals. Examples of hexagonal metals include: PtCoO2, PdCoO2, and other delafossite structured hexagonal metallic oxides such as Al-doped ZnO. Examples of spinels include Fe3O4and LiV2O4. Examples of cubic metals include Indium Tin Oxide (ITO) such as Sn-doped In2O3. The charge developed on node n1produces a voltage and current that is the output of the majority gate104. Any suitable driver106can drive this output. For example, a non-FE logic, FE logic, CMOS logic, BJT logic, etc. can be used to drive the output to a downstream logic. Examples of the drivers include inverters, buffers, NAND gates, NOR gates, XOR gates, amplifiers, comparators, digital-to-analog converters, analog-to-digital converters, etc. In some embodiments, output “out” is reset by driver106via Clk1signal. 
For example, a NAND gate with one input coupled to Vout_int2 and the other input coupled to Clk1 can be used to reset "out" during a reset phase. While FIG. 1A illustrates a 3-input majority gate, the same concept can be extended to more than 3 inputs to make an N-input majority gate, where N is greater than 2. FIG. 1B illustrates logic gate 120 with 5-input majority gate 124, in accordance with some embodiments. 5-input majority gate 124 is similar to 3-input majority gate 104 but for additional inputs Vin4 and Vin5. These inputs can come from the same drivers (e.g., any one of drivers 101, 102, 103) or from different drivers such as drivers 121 and 122. Inputs Vin4 and Vin5 can be analog, digital, or a combination of them. For example, Vin4 is a digital signal while Vin5 is an analog signal. The additional inputs Vin4 and Vin5 are coupled to additional non-ferroelectric capacitors C4 and C5, respectively. The composition and size of the capacitors C4 and C5 is similar to that of C1, C2, and C3. Here, resistors R4 and R5 are parasitic resistors. The majority function is performed at the common node cn, and the resulting voltage is projected on to capacitor 105. For example, the majority function of the currents (I1, I2, I3, I4, and I5) on node cn results in a resultant current that charges capacitor 105. Table 2 illustrates the majority function f(Majority Vin1, Vin2, Vin3, Vin4, Vin5) of 5-input majority gate 124.

TABLE 2
Vin1  Vin2  Vin3  Vin4  Vin5  cn (f(Majority Vin1, Vin2, Vin3, Vin4, Vin5))
0     0     0     0     0     0
0     0     0     0     1     0
0     0     0     1     0     0
0     0     0     1     1     0
0     0     1     0     0     0
0     0     1     0     1     0
0     0     1     1     0     0
0     0     1     1     1     1
0     1     0     0     0     0
0     1     0     0     1     0
0     1     0     1     0     0
0     1     0     1     1     1
0     1     1     0     0     0
0     1     1     0     1     1
0     1     1     1     0     1
0     1     1     1     1     1
1     0     0     0     0     0
1     0     0     0     1     0
1     0     0     1     0     0
1     0     0     1     1     1
1     0     1     0     0     0
1     0     1     0     1     1
1     0     1     1     0     1
1     0     1     1     1     1
1     1     0     0     0     0
1     1     0     0     1     1
1     1     0     1     0     1
1     1     0     1     1     1
1     1     1     0     0     1
1     1     1     0     1     1
1     1     1     1     0     1
1     1     1     1     1     1

FIG. 1F illustrates logic gate 160 with a 3-input majority gate with pass-gate based resetting mechanism, in accordance with some embodiments. Logic gate 160 is similar to logic gate 100 but for the reset mechanism to reset the terminals of non-linear polar capacitor 105. Here, pull-down transistor MN2 is removed and a pass-gate comprising p-type transistor MP1 and n-type transistor MN3 is coupled to the Vout_int2 node. In some embodiments, transistor MN3 is controlled by Clk3 while transistor MP1 is controlled by Clk3b, where Clk3b is an inverse of Clk3. In some embodiments, Vpulse passes through the pass-gate to Vout_int2 when Clk1 and Clk3 are asserted and before Clk1 and Clk3 are de-asserted. Vpulse is generated during a reset phase, and is de-asserted during the evaluation phase as illustrated by FIG. 1G. FIG. 1G illustrates timing diagram 170 for resetting the ferroelectric capacitor for the majority gate of FIG. 1F, in accordance with some embodiments. During the reset phase, node Vout_int1 is first reset or discharged to ground by asserting a Clk1 pulse. In the same phase, transistors MN3 and MP1 are turned on, and Vpulse is applied to node Vout_int2. Here Vpulse eases out the relative timing control from the perspective of signal generation. Vpulse also minimizes charge injection on the Vout_int2 node due to the differential nature of switching that happens on the pass-gate. Note, the pass-gate reduces the charge injection due to charge sharing as transistors MP1 and MN3 of the pass-gate approximately cancel the charge injection at the Vout_int2 node due to a switching event on the pass-gate. The gray dotted horizontal line shown for the Vout_int1 (cn) node indicates where the Vc of FE capacitor 105 will create switching action. For majority gate design, in some embodiments, this gray dotted horizontal line is positioned close to Vdd/2 (e.g., Vc=Vdd/2), where Vdd is the logic high value.
In some cases, when all inputs are zeros (e.g., Vin1=Vin2=Vin3=0 or Vss), which is referred to3L, then the voltage on Vout_int1and/or Vout_int1may fall below Vss (or ground) level. The same may occur when all inputs are ones (e.g., Vin1=Vin2=Vin3=1 or Vss), which is referred to3H, where the voltage on Vout_int1and/or Vout_int1may rise above Vdd (or supply) level. This, however, may depend on the exact amount of charge injection on the node cn at time 0 after assertion of the input signals. So, all three inputs being logic low (3L) translates into a slightly different levels compared to two inputs being logic low (2L). Here,3H refers to all three inputs being high,2H refers to two inputs being high and one input being low, and1H refers to one input being high and two inputs being low. The same explanation is used for nomenclature3L,2L, and1L. In the1H case, the voltage on node cn and n1may be slightly higher than ground. The same is the case with3H which translates into slightly higher voltage level on nodes cn and/or n1than in2H and1L cases. FIG.1Hillustrates logic gate180with a 3-input majority gate with input resetting mechanism, in accordance with some embodiments. Compared to the reset mechanisms described with reference toFIGS.1A-B, and FIGS. E-G, here the inputs (e.g., Vin1, Vin2, Vin3) are blocked from propagating during reset phase. Logic gate180is similar to logic gate100but for the determinism of input voltages during reset of capacitor105. In some embodiments, for reset mechanisms ofFIGS.1A-B, and FIGS. E-G logic that generates input signals (e.g., Vin1through Vin5) is aware of the reset timing, and as such ensures to send the right input signals (0 V in this illustration) for processing when capacitor105is being reset. Generating the input signals at predetermined voltage levels (e.g., 0 V) ensures predetermined voltage (e.g., 0V) across the linear capacitors (e.g., C1, C2, C3). When such predetermined input signals are generated, pass-gates on the input signal nodes can be removed to save area and cost. In some other embodiments for multiple stages of these majority gates between a logic cluster, the reset sequencing can be controlled from input vectors to correctly create the correct voltage levels during the reset phase at each one of the stages. In some embodiment, a logic gate is provided at the input (e.g., Vin) such that correct voltage level at all stages are driving the right logic. For example, a NAND gate, with one of the inputs being reset signal, and other the logic level (e.g., Vin1), that ensure during reset phase the correct voltage level is applied at input of each one of the stages. In another example, the output of each logic is conditioned during reset to cause the subsequent logic (e.g., majority gate logic) to receive the correct input voltage level during reset. In one such example, non-FE logic106comprises a NAND gate with one of its input being a reset signal, and other the logic level (e.g., coupled to Vout_int2), that ensures during reset phase the correct voltage level is propagated to the input of the next or subsequent majority gate stage. In some embodiments, a first pass-gate is coupled to first capacitor C1and driver that generates first input Vin1. The first pass-gate comprises p-type transistor MP1rcontrollable by Clk1and n-type transistor MN1rcontrollable by Clk1b. The first pass-gate blocks the propagation of Vin1while pull-down transistor MN2rcan set the input to capacitor C1to ground via Clk1. 
In some embodiments, a second pass-gate is coupled to second capacitor C2and driver that generates the second input Vin2. The second pass-gate comprises p-type transistor MP2rcontrollable by Clk1and n-type transistor MN2rcontrollable by Clk1b. The second pass-gate blocks the propagation of Vin2while pull-down transistor MN3rcan set the input to capacitor C2to ground via Clk1. In some embodiments, a third pass-gate is coupled to third capacitor C3and driver that generates the third input Vin3. The third pass-gate comprises p-type transistor MP3rcontrollable by Clk1and n-type transistor MN3rcontrollable by Clk1b. The third pass-gate blocks the propagation of Vin3while pull-down transistor MN4rcan set the input to capacitor C3to ground via Clk1. The same technique is applied to other inputs. FIG.1Iillustrates timing diagram190for resetting the ferroelectric capacitor for majority gate ofFIG.1H, in accordance with some embodiments. During reset phase, Clk1is asserted (and Clkb is de-asserted) to block the input voltages and to set the input to capacitor C1, C2, and C3to ground. Assertion of Clk1also discharges Vout_int1. As such, voltages on both terminals of input capacitors C1, C2, and C3are discharged. Clk3bis initially (during reset phase) de-asserted to turn on MP1to pre-charge Vout_int2. Thereafter, Clk2is asserted to discharge Vout_int2. The reset mechanism can be described in terms of two sequence of pulses. The first sequence of pulses is to create the right field across the FE capacitor105to initialize it in correct state for operation, while the second sequence of pulses ensures that all the nodes are initialized to 0 state, with all the linear caps (e.g., C1, C2, C3) having 0 V across them. The exact sequence also factors in glitch-less transition to minimize charge injection on high impedance nodes, and ensures that the ferroelectric device105does not see a transient due to reset pulsing that will compromise the initial programmed state for FE device105. The reset mechanism of various embodiments can also be described in terms of four phases. In the first phase (phase1), linear capacitors (C1, C2, C3) are initialized to zero state using Clk1(e.g., by asserting Clk1) and input conditioning (e.g., setting the inputs Vin1, Vin2, Vin3to zero). In the second phase (phase2), FE capacitor105continued to be initialized using Clk3b(e.g., de-asserting Clkb3) while keeping Clk1high (e.g., Clk1remains asserted). In the third phase (phase3), Vout_int2node and the dielectric component of FE capacitor105is initialized to zero state by de-asserting Clk2, asserting Clk3b, and while keeping Clk1high (e.g., Clk1remains asserted). In the fourth phase (phase4), the reset switches are deactivated. For example, transistors MN1r, MP1r(and other pass-gate switches at the inputs) are turned on, MN2r(and other pull-down transistors) on the input nodes (e.g., Vin1, Vin2) are turned off, pull-down transistors MN1and MN2are disabled or turned off, pull-up transistors MP1is disabled or turned off, Vpulse pass-gate having transistors MP1and MN3are disabled. While the embodiments here are described with reference to resetting the FE device105to ground and/or resetting the two terminals of non-ferroelectric linear capacitors (C1, C2, C3) to ground, the resetting voltage can be different voltage other than ground. 
For example, when input signals (e.g., Vin1, Vin2, Vin3) toggle between a positive supply level and a negative supply level, then the two terminals of FE device 105 and/or the two terminals of the non-ferroelectric linear capacitors (C1, C2, C3) are reset to the negative supply rail. For example, the definition of logic low and logic high to control the various reset devices changes to be positive and negative, respectively. So, if the earlier rails were 0 V and Vdd and now they are negative to positive rails, the 0 V maps to negative and Vdd maps to positive. FIG. 1J illustrates plot 195 showing the voltage on node Vout_int2 relating to the behavior of FE capacitor 105, in accordance with some embodiments. In this case, FE capacitor 105 stays within the window of Vc voltage drop across FE capacitor 105, but switching helps to generate different voltages on Vout_int2. For example, at time 0 during reset (when Clk1 is asserted and other signals such as Clk1b, Clk2, Clk3b, and Vpulse behave according to FIG. 1G and FIG. 1I), a large reset field puts FE capacitor 105 in the low state, and then FE capacitor 105 bounces between +Vc and −Vc. FIG. 2A illustrates logic gate 200 with 3-input threshold gate 204 which can operate as an AND or OR gate, in accordance with some embodiments. Logic gate 200 is similar to logic gate 100 but for removing the third input Vin3 and adding an input Vbias. This additional input bias makes the logic gate a threshold gate 204. Threshold gate 204 is referred to as a 3-input threshold gate because of the three inputs Vin1, Vin2, and Vbias. It can also be referred to as a 2-input threshold gate if the Vbias input is not counted as a separate input. In various embodiments, threshold gate 204 comprises an additional capacitor Cbias that has one terminal coupled to node cn and another terminal coupled to Vbias. The material for capacitor Cbias can be the same as the material for capacitors C1, C2, and C3. For example, capacitor Cbias comprises non-ferroelectric material. Vbias can be a positive or negative voltage depending on the desired logic function of threshold gate 204. Any suitable source can generate Vbias. For example, a bandgap reference generator, a voltage divider such as a resistor divider, a digital to analog converter (DAC), etc. can generate Vbias. Vbias can be fixed or programmable (or adjustable). For example, Vbias can be adjusted by hardware (e.g., fuses, register), or software (e.g., operating system). In some embodiments, when Vbias is positive, the majority function on node cn is an OR function. For example, the function at node cn is OR(Vin1, Vin2, 0). In some embodiments, when Vbias is negative, the majority function on node cn is an AND function. For example, the function at node cn is AND(Vin1, Vin2, 1). Table 3 and Table 4 summarize the function of threshold gate 204.

TABLE 3
Vin1  Vin2  Vbias                 cn OR(Vin1, Vin2, Vbias)
0     0     Positive or logic 1   0
0     1     Positive or logic 1   1
1     0     Positive or logic 1   1
1     1     Positive or logic 1   1

TABLE 4
Vin1  Vin2  Vbias                 cn AND(Vin1, Vin2, Vbias)
0     0     Negative or logic 0   0
0     1     Negative or logic 0   0
1     0     Negative or logic 0   0
1     1     Negative or logic 0   1

Compared to traditional CMOS AND logic gates and OR logic gates, here the AND function and OR function are performed by a network of capacitors. The output of the majority or threshold function on node cn is then stored in the non-linear polar capacitor 105. This capacitor provides the final state of the logic in a non-volatile form.
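The behavior tabulated in Tables 3 and 4 above can be captured in a few lines of code. The sketch below models the bias capacitor simply as one more logic-valued input to the majority function; the 0/1 encoding of the negative/positive bias is an assumption of this model, not a statement about circuit voltages.

```python
# Threshold gate model: majority(a, b, 1) = a OR b, majority(a, b, 0) = a AND b.
def majority(*bits):
    return int(sum(bits) > len(bits) // 2)

def threshold_gate(a, b, bias_positive):
    bias_bit = 1 if bias_positive else 0   # positive bias acts like a logic 1 input
    return majority(a, b, bias_bit)

for a in (0, 1):
    for b in (0, 1):
        assert threshold_gate(a, b, True) == (a | b)    # Table 3 (OR)
        assert threshold_gate(a, b, False) == (a & b)   # Table 4 (AND)
```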
As such, the logic gate of various embodiments describes a non-volatile multi-input AND or OR gate with one or two transistors for pre-discharging or pre-charging nodes cn and n1. The silicon area of the AND or OR gates of various embodiments is orders of magnitude smaller than traditional AND or OR gates. While FIG. 2A illustrates a 3-input threshold gate, the same concept can be extended to more than 3 inputs to make an N-input threshold gate, where N is greater than 2 and an odd number. The reset mechanism of FIG. 2A is similar to the one described with reference to FIG. 1A. FIG. 2B illustrates logic gate 220 with a 3-input threshold gate, with pass-gate based reset mechanism, where the 3-input threshold gate can operate as an AND or OR gate, in accordance with some embodiments. Logic 220 is similar to logic 200 but for the reset mechanism. The reset mechanism of FIG. 2B is similar to the one described with reference to FIG. 1F. FIG. 2C illustrates logic gate 230 with a 3-input threshold gate, with input resetting mechanism, where the 3-input threshold gate can operate as an AND or OR gate, in accordance with some embodiments. Logic 230 is similar to logic 200 but for the reset mechanism. The reset mechanism of FIG. 2C is similar to the one described with reference to FIG. 1H. FIG. 2D illustrates logic gate 240 with a 5-input AND/OR majority gate 222 which can operate as an AND or OR gate with majority function, in accordance with some embodiments. For purposes of explaining the 5-input AND/OR majority gate 222, consider the capacitances to be Cbias=C3=C4=C/2, C1=C, and C2=C with corresponding input potentials: Vbias=Vß, Vin3=VA, Vin4=VB, Vin1=VC, and Vin2=VS, where Vß=−Vo is a constant bias voltage and the rest are binary input voltages of +/−Vo for some yet to be determined Vo. Gate 222 has a function of Majority(A AND B, C, S). Here, the AND gate function is absorbed into the majority gate at the cost of a bias voltage. If both VS=VC=+Vo, then regardless of VA and VB, it is desired that the output is greater than Vc in magnitude, the coercive voltage. For VA=VB=−Vo, the average potential is expressed as:

VF = (C·2Vo − (C/2)·2Vo − Vo·(C/2)) / (3.5C)   (11)

VF = (1/7)·Vo > Vc   (12)

If VA=VB=+Vo and VC=VS=−Vo, the following is achieved:

VF = (−C·2Vo + (C/2)·2Vo − Vo·(C/2)) / (3.5C)   (13)

VF = −(3/7)·Vo < −Vc   (14)

To check the equivalence to an AND operation, consider VC=−VS=Vo, then:

VF = ((VA + VB − Vo)·(C/2)) / (3.5C)   (15)

VF ∈ {−(3/7)·Vo, −(1/7)·Vo, +(1/7)·Vo}   (16)

As designed, merely when VA=VB=+Vo does gate 222 produce a positive output. It is further observed that all outputs are greater than Vc in magnitude by setting Vo>7Vc, in accordance with some embodiments. Here, the AND function is performed between Vin3 and Vin4, and the resulting output is used to perform the majority function with Vin1 and Vin2, which is described as: Majority(Vin3 AND Vin4, Vin1, Vin2). Table 5 illustrates the truth table of AND majority gate 222. Applying a negative voltage on Vbias can be akin to applying an input signal logic low as well.

TABLE 5
Vin1  Vin2  Vin3  Vin4  Vbias     cn Majority of AND(Vin3, Vin4), Vin1, Vin2, Vbias
0     0     0     0     Negative  0
0     0     0     1     Negative  0
0     0     1     0     Negative  0
0     0     1     1     Negative  0
0     1     0     0     Negative  0
0     1     0     1     Negative  0
0     1     1     0     Negative  0
0     1     1     1     Negative  1
1     0     0     0     Negative  0
1     0     0     1     Negative  0
1     0     1     0     Negative  0
1     0     1     1     Negative  1
1     1     0     0     Negative  1
1     1     0     1     Negative  1
1     1     1     0     Negative  1
1     1     1     1     Negative  1

In the OR majority function case, the OR function is performed between Vin3 and Vin4, and the resulting output is used to perform the majority function with Vin1 and Vin2, which is described as: Majority(Vin3 OR Vin4, Vin1, Vin2). Table 6 illustrates the truth table of OR majority gate 222.
Applying a positive voltage on Vbias can be akin to applying an input signal logic high as well.

TABLE 6
Vin1  Vin2  Vin3  Vin4  Vbias     cn Majority of OR(Vin3, Vin4), Vin1, Vin2, Vbias
0     0     0     0     Positive  0
0     0     0     1     Positive  0
0     0     1     0     Positive  0
0     0     1     1     Positive  0
0     1     0     0     Positive  0
0     1     0     1     Positive  1
0     1     1     0     Positive  1
0     1     1     1     Positive  1
1     0     0     0     Positive  0
1     0     0     1     Positive  1
1     0     1     0     Positive  1
1     0     1     1     Positive  1
1     1     0     0     Positive  1
1     1     0     1     Positive  1
1     1     1     0     Positive  1
1     1     1     1     Positive  1

Logic gate 222 can perform AND majority and OR majority functions depending on the bias value for Vbias. Here, merely two transistors (MN1 and MN2), which can be condensed into a single transistor for pre-charging or pre-discharging nodes cn and n1, are used while a complex function of AND majority and OR majority is realized. In various embodiments, majority gate 222 coupled to inverter 106 forms a minority threshold gate (majority-invert threshold), resulting in a universal logic gate. FIG. 3A illustrates waveform 300 showing operation of the 3-input majority gate of FIG. 1B, in accordance with some embodiments. FIG. 3A illustrates a majority function of inputs Vin1, Vin2, and Vin3. FIGS. 3B-E illustrate waveforms 320, 330, 340, and 350 showing operation of a 5-input threshold gate with different Vbias values, respectively, in accordance with some embodiments. FIG. 4 illustrates combinational logic 400 including the logic gate of FIG. 1A with a 3D (three-dimensional) view of the 3-input majority gate that couples to an inverter or buffer, in accordance with some embodiments. Any of the reset mechanisms described herein (e.g., with reference to FIGS. 1A-I) are applicable to logic 400. In this example, capacitors C1 (401), C2 (402), and C3 (403) are MIM capacitors that receive inputs Vin1, Vin2, and Vin3, respectively, on their first terminals from buffers or drivers 101, 102, and 103, respectively. However, other types of capacitors can be used. For example, a hybrid of metal and transistor can be used to implement the capacitor. The second terminals of capacitors C1 (401), C2 (402), and C3 (403) are coupled to common node interconnect 404 (Vout_int1). The outputs of drivers 101, 102, and 103 are Vin1d, Vin2d, and Vin3d, respectively. Interconnect 404 can be on any suitable metal layer. In some embodiments, interconnect 404 comprises a material which includes one or more of: Cu, Al, Ag, Au, Co, or W. In some embodiments, capacitors C1 (401), C2 (402), and C3 (403) are formed in the backend of the die. In some embodiments, capacitors C1 (401), C2 (402), and C3 (403) are formed in the frontend of the die. Interconnect 404 is coupled to a first terminal of non-linear polar capacitor 105. In this example, capacitor 105 comprises ferroelectric material and is hence labeled as CFE. However, other non-linear polar materials described herein can be used to fabricate capacitor 105. The second terminal of capacitor 105 is coupled to node n1 (Vout_int2). In some embodiments, capacitor 105 is a pillar capacitor. A pillar capacitor is taller than its width and allows for compact layout in the z-direction. In one embodiment, capacitors C1 (401), C2 (402), and C3 (403) are fabricated below or under the pillar capacitor, forming a vertical majority gate. FIG. 5 illustrates combinational logic 500 having the logic gate of FIG. 1B with a 3D view of the 3-input threshold gate that couples to an inverter or buffer, in accordance with some embodiments. Here, 3-input threshold gate 204 is similar to the majority gate of FIG. 4 but for removing capacitor C3 and its associated input and adding an extra capacitor 501 Cbias which is biased by Vbias. Vbias can be positive or negative.
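Referring back to the AND/OR majority gate 222 of FIG. 2D, the weighted capacitive summation behind Tables 5 and 6 can be sanity-checked numerically. The sketch below uses the capacitor weights Cbias=C3=C4=C/2 and C1=C2=C from the analysis above with a ±1 encoding for ±Vo; the function names and the encoding are assumptions of this model.

```python
# Weighted-average model of gate 222: the sign of the capacitively averaged node
# voltage reproduces Majority(AND/OR(Vin3, Vin4), Vin1, Vin2) from Tables 5 and 6.
def gate222(v1, v2, v3, v4, vbias):
    weights = (1.0, 1.0, 0.5, 0.5, 0.5)          # C1, C2, C3, C4, Cbias
    signals = (v1, v2, v3, v4, vbias)            # encoded as +1 (=+Vo) or -1 (=-Vo)
    vf = sum(w * s for w, s in zip(weights, signals)) / sum(weights)
    return 1 if vf > 0 else 0

pm = lambda bit: 1 if bit else -1                # 0/1 -> -1/+1 encoding

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            for d in (0, 1):
                # negative bias: AND-majority (Table 5); positive bias: OR-majority (Table 6)
                assert gate222(pm(a), pm(b), pm(c), pm(d), -1) == int((c & d) + a + b >= 2)
                assert gate222(pm(a), pm(b), pm(c), pm(d), +1) == int((c | d) + a + b >= 2)
```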
The various embodiments described with reference to FIG. 1B and FIG. 4 are applicable here. Any of the reset mechanisms described herein (e.g., with reference to FIGS. 1A-I) are applicable to logic 500. FIG. 6A illustrates 1-bit full adder 600 comprising a 3-input majority gate and a 5-input majority gate, in accordance with some embodiments. A full adder adds binary numbers and accounts for values carried in as well as values that are output. A one-bit full adder adds three one-bit numbers, A, B, and Cin, where A and B are the operands, and Cin is a bit carried in from the previous less-significant stage. However, the embodiments are not limited to the inputs being binary. In some embodiments, the inputs are analog signals. The full adder is usually a component in a cascade of adders, which add 8, 16, 32, etc. bit binary numbers. The circuit produces a 2-bit output, with a carry out Cout and a sum. The result is typically represented by the signals Cout and S, where the result equals 2Cout+S. Implementing a 1-bit adder with complementary metal oxide semiconductor (CMOS) logic requires many logic gates such as AND logic gates, OR logic gates, inverters, and sometimes state elements such as flip-flops. In some embodiments, a 1-bit adder is implemented with 3-input majority gate 601, inverter 602, 5-input majority gate 603, inverter 604, and buffer 605. An output n1 from 3-input majority gate 601 is inverted by inverter 602. The inverted output Cb is input two times (as inputs Vin1 and Vin2) to 5-input majority gate 603. To keep the polarity of Cout correct, an additional inverter 604 drives the output of Cb as Cout. Other inputs (A, B, and Cin) to the 5-input majority gate are the same as those of the 3-input majority gate. The output Sum_d of the 5-input majority gate 603 is a sum while the output of the 3-input majority gate 601 is the carry. In various embodiments, the output Sum_d is buffered by buffer 605 to generate the final Sum for driving to a next stage. Table 7 illustrates the truth table of the 1-bit full adder.

TABLE 7
Inputs           Outputs
A  B  Cin        Carry out (Cout)  Sum
0  0  0          0                 0
0  0  1          0                 1
0  1  0          0                 1
0  1  1          1                 0
1  0  0          0                 1
1  0  1          1                 0
1  1  0          1                 0
1  1  1          1                 1

The 1-bit full adder 600 of FIG. 6A is scaled down to eight capacitors that can be fabricated or positioned in the backend of the die. The active devices or transistors of inverters 602, 604, and buffer 605 can be fabricated in the frontend or backend depending on the transistor technology. While each majority gate is shown to have two additional transistors MN1 and MN2 to discharge common node cn and node n1, these transistors can be shared between the two majority gates 601 and 603. In some embodiments, a single transistor MN (or a p-type transistor, not shown) can be used to pre-discharge (or pre-charge, if a p-type transistor is used) common node cn and node n1 for both majority gates 601 and 603. As such, nine transistors can implement a 1-bit full adder, which is much smaller in area and power footprint than traditional CMOS based 1-bit full adders. Another way to describe the 1-bit full adder is in view of linear and non-linear outputs generated by various circuitries of 1-bit full adder 600. In some embodiments, adder 600 comprises 3-input majority gate 601 including a first circuitry (e.g., interconnects and capacitors C1, C2, and C3) to receive at least three signals (A, B, and Cin) and apply linear summation to the at least three signals, and generate a first summed output on node cn. In various embodiments, A, B, and Cin are driven by CMOS drivers with full rail-to-rail signaling.
The 3-input majority gate601comprises a second circuitry (e.g., interconnect cn, capacitor105) to receive the first summed output (e.g., voltage and/or current on node cn) and apply non-linear function via a first FE material (e.g., by capacitor105), wherein the second circuitry to generate a first non-linear output (e.g., on node n1) compared to the first summed output (e.g., on node cn). Adder600further comprises an inverting logic gate602to convert the first non-linear output to a first output Cb, wherein the first output is an inversion of the first non-linear output. The inverting logic gate602can be an inverter, a NAND gate, or NOR gate, wherein the NAND and NOR gates are configured as inverters and are capable of disabling the signal path. Adder600further comprises a 5-input majority gate603coupled to the inverting logic gate602. The 5-input majority gate603comprises a third circuitry (e.g., interconnects and capacitors C1, C2, C3, C4, C5) to receive at least five signals including the at least three signals (e.g., A, B, Cin) and two inverted first outputs (2× Cb), and apply linear summation to the at least five signals, and generate a second summed output on a common node. The 5-input majority gate603comprises a fourth circuitry (e.g., the common node and capacitor105) to receive the second summed output and apply non-linear function via a second FE material, wherein the fourth circuitry to generate a second non-linear output compared to the second summed output. The output voltage developed on the second FE material is the sum output, which can be buffered by buffer605. FIG.6Billustrates 1-bit full adder640, in accordance with some embodiments. Adder640is another version of adder600. Adder640comprises first 3-input majority gate641, first inverting logic642, second inverting logic643, third inverting logic644, second 3-input majority gate645, first non-inversion logic646, third 3-input majority gate647, and second non-inversion logic648coupled as shown. The inverting logic can be any suitable inverting logic such as an inverter, tri-state inverter, NAND gate, NOR gate, or multiplexer configured as an inverter. The non-inverting logic can be any suitable non-inverting logic such as a buffer, amplifier, etc. The first 3-input majority gate641generates the carry-out signal, which is inverted and provided as input to the third 3-input majority gate647. The output of the third 3-input majority gate647is the sum. The third 3-input majority gate647receives the carry-in input and a buffered output (buffered by buffer646) of the second 3-input majority gate645. The second 3-input majority gate645receives inputs A and B, and an inverted version of carry-in. The output of the third 3-input majority gate647is buffered by buffer648to generate the sum. FIG.7illustrates plot700showing operation of 1-bit full adder ofFIG.6A, in accordance with some embodiments. The waveforms show the various input combinations of Table 7, and the outputs Coutand Sum. FIG.8illustrates 3-D view800of a 1-bit full adder, in accordance with some embodiments. Here, inputs A, B, and Cin are driven by buffers101,102, and103, respectively. These buffers may or may not be part of the adder since these inputs are driven by another logic block (not shown). The 3-input majority gate receives inputs A_d, B_d, and Cin_d, which are buffered versions of input signals A, B, and Cin.
In this example, capacitors C1a(401), C2a(402), and C3a(403) are MIM capacitors that receive the inputs A_d, B_d, and Cin_d, respectively, on their first terminals. However, other types of capacitors can be used. For example, a hybrid of metal and transistor can be used to implement the capacitor. The second terminals of capacitors C1a(401), C2a(402), and C3a(403) are coupled to common node cn interconnect404. Interconnect404can be on any suitable metal layer. In some embodiments, interconnect404comprises a material which includes one or more of: Cu, Al, Ag, Au, Co, or W. In some embodiments, capacitors C1a(401), C2a(402), and C3a(403) are formed in the backend of the die. In some embodiments, capacitors C1a(401), C2a(402), and C3a(403) are formed in the frontend of the die. Interconnect404is coupled to a first terminal of non-linear polar capacitor105. In this example, capacitor105comprises ferroelectric material and hence labeled as CFE. However, other non-linear polar material described herein can be used to fabricate capacitor105. The second terminal of capacitor105is coupled to node n1. In some embodiments, capacitor105is a pillar capacitor. A pillar capacitor is taller than its width and allows for compact layout in the z-direction. In one embodiment, capacitors C1a(401), C2a(402), and C3a(403) are fabricated below or under the pillar capacitor, forming a vertical majority gate. The voltage on node n1is the carry out signal, which is inverted by inverter602and driven as Cb to capacitors C1band C2b. Other capacitors C3b, C4b, and C5bof the 5-input majority gate receive inputs A_d, B_d, and Cin_d, respectively. In this example, capacitors C1b(801), C2b(802), C3b(803), C4b(804), and C5b(805) are MIM capacitors that receive their respective inputs (Cb, Cb, A_d, B_d, and Cin_d) on their first terminals. However, other types of capacitors can be used. For example, a hybrid of metal and transistor can be used to implement the capacitor. The second terminals of capacitors C1b(801), C2b(802), C3b(803), C4b(804), and C5b(805) are coupled to common node interconnect806. Interconnect806can be on any suitable metal layer. In some embodiments, interconnect806comprises a material which includes one or more of: Cu, Al, Ag, Au, Co, or W. In some embodiments, capacitors C1b(801), C2b(802), C3b(803), C4b(804), and C5b(805) are formed in the backend of the die. In some embodiments, capacitors C1b(801), C2b(802), C3b(803), C4b(804), and C5b(805) are formed in the frontend of the die. Interconnect806is coupled to a first terminal of non-linear polar capacitor807. In this example, capacitor807comprises ferroelectric material and hence labeled as CFE. However, other non-linear polar material described herein can be used to fabricate capacitor807. The second terminal of capacitor807is coupled to node Sum_d. Buffer605drives Sum_d as Sum. FIG.9illustrates top-down layout900of a 1-bit full adder, in accordance with some embodiments. Layout900illustrates a compact layout of 1-bit full adder600with a pitch of four minimum sized transistors. The pitch can be further reduced to two minimum sized transistors if a single transistor MN1is used to pre-discharge common nodes cn (404and806) and node n1for both the 3-input majority gate601and the 5-input majority gate603. Non-ferroelectric capacitors C1b(801), C2b(802), C3b(803), C4b(804), and C5b(805) and non-linear polar capacitors (FE cap) are positioned in the place of via for metal layer1(M1) to metal layer2(M2). Transistors MN1, MN2, and inverters602and604are in the frontend of the die.
Inputs A, B, and Cin are on metal layer M2. Common nodes cn404and806are on metal layer M1. While non-ferroelectric capacitors C1, C2, C3, C4, and C5, and non-linear polar capacitors (FE cap) are positioned in the location of ViaM1M2, they can be located further up in the backend of the die. For example, non-ferroelectric capacitors C1, C2, C3, C4, and C5and the non-linear polar capacitors (FE cap) can be positioned in ViaM4M5or higher. As such, lower metal layers are freed up for routing of other signals. Transistors MN1and MN2, and those of inverters602and604, can be planar or non-planar transistors. In some embodiments, transistors MN1and MN2, and those of inverters602and604, can be formed in the frontend or backend. In some embodiments, one or more of non-ferroelectric capacitors C1, C2, C3, C4, and C5and non-linear polar capacitor (FE cap) are formed in the frontend or backend. While transistors MN1and MN2are illustrated as n-type transistors, they can be replaced with p-type transistors. In that case, nodes cn404/806and n1/sum_d are pre-charged to a predetermined or programmable voltage. The transistors here can be Square Wire, Rectangular Ribbon Transistors, Gate All Around Cylindrical Transistors, Tunneling FETs (TFET), ferroelectric FETs (FeFETs), bipolar transistors (BJT), BiCMOS, or other devices implementing transistor functionality, for instance, carbon nanotubes or spintronic devices. In some embodiments, the transistors are typical metal oxide semiconductor (MOS) transistors or their derivatives including Tri-Gate and FinFET transistors. While MOSFETs have symmetrical source and drain terminals, a TFET device has asymmetric source and drain terminals. FIG.10Aillustrates multiplier cell1000comprising the 1-bit full adder and AND gate, in accordance with some embodiments. Cell1000comprises a 1-bit full adder (e.g., one of600or640) and an AND gate1002. 1-bit full adder600/640receives inputs A and B along with carry-in (Cin). Input A is a sum-in input and C_in is a carry-in input from another multiplier cell. In an array of multipliers, for the first multiplier cell, Sum_in and C_in have fixed values (e.g., logical 0). The two inputs X and Y are multiplied in view of Sum_in (sum input) and C_in (carry_in) inputs. AND gate1002receives the two inputs X and Y and provides an output which is received as input B by 1-bit full adder600/640. In some embodiments, AND gate1002is a CMOS (or any other transistor technology) based AND gate (e.g., a NAND gate followed by an inverter). In some embodiments, AND gate1002is a threshold gate204. AND gate1002produces a partial multiplication result of multiplying X and Y, while adder600/640adds that partial multiplication result with a multiplication result Sum_in from a previous multiplier cell (not shown) to generate a full multiplication result as Sum_out. The Carry-out (Cout) of adder600/640becomes the Cin for the subsequent multiplier cell. Sum_out can be used as a result and/or as Sum_in for a subsequent multiplier cell. As such, an N×N multiplier is made using the basic multiplier cell1000repeated N×N times and connected as discussed herein. FIG.10Billustrates multiplier cell1020comprising the 1-bit full adder600ofFIG.6Aand an AND gate based onFIG.2A, in accordance with some embodiments. AND gate204/1002receives inputs X and Y, and a bias voltage Vbias. The output of the AND gate204/1002is buffered by buffer1021and provided as input B to Vin2of 3-input majority gate601and Vin4of 5-input majority gate603.
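The dataflow of multiplier cell1000/1020can be summarized with a short behavioral sketch (an illustration with hypothetical function names, not the circuit itself): the cell ANDs its X and Y bits to form a partial product and then performs a 1-bit full add of that partial product with the incoming Sum_in and Cin, passing Sum_out and Cout on to neighboring cells.

```python
# Behavioral sketch of multiplier cell 1000/1020: partial product plus 1-bit full add.
def full_adder(a, b, cin):
    """1-bit full adder (e.g., adder 600/640); returns (sum, carry_out)."""
    total = a + b + cin
    return total % 2, total // 2

def multiplier_cell(x, y, sum_in, cin):
    """Return (sum_out, carry_out) for one multiplier cell."""
    partial_product = x & y                 # AND gate 1002 drives input B of the adder
    return full_adder(sum_in, partial_product, cin)

# For the first cell of an array, Sum_in and C_in are tied to fixed values (e.g., 0).
sum_out, cout = multiplier_cell(x=1, y=1, sum_in=0, cin=0)
assert (sum_out, cout) == (1, 0)
```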
To implement an AND function, Vbias is set to a negative voltage. In some embodiments, the pre-charge or pre-discharge transistors MN1and/or MN2in majority gates204/1002,601, and603, are shared. In some embodiments, a single pre-charge or pre-discharge transistor is shared by majority gates204/1002,601, and603to pre-charge or pre-discharge the nodes across capacitors105. Compared to traditional CMOS multipliers, merely a few transistors are used herein, resulting in a lower power and faster multiplier. Further, fabricating the capacitors (non-ferroelectric capacitors and/or the non-linear polar capacitors) in the backend of the die reduces the footprint (or layout pitch) of the multiplier cell. FIG.10Cillustrates multiplier cell1030comprising the 1-bit full adder640ofFIG.6Band an AND gate based onFIG.2A, in accordance with some embodiments. Multiplier cell1030is the same as multiplier cell1020except for a different implementation of the 1-bit full adder, which uses three 3-input majority gates as described with reference toFIG.6B. FIG.11illustrates multiplier cell1100comprising majority-gate AND gates, in accordance with some embodiments. In some embodiments, multiplier cell1100comprises a first majority AND logic gate1101, a second majority AND logic gate1102, first inversion logic1103, second inversion logic1104, and non-inversion logic1105. First majority AND logic gate1101receives a first input (Vin1), a second input (Vin2), a third input (Vin3); a fourth input (Vin4), and a first bias input (Vbias). The first input Vin1is coupled to input VA, second input Vin2is coupled to input VB, the third input Vin3is coupled to carry-in input Vcin, the fourth input Vin4is coupled to Sum input (Vsum_in), while the first bias input is coupled to Vbias. The output of first majority AND logic gate1101is Vc_out_d which is received as input by first inversion circuitry1103. In various embodiments, the output Vc_out_b of first inversion circuitry1103is received as input by second inversion circuitry1104. The output of second inversion circuitry1104is Vc_out (carry-out output) for the next multiplier cell. Second majority AND logic gate1102receives a first input (Vin1), a second input (Vin2), a third input (Vin3); a fourth input (Vin4), fifth input (Vin5), sixth input (Vin6), and a second bias input (Vbias). The first input Vin1and second input Vin2are coupled to Vc_out_b, output of first inversion circuitry1103. The third input Vin3is coupled to input VA, fourth input Vin4is coupled to input VB, the fifth input Vin5is coupled to carry-in input Vcin, the sixth input Vin6is coupled to Sum input (Vsum_in), while the second bias input is coupled to Vbias. As such, the first input Vin1of first majority AND logic gate1101is coupled to the third input Vin3of second majority AND logic gate1102. The second input Vin2of first majority AND logic gate1101is coupled to the fourth input of the second majority AND logic gate. The third input Vin3of first majority AND logic gate1101is coupled to the fifth input Vin5of second majority AND logic gate1102. The fourth input Vin4of first majority AND logic gate1101is coupled to the sixth input Vin6of the second majority AND logic gate1102. The first bias input and the second bias input are coupled to the same bias Vbias. In some embodiments, the first bias input and the second bias input are coupled to different bias voltages (e.g., Vbias1and Vbias2, respectively).
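Functionally, the two majority-AND gates of cell1100can be read as folding the partial-product AND of VAand VBdirectly into the carry and sum majority functions, rather than using a separate AND gate as inFIGS.10A-C. The sketch below is a behavioral reading under that assumption (consistent with the capacitor weighting described below with reference toFIG.12); it is not a transistor-level or charge-level model.

```python
# Behavioral reading of multiplier cell 1100 (FIG. 11):
#   carry = Majority(VA AND VB, Vcin, Vsum_in)                 -> gate 1101 (then 1103/1104)
#   sum   = Majority(~carry, ~carry, VA AND VB, Vcin, Vsum_in) -> gate 1102 (then 1105)
from itertools import product

def majority(*bits):
    return int(sum(bits) > len(bits) / 2)

def majority_and_cell(va, vb, vcin, vsum_in):
    pp = va & vb                                       # AND folded in by capacitor weighting
    carry = majority(pp, vcin, vsum_in)                # first majority AND gate 1101
    carry_b = 1 - carry                                # first inversion logic 1103
    s = majority(carry_b, carry_b, pp, vcin, vsum_in)  # second majority AND gate 1102
    return s, carry

# Cross-check against the partial-product-plus-adder view of FIG. 10A.
for va, vb, vcin, vsum_in in product((0, 1), repeat=4):
    s, carry = majority_and_cell(va, vb, vcin, vsum_in)
    assert 2 * carry + s == (va & vb) + vcin + vsum_in
```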
The first inversion logic1103can be any suitable inversion logic such as an inverter, a NAND gate configured as an inverter with a controllable input, a NOR gate configured as an inverter with a controllable input, or a multiplexer that provides the output of majority gate1101in normal condition and any other predetermined or programmable input as output on Vc_out_b. The second inversion logic1104can be any suitable inversion logic such as an inverter, a NAND gate configured as an inverter with a controllable input, a NOR gate configured as an inverter with a controllable input, or a multiplexer that provides the output of first inversion circuitry1103in normal condition and any other predetermined or programmable input as output on Vc_out. In some embodiments, non-inversion logic1105comprises a buffer or any other non-inversion logic such as a non-inverting amplifier, an AND gate with a controllable input that can mask the output Vsum_out, an OR gate with a controllable input that can mask the output Vsum_out, etc. FIG.12illustrates schematic1200of the multiplier cell ofFIG.11, in accordance with some embodiments. In some embodiments, first majority AND gate1101comprises a common node cn, a first capacitor C1, a second capacitor C2, a third capacitor C3, and a fourth capacitor C4. The first capacitor C1has a first terminal to receive the first input Vin1of first majority AND gate1101, and a second terminal coupled to the node cn. The second capacitor C2has a first terminal to receive the second input Vin2of first majority AND gate1101, and a second terminal coupled to the node cn. The third capacitor C3has a first terminal to receive the third input Vin3of the first majority AND gate1101, and a second terminal coupled to the node cn. The fourth capacitor C4has a first terminal to receive the fourth input Vin4of first majority AND gate1101, and a second terminal coupled to the node cn. The fifth capacitor Cbias has a first terminal coupled to the node cn, and a second terminal coupled to the first bias input Vbias. First majority AND gate1101comprises a sixth capacitor105comprising non-linear polar material. The sixth capacitor105includes a first terminal coupled to the node cn and a second terminal coupled to the input Vc_out_d of first inversion circuitry1103. Here, resistors R1, R2, R3, R4, R5, and R6are interconnect parasitic resistances, and capacitors Cp1, Cp2, Cp3, Cp4, Cp5, and Cp6are interconnect parasitic capacitances coupled to capacitors C1, C2, C3, C4, C5, and C6, respectively. In some embodiments, the third capacitor C3and the fourth capacitor C4have a first capacitance (e.g., C), while the first capacitor C1, the second capacitor C2, and fifth capacitor C5have a second capacitance (e.g., C/2), wherein the first capacitance is higher than the second capacitance. In some embodiments, the first capacitance is substantially twice as large as the second capacitance. In some embodiments, the first, second, third, fourth, and fifth capacitors comprise one of: metal-insulator-metal (MIM) capacitor, transistor gate capacitor, hybrid of metal and transistor capacitor; capacitor comprising para-electric material; non-linear dielectric capacitor, or linear dielectric capacitor. In some embodiments, the non-linear polar material includes one of ferroelectric material, para-electric material, or non-linear dielectric. In some embodiments, the ferroelectric material includes Bismuth ferrite (BFO), BFO with a doping material wherein the doping material is one of Lanthanum, or elements from the lanthanide series of the periodic table.
In some embodiments, the ferroelectric material includes Lead zirconium titanate (PZT), or PZT with a doping material, wherein the doping material is one of La or Nb. In some embodiments, the ferroelectric material includes a relaxor ferroelectric, which includes one of lead magnesium niobate (PMN), lead magnesium niobate-lead titanate (PMN-PT), lead lanthanum zirconate titanate (PLZT), lead scandium niobate (PSN), Barium Titanium-Bismuth Zinc Niobium Tantalum (BT-BZNT), or Barium Titanium-Barium Strontium Titanium (BT-BST). In some embodiments, the ferroelectric material includes perovskite, which includes one of: BaTiO3, PbTiO3, KNbO3, or NaTaO3. In some embodiments, the ferroelectric material includes hexagonal ferroelectric, which includes one of: YMnO3 or LuFeO3. In some embodiments, the ferroelectric material includes hexagonal ferroelectrics of a type h-RMnO3, where R is a rare earth element such as cerium (Ce), dysprosium (Dy), erbium (Er), europium (Eu), gadolinium (Gd), holmium (Ho), lanthanum (La), lutetium (Lu), neodymium (Nd), praseodymium (Pr), promethium (Pm), samarium (Sm), scandium (Sc), terbium (Tb), thulium (Tm), ytterbium (Yb), or yttrium (Y). In some embodiments, the ferroelectric material includes Hafnium (Hf), Zirconium (Zr), Aluminum (Al), Silicon (Si), their oxides or their alloyed oxides. In some embodiments, the ferroelectric material includes Hafnium oxides of the form Hf(1-x)Ex Oy, where E can be Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y. In some embodiments, the ferroelectric material includes Al(1-x)Sc(x)N, Ga(1-x)Sc(x)N, Al(1-x)Y(x)N or Al(1-x-y)Mg(x)Nb(y)N, y doped HfO2, where x includes one of: Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y, wherein ‘x’ is a fraction. In some embodiments, the ferroelectric material includes Niobate type compounds LiNbO3, LiTaO3, Lithium iron Tantalum Oxy Fluoride, Barium Strontium Niobate, Sodium Barium Niobate, or Potassium strontium niobate. In some embodiments, the ferroelectric material includes an improper ferroelectric, which includes one of: [PTO/STO]n or [LAO/STO]n, where ‘n’ is between 1 and 100. In some embodiments, the capacitor comprising non-linear polar material is positioned in a backend of a die, while transistors of first inversion circuitry1103, second inversion circuitry1104, and/or non-inversion circuitry1105are positioned in a frontend of a die. In some embodiments, non-inversion circuitry1105is coupled to an output n1of second majority AND logic gate1102. In some embodiments, second majority AND gate1102comprises a common node cn, a first capacitor C1, a second capacitor C2, a third capacitor C3, a fourth capacitor C4, a fifth capacitor C5, a sixth capacitor C6, a seventh capacitor Cbias, and an eighth capacitor105. The first capacitor C1of second majority AND gate1102has a first terminal to receive the output Vc_out_b of first inversion circuitry1103and a second terminal coupled to the node cn of second majority AND gate1102. The second capacitor C2of second majority AND gate1102has a first terminal to receive the output of the first inversion circuitry1103and a second terminal coupled to the node cn of second majority AND gate1102. Third capacitor C3of second majority AND gate1102has a first terminal to receive the first input VA(Vin1) of first majority AND gate1101, and a second terminal coupled to the node cn of second majority AND gate1102.
The fourth capacitor C4of second majority AND gate1102has a first terminal to receive the second input VB(Vin2) of first majority AND gate1101, and a second terminal coupled to the node cn of second majority AND gate1102. The fifth capacitor C5of second majority AND gate1102has a first terminal coupled to third input Vcin(Vin3) of first majority AND gate1101, and a second terminal coupled to the node cn of second majority AND gate1102. The sixth capacitor C6of second majority AND gate1102has a first terminal coupled to the fourth input Vsum_in(Vin4), and a second terminal coupled to the node cn of second majority AND gate1102. The seventh capacitor Cbias of second majority AND gate1102has a first terminal coupled to the second bias input Vbias and a second terminal coupled to the node cn of second majority AND gate1102. The eighth capacitor is non-linear polar capacitor105of second majority AND gate1102and includes non-linear polar material. This non-linear polar capacitor105includes a first terminal coupled to the node cn and a second terminal coupled to an input Vsum_out_dof non-inversion circuitry1105. In some embodiments, with reference to second majority gate1102, the first capacitor C1, second capacitor C2, fifth capacitor C5and the sixth capacitor C6have a first capacitance (e.g., C), wherein the third capacitor C3, fourth capacitor C4, and seventh capacitor Cbias have a second capacitance (e.g., C/2), wherein the first capacitance is higher than the second capacitance. In some embodiments, the first capacitance is substantially twice as large as the second capacitance. In some embodiments, with reference to second majority gate1102, the first C1, second C2, third C3, fourth C4, fifth C5, and sixth C6capacitors comprise one of: metal-insulator-metal (MIM) capacitor, transistor gate capacitor, hybrid of metal and transistor capacitor; capacitor comprising para-electric material; non-linear dielectric capacitor, or linear dielectric capacitor. The material for capacitor105of second majority gate1102can be any of the material discussed with reference to capacitor105of first majority gate1101. In some embodiments, non-inversion circuitry1105comprises one of a buffer, a non-inverting amplifier, or any other suitable driver (e.g., AND, OR) that can be configured to drive a non-inverting output. The output of non-inversion circuitry1105is Vsum_out. In various embodiments, both terminals of capacitor105of first and second majority AND gates1101and1102, respectively, are coupled to transistors MN1and MN2controllable by clock signals Clk1and Clk2respectively, to pre-discharge nodes cn and n1. In some embodiments, transistors MN1and MN2are replaced with p-type transistors coupled to a supply rail Vcc to pre-charge nodes cn and n1. In some embodiments, clock signals Clk1and Clk2are the same. In some embodiments, Clk1and Clk2signals are phase offset from one another. In some embodiments, a single transistor MN or MP can be used to pre-discharge or pre-charge node cn and/or n1of gates1101and/or1102. FIG.13illustrates N×N multiplier1300comprising majority-gate AND gates, in accordance with some embodiments. To form N×N multiplier1300, AND-Majority gates1100are organized in an array (e.g., rows and columns), where N is a number. Inputs A are shown as columns while inputs B are shown as rows. The first row of majority gate multipliers with integrated AND functions (e.g.,110000to110004) has sum_input (Si) and carry-in input (Ci) that are set to predetermined or programmable values (e.g., 0).
In some embodiments, full 1-bit adders (e.g.,600,640) that do not receive input B from another multiplier cell, have that input set to a predetermined or programmable value (e.g., 0). The values can be programmed by software (e.g., firmware, operating system) or hardware (e.g., fuses, registers). Full 1-bit adder (e.g.,600/640) is provided for each column that sums a locally computed partial product (X·Y), an input passed into the majority gate multiplier cell from above (Sum In), and a carry Ci passed from a majority gate multiplier cell diagonally above. It generates a carry-out (Cout or Co) and a new sum (Sum Out or So). N×N multiplier1300shows the interconnection of 16 of these majority gate multiplier cells to implement the full multiplier function. However, any number of majority gate multiplier cells can be used. The input Ai values are distributed along cell diagonals and the input Bi values are passed along rows. This implementation uses the same gate count as the previous one: 16 AND gates and 12 adders. In various embodiments, the top row may not use adders. The outputs S0though S6of 1-bit adders (e.g.,6000to6006) are the results of the bit-wise multiplication. While the embodiments ofFIG.13are illustrated with reference to multiplier cells having majority gates with integrated AND function, they are all applicable to multiplier cells ofFIGS.10A-C. FIG.14illustrates a system-on-chip (SOC)1400that includes a multiplier cell or N×N multiplier, in accordance with some embodiments. SOC1400comprises memory1401having static random-access memory (SRAM) or FE based random access memory FE-RAM, or any other suitable memory. The memory can be non-volatile (NV) or volatile memory. Memory1401may also comprise logic1403to control memory1402. For example, write and read drivers are part of logic1403. These drivers and other logic are implemented using the majority or threshold gates of various embodiments. The logic can comprise majority or threshold gates and traditional logic (e.g., CMOS based NAND, NOR etc.). Any block of SOC1400can include the 1-bit full adder described with reference to the various embodiments. SOC further comprises a memory I/O (input-output) interface1404. The interface may be double-data rate (DDR) compliant interface or any other suitable interface to communicate with a processor. Processor1405of SOC1400can be a single core or multiple core processor. Processor1405can be a general-purpose processor (CPU), a digital signal processor (DSP), or an Application Specific Integrated Circuit (ASIC) processor. In some embodiments, processor1405is an artificial intelligence (AI) processor (e.g., a dedicated AI processor, a graphics processor configured as an AI processor). AI is a broad area of hardware and software computations where data is analyzed, classified, and then a decision is made regarding the data. For example, a model describing classification of data for a certain property or properties is trained over time with large amounts of data. The process of training a model requires large amounts of data and processing power to analyze the data. When a model is trained, weights or weight factors are modified based on outputs of the model. Once weights for a model are computed to a high confidence level (e.g., 95% or more) by repeatedly analyzing data and modifying weights to get the expected results, the model is deemed “trained.” This trained model with fixed weights is then used to make decisions about new data. 
Training a model and then applying the trained model for new data is a hardware-intensive activity. In some embodiments, AI processor1405has reduced latency of computing the training model and using the training model, which reduces the power consumption of such AI processor systems. Processor1405may be coupled to a number of other chip-lets that can be on the same die as SOC1400or on separate dies. These chip-lets include connectivity circuitry1406, I/O controller1407, power management1408, display system1409, and peripheral connectivity1410. Connectivity1406represents hardware devices and software components for communicating with other devices. Connectivity1406may support various connectivity circuitries and standards. For example, connectivity1406may support GSM (global system for mobile communications) or variations or derivatives, CDMA (code division multiple access) or variations or derivatives, TDM (time division multiplexing) or variations or derivatives, 3rd Generation Partnership Project (3GPP) Universal Mobile Telecommunications Systems (UMTS) system or variations or derivatives, 3GPP Long-Term Evolution (LTE) system or variations or derivatives, 3GPP LTE-Advanced (LTE-A) system or variations or derivatives, Fifth Generation (5G) wireless system or variations or derivatives, 5G mobile networks system or variations or derivatives, 5G New Radio (NR) system or variations or derivatives, or other cellular service standards. In some embodiments, connectivity1406may support non-cellular standards such as WiFi. I/O controller1407represents hardware devices and software components related to interaction with a user. I/O controller1407is operable to manage hardware that is part of an audio subsystem and/or display subsystem. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of SOC1400. In some embodiments, I/O controller1407illustrates a connection point for additional devices that connect to SOC1400through which a user might interact with the system. For example, devices that can be attached to the SOC1400might include microphone devices, speaker or stereo systems, video systems or other display devices, keyboard or keypad devices, or other I/O devices for use with specific applications such as card readers or other devices. Power management1408represents hardware or software that performs power management operations, e.g., based at least in part on receiving measurements from power measurement circuitries, temperature measurement circuitries, charge level of battery, and/or any other appropriate information that may be used for power management. By using majority and threshold gates of various embodiments, non-volatility is achieved at the output of these logic gates. Power management1408may accordingly put such logic into low power state without the worry of losing data. Power management may select a power state according to Advanced Configuration and Power Interface (ACPI) specification for one or all components of SOC1400. Display system1409represents hardware (e.g., display devices) and software (e.g., drivers) components that provide a visual and/or tactile display for a user to interact with the processor1405. In some embodiments, display system1409includes a touch screen (or touch pad) device that provides both output and input to a user. Display system1409may include a display interface, which includes the particular screen or hardware device used to provide a display to a user.
In some embodiments, the display interface includes logic separate from processor1405to perform at least some processing related to the display. Peripheral connectivity1410may represent hardware devices and/or software devices for connecting to peripheral devices such as printers, chargers, cameras, etc. Peripheral connectivity1410may support communication protocols, e.g., PCIe (Peripheral Component Interconnect Express), USB (Universal Serial Bus), Thunderbolt, High Definition Multimedia Interface (HDMI), Firewire, etc. Reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of “an embodiment,” “one embodiment,” or “some embodiments” are not necessarily all referring to the same embodiments. If the specification states a component, feature, structure, or characteristic “may,” “might,” or “could” be included, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to “a” or “an” element, that does not mean there is only one of the elements. If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional elements. Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment anywhere the particular features, structures, functions, or characteristics associated with the two embodiments are not mutually exclusive. While the disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications and variations of such embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. The embodiments of the disclosure are intended to embrace all such alternatives, modifications, and variations as to fall within the broad scope of the appended claims. In addition, well-known power/ground connections to integrated circuit (IC) chips and other components may or may not be shown within the presented figures, for simplicity of illustration and discussion, and so as not to obscure the disclosure. Further, arrangements may be shown in block diagram form in order to avoid obscuring the disclosure, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the present disclosure is to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that the disclosure can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting. The following examples are provided to illustrate the various embodiments. The examples can be combined with other examples. As such, various embodiments can be combined with other embodiments without changing the scope of the invention.
Example 1: An apparatus comprising: a first majority AND logic gate having a first input, a second input, a third input, a fourth input, and a first bias input; a first inversion circuitry coupled to an output of the first majority AND logic gate; a second inversion circuitry coupled to an output of the first inversion circuitry; and a second majority AND logic gate having a first input, a second input, a third input; a fourth input, fifth input, sixth input and a second bias input, wherein: the first and second inputs of the second majority AND logic gate coupled to the output of the first inversion circuitry; the first input of the first majority AND logic gate is coupled to the third input of the second majority AND logic gate; the second input of the first majority AND logic gate is coupled to the fourth input of the second majority AND logic gate; the third input of the first majority AND logic gate is coupled to the fifth input of the second majority AND logic gate; the fourth input of the first majority AND logic gate is coupled to the sixth input of the second majority AND logic gate; and the first bias input and the second bias input are coupled. Example 2: The apparatus of example 1, the first majority AND gate comprises: a node; a first capacitor having a first terminal to receive the first input of the first majority AND gate, and a second terminal coupled to the node; a second capacitor having a first terminal to receive the second input of the first majority AND gate, and a second terminal coupled to the node; a third capacitor having a first terminal to receive the third input of the first majority AND gate, and a second terminal coupled to the node; a fourth capacitor having a first terminal to receive the fourth input of the first majority AND gate, and a second terminal coupled to the node; a fifth capacitor having a first terminal coupled to the node, and a second terminal coupled to the first bias input; and a sixth capacitor comprising non-linear polar material, wherein the sixth capacitor includes a first terminal coupled to the node and a second terminal coupled to the input of the first inversion circuitry. Example 3: The apparatus of example 2, comprising: a first transistor coupled to a first terminal of the sixth capacitor, wherein first transistor is controllable by a first clock; a second transistor coupled to a second terminal of the sixth capacitor, wherein the second transistor is controllable by a second clock; and a third transistor coupled to the second terminal of the sixth capacitor, wherein the third transistor is controllable by third clock. Example 4: The apparatus of example 3, wherein first clock has a pulse width greater than a pulse width of the second clock and a pulse width of the third clock. Example 5: The apparatus of example 3, wherein the third clock de-asserts prior to an assertion of the second clock. Example 6: The apparatus of example 3, wherein the first transistor is a first n-type transistor, wherein the second transistor is a second n-type transistor, and wherein the third transistor is a p-type transistor. Example 7: The apparatus of example 3, wherein the first transistor, the second transistor, and the third transistor are disabled in an evaluation phase, and enabled in a reset phase, wherein the reset phase is prior to the evaluation phase. 
Example 8: The apparatus of example 2, wherein the third capacitor and the fourth capacitor have a first capacitance, wherein the first capacitor, the second capacitor, and fifth capacitor have a second capacitance, wherein the first capacitance is higher than the second capacitance. Example 9: The apparatus of example 8, wherein the first capacitance is substantially twice as large as the second capacitance. Example 10: The apparatus of example 2, wherein the first, second, third, fourth, and fifth capacitors comprises one of: metal-insulator-metal (MIM) capacitor, transistor gate capacitor, hybrid of metal and transistor capacitor; capacitor comprising para-electric material; non-linear dielectric capacitor, or linear dielectric capacitor. Example 11: The apparatus of example 2, wherein the non-linear polar material includes one of: ferroelectric material, para-electric material, or non-linear dielectric. Example 12: The apparatus of example 11, wherein the ferroelectric material includes one of: Bismuth ferrite (BFO), BFO with a doping material where in the doping material is one of Lanthanum, or elements from lanthanide series of periodic table; Lead zirconium titanate (PZT), or PZT with a doping material, wherein the doping material is one of La or Nb; a relaxor ferroelectric includes one of lead magnesium niobate (PMN), lead magnesium niobate-lead titanate (PMN-PT), lead lanthanum zirconate titanate (PLZT), lead scandium niobate (PSN), Barium Titanium-Bismuth Zinc Niobium Tantalum (BT-BZNT), or Barium Titanium-Barium Strontium Titanium (BT-BST); perovskite includes one of: BaTiO3, PbTiO3, KNbO3, or NaTaO3; hexagonal ferroelectric includes one of: YMnO3 or LuFeO3; hexagonal ferroelectrics of a type h-RMnO3, where R is a rare earth element viz. cerium (Ce), dysprosium (Dy), erbium (Er), europium (Eu), gadolinium (Gd), holmium (Ho), lanthanum (La), lutetium (Lu), neodymium (Nd), praseodymium (Pr), promethium (Pm), samarium (Sm), scandium (Sc), terbium (Tb), thulium (Tm), ytterbium (Yb), or yttrium (Y); Hafnium (Hf), Zirconium (Zr), Aluminum (Al), Silicon (Si), their oxides or their alloyed oxides; Hafnium oxides of the form, Hfl-x Ex Oy where E can be Al, Ca, Ce, Dy, er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y; Al(1-x)Sc(x)N, Ga(1-x)Sc(x)N, Al(1-x)Y(x)N or Al(1-x-y)Mg(x)Nb(y)N, y doped HfO2, where x includes one of: Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y, wherein ‘x’ is a fraction; Niobate type compounds LiNbO3, LiTaO3, Lithium iron Tantalum Oxy Fluoride, Barium Strontium Niobate, Sodium Barium Niobate, or Potassium strontium niobate; or improper ferroelectric includes one of: [PTO/STO]n or [LAO/STO]n, where ‘n’ is between 1 to 100. Example 13: The apparatus of example 2, wherein the sixth capacitor comprising non-linear polar material is positioned in a backend of a die, while transistors of the first inversion circuitry and the second inversion circuitry are positioned in a frontend of a die. Example 14: The apparatus of example 1 comprises a non-inversion circuitry coupled to an output of the second majority AND logic gate. 
Example 15: The apparatus of example 14, the second majority AND gate comprises: a node; a first capacitor having a first terminal to receive the output of the first inversion circuitry and a second terminal coupled to the node; a second capacitor having a first terminal to receive the output of the first inversion circuitry and a second terminal coupled to the node; a third capacitor having a first terminal to receive the first input of the first majority AND gate, and a second terminal coupled to the node; a fourth capacitor having a first terminal to receive the second input of the first majority AND gate, and a second terminal coupled to the node; a fifth capacitor having a first terminal coupled to the third input of the first majority AND gate, and a second terminal coupled to the node; a sixth capacitor having a first terminal coupled to the fourth input of the first majority AND gate, and a second terminal coupled to the node; a seventh capacitor having a first terminal coupled to a second bias voltage, and a second terminal coupled to the node; and an eighth capacitor comprising non-linear polar material, wherein the eighth capacitor includes a first terminal coupled to the node and a second terminal coupled to an input of the non-inversion circuitry. Example 16: The apparatus of example 15, wherein the first capacitor, second capacitor, fifth capacitor and the sixth capacitor have a first capacitance, wherein the third capacitor, fourth capacitor, and seventh capacitor having a second capacitance, wherein the first capacitance is higher than the second capacitance. Example 17: The apparatus of example 15, wherein the first, second, third, fourth, fifth, and sixth capacitors comprises one of: metal-insulator-metal (MIM) capacitor, transistor gate capacitor, hybrid of metal and transistor capacitor; capacitor comprising para-electric material; non-linear dielectric capacitor, or linear dielectric capacitor. Example 18: The apparatus of example 15, wherein the non-linear polar material includes one of: ferroelectric material, para-electric material, or non-linear dielectric. Example 19: The apparatus of example 18, wherein the ferroelectric material includes one of: Bismuth ferrite (BFO), BFO with a doping material where in the doping material is one of Lanthanum, or elements from lanthanide series of periodic table; Lead zirconium titanate (PZT), or PZT with a doping material, wherein the doping material is one of La or Nb; a relaxor ferroelectric includes one of lead magnesium niobate (PMN), lead magnesium niobate-lead titanate (PMN-PT), lead lanthanum zirconate titanate (PLZT), lead scandium niobate (PSN), Barium Titanium-Bismuth Zinc Niobium Tantalum (BT-BZNT), or Barium Titanium-Barium Strontium Titanium (BT-BST); perovskite includes one of: BaTiO3, PbTiO3, KNbO3, or NaTaO3; hexagonal ferroelectric includes one of: YMnO3 or LuFeO3; hexagonal ferroelectrics of a type h-RMnO3, where R is a rare earth element viz. 
cerium (Ce), dysprosium (Dy), erbium (Er), europium (Eu), gadolinium (Gd), holmium (Ho), lanthanum (La), lutetium (Lu), neodymium (Nd), praseodymium (Pr), promethium (Pm), samarium (Sm), scandium (Sc), terbium (Tb), thulium (Tm), ytterbium (Yb), or yttrium (Y); Hafnium (Hf), Zirconium (Zr), Aluminum (Al), Silicon (Si), their oxides or their alloyed oxides; Hafnium oxides of the form Hf(1-x)Ex Oy where E can be Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y; Al(1-x)Sc(x)N, Ga(1-x)Sc(x)N, Al(1-x)Y(x)N or Al(1-x-y)Mg(x)Nb(y)N, y doped HfO2, where x includes one of: Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y, wherein ‘x’ is a fraction; Niobate type compounds LiNbO3, LiTaO3, Lithium iron Tantalum Oxy Fluoride, Barium Strontium Niobate, Sodium Barium Niobate, or Potassium strontium niobate; or improper ferroelectric includes one of: [PTO/STO]n or [LAO/STO]n, where ‘n’ is between 1 to 100. Example 20: The apparatus of example 14, wherein the non-inversion circuitry comprises a buffer. Example 21: The apparatus of example 1, wherein the third input of the first majority AND logic gate is coupled to a carry-in input. Example 22: The apparatus of example 1, wherein the fourth input of the first majority AND logic gate is coupled to a sum-in input. Example 23: An apparatus comprising: a first majority logic gate with integrated majority and AND logic functions, wherein the first majority logic gate includes a capacitor comprising non-linear polar material, wherein the first majority logic gate is a five-input majority logic gate; a second majority logic gate with integrated majority and AND logic functions, wherein the second majority logic gate includes a capacitor comprising non-linear polar material, wherein the second majority logic gate is coupled to the first majority logic gate; wherein the second majority logic gate is a seven-input majority logic gate; a first inversion circuitry coupled to an output of the first majority AND logic gate; a second inversion circuitry coupled to an output of the first inversion circuitry; and a non-inversion circuitry coupled to an output of the second majority logic gate. Example 24: The apparatus of example 23, wherein the non-linear polar material includes one of: ferroelectric material, para-electric material, or non-linear dielectric. Example 25: The apparatus of example 23, wherein the ferroelectric material includes one of: Bismuth ferrite (BFO), BFO with a doping material wherein the doping material is one of Lanthanum, or elements from lanthanide series of periodic table; Lead zirconium titanate (PZT), or PZT with a doping material, wherein the doping material is one of La or Nb; a relaxor ferroelectric includes one of lead magnesium niobate (PMN), lead magnesium niobate-lead titanate (PMN-PT), lead lanthanum zirconate titanate (PLZT), lead scandium niobate (PSN), Barium Titanium-Bismuth Zinc Niobium Tantalum (BT-BZNT), or Barium Titanium-Barium Strontium Titanium (BT-BST); perovskite includes one of: BaTiO3, PbTiO3, KNbO3, or NaTaO3; hexagonal ferroelectric includes one of: YMnO3 or LuFeO3; hexagonal ferroelectrics of a type h-RMnO3, where R is a rare earth element viz.
cerium (Ce), dysprosium (Dy), erbium (Er), europium (Eu), gadolinium (Gd), holmium (Ho), lanthanum (La), lutetium (Lu), neodymium (Nd), praseodymium (Pr), promethium (Pm), samarium (Sm), scandium (Sc), terbium (Tb), thulium (Tm), ytterbium (Yb), or yttrium (Y); Hafnium (Hf), Zirconium (Zr), Aluminum (Al), Silicon (Si), their oxides or their alloyed oxides; Hafnium oxides of the form, Hfl-x Ex Oy where E can be Al, Ca, Ce, Dy, er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y; Al(1-x)Sc(x)N, Ga(1-x)Sc(x)N, Al(1-x)Y(x)N or Al(1-x y)Mg(x)Nb(y)N, y doped HfO2, where x includes one of: Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y, wherein ‘x’ is a fraction; Niobate type compounds LiNbO3, LiTaO3, Lithium iron Tantalum Oxy Fluoride, Barium Strontium Niobate, Sodium Barium Niobate, or Potassium strontium niobate; or improper ferroelectric includes one of: [PTO/STO]n or [LAO/STO]n, where ‘n’ is between 1 to 100. Example 26: A system comprising: a processor; a communication interface communicatively coupled to the processor; and a memory coupled to the processor, wherein the processor comprises a multiplier which includes: a first majority AND logic gate having a first input, a second input, a third input; a fourth input, and a first bias input; a first inversion circuitry coupled to an output of the first majority AND logic gate; a second inversion circuitry coupled to an output of the first inversion circuitry; and a second majority AND logic gate having a first input, a second input, a third input; a fourth input, fifth input, sixth input and a second bias input, wherein: the first and second inputs of the second majority AND logic gate coupled to the output of the first inversion circuitry; the first input of the first majority AND logic gate is coupled to the third input of the second majority AND logic gate; the second input of the first majority AND logic gate is coupled to the fourth input of the second majority AND logic gate; the third input of the first majority AND logic gate is coupled to the fifth input of the second majority AND logic gate; the fourth input of the first majority AND logic gate is coupled to the sixth input of the second majority AND logic gate; and the first bias input and the second bias input are coupled. Example 27: The system of example 26, wherein the processor is one of an accelerator or an artificial intelligence (AI) processor. Example 28: The system of example 26, wherein the first and second bias inputs have programmable bias voltage. An abstract is provided that will allow the reader to ascertain the nature and gist of the technical disclosure. The abstract is submitted with the understanding that it will not be used to limit the scope or meaning of the claims. The following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment. | 109,677 |
11863184 | DETAILED DESCRIPTION Some embodiments describe asynchronous circuits using threshold gate(s) and/or majority gate(s) (or minority gate(s)). The new class of asynchronous circuits can operate at lower power supply levels (e.g., less than 1V on advanced technology nodes) because stack of devices between a supply node and ground are significantly reduced compared to traditional asynchronous circuits. The asynchronous circuits here result in area reduction (e.g., 3× reduction compared to traditional asynchronous circuits) and provide higher throughput/mm2(e.g., 2× higher throughput compared to traditional asynchronous circuits). The threshold gate(s), majority/minority gate(s) can be implemented using capacitive input circuits. The capacitors of the capacitive input circuits can have linear dielectric or nonlinear polar material (e.g., paraelectric or ferroelectric) as dielectric. While the circuits here are described with reference to asynchronous circuits, the circuits can also be used in synchronous circuits. For example, combinational logic associated with synchronous circuits can use the asynchronous circuits discussed herein. In some embodiments, input signals to threshold or majority gates can be clock signals, which allow these asynchronous circuits to operate as synchronous circuits. Some embodiments describe a consensus element (c-element) that outputs a consensus of inputs. For example, when inputs are all logic 1, then the output of the c-element is logic 1 and when inputs are all logic 0, then the output of the c-element is logic 0. In some embodiments, the output is state-holding when all inputs are not the same logic value. For example, a 3-input c-element is state holding when all three inputs are not the same logical value. In this case the output will be the output from a previous state. In some embodiments, the c-element is implemented with a majority or minority gate. As discussed herein, the majority or minority gate may be implemented by adjusting a threshold of a capacitive input circuit. Some embodiments describe a completion tree which is a network or tree of c-elements. The output of the completion tree is a logic 1 when all inputs to the completion tree are logic 1, in accordance with some embodiments. The output of the completion tree is a logic 0 when all inputs to the completion tree are a logic 0, in accordance with some embodiments. The output of the completion tree is state-holding when all inputs are not the same logic value. Some embodiments describe a validity tree. In some embodiments, the validity tree comprises OR-gates and c-elements coupled in a tree-like arrangement where the OR gates receive the inputs and the output of the OR gates are input to the c-elements. In some embodiments, the OR gates are implemented as threshold gates whose threshold is programmed or adjusted to generate an OR function. An output of a validity tree is logic 1 when all input bits are valid, in accordance with some embodiments. The output of the validity tree is logic 0 when all input bits are neutral. When the inputs bits are either valid or neutral, the output of the validity tree holds its state, in accordance with some embodiments. In various embodiments, an individual input comprises two bits, ‘f’ and ‘t’. For example, a data input channel X includes a first bit X.f and a second bit X.t. An individual input X has a valid 0 state if X.f is logic 1 and if X.t is logic 0, in accordance with some embodiments. 
An individual input X has a valid 1 state if X.f is logic 0 and if X.t is logic 1, in accordance with some embodiments. An individual input X has a neutral state if X.f is logic 0 and if X.t is logic 0, in accordance with some embodiments. Some embodiments provide an apparatus and configuring scheme where capacitive input circuit can be programmed to perform different logic functions by adjusting the switching threshold of the capacitive input circuit. These capacitive circuits can become the basic building blocks for the c-element, the completion tree, and/or the validity tree. Digital inputs are received by respective capacitors on first terminals of those capacitors. In various embodiments, these capacitors comprise linear dielectric, paraelectric dielectric material, or ferroelectric dielectric material. The second terminals of the capacitors are connected to a summing node, in accordance with various embodiments. In some embodiments, a pull-up and/or pull-down device is coupled to the summing node. The pull-up and/or pull-down devices are controlled separately. In some embodiments, during a reset phase, depending on the type of capacitor (linear, paraelectric, or ferroelectric), the inputs to the capacitive input circuit are conditioned and the pull-up or pull-down device is turned on or off. As such the threshold of the capacitive input circuit is set. In some embodiments, when the capacitors have linear dielectric or paraelectric dielectric, one of pull-up or pull-down devices may couple to the summing node. In some embodiments, when the capacitors have ferroelectric dielectric then both pull-up and pull-down devices may couple to the summing node. In one such embodiment, the pull-up and pull-down devices are turned on and off in a sequence and inputs are conditioned to adjust the threshold of the capacitive input circuit. After the reset phase, an evaluation phase follows, in accordance with some embodiments. In the evaluation phase, the output of the capacitive input circuit is determined based on the inputs and the logic function configured during the reset phase, in accordance with various embodiments. For example, the capacitive input circuit may operate as a NAND/AND gate, NOR/OR gate, majority/minority, threshold gate, or other complex gates based on its threshold configuration. In various embodiments, during the evaluation phase, the pull-up and pull-down devices coupled to the summing node are turned off. In some embodiments, all input capacitors have the same capacitance (e.g., same weight or ratio). In some embodiments, the input capacitors may have different capacitance. In that case, the switching threshold for the input capacitor circuit is modified differently by the reset phase. In some embodiments, a different logic gate can be realized by sequencing turning on/off of the pull-up and pull-down devices and changing inputs to the input capacitor circuit during the reset phase. While the embodiments are described with reference to up-to 5-input capacitive circuit using equal ratio for the capacitance, the same idea can be expanded to any number of input capacitive circuit with equal or unequal ratio for capacitances. In various embodiments, the capacitances are nonlinear capacitors. For example, instead of linear dielectric, the capacitors include nonlinear dielectric material. Examples of nonlinear dielectric material include ferroelectric material and paraelectric material. In some embodiments, the capacitor are planar capacitors. 
In some embodiments, the capacitors are pillar or trench capacitors. In some embodiments, the capacitors are vertically stacked capacitors to reduce the overall footprint of the multi-input capacitive circuit. In some embodiments, the transistors (MP1 and/or MN1) that charge or discharge the summing node n1 are planar or non-planar transistors. In some embodiments, transistors MP1 and/or MN1 are fabricated in the front-end of the die on a substrate. In some embodiments, when the capacitors have ferroelectric material, one of the transistors (e.g., MP1 or MN1) is fabricated in the front-end of the die while another one of the transistors is fabricated in the backend such that the stack of capacitors is between the frontend of the die and the backend of the die or between the two transistors. As such, the footprint of the multi-input capacitive circuit may be a footprint of a single transistor or slightly more than that. The various possible implementations of the c-element, the completion tree, and the validity tree using the adjustable threshold gate-based logic circuit allow for lower power and smaller-area asynchronous circuits compared to traditional asynchronous circuits. In the following description, numerous details are discussed to provide a more thorough explanation of embodiments of the present disclosure. It will be apparent, however, to one skilled in the art, that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, to avoid obscuring embodiments of the present disclosure. Note that in the corresponding drawings of the embodiments, signals are represented with lines. Some lines may be thicker, to indicate more constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. Such indications are not intended to be limiting. Rather, the lines are used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit or a logical unit. Any represented signal, as dictated by design needs or preferences, may actually comprise one or more signals that may travel in either direction, and may be implemented with any suitable type of signal scheme. It is pointed out that those elements of the figures having the same reference numbers (or names) as the elements of any other figure can operate or function in any manner like that described but are not limited to such. FIG.1illustrates a set of plots showing behavior of a ferroelectric capacitor, a paraelectric capacitor, and a linear capacitor. Plot100compares the transfer function for a linear capacitor, a paraelectric (PE) capacitor (a nonlinear capacitor) and a ferroelectric (FE) capacitor (a nonlinear capacitor). Here, the x-axis is input voltage or voltage across the capacitor, while the y-axis is the charge on the capacitor. The ferroelectric material can be any suitable low voltage FE material that allows the FE material to switch its state by a low voltage (e.g., 100 mV). The threshold behavior of the FE material arises from its highly nonlinear transfer function in the polarization vs. voltage response. The threshold is related to: a) the nonlinearity of the switching transfer function; and b) the squareness of the FE switching. The nonlinearity of the switching transfer function is the width of the derivative of the polarization vs. voltage plot. 
The squareness is defined by the ratio of the remnant polarization to the saturation polarization; perfect squareness will show a value of 1. The squareness of the FE switching can be suitably manipulated with chemical substitution. For example, in PbTiO3 a P-E (polarization-electric field) square loop can be modified by La or Nb substitution to create an S-shaped loop. The shape can be systematically tuned to ultimately yield a nonlinear dielectric. The squareness of the FE switching can also be changed by the granularity of an FE layer. A perfectly epitaxial, single crystalline FE layer will show higher squareness (e.g., ratio is closer to 1) compared to a polycrystalline FE. This perfect epitaxy can be accomplished using lattice matched bottom and top electrodes. In one example, BiFeO3 (BFO) can be epitaxially synthesized using a lattice matched SrRuO3 bottom electrode yielding P-E loops that are square. Progressive doping with La will reduce the squareness. Plot120shows the charge and voltage relationship for a ferroelectric capacitor. A capacitor with ferroelectric material (also referred to as an FEC) is a nonlinear capacitor with its potential VF(QF) as a cubic function of its charge. Plot120illustrates characteristics of an FEC. Plot120is a charge-voltage (Q-V) plot for a block of Pb(Zr0.5Ti0.5)O3 of area (100 nm)2 and thickness 30 nm (nanometer). Plot120shows local extrema at +/−Vc indicated by the dashed lines. Here, the term Vc is the coercive voltage. In applying a potential V across the FEC, its charge can be unambiguously determined only for |V|>Vc. Otherwise, the charge of the FEC is subject to hysteresis effects. In some embodiments, the FE material comprises a perovskite of the type ABO3, where 'A' and 'B' are two cations of different sizes, and 'O' is oxygen which is an anion that bonds to both the cations. Generally, the size of atoms of A is larger than the size of B atoms. In some embodiments, the perovskite can be doped (e.g., by La or Lanthanides). In some embodiments, the FE material is perovskite, which includes one or more of: La, Sr, Co, Ru, Y, Ba, Cu, Bi, Ca, and Ni. For example, metallic perovskites such as: (La,Sr)CoO3, SrRuO3, (La,Sr)MnO3, YBa2Cu3O7, Bi2Sr2CaCu2O8, LaNiO3, BaTiO3, KNbO3, NaTaO3, etc. may be used for the FE material. Perovskites can be suitably doped to achieve a spontaneous distortion in a range of 0.3 to 2%. For example, for chemically substituted lead titanate such as Zr in Ti site; La, Nb in Ti site, the concentration of these substitutes is such that it achieves the spontaneous distortion in the range of 0.3-2%. For chemically substituted BiFeO3, BiCrO3, BiCoO3 class of materials, La or rare earth substitution into the Bi site can tune the spontaneous distortion. In some embodiments, the FE material is contacted with a conductive metal oxide that includes one of the conducting perovskite metallic oxides exemplified by: La—Sr—CoO3, SrRuO3, La—Sr—MnO3, YBa2Cu3O7, Bi2Sr2CaCu2O8, and LaNiO3. In some embodiments, the FE material comprises a stack of layers including low voltage FE material between (or sandwiched between) conductive oxides. In various embodiments, when FE material is a perovskite, the conductive oxides are of the type AA′BB′O3. A′ is a dopant for atomic site A; it can be an element from the Lanthanides series. B′ is a dopant for atomic site B; it can be an element from the transition metal elements, especially Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, or Zn. 
A′ may have the same valency as site A, with a different ferroelectric polarizability. In various embodiments, when metallic perovskite is used for the FE material, conductive oxides can include one or more of: IrO2, RuO2, PdO2, OsO2, or ReO3. In some embodiments, the perovskite is doped with La or Lanthanides. In some embodiments, thin layer (e.g., approximately 10 nm) perovskite template conductors such as SrRuO3 coated on top of IrO2, RuO2, PdO2, PtO2, which have a non-perovskite structure but higher conductivity to provide a seed or template for the growth of pure perovskite ferroelectric at low temperatures, are used as conductive oxides. In some embodiments, ferroelectric materials are doped with s-orbital material (e.g., materials for first period, second period, and ionic third and fourth periods). In some embodiments, f-orbital materials (e.g., lanthanides) are doped to the ferroelectric material to make paraelectric material. Examples of room temperature paraelectric materials include: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.05 or 0.5, and y is 0.95), HfZrO2, Hf—Si—O, La-substituted PbTiO3, PMN-PT based relaxor ferroelectrics. In some embodiments, the FE material comprises one or more of: Hafnium (Hf), Zirconium (Zr), Aluminum (Al), Silicon (Si), their oxides or their alloyed oxides. In some embodiments, FE material includes one or more of: Al(1−x)Sc(x)N, Ga(1−x)Sc(x)N, Al(1−x)Y(x)N or Al(1−x−y)Mg(x)Nb(y)N, y doped HfO2, where x includes one of: Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y, wherein 'x' is a fraction. In some embodiments, FE material includes one or more of: Bismuth ferrite (BFO), lead zirconate titanate (PZT), BFO with doping material, or PZT with doping material, wherein the doping material is one of Nb or La; and relaxor ferroelectrics such as PMN-PT. In some embodiments, the FE material includes Bismuth ferrite (BFO) with a doping material, wherein the doping material is Lanthanum or any element from the lanthanide series of the periodic table. In some embodiments, FE material includes lead zirconium titanate (PZT), or PZT with a doping material, wherein the doping material is one of La or Nb. In some embodiments, FE material includes a relaxor ferroelectric including one of lead magnesium niobate (PMN), lead magnesium niobate-lead titanate (PMN-PT), lead lanthanum zirconate titanate (PLZT), lead scandium niobate (PSN), Barium Titanium-Bismuth Zinc Niobium Tantalum (BT-BZNT), or Barium Titanium-Barium Strontium Titanium (BT-BST). In some embodiments, the FE material includes Hafnium oxides of the form Hf1−x Ex Oy, where E can be Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y. In some embodiments, the FE material includes Niobate type compounds LiNbO3, LiTaO3, Lithium iron Tantalum Oxy Fluoride, Barium Strontium Niobate, Sodium Barium Niobate, or Potassium strontium niobate. In some embodiments, the FE material comprises multiple layers. For example, alternating layers of [Bi2O2]2+, and pseudo-perovskite blocks (Bi4Ti3O12 and related Aurivillius phases), with perovskite layers that are 'n' octahedral layers in thickness can be used. In some embodiments, the FE material comprises organic material. For example, polyvinylidene fluoride or polyvinylidene difluoride (PVDF). In some embodiments, the FE material comprises hexagonal ferroelectrics of the type h-RMnO3, where R is a rare earth element viz. 
cerium (Ce), dysprosium (Dy), erbium (Er), europium (Eu), gadolinium (Gd), holmium (Ho), lanthanum (La), lutetium (Lu), neodymium (Nd), praseodymium (Pr), promethium (Pm), samarium (Sm), scandium (Sc), terbium (Tb), thulium (Tm), ytterbium (Yb), and yttrium (Y). The ferroelectric phase is characterized by a buckling of the layered MnO5 polyhedra, accompanied by displacements of the Y ions, which lead to a net electric polarization. In some embodiments, hexagonal FE includes one of: YMnO3 or LuFeO3. In various embodiments, when the FE material comprises hexagonal ferroelectrics, the conductive oxides are of A2O3 (e.g., In2O3, Fe2O3) and ABO3 type, where 'A' is a rare earth element and B is Mn. In some embodiments, the FE material comprises improper FE material. An improper ferroelectric is a ferroelectric where the primary order parameter is an order mechanism such as strain or buckling of the atomic order. Examples of improper FE material are LuFeO3 class of materials or super lattice of ferroelectric and paraelectric materials PbTiO3 (PTO) and SnTiO3 (STO), respectively, and LaAlO3 (LAO) and STO, respectively. For example, a super lattice of [PTO/STO]n or [LAO/STO]n, where 'n' is between 1 and 100. In some embodiments, the paraelectric material includes one of: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), BaTiO3, HfZrO2, Hf—Si—O, La-substituted PbTiO3, or PMN-PT based relaxor ferroelectrics. FIG.2Aillustrates 2-input consensus element (c-element)200comprising a 3-input minority gate and an inverter, where an adjustable threshold gate is programmed as a 3-input minority gate, in accordance with some embodiments. In some embodiments, c-element200comprises 3-input minority gate201and inverter202. Inputs to 3-input minority gate201are in1, in2, and output "out" of inverter202. Input in1 is connected to input pin 1, input in2 is connected to input pin 2, and output "out" (which is the third input in3) is connected to input pin 3. Output (o) of 3-input minority gate201is provided to node n1, which is input to inverter202. In various embodiments, c-element200outputs on node "out" a consensus of inputs in1 and in2. For example, when inputs in1 and in2 are both logic 1, then the output "out" of c-element200is logic 1, and when inputs in1 and in2 are both logic 0, then the output "out" of the c-element200is logic 0. In some embodiments, the output is state-holding when all inputs (e.g., in1 and in2) are not the same logic value. For example, a 3-input c-element is state holding when all three inputs are not the same logical value. In this case the output will be the output from a previous state (e.g., holding logic 0 or 1 from a previous state). In various embodiments, 3-input minority gate201is implemented as an inverted threshold gate which is configured or programmed to have a threshold of 2 to implement a minority gate. The inverted threshold gate is a capacitive input circuit where capacitors can have linear dielectric, paraelectric dielectric or ferroelectric dielectric, in accordance with various embodiments. FIG.2Billustrates 3-input c-element220comprising a 5-input minority gate and an inverter, where an adjustable threshold gate is programmed as a 5-input minority gate, in accordance with some embodiments. In some embodiments, c-element220comprises 5-input minority gate221and inverter202. Inputs to 5-input minority gate221are in1, in2, in3, and output "out" of inverter202. Output of 5-input minority gate221is provided to node n1, which is input to inverter202. 
Two of the inputs (inputs '4' and '5') of 5-input minority gate221are connected to node "out". In some embodiments, input in1 is connected to input pin 1, input in2 is connected to input pin 2, input in3 is connected to input pin 3, output "out" (which is the fourth input in4) is connected to input pin 4, and output "out" (which is the fifth input in5) is connected to input pin 5. In various embodiments, c-element220outputs on node "out" a consensus of inputs in1, in2, and in3. For example, when inputs in1, in2, and in3 are all logic 1, then the output "out" of c-element220is logic 1, and when inputs in1, in2, and in3 are all logic 0, then the output "out" of the c-element220is logic 0. In some embodiments, the output "out" is state-holding when one of the inputs (e.g., one of in1, in2, or in3) is a logic 1 and one of the inputs (e.g., one of in1, in2, or in3) is a logic 0. The output is state-holding when all inputs are not the same logic value. In various embodiments, 5-input minority gate221is implemented as an inverted threshold gate which is configured or programmed to have a threshold of 3 to implement a minority gate. In some embodiments, the inverted threshold gate is a capacitive input circuit where capacitors can have linear dielectric, paraelectric dielectric or ferroelectric dielectric. While the embodiments here illustrate a 2-input c-element and a 3-input c-element, other numbers of inputs can be used too. In one such embodiment, the threshold of the capacitive input circuits can be adjusted to perform a desired function according to the number of inputs of the c-element. FIG.3Aillustrates 2-input c-element300comprising a 3-input majority gate, where an adjustable threshold gate is programmed as a 3-input majority gate, in accordance with some embodiments. 2-input c-element300is like 2-input c-element200but without inverter202and with the 3-input minority gate replaced by a 3-input majority gate. In some embodiments, inputs to 3-input majority gate301are in1, in2, and output "out". Input in1 is connected to input pin 1, input in2 is connected to input pin 2, and output "out" (which is the third input in3) is connected to input pin 3. In various embodiments, c-element300outputs on node "out" a consensus of inputs in1 and in2. For example, when inputs in1 and in2 are both logic 1, then the output "out" of c-element300is logic 1, and when inputs in1 and in2 are both logic 0, then the output "out" of the c-element300is logic 0. In some embodiments, the output "out" is state-holding when one of the inputs (e.g., one of in1 or in2) is a logic 1 and one of the inputs (e.g., one of in1 or in2) is a logic 0. The output is state-holding when all inputs are not the same logic value. In various embodiments, 3-input majority gate301is implemented as a threshold gate which is configured or programmed to have a threshold of 2 to implement a majority gate. In some embodiments, the threshold gate is a capacitive input circuit where capacitors can have linear dielectric, paraelectric dielectric or ferroelectric dielectric. FIG.3Billustrates 3-input c-element320comprising a 5-input majority gate, where an adjustable threshold gate is programmed as a 5-input majority gate, in accordance with some embodiments. Inputs to 5-input majority gate321are in1, in2, in3, and output "out". Two of the inputs of 5-input majority gate321are connected to node "out". 
In some embodiments, input in1 is connected to input pin 1, input in2 is connected to input pin 2, input in3 is connected to input pin 3, output “out” (which is the fourth input in4) is connected to input pin 4, and output “out” (which is the fifth input in5) is connected to input pin 5. In various embodiments, c-element320outputs on node “out” a consensus of inputs in1, in2, and in3. For example, when inputs in1, in2, and in3 are all logic 1, then the output “out” of c-element320is logic 1, and when inputs in1, in2, and in3 are all logic 0, then the output “out” of the c-element320is logic 0. In some embodiments, the output “out” is state-holding when one of the inputs (e.g., one of in1, in2, or in3) is a logic 1 and one of the inputs (e.g., one of in1, in2, or in3) is a logic 0. The output is state-holding when all the inputs are not the same logic value. In various embodiments, 5-input majority gate321is implemented as a threshold gate which is configured or programed to have a threshold of 3 to implement a majority gate. In some embodiments, the threshold gate is a capacitive input circuit where capacitors can have linear dielectric, paraelectric dielectric or ferroelectric dielectric. FIG.4illustrates 8-input completion tree400comprising c-elements, in accordance with some embodiments. In various embodiments, 8-input completion tree400is a network or tree of c-elements (e.g., c-elements200,220,300,320). The output of the completion tree is a logic 1 when all inputs to the completion tree are logic 1, in accordance with some embodiments. The output of the completion tree is a logic 0 when all inputs to the completion tree are a logic 0, in accordance with some embodiments. The output of the completion tree is state-holding when the inputs have at least one input having logic 1 and one input having logic 0. The output is state-holding when all the inputs are not the same logic value. In some embodiments, 8-input completion tree400comprises c-elements401,402,403,404,405,406, and407. In some embodiments, c-element401receives inputs in1 and in2 and generates an output o1 which is indicative of a consensus of inputs in1 and in2. In some embodiments, c-element402receives inputs in3 and in4 and generates an output o2 which is indicative of a consensus of inputs in3 and in4. In some embodiments, c-element403receives outputs o1 and o2 and generates an output o5 which is a consensus of outputs o1 and o2. In some embodiments, c-element404receives inputs in5 and in6 and generates an output o3 which is indicative of a consensus of inputs in5 and in6. In some embodiments, c-element405receives inputs in7 and in8 and generates an output o4 which is indicative of a consensus of inputs in7 and in8. In some embodiments, c-element406receives outputs o3 and o4 and generates an output o6 which is a consensus of outputs o3 and o4. In some embodiments, c-element407receives outputs o5 and o6 and generates a final output which is a consensus of outputs o5 and o6. The c-elements of the completion tree can be implemented according to any of c-element implementations discussed herein. FIG.5illustrates 16-input completion tree500comprising the 8-input completion trees and a c-element, in accordance with some embodiments. 16-input completion tree500provides an example of how an N-input completion tree can be constructed. In some embodiments, 16-input completion tree500comprises first 8-input completion tree501, second 8-input completion tree502, and 2-input c-element503. 
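As a rough behavioral sketch, and not a model of the capacitive hardware itself, the consensus and tree composition described above can be captured in a few lines of Python. The function and class names below, and the choice to initialize each tree node's held state to 0, are illustrative assumptions rather than anything specified in this description.

```python
# Behavioral sketch of a c-element and a completion tree built from c-elements.
# A c-element outputs the consensus of its inputs and otherwise holds its
# previous value; the tree combines inputs pairwise through c-elements.

def c_element(inputs, prev):
    """Return 1 if all inputs are 1, 0 if all are 0, else hold prev."""
    if all(bit == 1 for bit in inputs):
        return 1
    if all(bit == 0 for bit in inputs):
        return 0
    return prev  # state-holding when the inputs disagree

class CompletionTree:
    """Pairwise tree of 2-input c-elements; each node keeps its own held state."""
    def __init__(self):
        self.state = {}  # held value per tree node, keyed by (level, index)

    def output(self, bits):
        level = 0
        while len(bits) > 1:
            nxt = []
            for i in range(0, len(bits), 2):
                key = (level, i)
                out = c_element(bits[i:i + 2], self.state.get(key, 0))
                self.state[key] = out
                nxt.append(out)
            bits = nxt
            level += 1
        return bits[0]

# Example: an 8-input completion tree goes to 1 only once all eight inputs are 1,
# returns to 0 only once all eight are 0, and holds its value in between.
tree = CompletionTree()
print(tree.output([1] * 8))                    # 1
print(tree.output([1, 0, 1, 1, 1, 1, 1, 1]))   # still 1 (held)
print(tree.output([0] * 8))                    # 0
```

Two such 8-input trees combined by a final 2-input c-element give the 16-input arrangement outlined above.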
In some embodiments, first 8-input completion tree501and second 8-input completion tree502are according to 8-input completion tree400. In some embodiments, 8-input completion tree501receives a first set of eight inputs in1, in2, in3, in4, in5, in6, in7, and in8, and generates an output o1 which indicates a completion function of the first set of eight inputs. For example, output o1 is 1 when all inputs in1, in2, in3, in4, in5, in6, in7, and in8 are logic 1. Output o1 is 0 when all inputs in1, in2, in3, in4, in5, in6, in7, and in8 are logic 0. Output o1 holds its logic state when at least one input to 8-input completion tree501is a logic 1 and at least one input is a logic 0. In some embodiments, 8-input completion tree502receives a second set of eight inputs in9, in10, in11, in12, in13, in14, in15, and in16, and generates an output o2 which indicates a completion function of the second set of eight inputs. For example, output o2 is 1 when all inputs in9, in10, in11, in12, in13, in14, in15, and in16 are logic 1. Output o2 is 0 when all inputs in9, in10, in11, in12, in13, in14, in15, and in16 are logic 0. Output o2 holds its logic state when at least one input to 8-input completion tree502is a logic 1 and at least one input is a logic 0. In various embodiments, 2-input c-element503receives outputs o1 and o2 and generates Output which indicates a consensus of outputs o1 and o2. 2-input c-element503can be any one of c-elements200,220,300,320. While the various embodiments illustrate two examples of a completion tree, the concept can be applied to an N-input completion tree. FIG.6illustrates 8-input validity tree600comprising OR gates and c-elements, in accordance with some embodiments. In some embodiments, 8-input validity tree600comprises OR gates601,602,603, and604, and c-elements403,406, and407. In some embodiments, 8-input validity tree600is like 8-input completion tree400but for OR gates that replace c-elements401,402,404, and405. In some embodiments, the OR gates receive the inputs. For example, OR gate601receives inputs in1 and in2 and generates output o1 which is an OR function of inputs in1 and in2. In some embodiments, OR gate602receives inputs in3 and in4 and generates output o2 which is an OR function of inputs in3 and in4. In some embodiments, OR gate603receives inputs in5 and in6 and generates output o3 which is an OR function of inputs in5 and in6. In some embodiments, OR gate604receives inputs in7 and in8 and generates output o4 which is an OR function of inputs in7 and in8. The outputs of the OR gates are input to the c-elements. In some embodiments, c-element403receives outputs o1 and o2 and generates an output o5 which is a consensus of outputs o1 and o2. In some embodiments, c-element406receives outputs o3 and o4 and generates an output o6 which is a consensus of outputs o3 and o4. In some embodiments, c-element407receives outputs o5 and o6 and generates a final output which is a consensus of outputs o5 and o6. In some embodiments, the OR gates are implemented as threshold gates whose threshold is programmed or adjusted to generate an OR function. The output of 8-input validity tree600is logic 1 when all input bits are valid, in accordance with some embodiments. The output of 8-input validity tree600is logic 0 when all input bits are neutral. When the input bits are a mix of valid and neutral states, the output of 8-input validity tree600holds its state, in accordance with some embodiments. In various embodiments, an individual input comprises two bits, 'f' and 't'. 
For example, a data input channel X includes a first bit X.f and a second bit X.t. In this example, in1 would be X.f and in2 would be X.t. An individual input X has a valid 0 state if X.f is logic 1 and if X.t is logic 0, in accordance with some embodiments. An individual input X has a valid 1 state if X.f is logic 0 and if X.t is logic 1, in accordance with some embodiments. An individual input X has a neutral state if X.f is logic 0 and if X.t is logic 0, in accordance with some embodiments. FIG.7illustrates 16-input validity tree700comprising the 8-input validity trees and a c-element, in accordance with some embodiments. In some embodiments, 16-input validity tree700comprises first 8-input validity tree701, second 8-input validity tree702, and 2-input c-element703. In some embodiments, first 8-input validity tree701and second 8-input validity tree702are according to 8-input validity tree600. In some embodiments, 8-input validity tree701receives a first set of eight inputs in1, in2, in3, in4, in5, in6, in7, and in8, and generates an output o1 which indicates a validity function of the first set of eight inputs. In some embodiments, 8-input validity tree702receives a second set of eight inputs in9, in10, in11, in12, in13, in14, in15, and in16, and generates an output o2 which indicates a validity function of the second set of eight inputs. In various embodiments, 2-input c-element703receives outputs o1 and o2 and generates Output which indicates a consensus of outputs o1 and o2. 2-input c-element703can be any one of c-elements200,220,300,320. While the various embodiments illustrate two examples of a validity tree, the concept can be applied to an N-input validity tree. The following section describes various embodiments of an adjustable threshold gate that can be used as a basis for the c-element, completion tree, and/or validity tree, in accordance with various embodiments. FIG.8Aillustrates a 2-input adjustable threshold gate800with linear or paraelectric capacitors and a pull-up device on a summing node, in accordance with some embodiments. In some embodiments, 2-input capacitive circuit800comprises a first input (a), a second input (b), summing node (n1), first capacitor C1, second capacitor C2, pull-up device MP1, driver801, and output (out) coupled as shown. In some embodiments, the first capacitor C1 includes a first terminal coupled to the first input and a second terminal coupled to the summing node n1. In some embodiments, the second capacitor C2 includes a third terminal coupled to the second input and a fourth terminal coupled to the summing node n1. In some embodiments, the pull-up device MP1 is coupled to the summing node n1 and a power supply rail Vdd, wherein the pull-up device MP1 is controlled by a first control (up). In various embodiments, during the reset phase, node n1 is pulled-up by transistor MP1 to Vdd, and inputs 'a' and 'b' are conditioned via conditioning circuit802to adjust the threshold of 2-input capacitive circuit800. Conditioning circuitry802may receive inputs in1 and in2, and a configuration setting (e.g., reset or evaluation) to determine the outputs 'a', 'b', and control "up". During the evaluation phase, in1 is passed on to output 'a' and in2 is passed on to 'b'. During the reset phase, depending on a desired threshold, outputs 'a' and 'b' are conditioned. Here, the term threshold generally refers to a number that indicates how many inputs should be set to logic high to perform the function of a threshold gate. 
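The evaluation-phase semantics just defined can be summarized, purely as an abstraction and not as the circuit implementation, with a count-versus-threshold check on equal-weight inputs; the inverting driver then yields the complementary function. The following Python fragment is an illustrative sketch under that assumption.

```python
# Sketch of the evaluation-phase behavior of an adjustable threshold gate:
# the summing node is modeled as high when the count of logic-high inputs
# reaches the configured threshold; an inverting driver gives the complement.

def threshold_gate(inputs, threshold, inverting_driver=True):
    """Return (node_n1, out) for equal-weight inputs and a fixed threshold."""
    n1 = 1 if sum(inputs) >= threshold else 0
    out = 1 - n1 if inverting_driver else n1
    return n1, out

# Threshold 1 on two inputs -> OR at n1, NOR at out.
print(threshold_gate([1, 0], threshold=1))     # (1, 0)
# Threshold 2 on two inputs -> AND at n1, NAND at out.
print(threshold_gate([1, 0], threshold=2))     # (0, 1)
# Threshold 2 on three inputs -> majority at n1, minority at out.
print(threshold_gate([1, 1, 0], threshold=2))  # (1, 0)
```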
For instance, by turning on/off the pull-up device MP1 and conditioning the inputs 'a' and 'b' during a reset phase, the charge at node n1 is set so that in an evaluation phase when the pull-up device MP1 is disabled, the input capacitive circuit attains a desired function. In one instance, when the threshold is set to 2 in a reset phase by a particular sequencing of turning on/off the pull-up device and conditioning of the inputs 'a' and 'b', it means that during an evaluation phase when both inputs 'a' and 'b' are logic high, then voltage on node n1 is logic high. Continuing with this example, when any of the inputs 'a' and 'b' is a logic low, then the voltage on node n1 resolves to logic low. As such, 2-input capacitive circuit800is programmed or configured as an AND gate at node n1 and a NAND gate at output out. Likewise, when the threshold is set to 1 in a reset phase by a particular sequencing of turning on/off the pull-up device and conditioning of the inputs 'a' and 'b', it means that during an evaluation phase when either input 'a' or 'b' is logic high, then voltage on node n1 is logic high. Continuing with this example, when both the inputs 'a' and 'b' are logic low, then the voltage on node n1 resolves to logic low. As such, 2-input capacitive circuit800is programmed or configured as an OR gate at node n1 and a NOR gate at output out. So, the same circuit can be used as an AND/NAND or OR/NOR gate by conditioning the inputs and resetting or setting the voltage on the summing node during a reset phase. Subsequently, in the evaluation phase the circuit will behave as an AND/NAND or OR/NOR gate. In some embodiments, conditioning circuitry802turns off the pull-up device MP1 during an evaluation phase separate from the reset phase. The reset phase or evaluation phase is indicated by the logic level of Config. For example, conditioning circuitry802sets the first control (up) to logic high (Vdd) and the second control (down) to logic low (ground) during an evaluation phase (e.g., Config is set to logic 1). Likewise, in a reset phase, Config is set to 0. This is just an example, and the logic level of Config can be modified to represent the evaluation phase and the reset phase. Table 1 illustrates that when inputs 'a' and 'b' are conditioned as logic 1 and pull-up device MP1 is enabled during the reset phase, then the threshold is set to 1. In the evaluation phase, 2-input capacitive circuit800can then behave as a NOR gate. Here, the capacitors comprise linear dielectric. Note, this example assumes equal weights (or substantially equal) for C1 and C2 (e.g., C1=C2). In some embodiments, the threshold may change (e.g., from 1 to 2) when the ratio of capacitances of capacitors C1 and C2 is modified.

TABLE 1
Input 'a'  Input 'b'  First control (Up)  Threshold
0          0          0 (enable MP1)      0
1          0          0 (enable MP1)      0
1          1          0 (enable MP1)      1

A threshold of 0 means that the capacitive-input circuit is an always on circuit regardless of the logic levels of the inputs. In one such embodiment, during the evaluation phase for the circuit configured with threshold of zero, the logic value on node n1 is logic 1, and the logic value on output out is logic 0 (assuming the driver is an inverter). When the capacitors comprise paraelectric material, different thresholds are achieved compared to the linear dielectric material for the same input conditioning. Table 2 illustrates that when inputs 'a' and 'b' are conditioned as logic 1 and pull-up device MP1 is enabled during the reset phase, then the threshold is set to 1. 
In the evaluation phase, 2-input capacitive circuit800can then behave as a NOR gate. In some embodiments, when inputs 'a' and 'b' are conditioned as logic 1 and logic 0, respectively, and pull-up device MP1 is enabled during the reset phase, then the threshold is set to 1. In the evaluation phase, 2-input capacitive circuit800can then behave as an OR/NOR gate when the threshold is set to 1. In some embodiments, when inputs 'a' and 'b' are conditioned as logic 0 and pull-up device MP1 is enabled during the reset phase, then the threshold is set to 0. Note, this example assumes equal weights (or substantially equal) for C1 and C2 (e.g., C1=C2). In some embodiments, the threshold may change (e.g., from 1 to 2 or to some other level) when the ratio of capacitances of capacitors C1 and C2 is modified. Table 2 is the case when capacitors are paraelectric capacitors.

TABLE 2
Input 'a'  Input 'b'  First control (Up)  Threshold
0          0          0 (enable MP1)      0
1          0          0 (enable MP1)      1
1          1          0 (enable MP1)      1

While the embodiment ofFIG.8Aillustrates an inverter as driver801, driver801can be any suitable logic. In some embodiments, driver801is a non-inverting circuit such as a buffer, AND, OR, a capacitive input circuit, or any other non-inverting circuit. In some embodiments, driver801is an inverting circuit such as an inverter, NAND, NOR, XOR, XNOR, or any other inverting circuit. In some embodiments, driver801is a multiplexer that connects summing nodes of other capacitive circuits to its inputs. In some embodiments, one or more inputs of the multiplexer are driven from a transistor-based logic. As such, the multiplexer can selectively output a desired output. In some embodiments, driver801is another capacitive input circuit where one of the inputs is coupled to the summing node n1 and other input(s) are coupled to other inputs. As such, complex logic can be formed with configurable threshold and thus function(s). FIG.8Billustrates 2-input adjustable threshold gate820with linear or paraelectric capacitors and a pull-down device on a summing node, in accordance with some embodiments. Compared toFIG.8A, here pull-up device MP1 is removed and replaced with a pull-down device MN1 coupled to summing node n1 and ground supply terminal. In various embodiments, during the reset phase, node n1 is pulled-down by transistor MN1 to ground, and inputs 'a' and 'b' are conditioned via conditioning circuit822to adjust the threshold of 2-input capacitive circuit820. Table 3 illustrates input conditioning that provides a threshold of 2 when capacitors are linear capacitors. Conditioning circuitry822may receive inputs in1 and in2, and a configuration setting (e.g., reset or evaluation) to determine the outputs 'a', 'b', and control "down". During the evaluation phase, in1 is passed on to output 'a' and in2 is passed on to 'b'. During the reset phase, depending on a desired threshold, outputs 'a' and 'b' are conditioned.

TABLE 3
Input 'a'  Input 'b'  Second control (down)  Threshold
0          0          1 (enable MN1)         2
1          0          1 (enable MN1)         2
1          1          1 (enable MN1)         2

In this case, when inputs 'a' and 'b' are conditioned as shown in Table 3 and pull-down device MN1 is enabled during the reset phase, then the threshold is set to 2. In the evaluation phase, 2-input capacitive circuit820can then behave as a NAND gate. Note, this example assumes equal weights for C1 and C2 (e.g., C1=C2). In some embodiments, the threshold may change (e.g., from 2 to 1) when the ratio of capacitances of capacitors C1 and C2 is modified. Table 4 illustrates input conditioning that provides a threshold of 2 or 3. 
When the capacitors comprise paraelectric material, different thresholds are achieved compared to the linear dielectric material for the same input conditioning. Table 4 is the case when capacitors comprise paraelectric material. Conditioning circuitry822may receive inputs in1 and in2, and a configuration setting (e.g., reset or evaluation) to determine the outputs 'a', 'b', and control "down". During the evaluation phase, in1 is passed on to output 'a' and in2 is passed on to 'b'. During the reset phase, depending on a desired threshold, outputs 'a' and 'b' are conditioned.

TABLE 4
Input 'a'  Input 'b'  Second control (down)  Threshold
0          0          1 (enable MN1)         2
1          0          1 (enable MN1)         2
1          1          1 (enable MN1)         3

In this case, when inputs 'a' and 'b' are conditioned as shown in Table 4 and pull-down device MN1 is enabled during the reset phase, then the threshold is set to 2. In the evaluation phase, 2-input capacitive circuit820can then behave as an AND or NAND gate. In some embodiments, when inputs 'a' and 'b' are conditioned as logic 1 and pull-down device MN1 is enabled during the reset phase, then the threshold is set to 3. In the evaluation phase, when the threshold is higher than the number of inputs, 2-input capacitive circuit820behaves as a disconnected circuit where internal node n1 is floating and the output of the driver (out) may be a don't care logic value. Note, this example assumes equal weights for C1 and C2 (e.g., C1=C2). In some embodiments, the threshold may change (e.g., from 2 to 1) when the ratio of capacitances of capacitors C1 and C2 is modified. While the embodiments are illustrated with reference to the same capacitances for the first capacitor C1 and the second capacitor C2, the threshold can be affected by changing the capacitive ratio of C1 and C2. For example, the input conditioning scheme and the pull-up and pull-down device control can result in a different threshold than that in Table 4 when the capacitive ratio of C1 and C2 is not 1:1. Overall, the configuring scheme of various embodiments herein provides the flexibility of programming the threshold for 2-input capacitive circuit820in a reset phase to achieve a certain logic function in the evaluation phase. FIG.9Aillustrates 3-input adjustable threshold gate900with linear or paraelectric capacitors and a pull-up device on a summing node, in accordance with some embodiments. 3-input capacitive circuit900is like 2-input capacitive circuit800but for additional input 'c' and associated capacitor C3. In some embodiments, a first terminal of capacitor C3 is coupled to input 'c' while a second terminal of capacitor C3 is coupled to summing node n1. Conditioning circuit802is replaced with conditioning circuit902. Conditioning circuitry902may receive inputs in1, in2, and in3 and a configuration setting (e.g., reset or evaluation) to determine the outputs 'a', 'b', 'c', and control "up". During the evaluation phase, in1 is passed on to output 'a', in2 is passed on to 'b', and in3 is passed on to 'c'. During the reset phase, depending on a desired threshold, outputs 'a', 'b', and 'c' are conditioned. In some embodiments, by turning on/off the pull-up device MP1 and conditioning the inputs 'a', 'b', and 'c' during a reset phase, the charge at node n1 is set so that in an evaluation phase when the pull-up device is disabled, 3-input capacitive circuit900attains a desired function. 
In one instance, when the threshold is set to 2 in a reset phase by a particular sequencing of turning on/off the pull-up device and conditioning of the inputs 'a', 'b', and 'c', it means that during an evaluation phase when at least two of the three inputs 'a', 'b', and 'c' are logic high, then voltage on node n1 is logic high. Continuing with this example, when at least two of the three inputs 'a', 'b', and 'c' are logic low, then the voltage on node n1 resolves to logic low. As such, 3-input capacitive circuit900is programmed or configured as a majority gate at node n1 and a minority gate at output out (when the driver circuitry is an inverter). In some cases, depending upon the leakage balance of pull-up transistor MP1 as it impacts charge on the summing node n1, 3-input capacitive circuit900may lose its majority logic functionality over time. This loss in functionality of the majority function can be restored by resetting the summing node n1 via transistor MP1, in accordance with some embodiments. In some embodiments, when the threshold is set to 3 in a reset phase by a particular sequencing of turning on/off the pull-up device and conditioning of the inputs 'a', 'b', and 'c', it means that during an evaluation phase when all three inputs 'a', 'b', and 'c' are logic high, then voltage on node n1 is logic high. Continuing with this example, when any of the three inputs 'a', 'b', or 'c' is a logic low, then the voltage on node n1 resolves to logic low. As such, 3-input capacitive circuit900is programmed or configured as a 3-input AND gate at node n1 and a 3-input NAND gate at output out (assuming the driver circuitry is an inverter). In some embodiments, when the threshold is set to 1 in a reset phase by a particular sequencing of turning on/off the pull-up device and conditioning of the inputs 'a', 'b', and 'c', it means that during an evaluation phase when any of the inputs 'a', 'b', or 'c' is logic high, then voltage on node n1 is logic high. Continuing with this example, when all inputs 'a', 'b', and 'c' are logic low, then the voltage on node n1 resolves to logic low. As such, 3-input capacitive circuit900is programmed or configured as an OR gate at node n1 and a NOR gate at output out. So, the same circuit can be used as a majority/minority gate, AND/NAND or OR/NOR gate by conditioning the inputs and resetting or setting the voltage on the summing node during a reset phase. Subsequently, in the evaluation phase the circuit will behave as a 3-input majority/minority, 3-input AND/NAND, or 3-input OR/NOR gate. In some embodiments, conditioning circuitry902sets the threshold to 0 in a reset phase by enabling the pull-up device MP1 and providing logic 1 to the first input 'a', logic 0 to the second input 'b', and logic 0 to the third input 'c'. In some embodiments, conditioning circuitry902sets the threshold to 0 in a reset phase by turning on or enabling the pull-up device MP1 and providing logic 0 to all inputs 'a', 'b', and 'c'. A threshold of 0 means that the capacitive-input circuit is an always on circuit regardless of the logic levels of inputs. In one such embodiment, during the evaluation phase for the circuit configured with threshold of zero, the logic value on node n1 is logic 1, and the logic value on output out is logic 0 (assuming the driver is an inverter). In some embodiments, conditioning circuitry902(or any other conditioning circuit) sets the threshold to 4. 
A threshold of 4 for a 3-input capacitive circuit means that the capacitive input circuit is an always off circuit regardless of the logic levels of the inputs. In one such embodiment, during the evaluation phase for the circuit configured with threshold of n+1 (e.g., 4, where 'n' is the number of capacitive inputs), the logic value on node n1 is floating and may eventually discharge to ground or charge to supply level. In some embodiments, the voltage on node n1 is zero volts regardless of input setting when the threshold is 4 (e.g., n+1). Table 5 illustrates that when inputs 'a', 'b', and 'c' are conditioned and pull-up device MP1 is enabled during the reset phase, then the threshold is set to 0, 1, or 2. In this example, the capacitors comprise linear dielectric.

TABLE 5
'a'  'b'  'c'  First control (Up)  Threshold
0    0    0    0 (enable MP1)      0
1    0    0    0 (enable MP1)      0
1    1    0    0 (enable MP1)      1
1    1    1    0 (enable MP1)      2

In the evaluation phase, 3-input capacitive circuit900can then behave as an OR/NOR gate (when threshold is 1) or a majority/minority gate (when threshold is 2). Note, this example assumes equal weights for C1, C2, and C3 (e.g., C1=C2=C3). In some embodiments, the threshold may change (e.g., from 1 to 2) when the ratio of capacitances of capacitors C1, C2, and/or C3 is modified. Table 6 illustrates that when inputs 'a', 'b', and 'c' are conditioned and pull-up device MP1 is enabled during the reset phase, then the threshold is set to 0, 1, or 2. When the capacitors comprise paraelectric material, different thresholds are achieved compared to the linear dielectric material for the same input conditioning. Table 6 is the case when capacitors comprise paraelectric material.

TABLE 6
'a'  'b'  'c'  First control (Up)  Threshold
0    0    0    0 (enable MP1)      0
1    0    0    0 (enable MP1)      1
1    1    0    0 (enable MP1)      1
1    1    1    0 (enable MP1)      2

In the evaluation phase, 3-input capacitive circuit900can then behave as a logic1/logic0 driver (when threshold is 0), an OR/NOR gate (when threshold is 1), or a majority/minority gate (when threshold is 2). Note, this example assumes equal weights for C1, C2, and C3 (e.g., C1=C2=C3). In some embodiments, the threshold may change (e.g., from 1 to 2 or to any other value) when the ratio of capacitances of capacitors C1, C2, and/or C3 is modified. FIG.9Billustrates a 3-input adjustable threshold gate920with linear or paraelectric capacitors and a pull-down device on a summing node, in accordance with some embodiments. Compared toFIG.9A, here the pull-up device MP1 is removed and pull-down device MN1 is added, which is coupled to node n1 and the ground supply rail. In various embodiments, during the reset phase, node n1 is pulled-down by MN1 to ground, and inputs 'a', 'b', and 'c' are conditioned via configuration circuit922to adjust the threshold of 3-input capacitive circuit920. Conditioning circuitry922may receive inputs in1, in2, and in3 and configuration setting(s) (e.g., reset or evaluation) to determine the outputs 'a', 'b', 'c', and control "down". During the evaluation phase, in1 is passed on to output 'a', in2 is passed on to 'b', and in3 is passed on to 'c'. During the reset phase, depending on a desired threshold, outputs 'a', 'b', and 'c' are conditioned. Table 7 illustrates that when inputs 'a', 'b', and 'c' are conditioned and pull-down device MN1 is enabled during the reset phase, then the threshold is set to 2 or 3. In this example, the capacitors comprise linear dielectric material. 
TABLE 7
'a'  'b'  'c'  Second control (down)  Threshold
0    0    0    1 (enable MN1)         2
1    0    0    1 (enable MN1)         3
1    1    0    1 (enable MN1)         3
1    1    1    1 (enable MN1)         3

In the evaluation phase, 3-input capacitive circuit920can then behave as a majority/minority gate (when threshold is 2) or an AND/NAND gate (when threshold is 3). Note, this example assumes equal weights for C1, C2, and C3 (e.g., C1=C2=C3). In some embodiments, the threshold may change (e.g., from 3 to 2 or to 1) when the ratio of capacitances of capacitors C1, C2, and/or C3 is modified. Table 8 illustrates that when inputs 'a', 'b', and 'c' are conditioned and pull-down device MN1 is enabled during the reset phase, then the threshold is set to 2, 3, or 4. When the capacitors comprise paraelectric material, different thresholds are achieved compared to the linear dielectric material for the same input conditioning. Table 8 is the case when capacitors comprise paraelectric material.

TABLE 8
'a'  'b'  'c'  Second control (down)  Threshold
0    0    0    1 (enable MN1)         2
1    0    0    1 (enable MN1)         3
1    1    0    1 (enable MN1)         3
1    1    1    1 (enable MN1)         4

In the evaluation phase, 3-input capacitive circuit920can then behave as a logic1/logic0 driver (when threshold is 0), a majority/minority gate (when threshold is 2), an AND/NAND gate (when threshold is 3), or a disconnected circuit (when threshold is 4). Note, this example assumes equal weights for C1, C2, and C3 (e.g., C1=C2=C3). In some embodiments, the threshold may change (e.g., from 3 to 2 or to 1, or any other value) when the ratio of capacitances of capacitors C1, C2, and/or C3 is modified. FIG.10Aillustrates 5-input adjustable threshold gate1000with linear or paraelectric capacitors and a pull-up device on a summing node, in accordance with some embodiments.FIG.10Ais comparable toFIG.9A, but for additional input 'd' and associated capacitor C4 and additional input 'e' and associated capacitor C5. In some embodiments, a first terminal of capacitor C4 is coupled to input 'd' while a second terminal of capacitor C4 is coupled to summing node n1. In some embodiments, a first terminal of capacitor C5 is coupled to input 'e' while a second terminal of capacitor C5 is coupled to summing node n1. Conditioning circuit902is replaced with conditioning circuit1002. In various embodiments, during the reset phase, node n1 is pulled-up by MP1 to Vdd, and inputs 'a', 'b', 'c', 'd', and 'e' are conditioned via conditioning circuitry1002to adjust the threshold of 5-input capacitive circuit1000. Conditioning circuitry1002may receive inputs in1, in2, in3, in4, and in5 and a configuration setting (e.g., reset or evaluation) to determine the outputs 'a', 'b', 'c', 'd', 'e', and control "up". During the evaluation phase, in1 is passed on to output 'a', in2 is passed on to 'b', in3 is passed on to 'c', in4 is passed on to 'd', and in5 is passed on to 'e'. During the reset phase, depending on a desired threshold, outputs 'a', 'b', 'c', 'd', and 'e' are conditioned. 
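The two-phase operation of the conditioning circuitry can be summarized with a small bookkeeping model. The class and method names below are assumptions for illustration only, the programmed threshold is supplied by the caller (in the hardware it follows from the dielectric type and the conditioning pattern, as listed in the tables of this description), and the analog charge on the summing node is abstracted to a count-versus-threshold check.

```python
# Bookkeeping sketch of the reset/evaluation protocol of a conditioning circuit:
# reset drives a conditioning pattern with the pull device enabled; evaluation
# passes the channel inputs through with the pull device disabled.

class ConditioningModel:
    def __init__(self, n_inputs):
        self.n = n_inputs
        self.threshold = None

    def reset_phase(self, pattern, programmed_threshold):
        """Drive the reset pattern with the pull device enabled and record the
        threshold that this conditioning is documented to produce."""
        assert len(pattern) == self.n
        self.threshold = programmed_threshold
        return {"capacitor_inputs": pattern, "pull_device": "enabled"}

    def evaluation_phase(self, channel_inputs):
        """Pass channel inputs to the capacitors, pull device off, and report
        the summing-node level under the count-vs-threshold abstraction."""
        assert len(channel_inputs) == self.n and self.threshold is not None
        n1 = 1 if sum(channel_inputs) >= self.threshold else 0
        return {"capacitor_inputs": channel_inputs,
                "pull_device": "disabled", "n1": n1, "out": 1 - n1}

# Example: program a 3-of-5 majority function, then evaluate a sample input.
gate = ConditioningModel(5)
gate.reset_phase([1, 1, 1, 1, 1], programmed_threshold=3)
print(gate.evaluation_phase([1, 1, 0, 1, 0]))  # n1 = 1 (majority), out = 0
```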
Table 9 illustrates that when inputs 'a', 'b', 'c', 'd', and 'e' are conditioned and pull-up device MP1 is enabled during the reset phase, then the threshold is set to 0, 1, 2, or 3. In this example, the capacitors comprise linear dielectric material.

TABLE 9
'a'  'b'  'c'  'd'  'e'  First control (Up)  Threshold
0    0    0    0    0    0 (enable MP1)      0
1    0    0    0    0    0 (enable MP1)      0
1    1    0    0    0    0 (enable MP1)      0
1    1    1    0    0    0 (enable MP1)      1
1    1    1    1    0    0 (enable MP1)      2
1    1    1    1    1    0 (enable MP1)      3

In the evaluation phase, 5-input capacitive circuit1000can then behave as an OR/NOR gate (when threshold is 1), a majority-0/minority-0 gate (when threshold is 2), or a majority/minority gate (when threshold is 3). Note, this example assumes equal weights for C1, C2, C3, C4, and C5 (e.g., C1=C2=C3=C4=C5). In some embodiments, the threshold may change (e.g., from 1 to 2 or to 3, 4, or 5) when the ratio of capacitances of capacitors C1, C2, C3, C4, and/or C5 is modified. Table 10 illustrates that when inputs 'a', 'b', 'c', 'd', and 'e' are conditioned and the pull-up device MP1 is enabled during the reset phase, then the threshold is set to 0, 1, 2, or 3. When the capacitors comprise paraelectric material, different thresholds are achieved compared to the linear dielectric material for the same input conditioning. Table 10 is the case when capacitors comprise paraelectric material.

TABLE 10
'a'  'b'  'c'  'd'  'e'  First control (Up)  Threshold
0    0    0    0    0    0 (enable MP1)      0
1    0    0    0    0    0 (enable MP1)      1
1    1    0    0    0    0 (enable MP1)      1
1    1    1    0    0    0 (enable MP1)      2
1    1    1    1    0    0 (enable MP1)      2
1    1    1    1    1    0 (enable MP1)      3

In the evaluation phase, 5-input capacitive circuit1000can then behave as an always on circuit that drives a constant logic value on node n1 (when threshold is 0), an OR/NOR gate (when threshold is 1), a majority-0/minority-0 gate or a threshold gate (when threshold is 2), or a majority/minority gate (when threshold is 3). Note, this example assumes equal weights for C1, C2, C3, C4, and C5 (e.g., C1=C2=C3=C4=C5). In some embodiments, the threshold may change (e.g., from 1 to 2 or to 3, 4, 5, or any other value) when the ratio of capacitances of capacitors C1, C2, C3, C4, and/or C5 is modified. FIG.10Billustrates 5-input adjustable threshold gate1020with linear or paraelectric capacitors and a pull-down device on a summing node, in accordance with some embodiments. Compared toFIG.10A, here pull-up device MP1 is removed and pull-down device MN1 is coupled to node n1 and the ground power supply rail. In various embodiments, during the reset phase, node n1 is pulled-down by MN1 to ground, and inputs 'a', 'b', 'c', 'd' and 'e' are conditioned via configuration circuit1022to adjust the threshold of 5-input capacitive circuit1020. Conditioning circuitry1022may receive inputs in1, in2, in3, in4, and in5 and a configuration setting (e.g., reset or evaluation) to determine the outputs 'a', 'b', 'c', 'd', 'e', and control "down". During the evaluation phase, in1 is passed on to output 'a', in2 is passed on to 'b', in3 is passed on to 'c', in4 is passed on to 'd', and in5 is passed on to 'e'. During the reset phase, depending on a desired threshold, outputs 'a', 'b', 'c', 'd', and 'e' are conditioned. Table 11 illustrates that when inputs 'a', 'b', 'c', 'd', and 'e' are conditioned and pull-down device MN1 is enabled during the reset phase, then the threshold is set to 3, 4, or 5. In this example, the capacitors comprise linear dielectric material. 
TABLE 11
'a'  'b'  'c'  'd'  'e'  Second control (down)  Threshold
0    0    0    0    0    1 (enable MN1)         3
1    0    0    0    0    1 (enable MN1)         4
1    1    0    0    0    1 (enable MN1)         5
1    1    1    0    0    1 (enable MN1)         5
1    1    1    1    0    1 (enable MN1)         5
1    1    1    1    1    1 (enable MN1)         5

In the evaluation phase, 5-input capacitive circuit1020can then behave as a majority/minority gate (when threshold is 3), a threshold gate (when threshold is 4), or an AND/NAND gate (when threshold is 5). Note, this example assumes equal weights for C1, C2, C3, C4, and C5 (e.g., C1=C2=C3=C4=C5). In some embodiments, the threshold may change (e.g., from 3 to 2 or to 1, 4, 5, or any other value) when the ratio of capacitances of capacitors C1, C2, C3, C4, and/or C5 is modified. While the various embodiments illustrate the first input 'a', second input 'b', third input 'c', fourth input 'd', and/or fifth input 'e', these inputs are labeled for reference purposes and can be swapped in any order assuming all capacitors have the same capacitance. Inputs associated with capacitors of the same capacitance can be swapped with one another, in accordance with some embodiments. While the embodiments are illustrated for capacitive input circuits with up to 5 inputs, the adaptive or configurable threshold for the capacitive circuit can be achieved for any number of inputs (e.g., n number of inputs) using the scheme discussed herein. Table 12 illustrates that when inputs 'a', 'b', 'c', 'd', and 'e' are conditioned and pull-down device MN1 is enabled during the reset phase, then the threshold is set to 3, 4, 5, or 6. When the capacitors comprise paraelectric material, different thresholds are achieved compared to the linear dielectric material for the same input conditioning. Table 12 is the case when capacitors comprise paraelectric material.

TABLE 12
'a'  'b'  'c'  'd'  'e'  Second control (down)  Threshold
0    0    0    0    0    1 (enable MN1)         3
1    0    0    0    0    1 (enable MN1)         4
1    1    0    0    0    1 (enable MN1)         4
1    1    1    0    0    1 (enable MN1)         5
1    1    1    1    0    1 (enable MN1)         5
1    1    1    1    1    1 (enable MN1)         6

In the evaluation phase, 5-input capacitive circuit1020can then behave as a majority/minority gate (when threshold is 3), a threshold gate (when threshold is 4), an AND/NAND gate (when threshold is 5), or a disconnected circuit (when threshold is 6). Note, this example assumes equal weights for C1, C2, C3, C4, and C5 (e.g., C1=C2=C3=C4=C5). In some embodiments, the threshold may change (e.g., from 3 to 2 or to 1, 4, 5, or any other value) when the ratio of capacitances of capacitors C1, C2, C3, C4, and/or C5 is modified. FIG.11illustrates 2-input adjustable threshold gate1100with ferroelectric capacitors and a pull-down device and a pull-up device on a summing node, in accordance with some embodiments. In some embodiments, 2-input capacitive circuit1100comprises a first input (a), a second input (b), summing node (n1), first capacitor C1, second capacitor C2, pull-up device MP1, pull-down device MN1, driver801, and output (out) coupled as shown. In some embodiments, the first capacitor C1 includes a first terminal coupled to the first input and a second terminal coupled to the summing node n1. In some embodiments, the second capacitor C2 includes a third terminal coupled to the second input and a fourth terminal coupled to the summing node n1. In some embodiments, the pull-up device MP1 is coupled to the summing node n1 and a power supply rail Vdd, wherein the pull-up device MP1 is controlled by a first control (up). In some embodiments, the pull-down device MN1 is coupled to the summing node n1 and a ground, wherein the pull-down device is controlled by a second control (down). 
In some embodiments, conditioning circuitry1102is provided, which is used to control or condition the first input, the second input, the first control, and the second control during a reset phase to adjust a threshold of 2-input capacitive circuit1100. Conditioning circuitry1102may receive inputs in1 and in2, and a configuration setting (e.g., reset or evaluation) to determine the outputs 'a' and 'b' and the controls "up" and "down". During the evaluation phase, in1 is passed on to output 'a' and in2 is passed on to 'b'. During the reset phase, depending on a desired threshold, outputs 'a' and 'b' are conditioned. In various embodiments, the pull-up device MP1 and pull-down device MN1 are turned on in a sequence during the reset phase while inputs to the capacitors are kept constant for a particular threshold setting. In some embodiments, for different input values, the threshold can be configured differently. The sequence of turning on the pull-up device MP1 first and then the pull-down device MN1 can be reversed to readjust the threshold of the circuit. In various embodiments, the pull-up device MP1 and pull-down device MN1 are turned off after the reset phase is complete. Here, the term threshold generally refers to a number that indicates how many inputs should be set to logic high to perform the function of a threshold gate. For instance, by turning on/off one or more of the pull-up device MP1 and/or pull-down device MN1, and conditioning the inputs 'a' and 'b' during a reset phase, the charge at node n1 is set so that in an evaluation phase when the pull-up and pull-down devices (MP1 and MN1) are disabled, the input capacitive circuit attains a desired function. In one instance, when the threshold is set to 2 in a reset phase by a particular sequencing of turning on/off the pull-up and/or the pull-down devices and conditioning of the inputs 'a' and 'b', it means that during an evaluation phase when both inputs 'a' and 'b' are logic high, then voltage on node n1 is logic high. Continuing with this example, when any of the inputs 'a' and 'b' is a logic low, then the voltage on node n1 resolves to logic low. As such, 2-input capacitive circuit1100is programmed or configured as an AND gate at node n1 and a NAND gate at output out. Likewise, when the threshold is set to 1 in a reset phase by a particular sequencing of turning on/off the pull-up and the pull-down devices and conditioning of the inputs 'a' and 'b', it means that during an evaluation phase when either input 'a' or 'b' is logic high, then voltage on node n1 is logic high. Continuing with this example, when both the inputs 'a' and 'b' are logic low, then the voltage on node n1 resolves to logic low. As such, 2-input capacitive circuit1100is programmed or configured as an OR gate at node n1 and a NOR gate at output out. So, the same circuit can be used as an AND/NAND or OR/NOR gate by conditioning the inputs and resetting or setting the voltage on the summing node during a reset phase. Subsequently, in the evaluation phase the circuit will behave as an AND/NAND or OR/NOR gate. In some embodiments, conditioning circuitry1102turns off the pull-up device MP1 and the pull-down device MN1 during an evaluation phase separate from the reset phase. The reset phase or evaluation phase is indicated by the logic level of Config. For example, conditioning circuitry1102sets the first control (up) to logic high (Vdd) and the second control (down) to logic low (ground) during an evaluation phase (e.g., Config is set to logic 1). 
Likewise, in a reset phase, Config is set to 0. This is just an example, and the logic level of Config can be modified to present the evaluation phase and the reset phase. In some embodiments, conditioning circuitry1102sets the threshold to 0 in a reset phase by first enabling or turning on the pull-down device MN1, and then turning on or enabling the pull-up device MP1, and providing logic 0 to the first input and the second input. A threshold of 0 means that the capacitive input circuit is an always on circuit regardless of the logic levels of inputs. In one such embodiment, during the evaluation phase for the circuit configured with threshold of zero, the logic value on node n1 is logic 1, and the logic value on output out is logic 0 (assuming the driver is an inverter). In some embodiments, conditioning circuitry1102sets the threshold to 1 in a reset phase by first enabling or turning on the pull-down device MN1, and then turning on the pull-up device MP1, and providing logic 1 to the first input ‘a’ and logic 0 to the second input ‘b’. In some embodiments, conditioning circuitry1102sets the threshold to 1 in a reset phase by first enabling or turning on the pull-up device MP1, and then turning on or enabling the pull-down device MN1, and providing logic 0 to the first input ‘a’ and to the second input ‘b’. In some embodiments, conditioning circuitry1102sets the threshold to 2 in a reset phase by first enabling or turning on the pull-up device MP1, and then turning on the pull-down device MN1, and providing logic 1 to the first input ‘a’ and logic 0 the second input ‘b’. In some embodiments, conditioning circuitry1102sets the threshold to 2 in a reset phase by first enabling or turning on the pull-down device MN1, and then turning on the pull-up device MP1, and providing logic 1 (e.g., Vdd) to the first input ‘a’ and to the second input ‘b’. In some embodiments, conditioning circuitry1102sets the threshold to 3 in a reset phase by first enabling or turning on the pull-up device MP1, and then turning on or enabling the pull-down device MN1, and providing logic 1 to the first input and the second input. A threshold of 3 for a 2-input capacitive circuit means that capacitive input circuit is an always off circuit regardless of the logic levels of inputs. In one such embodiment, during the evaluation phase for the circuit configured with threshold of n+1 (e.g., 3, where ‘n’ is the number of capacitive inputs), the logic value on node n1 is floating or drifting and the charge on that node may eventually discharge to ground. In some cases, the voltage on node n1 may charge to supply level via the pull-up device when the node n1 is floating. For example, initially the voltage on the floating node discharges to zero voltages, but then it may charge up via leakage to the supply voltage over time. In some embodiments, when the threshold is n+1, the capacitive input circuit may not turn on even when the inputs to the capacitors are changing. In some embodiments, the voltage on node n1 is zero volts regarding of input setting when the threshold in n+1. In some embodiments, a logic decides about the kind of logic function to configure 2-input capacitive circuit1100. For example, a control logic block or a conditioning circuit1102may determine whether 2-input capacitive circuit1100is to behave as an AND/NAND gate, an OR/NOR gate, an always-on circuit, or a disconnected circuit. 
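The function selection just described can be illustrated with a minimal evaluation-phase model in Python, assuming equal capacitor weights (so node n1 resolves high when the count of logic-high inputs meets the programmed threshold) and an inverting driver; the floating or drifting node behavior at a threshold of n+1 is idealized here as a constant logic 0. The function name evaluate_2input and the printed labels are illustrative only and not part of the described circuit.

```python
# Minimal evaluation-phase sketch of the 2-input circuit for thresholds 0..3,
# assuming equal capacitances and an inverting driver on node n1.
def evaluate_2input(a, b, threshold):
    """Return (n1, out) for one evaluation; 'out' assumes an inverting driver."""
    n1 = 1 if (a + b) >= threshold else 0
    return n1, 1 - n1

for threshold, label in [(0, "always-on"), (1, "OR/NOR"), (2, "AND/NAND"), (3, "disconnected")]:
    rows = [(a, b, *evaluate_2input(a, b, threshold)) for a in (0, 1) for b in (0, 1)]
    print(f"threshold={threshold} ({label}): {rows}")
```

The rows printed for a threshold of 2 match AND on node n1 and NAND on out, and those for a threshold of 1 match OR and NOR, consistent with the reset-phase programming described above and tabulated further below.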
In some embodiments, control logic block or a conditioning circuit1102places 2-input capacitive circuit1100in a reset phase. In the reset phase, the inputs 'a' and 'b' and controls for the pull-up device MP1 and pull-down device MN1 are set or conditioned to configure or adjust the threshold for the 2-input capacitive circuit. In some embodiments, control logic block or conditioning circuit1102may adjust a threshold of 2-input capacitive circuit1100to configure the 2-input capacitive circuit1100as a particular logic function. When the input capacitors are ferroelectric capacitors (because they include ferroelectric material for their dielectric), control logic block or conditioning circuit1102sequences the turning on of the pull-up device MP1 and the pull-down device MN1 to achieve a particular threshold for a given set of inputs to the capacitors. In some embodiments, the pull-up device MP1 is turned on before the pull-down device MN1. In some embodiments, the pull-down device MN1 is turned on before the pull-up device MP1.

Table 13 illustrates an example of input conditioning to set various thresholds during a reset phase for 2-input capacitive circuit1100. In various embodiments, during the sequence, one of the pull-up or pull-down devices is on at a time to avoid crossbar current or short circuit current. For example, when the pull-down device MN1 is enabled, the pull-up device MP1 is disabled. Likewise, when the pull-up device MP1 is enabled, the pull-down device MN1 is disabled. Here, time T3 (or event T3) occurs after time T2 (or event T2) which occurs after time T1 (or event T1). In some embodiments, the separation between T1, T2, and T3 is between ½ cycle and 1 cycle, where a cycle is in GHz (e.g., 1 GHz or more).

TABLE 13
Input 'a'  Input 'b'  Time T1         Time T2          Time T3         Threshold
 0          0         1 (enable MN1)  0 (disable MN1)  0 (enable MP1)  0
 1          0         1 (enable MN1)  0 (disable MN1)  0 (enable MP1)  1
 1          1         1 (enable MN1)  0 (disable MN1)  0 (enable MP1)  2
 0          0         0 (enable MP1)  1 (disable MP1)  1 (enable MN1)  1
 1          0         0 (enable MP1)  1 (disable MP1)  1 (enable MN1)  2
 1          1         0 (enable MP1)  1 (disable MP1)  1 (enable MN1)  3

While the embodiments are illustrated with reference to the same capacitances for the first capacitor C1 and the second capacitor C2, the threshold can be affected by changing the capacitive ratio of C1 and C2. For example, the input conditioning scheme and the pull-up and pull-down device control can result in a different threshold than that in Table 13 when the capacitive ratio of C1 and C2 is not 1:1. Overall, the configuring scheme of various embodiments herein provides the flexibility of programming the threshold for 2-input capacitive circuit1100in a reset phase to achieve a certain logic function in the evaluation phase. In some embodiments, control logic block or a conditioning circuit1102releases the reset phase and allows the 2-input capacitive circuit to evaluate the inputs in the evaluation phase.

Table 14 illustrates a logic function achieved in the evaluation phase by configuring or adjusting the threshold in the reset phase for 2-input capacitive circuit1100. In various embodiments, the pull-up device MP1 and the pull-down device MN1 are disabled during the evaluation phase.

TABLE 14
Threshold  Logic function on node n1  Logic function on node "out"
 3         Logic 0                    Logic 1
 2         AND                        NAND
 1         OR                         NOR
 0         Logic 1                    Logic 0

FIG.12illustrates 3-input adjustable threshold gate1200with ferroelectric capacitors and a pull-down device and a pull-up device on a summing node, in accordance with some embodiments.
3-input capacitive circuit1200is like 2-input capacitive circuit1100but for additional input 'c' and associated capacitor C3. In some embodiments, a first terminal of capacitor C3 is coupled to input 'c' while a second terminal of capacitor C3 is coupled to summing node n1. Conditioning circuit1102is replaced with a conditioning circuit1202. Conditioning circuitry1202may receive inputs in1, in2, and in3 and configuration setting (e.g., reset or evaluation) to determine the outputs 'a', 'b', 'c', controls "up", and "down". During the evaluation phase, in1 is passed on to output 'a', in2 is passed on to 'b', and in3 is passed on to 'c'. During the reset phase, depending on a desired threshold, outputs 'a', 'b', and 'c' are conditioned. In some embodiments, by turning on/off the pull-up device MP1 and pull-down device MN1 in a sequence, and conditioning the inputs 'a', 'b', and 'c' during a reset phase, the charge at node n1 is set. As such, in an evaluation phase when the pull-up and the pull-down devices (MP1 and MN1) are disabled, 3-input capacitive circuit1200attains a desired function.

In some embodiments, conditioning circuitry1202sets the threshold to 0 in a reset phase by first enabling or turning on the pull-down device MN1, and then turning on or enabling the pull-up device MP1, and providing logic 0 to the first input 'a', logic 0 to the second input 'b', and logic 0 to the third input 'c'. A threshold of 0 means that the capacitive input circuit is an always on circuit regardless of the logic levels of inputs. In one such embodiment, during the evaluation phase for the circuit configured with threshold of zero, the logic value on node n1 is logic 1, and the logic value on output out is logic 0 (assuming the driver is an inverter).

In some embodiments, conditioning circuitry1202sets the threshold to 1 in a reset phase by first enabling or turning on the pull-down device MN1, and then turning on or enabling the pull-up device MP1, and providing logic 1 to the first input 'a', logic 0 to the second input 'b', and logic 0 to the third input 'c'. In some embodiments, conditioning circuitry1202sets the threshold to 1 in a reset phase by first enabling or turning on the pull-up device MP1, and then turning on or enabling the pull-down device MN1, and providing logic 0 to the first input 'a', logic 0 to the second input 'b', and logic 0 to the third input 'c'. In some embodiments, when the threshold is set to 1 in a reset phase by a particular sequencing of turning on/off the pull-up and the pull-down devices and conditioning of the inputs 'a', 'b', and 'c'; it means that during an evaluation phase when any of the inputs 'a', 'b', or 'c' is logic high, then voltage on node n1 is logic high. Continuing with this example, in the evaluation phase when all inputs 'a', 'b', and 'c' are logic low, then the voltage on node n1 resolves to logic low. As such, 3-input capacitive circuit1200is programmed or configured as an OR gate at node n1 and a NOR gate at output out (assuming the driver circuitry is an inverter).

In some embodiments, conditioning circuitry1202sets the threshold to 2 in a reset phase by first enabling or turning on the pull-down device MN1, and then turning on or enabling the pull-up device MP1, and providing logic 1 to the first input 'a', logic 1 to the second input 'b', and logic 0 to the third input 'c'.
In some embodiments, conditioning circuitry1202sets the threshold to 2 in a reset phase by first enabling or turning on the pull-up device MP1, and then turning on or enabling the pull-down device MN1, and providing logic 1 to the first input ‘a’, logic 0 to the second input ‘b’, and logic 0 to the third input ‘c’. In some embodiments, when the threshold is set to 2 in a reset phase by a particular sequencing of turning on/off the pull-up and the pull-down devices and conditioning of the inputs ‘a’, ‘b’, and ‘c’; it means that during an evaluation phase when at least two of the three inputs ‘a’, ‘b’, and ‘c’ are logic high, then voltage on node n1 is logic high. Continuing with this example, when at least two inputs of the three inputs ‘a’ ‘b’, and ‘c’ is a logic low, then the voltage on node n1 resolves to logic low. As such, 3-input capacitive circuit1200is programmed or configured as a majority gate at node n1 and a minority gate at output out (when the driver circuitry is an inverter). In some cases, depending upon the leakage balance of pull-up transistor MP1 and pull-down MN1 as it impacts charge on the summing node n1, 3-input capacitive circuit1200may lose its majority logic functionality over time. This loss in functionality of the majority function can be restored by resetting the summing node n1 via transistors MP1 and MN1, in accordance with some embodiments. In some embodiments, conditioning circuitry1202sets the threshold to 3 in a reset phase by first enabling or turning on the pull-down device MN1, and then turning on or enabling the pull-up device MP1, and providing logic 1 to the first input ‘a’, logic 1 to the second input ‘b’, and logic 1 to the third input ‘c’. In some embodiments, conditioning circuitry1202sets the threshold to 3 in a reset phase by first enabling or turning on the pull-up device MP1, and then turning on or enabling the pull-down device MN1, and providing logic 1 to the first input ‘a’, logic 1 to the second input ‘b’, and logic 0 to the third input ‘c’. In some embodiments, when the threshold is set to 3 in a reset phase by a particular sequencing of turning on/off the pull-up and the pull-down devices and conditioning of the inputs ‘a’, ‘b’, and ‘c’; it means that during an evaluation phase when all three inputs ‘a’, ‘b’, and ‘c’ are logic high, then voltage on node n1 is logic high. Continuing with this example, when any of the three inputs ‘a’ ‘b’, and ‘c’ is a logic low, then the voltage on node n1 resolves to logic low. As such, 3-input capacitive circuit1200is programmed or configured as a 3-input AND at node n1 and a 3-input NAND gate at output out (assuming the driver circuitry is an inverter). In some embodiments, conditioning circuitry1202sets the threshold to 4 in a reset phase by first enabling or turning on the pull-up device MP1, and then turning on or enabling the pull-down device MN1, and providing logic 1 to the first input ‘a’, the second input ‘b’, and the third input ‘c’. A threshold of 4 for a 3-input capacitive circuit means that capacitive input circuit is an always off circuit regardless of the logic levels of the inputs. In one such embodiment, during the evaluation phase for the circuit configured with threshold of n+1 (e.g., 4, where ‘n’ is the number of capacitive inputs), the logic value on node n1 is floating and may eventually discharge to ground or charge to supply level. In some embodiments, the voltage on node n1 is zero volts regarding of input setting when the threshold in 4 (e.g., n+1). 
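Before the summary that follows, a brief check of the majority behavior described above may help: under the same equal-capacitance assumption used earlier, a programmed threshold of 2 makes the 3-input circuit ofFIG.12resolve node n1 to the majority of its inputs, with out resolving to the minority when the driver is an inverter. The helper below is an illustrative sketch only.

```python
# Truth-table check that threshold = 2 yields a 3-input majority gate at node n1
# (and a minority gate at 'out' via an inverting driver), assuming equal capacitances.
from itertools import product

def evaluate_3input(a, b, c, threshold=2):
    n1 = 1 if (a + b + c) >= threshold else 0
    return n1, 1 - n1

for a, b, c in product((0, 1), repeat=3):
    n1, out = evaluate_3input(a, b, c)
    assert n1 == (1 if (a + b + c) >= 2 else 0)   # majority of the three inputs
    print(f"a={a} b={b} c={c} -> n1={n1} (majority), out={out} (minority)")
```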
So, the same circuit can be used as a majority/minority gate, AND/NAND, OR/NOR, always-on gate, or a disconnected gate by conditioning the inputs and resetting or setting the voltage on the summing node during a reset phase. Subsequently, in the evaluation phase the circuit will behave as a 3-input majority/minority, a 3-input AND/NAND, a 3-input OR/NOR gate, a 3-input always-on gate, or a 3-input disconnected gate. In some embodiments, a logic decides about the kind of logic function to configure 3-input capacitive circuit1200. For example, a control logic block or a conditioning circuit1202may determine whether 3-input capacitive circuit1200is to behave as an always-on circuit, always disconnected circuit, a majority/minority, an AND/NAND gate, or an OR/NOR gate. In some embodiments, control logic block or conditioning circuit1202may adjust a threshold of 3-input capacitive circuit1200to configure the 3-input capacitive circuit1200as a particular logic function.

In some embodiments, control logic block or a conditioning circuit1202places 3-input capacitive circuit1200in a reset phase. In the reset phase, the inputs 'a', 'b', and 'c' and controls for the pull-up device MP1 and pull-down device MN1 are set or conditioned to configure or adjust the threshold for the 3-input capacitive circuit. In some embodiments, control logic block or a conditioning circuit1202may adjust a threshold of 3-input capacitive circuit1200to configure the 3-input capacitive circuit1200as a particular logic function. When the input capacitors are ferroelectric capacitors (because they include ferroelectric material for their dielectric), control logic block or a conditioning circuit1202sequences the turning on of the pull-up device MP1 and the pull-down device MN1 to achieve a particular threshold for a given set of inputs to the capacitors. In some embodiments, the pull-up device MP1 is turned on before the pull-down device MN1. In some embodiments, the pull-down device MN1 is turned on before the pull-up device MP1.

Table 15 illustrates an example of input conditioning to set various thresholds during a reset phase for 3-input capacitive circuit1200. In various embodiments, during the sequence, one of the pull-up or pull-down devices is on at a time to avoid crossbar current or short circuit current. For example, when the pull-down device MN1 is enabled, the pull-up device MP1 is disabled. Likewise, when the pull-up device MP1 is enabled, the pull-down device MN1 is disabled. Here, time T3 (or event T3) occurs after time T2 (or event T2) which occurs after time T1 (or event T1). In some embodiments, the separation between T1, T2, and T3 is between ½ cycle and 1 cycle, where a cycle is in GHz (e.g., 1 GHz or more).

TABLE 15
'a'  'b'  'c'  T1              T2               T3              Threshold
 0    0    0   1 (enable MN1)  0 (disable MN1)  0 (enable MP1)  0
 1    0    0   1 (enable MN1)  0 (disable MN1)  0 (enable MP1)  1
 1    1    0   1 (enable MN1)  0 (disable MN1)  0 (enable MP1)  2
 1    1    1   1 (enable MN1)  0 (disable MN1)  0 (enable MP1)  3
 0    0    0   0 (enable MP1)  1 (disable MP1)  1 (enable MN1)  1
 1    0    0   0 (enable MP1)  1 (disable MP1)  1 (enable MN1)  2
 1    1    0   0 (enable MP1)  1 (disable MP1)  1 (enable MN1)  3
 1    1    1   0 (enable MP1)  1 (disable MP1)  1 (enable MN1)  4

While the embodiments are illustrated with reference to the same capacitances for the first capacitor C1, the second capacitor C2, and the third capacitor C3, the threshold can be affected by changing the capacitive ratio of C1, C2, and C3 relative to one another.
For example, the input conditioning scheme and the pull-up and pull-down device control can result in a different threshold than that in Table 15 when the capacitive ratio of C1, C2, and C3 is not 1:1:1. Overall, the configuring scheme of various embodiments herein provides the flexibility of programming or adjusting the threshold for 3-input capacitive circuit1200in a reset phase to achieve a certain logic function in the evaluation phase. In some embodiments, control logic block or a conditioning circuit1202releases the reset phase and allows the 3-input capacitive circuit to evaluate the inputs in the evaluation phase.

Table 16 illustrates a logic function achieved in the evaluation phase by configuring the threshold in the reset phase for 3-input capacitive circuit1200. In various embodiments, the pull-up device MP1 and the pull-down device MN1 are disabled during the evaluation phase.

TABLE 16
Threshold  Logic function on node n1  Logic function on node "out"
 0         Logic 1                    Logic 0
 1         OR                         NOR
 2         Majority                   Minority
 3         AND                        NAND
 4         Logic 0                    Logic 1

FIG.13illustrates 5-input adjustable threshold gate1300with ferroelectric capacitors and a pull-down device and a pull-up device on a summing node, in accordance with some embodiments. 5-input capacitive circuit1300is like 3-input capacitive circuit1200but for additional inputs 'd' and 'e' and associated capacitors C4 and C5. In some embodiments, a first terminal of capacitor C4 is coupled to input 'd' while a second terminal of capacitor C4 is coupled to summing node n1. In some embodiments, a first terminal of capacitor C5 is coupled to input 'e' while a second terminal of capacitor C5 is coupled to summing node n1. Conditioning circuit1202is replaced with a conditioning circuit1302. Conditioning circuitry1302may receive inputs in1, in2, in3, in4, and in5 and configuration setting (e.g., reset or evaluation) to determine the outputs 'a', 'b', 'c', 'd', 'e', controls "up", and "down". During the evaluation phase, in1 is passed on to output 'a', in2 is passed on to 'b', in3 is passed on to 'c', in4 is passed on to 'd', and in5 is passed on to 'e'. During the reset phase, depending on a desired threshold, outputs 'a', 'b', 'c', 'd', and 'e' are conditioned. In some embodiments, by turning on/off the pull-up device MP1 and pull-down device MN1 in a sequence, and conditioning the inputs 'a', 'b', 'c', 'd', and 'e' during a reset phase, the charge at node n1 is set. As such, in an evaluation phase when the pull-up and pull-down devices (MP1 and MN1) are disabled, 5-input capacitive circuit1300attains a desired function.

In some embodiments, conditioning circuitry1302sets the threshold to 0 in a reset phase by first enabling or turning on the pull-down device MN1, and then turning on or enabling the pull-up device MP1, and providing logic 0 to the first input 'a', logic 0 to the second input 'b', and logic 0 to the third input 'c', logic 0 to the fourth input 'd', and logic 0 to the fifth input 'e'. A threshold of 0 means that the capacitive input circuit is an always on circuit regardless of the logic levels of inputs. In one such embodiment, during the evaluation phase for the circuit configured with threshold of zero, the logic value on node n1 is logic 1, and the logic value on output out is logic 0 (assuming the driver is an inverter).
In some embodiments, conditioning circuitry1302sets the threshold to 1 in a reset phase by first enabling or turning on the pull-down device MN1, and then turning on or enabling the pull-up device MP1, and providing logic 1 to the first input ‘a’, logic 0 to the second input ‘b’, and logic 0 to the third input ‘c’, logic 0 to the fourth input ‘d’, and logic 0 to the fifth input ‘e’. In some embodiments, conditioning circuitry1302sets the threshold to 1 in a reset phase by first enabling or turning on the pull-up device MP1, and then turning on or enabling the pull-down device MN1, and providing logic 0 to the first input ‘a’, logic 0 to the second input ‘b’, and logic 0 to the third input ‘c’, logic 0 to the fourth input ‘d’, and logic 0 to the fifth input ‘e’. In some embodiments, when the threshold is set to 1 in a reset phase by a particular sequencing of turning on/off the pull-up and the pull-down devices and conditioning of the inputs ‘a’ ‘b’, ‘c’, ‘d’, and ‘e’; it means that during an evaluation phase when any of the inputs ‘a’ ‘b’, ‘c’, ‘d’, or ‘e’ is logic high, then voltage on node n1 is logic high. Continuing with this example, when all inputs ‘a’, ‘b’, ‘c’ ‘d’, or ‘e’ is a logic low, then the voltage on node n1 resolves to logic low. As such, 5-input capacitive circuit1300is programmed or configured as an OR gate at node n1 and an NOR gate at output out. In some embodiments, conditioning circuitry1302sets the threshold to 2 in a reset phase by first enabling or turning on the pull-down device MN1, and then turning on or enabling the pull-up device MP1, and providing logic 1 to the first input ‘a’, logic 1 to the second input ‘b’, and logic 0 to the third input ‘c’, logic 0 to the fourth input ‘d’, and logic 0 to the fifth input ‘e’. In some embodiments, conditioning circuitry1302sets the threshold to 2 in a reset phase by first enabling or turning on the pull-up device MP1, and then turning on or enabling the pull-down device MN1, and providing logic 1 to the first input ‘a’, logic 0 to the second input ‘b’, and logic 0 to the third input ‘c’, logic 0 to the fourth input ‘d’, and logic 0 to the fifth input ‘e’. In one instance, when the threshold is set to 2 in a reset phase by a particular sequencing of turning on/off the pull-up and the pull-down devices and conditioning of the inputs ‘a’, ‘b’, ‘c’, ‘d’, and ‘e’; it means that during an evaluation phase when at least two of the five inputs ‘a’, ‘b’ ‘c’, ‘d’, and ‘e’ are logic high, then voltage on node n1 is logic high. Continuing with this example, when one or zero inputs of the five inputs ‘a’ ‘b’, ‘c’, ‘d’, and ‘e’ are a logic high, then the voltage on node n1 resolves to logic low. As such, 5-input capacitive circuit1300is programmed or configured as a 5-input majority 0 gate-like logic (e.g., a threshold gate with a threshold of 2) at node n1 and a 5-input minority 0 gate-like logic (e.g., an inverted threshold gate with a threshold of 2) at output out. In some embodiments, conditioning circuitry1302sets the threshold to 3 in a reset phase by first enabling or turning on the pull-down device MN1, and then turning on or enabling the pull-up device MP1, and providing logic 1 to the first input ‘a’, logic 1 to the second input ‘b’, and logic 1 to the third input ‘c’, logic 0 to the fourth input ‘d’, and logic 0 to the fifth input ‘e’. 
In some embodiments, conditioning circuitry1302sets the threshold to 3 in a reset phase by first enabling or turning on the pull-up device MP1, and then turning on or enabling the pull-down device MN1, and providing logic 1 to the first input ‘a’, logic 1 to the second input ‘b’, and logic 0 to the third input ‘c’, logic 0 to the fourth input ‘d’, and logic 0 to the fifth input ‘e’. In one instance, when the threshold is set to 3 in a reset phase by a particular sequencing of turning on/off the pull-up and/or the pull-down devices and conditioning of the inputs ‘a’, ‘b’, ‘c’, ‘d’, and ‘e’; it means that during an evaluation phase when at least three of the five inputs ‘a’, ‘b’ ‘c’, ‘d’, and ‘e’ are logic high, then voltage on node n1 is logic high. Continuing with this example, when at least two inputs of the five inputs ‘a’ ‘b’, ‘c’, ‘d’, and ‘e’ is a logic low (or2or fewer inputs are logic high), then the voltage on node n1 resolves to logic low. As such, 5-input capacitive circuit1300is programmed or configured as a 5-input majority gate logic at node n1 and a 5-input minority gate logic at output out (assuming driver circuitry801is an inverter). In some embodiments, conditioning circuitry1302sets the threshold to 4 in a reset phase by first enabling or turning on the pull-down device MN1, and then turning on or enabling the pull-up device MP1, and providing logic 1 to the first input ‘a’, logic 1 to the second input ‘b’, and logic 1 to the third input ‘c’, logic 1 to the fourth input ‘d’, and logic 0 to the fifth input ‘e’. In some embodiments, conditioning circuitry1302sets the threshold to 4 in a reset phase by first enabling or turning on the pull-up device MP1, and then turning on or enabling the pull-down device MN1, and providing logic 1 to the first input ‘a’, logic 1 to the second input ‘b’, and logic 1 to the third input ‘c’, logic 0 to the fourth input ‘d’, and logic 0 to the fifth input ‘e’. In some embodiments, when the threshold is set to 4 in a reset phase by a particular sequencing of turning on/off the pull-up and the pull-down devices and conditioning of the inputs ‘a’, ‘b’, ‘c’, ‘d’, and ‘e’; it means that during an evaluation phase when at least four inputs from the five inputs ‘a’, ‘b’ ‘c’, ‘d’ and ‘e’ are logic high, then voltage on node n1 is logic high. Continuing with this example, when three or fewer inputs from the five inputs ‘a’ ‘b’, ‘c’, ‘d’ and ‘e’ are logic high, then the voltage on node n1 resolves to logic low. As such, 5-input capacitive circuit1300is programmed or configured as a 5-input majority 1 gate-like logic (e.g., a threshold gate with a threshold of 4) at node n1 and a 5-input minority 1 gate-like logic (e.g., an inverted threshold gate with a threshold of 4) at output out. In some embodiments, conditioning circuitry1302sets the threshold to 5 in a reset phase by first enabling or turning on the pull-down device MN1, and then turning on or enabling the pull-up device MP1, and providing logic 1 to the first input ‘a’, logic 1 to the second input ‘b’, and logic 1 to the third input ‘c’, logic 1 to the fourth input ‘d’, and logic 1 to the fifth input ‘e’. In some embodiments, conditioning circuitry1302sets the threshold to 5 in a reset phase by first enabling or turning on the pull-up device MP1, and then turning on or enabling the pull-down device MN1, and providing logic 1 to the first input ‘a’, logic 1 to the second input ‘b’, and logic 1 to the third input ‘c’, logic 1 to the fourth input ‘d’, and logic 0 to the fifth input ‘e’. 
In some embodiments, when the threshold is set to 5 in a reset phase by a particular sequencing of turning on/off the pull-up and the pull-down devices and conditioning of the inputs 'a', 'b', 'c', 'd', and 'e'; it means that during an evaluation phase when all five inputs 'a', 'b', 'c', 'd', and 'e' are logic high, then voltage on node n1 is logic high. Continuing with this example, when any of the five inputs 'a', 'b', 'c', 'd', and 'e' is logic low, then the voltage on node n1 resolves to logic low. As such, 5-input capacitive circuit1300is programmed or configured as a 5-input AND at node n1 and a 5-input NAND gate at output out (assuming that the driver circuitry801is an inverter).

In some embodiments, conditioning circuitry1302sets the threshold to 6 in a reset phase by first enabling or turning on the pull-up device MP1, and then turning on or enabling the pull-down device MN1, and providing logic 1 to the first input 'a', logic 1 to the second input 'b', and logic 1 to the third input 'c', logic 1 to the fourth input 'd', and logic 1 to the fifth input 'e'. A threshold of 6 for a 5-input capacitive circuit means that the capacitive input circuit is an always off circuit regardless of the logic levels of the inputs. In one such embodiment, during the evaluation phase for the circuit configured with threshold of n+1 (e.g., 6, where 'n' is the number of capacitive inputs), the logic value on node n1 is floating and may eventually discharge to ground or charge to supply level. In some embodiments, the voltage on node n1 is zero volts regardless of the input setting when the threshold is 6 (e.g., n+1).

So, the same circuit can be used as a majority/minority gate logic, a majority/minority gate-like logic (or threshold logic gate), an AND/NAND gate, an OR/NOR gate, a gate driving a predetermined output, or a disconnected gate by conditioning the inputs and resetting or setting the voltage on the summing node in a sequence during a reset phase. Subsequently, in the evaluation phase the circuit will behave as a 5-input majority/minority gate logic, 5-input majority/minority gate-like or threshold logic, 5-input AND/NAND gate, 5-input OR/NOR gate, an always-on gate, or a disconnected gate.

Table 17 illustrates an example of input conditioning to set various thresholds during a reset phase for 5-input capacitive circuit1300. In various embodiments, during the sequence, one of the pull-up or pull-down devices is on at a time to avoid crossbar current or short circuit current. For example, when the pull-down device MN1 is enabled, the pull-up device MP1 is disabled. Likewise, when the pull-up device MP1 is enabled, the pull-down device MN1 is disabled. Here, time T3 (or event T3) occurs after time T2 (or event T2) which occurs after time T1 (or event T1). In some embodiments, the separation between T1, T2, and T3 is between ½ cycle and 1 cycle, where a cycle is in GHz (e.g., 1 GHz or more).
TABLE 17
'a'  'b'  'c'  'd'  'e'  T1              T2               T3              Threshold
 0    0    0    0    0   1 (enable MN1)  0 (disable MN1)  0 (enable MP1)  0
 1    0    0    0    0   1 (enable MN1)  0 (disable MN1)  0 (enable MP1)  1
 1    1    0    0    0   1 (enable MN1)  0 (disable MN1)  0 (enable MP1)  2
 1    1    1    0    0   1 (enable MN1)  0 (disable MN1)  0 (enable MP1)  3
 1    1    1    1    0   1 (enable MN1)  0 (disable MN1)  0 (enable MP1)  4
 1    1    1    1    1   1 (enable MN1)  0 (disable MN1)  0 (enable MP1)  5
 0    0    0    0    0   0 (enable MP1)  1 (disable MP1)  1 (enable MN1)  1
 1    0    0    0    0   0 (enable MP1)  1 (disable MP1)  1 (enable MN1)  2
 1    1    0    0    0   0 (enable MP1)  1 (disable MP1)  1 (enable MN1)  3
 1    1    1    0    0   0 (enable MP1)  1 (disable MP1)  1 (enable MN1)  4
 1    1    1    1    0   0 (enable MP1)  1 (disable MP1)  1 (enable MN1)  5
 1    1    1    1    1   0 (enable MP1)  1 (disable MP1)  1 (enable MN1)  6

Table 18 illustrates a logic function achieved in the evaluation phase by configuring the threshold in the reset phase for 5-input capacitive circuit1300. In various embodiments, the pull-up device MP1 and the pull-down device MN1 are disabled during the evaluation phase.

TABLE 18
Threshold  Logic function on node n1                                             Logic function on node "out"
 0         Logic 1                                                               Logic 0
 1         OR                                                                    NOR
 2         Majority 0 gate-like (e.g., a threshold gate with a threshold of 2)   Minority 0 gate-like (e.g., an inverted threshold gate with a threshold of 2)
 3         Majority gate                                                         Minority gate
 4         Majority 1 gate-like (e.g., a threshold gate with a threshold of 4)   Minority 1 gate-like (e.g., an inverted threshold gate with a threshold of 4)
 5         AND                                                                   NAND
 6         Logic 0                                                               Logic 1

By setting inputs to have a particular number of 0s and 1s and at the same time controlling the logic level appearing at the summation node (n1) by controlling the pull-up and pull-down devices in a sequence, two effects are accomplished, in accordance with various embodiments. First, each capacitor stores a deterministic charge. Second, a specific displacement charge is put on the summing or floating node n1. Setting a specific displacement charge value at the floating node n1 sets the threshold of when the floating node (n1) during the evaluation phase is allowed to go to the logic value of 0 or 1. For example, for an n-input threshold gate, if the threshold is set such that the floating node n1 goes closer to 1 logic level than 0 logic level when all of the inputs are set to 1, then the capacitive input circuit becomes a NAND gate. Similarly, if it is desired that any one input becoming logic 1 in the evaluation phase gives a voltage closer to logic level 1 at the floating node, then the circuit becomes an OR gate with n inputs. Similarly, any intermediate threshold from 0 to n can be set. In some embodiments, a threshold of zero means that the gate becomes a buffer. For instance, the circuit is always turned on to input logic level 1. A threshold of n+1 for an n-input gate means that the summation node n1 may not go closer to logic level 1, even when all the inputs are set to 1. This would mean that the capacitive input circuit becomes a disconnected circuit.

In general, when the input capacitive circuit is configured as a threshold gate, it can be expressed as:

Y = 1 if Σ (j=1 to m) Wj·Xj ≥ T, and Y = 0 if Σ (j=1 to m) Wj·Xj < T,

where 'Y' is the output (logic level on node n1), 'X' is the input, 'W' is the capacitive weight, and 'T' is the threshold. Assuming all Ws are ones (e.g., all capacitors have the same capacitance), when T is equal to the number of inputs, an AND gate is realized at node n1. In this example, for a 3-input capacitive circuit, a 3-input AND gate is realized when the threshold is set to 3. In another example, when T equals 1, an OR gate is realized at node n1. In yet another example, when T is equal to 0, the input capacitive circuit is always on, and the voltage on node n1 is logic 1.
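As a quick numerical sketch of the threshold-gate relation above (the remaining corner case, T greater than the number of inputs, is discussed next), the snippet below evaluates Y for a 5-input example with the weights standing in for relative capacitances; the chosen input pattern and weight values are illustrative only.

```python
# Direct transcription of Y = 1 if sum_j(Wj * Xj) >= T else 0, with weights
# standing in for relative capacitances; values below are example placeholders.
def threshold_gate(inputs, weights, T):
    """Return Y (the logic level modeled on node n1) for a capacitive threshold gate."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= T else 0

equal_weights = [1, 1, 1, 1, 1]
x = [1, 1, 0, 1, 0]                              # three of five inputs high

print(threshold_gate(x, equal_weights, T=3))     # 1: 5-input majority behavior
print(threshold_gate(x, equal_weights, T=4))     # 0: "majority 1 gate-like" needs 4 highs
print(threshold_gate(x, equal_weights, T=5))     # 0: AND at node n1 needs all 5 highs
print(threshold_gate(x, equal_weights, T=1))     # 1: OR at node n1
# Unequal capacitances change the effective behavior, e.g., doubling C1's weight:
print(threshold_gate(x, [2, 1, 1, 1, 1], T=4))   # 1: weighted sum is now 4
```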
In yet another example, when T is greater than the number of inputs to the circuit, the circuit is always off or disconnected. In this case, voltage on node n1 is floating and may over time leak away. While the embodiments are described with reference to up to 5-input capacitive circuits using an equal ratio for the capacitances, the same idea can be expanded to an input capacitive circuit with any number of inputs and with equal or unequal ratios for the capacitances. In various embodiments, the capacitors are ferroelectric capacitors. In some embodiments, the ferroelectric capacitors are planar capacitors. In some embodiments, the ferroelectric capacitors are pillar or trench capacitors. In some embodiments, the ferroelectric capacitors are vertically stacked capacitors to reduce the overall footprint of the multi-input capacitive circuit. In some embodiments, the transistors (MP1 and MN1) that charge or discharge the summing node n1 are planar or non-planar transistors. In some embodiments, transistors MP1 and MN1 are fabricated in the front-end of the die on a substrate. In some embodiments, one of the transistors (e.g., MP1 or MN1) is fabricated in the front-end of the die while another one of the transistors is fabricated in the backend of the die such that the stack of capacitors is between the frontend of the die and the backend of the die or between the two transistors. As such, the footprint of the multi-input capacitive circuit may be a footprint of a single transistor or slightly more than that. These backend transistors or switches can be fabricated using any suitable technology such as IGZO (indium gallium zinc oxide). In some embodiments, the ferroelectric capacitors can be formed using transistors configured as capacitors, where transistor gates have ferroelectric material. These capacitors can be on the frontend or the backend of the die.

While the various embodiments are described with reference to driver circuitry801connected at node n1, driver circuitry801can be removed. When input capacitors for a capacitive input circuit are linear capacitors (e.g., comprising linear dielectric material), the voltage developed at node n1 may not reach rail-to-rail. As such, the subsequent driver circuitry801connected to node n1 may experience static leakage. Static leakage increases power consumption. In various embodiments, when input capacitors comprise nonlinear polar material (e.g., ferroelectric material), then the voltage developed on node n1 results in reduced static leakage in the subsequent driver circuitry801. One reason for this reduced leakage is because the ferroelectric material in the input capacitors allows the voltage on node n1 to reach closer to rail-to-rail voltage, which reduces static leakage in subsequent driver circuitry801. Here, summation node n1 can maintain displacement charge (to provide logic 0 or logic 1 functions for the programmed threshold) for a longer period compared to linear capacitors. Consequently, the reset overhead of turning on/off the pull-up or pull-down devices is reduced. For example, when the leakage at the summation node n1 is low, the pull-up or pull-down devices may not need to turn on for tens of microseconds, which reduces the reset activity on node n1. Thus, a circuit using nonlinear capacitors (e.g., ferroelectric capacitors) in this configuration becomes a viable option to realize low leakage logic circuits for advanced process technology nodes (e.g., advanced finFET process technology nodes).
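The non-rail-to-rail behavior with linear input capacitors noted above can be illustrated with an idealized charge-sharing estimate; the sketch below assumes equal linear capacitors, no parasitic capacitance on node n1, and no pre-set displacement charge, so it only indicates why the divided voltage sits between the rails rather than at them.

```python
# Idealized capacitive-divider estimate of the evaluation-phase voltage on node n1
# when the input capacitors are linear and equal: Vn1 = (k / n) * Vdd for k of n
# inputs at Vdd. Parasitics and stored displacement charge are ignored on purpose.
def linear_divider_v_n1(num_inputs_high, num_inputs_total, vdd=1.0):
    return (num_inputs_high / num_inputs_total) * vdd

for k in range(6):
    v = linear_divider_v_n1(k, 5)
    print(f"{k} of 5 inputs high -> ideal Vn1 = {v:.2f} V (not rail-to-rail for 0 < k < 5)")
```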
Since the voltage on node n1 for the various threshold gates described herein is closer to rail-to-rail voltage compared to the case when linear input capacitors are used, subsequent driver circuitry801can be removed. As such, the input capacitors with nonlinear polar material can drive another capacitive input circuit directly. Here, closer to rail-to-rail voltage on node n1 using nonlinear polar material based capacitors (e.g., ferroelectric or paraelectric capacitors) implies that the static leakage in the subsequent driver801is reduced compared to the case when voltage on n1 is not close to rail-to-rail voltage. When linear capacitors are used, a voltage divider is formed on node n1 based on the number of capacitors and their logic inputs. Such a voltage divider results in non-rail-to-rail voltage on node n1 that results in static leakage in the subsequent driver801. When nonlinear capacitors are used, the voltage divider is not a linear voltage divider. This results in a voltage on n1 that is much closer to rail-to-rail, which reduces static leakage in the subsequent driver801. The higher the nonlinearity, the closer the voltage on node n1 is to rail-to-rail. Nonlinear capacitors as shown in various embodiments allow the logic gate to have more inputs compared to the case when linear capacitors are used while keeping the leakage through driver801low.

FIG.14illustrates planar linear capacitor structure1400, in accordance with some embodiments. In some embodiments, capacitors for the multi-input capacitive structures are linear capacitors. These capacitors can take any planar form. One such form is illustrated inFIG.14. Here, planar capacitor structure1400is a metal-insulator-metal (MIM) capacitor comprising a bottom electrode, a top electrode, and a linear dielectric between the top electrode and the bottom electrode as shown. In some embodiments, conductive oxide layer(s) are formed between the bottom electrode and the linear dielectric. In some embodiments, conductive oxide layer(s) are formed between the top electrode and the linear dielectric. Examples of conductive oxides include: IrO2, RuO2, PdO2, OsO2, or ReO3. In some examples, conductive oxides are of the form A2O3 (e.g., In2O3, Fe2O3) and ABO3 type, where 'A' is a rare earth element and B is Mn. In some embodiments, the dielectric layer includes one or more of: SiO2, Al2O3, Li2O, HfSiO4, Sc2O3, SrO, HfO2, ZrO2, Y2O3, Ta2O5, BaO, WO3, MoO3, or TiO2. Any suitable conductive material may be used for the top electrode and the bottom electrode. For example, the material for the electrodes may include one or more of: Cu, Al, Ag, Au, W, or Co. In some embodiments, the thickness along the z-axis of the top electrode and bottom electrode is in a range of 1 nm to 30 nm. In some embodiments, the thickness along the z-axis of the dielectric is in a range of 1 nm to 30 nm. In some embodiments, the thickness along the z-axis of the conductive oxide is in a range of 1 nm to 30 nm.

FIG.15Aillustrates a non-planar linear capacitor structure1500, in accordance with some embodiments. In some embodiments, non-planar capacitor structure1500is rectangular in shape. Taking the cylindrical shaped case for example, in some embodiments, the layers of non-planar capacitor structure1500from the center going outwards include bottom electrode1501a, first conductive oxide1512a, linear dielectric material1513, second conductive oxide1512b, and top electrode1501b. A cross-sectional view along the "ab" dashed line is illustrated in the middle ofFIG.15A.
In some embodiments, conducting oxides are removed and the linear dielectric is directly connected to top electrode1501band bottom electrodes1501a. In some embodiments, linear dielectric material1513can include any suitable dielectric, where the thickness of dielectric film is a range of 1 nm to 20 nm. In some embodiments, linear dielectric material1513comprises a higher-K dielectric material. In some embodiments, linear dielectrics include one of: SIO2, Al2O3, Li2O, HfSiO4, Sc2O3, SrO, HfO2, ZrO2, Y2O3, Ta2O5, BaO, WO3, MoO3, or TiO2. The high-k dielectric material may include elements such as: zinc, niobium, scandium, lean yttrium, hafnium, silicon, strontium, oxygen, barium, titanium, zirconium, tantalum, aluminum, and lanthanum. Examples of high-k materials that may be used in the gate dielectric layer include lead zinc niobate, hafnium oxide, lead scandium tantalum oxide, hafnium silicon oxide, yttrium oxide, aluminum oxide, lanthanum oxide, barium strontium titanium oxide, lanthanum aluminum oxide, titanium oxide, zirconium oxide, tantalum oxide, and zirconium silicon oxide. In some embodiments, first conductive oxide1512ais conformally deposited over bottom electrode1501a. In some embodiments, dielectric material1513is conformally deposited over first conductive oxide1512a. In some embodiments, second conductive oxide1512bis conformally deposited over dielectric material1513. In some embodiments, top electrode1501bis conformally deposited over second conductive oxide1512b. In some embodiments, bottom electrode1501ais in the center while top electrode1501bis on an outer circumference of non-planar capacitor structure1500. In some embodiments, material for bottom electrode1501amay include one or more of: Cu, Al, Ag, Au, W, or Co, or their alloys. In some embodiments, material for first conductive oxide1512ainclude: IrO2, RuO2, PdO2, OsO2, or ReO3. In some examples, conductive oxides are of the form A2O3 (e.g., In2O3, Fe2O3) and ABO3 type, where ‘A’ is a rare earth element and B is Mn. In some embodiments, material for second conductive oxide1512bmay be same as the material for first conductive oxide1512a. In some embodiments, material for top electrode1501bmay include one or more of: Cu, Al, Ag, Au, W, or Co, or their alloys. In some embodiments, a first refractive inter-metallic layer (not shown) is formed between dielectric material1513and first conductive oxide1512a. In some embodiments, a second refractive inter-metallic layer (not shown) is formed between dielectric capacitor material1513and second conductive oxide1512b. In these cases, the first and second refractive inter-metallic layers are directly adjacent to their respective conductive oxide layers and to dielectric capacitor material1513. In some embodiments, refractive inter-metallic maintains the capacitive properties of the dielectric capacitor material1513. In some embodiments, refractive inter-metallic comprises Ti and Al (e.g., TiAl compound). In some embodiments, refractive inter-metallic comprises one or more of Ta, W, and/or Co. For example, refractive inter-metallic includes a lattice of Ta, W, and Co. In some embodiments, refractive inter-metallic includes one of: Ti—Al such as Ti3Al, TiAl, TiAl3; Ni—Al such as Ni3Al, NiAl3, NiAl; Ni—Ti, Ni—Ga, Ni2MnGa; FeGa, Fe3Ga; borides, carbides, or nitrides. In some embodiments, TiAl material comprises Ti-(45-48)Al-(1-10)M (at. X trace amount %), with M being at least one element from: V, Cr, Mn, Nb, Ta, W, and Mo, and with trace amounts of 0.1-5% of Si, B, and/or Mg. 
In some embodiments, TiAl is a single-phase alloy γ(TiAl). In some embodiments, TiAl is a two-phase alloy γ(TiAl)+α2(Ti3Al). Single-phase γ alloys contain third alloying elements such as Nb or Ta that promote strengthening and additionally enhance oxidation resistance. The role of the third alloying elements in the two-phase alloys is to raise ductility (V, Cr, Mn), oxidation resistance (Nb, Ta) or combined properties. Additions such as Si, B, and Mg can markedly enhance other properties. The thicknesses of the layers of capacitor1500in the x-axis are in the range of 1 nm to 30 nm. In some embodiment, refractive inter-metallic layers are not used for non-planar capacitor structure1500. FIG.15Billustrates non-planar linear capacitor structure1520without conductive oxides, in accordance with some embodiments. Compared toFIG.15A, here the linear dielectric is adjacent to the top electrode and the bottom electrode. FIG.16Aillustrates multi-input capacitive circuit1600with stacked planar capacitor structure, wherein the multi-input capacitive circuit includes a pull-up device, in accordance with some embodiments. In this example, pull-up device is shown which is controlled by the Up control on its gate terminal. The source and drain terminals of transistor MP1 is coupled to contact (CA). Etch stop layer is used in the fabrication of vias (via0) to connect the source or drain of the transistors to summing node n1 on metal-1 (M1) layer. Another etch stop layer is formed over M1 layer to fabricate vias (vial) to couple to respective M1 layers. In some embodiments, metal-2 (M2) is deposited over vias (vial). M2 layer is then polished. In some embodiments, the capacitor can be moved further up in the stack, where the capacitor level processing is done between different layers. In some embodiments, oxide is deposited over the etch stop layer. Thereafter, dry, or wet etching is performed to form holes for pedestals. The holes are filled with metal and land on the respective M2 layers. Fabrication processes such as interlayer dielectric (ILD) oxide deposition followed by ILD etch (to form holes for the pedestals), deposition of metal into the holes, and subsequent polishing of the surface are used to prepare for post pedestal fabrication. A number of fabrication processes of deposition, lithography, and etching takes place to form the stack of layers for the planar capacitor. In some embodiments, the linear dielectric capacitors are formed in a backend of the die. In some embodiments, deposition of ILD is followed by surface polish. In some embodiments, a metal layer is formed over top electrode of each capacitor to connect to a respective input. For example, metal layer over the top electrode of capacitor C1 is connected to input ‘a’. Metal layer over the top electrode of capacitor C2 is connected to input ‘b’. Metal layer over the top electrode of capacitor C3 is connected to input ‘c’. Metal layer over the top electrode of capacitor C4 is connected to input ‘d’. The metal layers coupled to the bottom electrodes of capacitors C1, C2, C3, and C4 are coupled to summing node n1 through respective vias. In this case, after polishing the surface, ILD is deposited, in accordance with some embodiments. Thereafter, holes are etched through the ILD to expose the top electrodes of the capacitors, in accordance with some embodiments. The holes are then filled with metal, in accordance with some embodiments. Followed by filling the holes, the top surface is polished, in accordance with some embodiments. 
As such, the capacitors are connected to input electrode (e.g., input ‘a’, input ‘b’, input ‘c’, and input ‘d’) and summing node n1 (through the pedestals), in accordance with some embodiments. In some embodiments, ILD is deposited over the polished surface. Holes for via are then etched to contact the M2 layer, in accordance with some embodiments. The holes are filled with metal to form vias (via2), in accordance with some embodiments. The top surface is then polished, in accordance with some embodiments. In some embodiments, process of depositing metal over the vias (via2), depositing ILD, etching holes to form pedestals for the next capacitors of the stack, forming the capacitors, and then forming vias that contact the M3 layer are repeated. This process is repeated ‘n’ times for forming ‘n’ capacitors in a stack for ‘n’ number of inputs, in accordance with some embodiments. In some embodiments, the bottom electrode of each capacitor is allowed to directly contact with the metal below. For example, the pedestals that connect to the top and bottom electrodes are removed. In this embodiment, the height of the stacked capacitors is lowered, and the fabrication process is simplified because the extra steps for forming the pedestals are removed. In some embodiments, pedestals or vias are formed for both the top and bottom electrodes of the planar capacitors. In this embodiment, the height of the stacked capacitors is raised, and the fabrication process adds an additional step of forming a top pedestal or via which contacts with respective input electrodes (e.g., input ‘a’, input ‘b’, input ‘c’, and input ‘d’). FIG.16Billustrates multi-input capacitive circuit1620with stacked planar capacitor structure, wherein the multi-input capacitive circuit includes a pull-down device, in accordance with some embodiments. Multi-input capacitive circuit1620is like multi-input capacitive circuit1600, but with pull-down device MN1. Here, pull-up device MP1 is removed from the summing node. FIG.17Aillustrates multi-input capacitive circuit1700with stacked non-planar capacitor structure wherein the multi-input capacitive circuit includes a pull-up device, in accordance with some embodiments. In this example four capacitors are stacked. In some embodiments, a column of shared metal passes through the center of the capacitors, where the shared metal is the summing node n1 which is coupled to the stub and then to the source or drain terminals of the pull-up (MP1) transistor. Top electrode of each of the capacitor is partially adjacent to a respective input electrode. For example, the top electrode of capacitor C1 is coupled to input electrode ‘a’, the top electrode of capacitor C2 is coupled to input electrode ‘b’, the top electrode of capacitor C3 is coupled to input electrode ‘c’, and the top electrode of capacitor C4 is coupled to input electrode ‘d’. In this instance, the capacitors are formed between regions reserved for Via1 through Via5 (e.g., between M1 through M6 layers). The capacitors here can be capacitors with linear dielectric or capacitors with paraelectric dielectric. FIG.17Billustrates multi-input capacitive circuit1720with stacked non-planar capacitor structure wherein the multi-input capacitive circuit includes a pull-down device, in accordance with some embodiments. Multi-input capacitive circuit1720is like multi-input capacitive circuit1700, but with pull-down device MN1. Here, pull-up device MP1 is removed from the summing node. 
The capacitors here can comprise linear dielectric or paraelectric material. FIG.18Aillustrates planar ferroelectric or paraelectric capacitor structure1800, in accordance with some embodiments. In some embodiments, capacitors for the multi-input capacitive structures are ferroelectric capacitors. These capacitors can take any planar form. One such simplified form is illustrated inFIG.18A. Here, planar capacitor structure1800is a metal-insulator-metal (MIM) capacitor comprising a bottom electrode, a top electrode, and a ferroelectric dielectric between the top electrode and the bottom electrode as shown. In some embodiments, conductive oxide layer(s) are formed between the bottom electrode and the ferroelectric dielectric. FIG.18Billustrates three planar ferroelectric or paraelectric capacitor structures, respectively, in accordance with some embodiments. Here, any one of the three planar capacitor structures1823a,1823b, and1823cis represented by the simplified planar capacitor structure1800. In some embodiments, planar capacitor1823aincudes encapsulation portions1821aand1821bthat are adjacent to the side walls of the plurality of layers of the planar capacitor. In some embodiments, planar capacitor1823bincudes encapsulation portions1821cand1821dthat are partially adjacent to sidewall barrier seal1821aand1821b, and refractive inter-metallic layers1811a. In various embodiments, encapsulation portions1821cand1821dterminate into a via (not shown). The material for encapsulation portions1821cand1821dis same as those for sidewall barrier seal1821aand1821b. In some embodiments, the barrier material includes one or more of an oxide of: Ti, Al, or Mg. In some embodiments, planar capacitor1823cincludes encapsulation portions1821eand1821fthat are partially adjacent to sidewall barrier seal1821aand1821b, and refractive inter-metallic layers1811b. In various embodiments, encapsulation portions1821eand1821fterminate into a via (not shown). The material for encapsulation portions1821eand1821fis same as those for sidewall barrier seal1821aand1821b. Material for1812aand1821bincludes one or more of: Ti—Al—O, Al2O3, MgO, or nitride. Material for1812aand1821bis a sidewall barrier (e.g., insulative material) that protects the stack of layers from hydrogen and/or oxygen diffusion. In various embodiments, the sidewall barrier material is not an interlayer dielectric (ILD) material. In some embodiments, the lateral thickness (along x-axis) of the sidewall barrier seal1821a/b(insulating material) is in a range of 0.1 nm to 20 nm. In some embodiments, sidewall barriers are in direct contact with ILD. In some embodiments, planar capacitors1823a,1823b, and1823ccomprise a number of layers stacked together to form a planar capacitor. These layers may be extending in an x-plane when the capacitor is a planar capacitor. In some embodiments, the stack of layers includes refractive inter-metallic1811a/bas a barrier material; conductive oxides1812a/b, and FE material1813. FE material1813can be any of the FE materials discussed herein. In some embodiments, refractive inter-metallic1811a/bare removed, and electrodes are in direct contact with conductive oxides1812a/b. In some embodiments, refractive inter-metallic1811a/bmaintains the FE properties of the FE capacitor. In the absence of refractive inter-metallic1811a/b, the ferroelectric material1813of the capacitor may lose its potency. In some embodiments, refractive inter-metallic1811a/bcomprises Ti and Al (e.g., TiAl compound). 
In some embodiments, refractive inter-metallic1811a/bcomprises one or more of Ta, W, and/or Co. For example, refractive inter-metallic1811a/bincludes a lattice of Ta, W, and Co. In some embodiments, refractive inter-metallic1811a/bis part of a barrier layer which is a super lattice of a first material and a second material, wherein the first material includes Ti and Al (e.g., TiAl) and the second material includes Ta, W, and Co (e.g., layers of Ta, W, and Co together). In various embodiments, the lattice parameters of the barrier layer are matched with the lattice parameters of the conductive oxides and/or FE material1813. In some embodiments, refractive inter-metallic1811a/bincludes one of: Ti—Al such as Ti3Al, TiAl, TiAl3; Ni—Al such as Ni3Al, NiAl3, NiAl; Ni—Ti, Ni—Ga, Ni2MnGa; FeGa, Fe3Ga; borides, carbides, or nitrides. In some embodiments, TiAl material comprises Ti-(45-48)Al-(1-10)M (at. X trace amount %), with M being at least one element from: V, Cr, Mn, Nb, Ta, W, and Mo, and with trace amounts of 0.1-5% of Si, B, and/or Mg. In some embodiments, TiAl is a single-phase alloy γ(TiAl). In some embodiments, TiAl is a two-phase alloy γ(TiAl)+α2(Ti3Al). Single-phase γ alloys contain third alloying elements such as Nb or Ta that promote strengthening and additionally enhance oxidation resistance. The role of the third alloying elements in the two-phase alloys is to raise ductility (V, Cr, Mn), oxidation resistance (Nb, Ta) or combined properties. Additions such as Si, B, and Mg can markedly enhance other properties. In some embodiments, barrier layer1811ais coupled to a top electrode. In some embodiments, sidewall barrier seal1821a/b(insulating material) is placed around layers1811a,1812a,1813,1812b, and1811balong while the top and bottom surfaces of1811aand1811bare exposed for coupling to metal layers, vias, or a metallic pedestal. In some embodiments, conductive oxide layer(s) are formed between the top electrode and the ferroelectric dielectric. Examples of conductive oxides include: IrO2, RuO2, PdO2, OsO2, or ReO3. In some examples, conductive oxides are of the form A2O3 (e.g., In2O3, Fe2O3) and ABO3 type, where ‘A’ is a rare earth element and B is Mn. Any suitable conductive material may be used for the top electrode and the bottom electrode. For example, the material or the electrode may include one or more of: Cu, Al, Ag, Au, W, or Co. In some embodiments, the thickness along the z-axis of the top electrode and bottom electrode is in a range of 1 nm to 30 nm. In some embodiments, the thickness along the z-axis of the dielectric is in a range of 1 nm to 30 nm. In some embodiments, the thickness along the z-axis of the conductive oxide is in a range of 1 nm to 30 nm. FIG.19Aillustrates non-planar ferroelectric or paraelectric capacitor structure1900, in accordance with some embodiments. In some embodiments, non-planar capacitor structure1900is rectangular in shape. Taking the cylindrical shaped case for example, in some embodiments, the layers of non-planar capacitor structure1900from the center going outwards include bottom electrode1901a, first conductive oxide1912a, ferroelectric dielectric material1913, second conductive oxide1912b, and top electrode1901b. A cross-sectional view along the “ab” dashed line is illustrated in the middle ofFIG.19A. In some embodiments, conducting oxides are removed and the ferroelectric dielectric is directly connected to top electrode1901band bottom electrodes1901a. 
In some embodiments, ferroelectric dielectric material1913can include any suitable dielectric, where the thickness of the dielectric film is in a range of 1 nm to 20 nm. In some embodiments, ferroelectric dielectric material1913includes any one of the materials discussed herein for ferroelectrics. In some embodiments, first conductive oxide1912ais conformally deposited over bottom electrode1901a. In some embodiments, dielectric material1913is conformally deposited over first conductive oxide1912a. In some embodiments, second conductive oxide1912bis conformally deposited over dielectric material1913. In some embodiments, top electrode1901bis conformally deposited over second conductive oxide1912b. In some embodiments, bottom electrode1901ais in the center while top electrode1901bis on an outer circumference of non-planar capacitor structure1900. In some embodiments, material for bottom electrode1901amay include one or more of: Cu, Al, Ag, Au, W, or Co, or their alloys. In some embodiments, material for first conductive oxide1912aincludes: IrO2, RuO2, PdO2, OsO2, or ReO3. In some examples, conductive oxides are of the form A2O3 (e.g., In2O3, Fe2O3) and ABO3 type, where ‘A’ is a rare earth element and B is Mn. In some embodiments, material for second conductive oxide1912bmay be the same as the material for first conductive oxide1912a. In some embodiments, material for top electrode1901bmay include one or more of: Cu, Al, Ag, Au, W, or Co, or their alloys. In some embodiments, a first refractive inter-metallic layer (not shown) is formed between dielectric material1913and first conductive oxide1912a. In some embodiments, a second refractive inter-metallic layer (not shown) is formed between dielectric capacitor material1913and second conductive oxide1912b. In these cases, the first and second refractive inter-metallic layers are directly adjacent to their respective conductive oxide layers and to dielectric capacitor material1913. In some embodiments, refractive inter-metallic maintains the capacitive properties of the dielectric capacitor material1913. In some embodiments, refractive inter-metallic comprises Ti and Al (e.g., TiAl compound). In some embodiments, refractive inter-metallic comprises one or more of Ta, W, and/or Co. For example, refractive inter-metallic includes a lattice of Ta, W, and Co. In some embodiments, refractive inter-metallic includes one of: Ti—Al such as Ti3Al, TiAl, TiAl3; Ni—Al such as Ni3Al, NiAl3, NiAl; Ni—Ti, Ni—Ga, Ni2MnGa; FeGa, Fe3Ga; borides, carbides, or nitrides. In some embodiments, TiAl material comprises Ti-(45-48)Al-(1-10)M (at. %), with M being at least one element from: V, Cr, Mn, Nb, Ta, W, and Mo, and with trace amounts of 0.1-5% of Si, B, and/or Mg. In some embodiments, TiAl is a single-phase alloy γ(TiAl). In some embodiments, TiAl is a two-phase alloy γ(TiAl)+α2(Ti3Al). Single-phase γ alloys contain third alloying elements such as Nb or Ta that promote strengthening and additionally enhance oxidation resistance. The role of the third alloying elements in the two-phase alloys is to raise ductility (V, Cr, Mn), oxidation resistance (Nb, Ta) or combined properties. Additions such as Si, B and Mg can markedly enhance other properties. The thicknesses of the layers of capacitor1900along the x-axis are in the range of 1 nm to 30 nm. In some embodiments, refractive inter-metallic layers are not used for non-planar capacitor structure1900.
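Merely as an illustrative aside, and not as part of this disclosure, the layer thicknesses noted above set the scale of each capacitor's capacitance. The short Python sketch below estimates the capacitance of a planar MIM stack with the parallel-plate formula and of a cylindrical non-planar stack with the coaxial formula; the electrode area, dielectric length, radii, and relative permittivity used are assumed values chosen only for illustration.

    import math

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def planar_mim_capacitance(eps_r, area_m2, thickness_m):
        # Parallel-plate estimate for a planar MIM stack:
        # C = eps0 * eps_r * A / d (fringing fields and the series
        # contribution of the conductive oxide layers are ignored).
        return EPS0 * eps_r * area_m2 / thickness_m

    def cylindrical_capacitance(eps_r, length_m, r_inner_m, r_outer_m):
        # Coaxial estimate for a cylindrical non-planar stack with the bottom
        # electrode at the center and the top electrode at the outer
        # circumference: C = 2*pi*eps0*eps_r*L / ln(r_outer/r_inner).
        return 2.0 * math.pi * EPS0 * eps_r * length_m / math.log(r_outer_m / r_inner_m)

    # Assumed, illustrative numbers: 100 nm x 100 nm electrode area and a 5 nm
    # dielectric (eps_r = 30) for the planar case; a 50 nm tall cylinder with
    # 10 nm inner and 15 nm outer dielectric radius for the non-planar case.
    print(planar_mim_capacitance(30, 100e-9 * 100e-9, 5e-9))   # ~5.3e-16 F
    print(cylindrical_capacitance(30, 50e-9, 10e-9, 15e-9))    # ~2.1e-16 F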
FIG.19Billustrates non-planar ferroelectric or paraelectric capacitor structure1920without conductive oxides, in accordance with some embodiments. Compared to non-planar capacitor structure1900, here first conductive oxide1912aand second conductive oxide1912bare removed and ferroelectric material1913is adjacent to top electrode1901band bottom electrode1901aas shown. FIG.20illustrates multi-input capacitive circuit2000with stacked planar ferroelectric or paraelectric capacitor structure, wherein the multi-input capacitive circuit includes a pull-up device and a pull-down device, in accordance with some embodiments. In this example, two transistors are shown, each controlled by its respective Up or Down controls on its gate terminal. The source and drain terminals of each transistor are coupled to respective contacts (CA). An etch stop layer is used in the fabrication of vias (via0) to connect the source or drain of the transistors to summing node n1 on the metal-1 (M1) layer. Another etch stop layer is formed over the M1 layer to fabricate vias (via1) to couple to respective M1 layers. In some embodiments, metal-2 (M2) is deposited over vias (via1). The M2 layer is then polished. In some embodiments, the ferroelectric capacitor can be moved further up in the stack, where the capacitor level processing is done between different layers. In some embodiments, oxide is deposited over the etch stop layer. Thereafter, dry or wet etching is performed to form holes for pedestals. The holes are filled with metal and land on the respective M2 layers. Fabrication processes such as interlayer dielectric (ILD) oxide deposition followed by ILD etch (to form holes for the pedestals), deposition of metal into the holes, and subsequent polishing of the surface are used to prepare for post pedestal fabrication. A number of fabrication processes of deposition, lithography, and etching take place to form the stack of layers for the planar capacitor. In some embodiments, the ferroelectric dielectric capacitors are formed in a backend of the die. In some embodiments, deposition of ILD is followed by surface polish. In some embodiments, a metal layer is formed over the top electrode of each capacitor to connect to a respective input. For example, the metal layer over the top electrode of capacitor C1 is connected to input ‘a’. The metal layer over the top electrode of capacitor C2 is connected to input ‘b’. The metal layer over the top electrode of capacitor C3 is connected to input ‘c’. The metal layer over the top electrode of capacitor C4 is connected to input ‘d’. The metal layers coupled to the bottom electrodes of capacitors C1, C2, C3, and C4 are coupled to summing node n1 through respective vias. In this case, after polishing the surface, ILD is deposited, in accordance with some embodiments. Thereafter, holes are etched through the ILD to expose the top electrodes of the capacitors, in accordance with some embodiments. The holes are then filled with metal, in accordance with some embodiments. After filling the holes, the top surface is polished, in accordance with some embodiments. As such, the capacitors are connected to input electrodes (e.g., input ‘a’, input ‘b’, input ‘c’, and input ‘d’) and summing node n1 (through the pedestals), in accordance with some embodiments. In some embodiments, ILD is deposited over the polished surface. Holes for vias are then etched to contact the M2 layer, in accordance with some embodiments. The holes are filled with metal to form vias (via2), in accordance with some embodiments.
The top surface is then polished, in accordance with some embodiments. In some embodiments, the process of depositing metal over the vias (via2), depositing ILD, etching holes to form pedestals for the next capacitors of the stack, forming the capacitors, and then forming vias that contact the M3 layer is repeated. This process is repeated ‘n’ times for forming ‘n’ capacitors in a stack for ‘n’ number of inputs, in accordance with some embodiments. In some embodiments, the bottom electrode of each capacitor is allowed to directly contact the metal below. For example, the pedestals that connect to the top and bottom electrodes are removed. In this embodiment, the height of the stacked capacitors is lowered, and the fabrication process is simplified because the extra steps for forming the pedestals are removed. In some embodiments, pedestals or vias are formed for both the top and bottom electrodes of the planar capacitors. In this embodiment, the height of the stacked capacitors is raised, and the fabrication process adds an additional step of forming a top pedestal or via which contacts respective input electrodes (e.g., input ‘a’, input ‘b’, input ‘c’, and input ‘d’). FIG.21illustrates multi-input capacitive circuit2100with stacked non-planar ferroelectric or paraelectric capacitor structure (e.g., structures ofFIG.19AorFIG.19B), wherein the multi-input capacitive circuit includes a pull-down device and a pull-up device, in accordance with some embodiments. In this example, four capacitors are stacked. In some embodiments, a column of shared metal passes through the center of the capacitors, where the shared metal is the summing node n1 which is coupled to the stub and then to the source or drain terminals of the pull-up (MP1) and pull-down (MN1) transistors. The top electrode of each of the capacitors is partially adjacent to a respective input electrode. For example, the top electrode of capacitor C1 is coupled to input electrode ‘a’, the top electrode of capacitor C2 is coupled to input electrode ‘b’, the top electrode of capacitor C3 is coupled to input electrode ‘c’, and the top electrode of capacitor C4 is coupled to input electrode ‘d’. In this instance, the capacitors are formed between regions reserved for Via1 through Via5 (e.g., between M1 through M6 layers). FIG.22illustrates a high-level architecture of an artificial intelligence (AI) machine2200comprising a compute die stacked with a memory die, wherein the compute die includes a c-element, completion tree, and/or validity tree with a multi-input capacitive circuit with configurable threshold, in accordance with some embodiments. AI machine2200comprises computational block2201or processor having random-access memory (RAM)2202and computational logic2203; first random-access memory2204(e.g., static RAM (SRAM), ferroelectric or paraelectric RAM (FeRAM), ferroelectric or paraelectric static random-access memory (FeSRAM)), main processor2205, second random-access memory2206(dynamic RAM (DRAM), FeRAM), and solid-state memory or drive (SSD)2207. In some embodiments, some or all components of AI machine2200are packaged in a single package forming a system-on-chip (SoC). The SoC can be configured as a logic-on-logic configuration, which can be in a 3D configuration or a 2.5D configuration. In some embodiments, computational block2201is packaged in a single package and then coupled to processor2205and memories2204,2206, and2207on a printed circuit board (PCB).
In some embodiments, computational block2201is configured as a logic-on-logic configuration, which can be in a 3D configuration or a 2.5D configuration. In some embodiments, computational block2201comprises a special purpose compute die2203or microprocessor. For example, compute die2203is a compute chiplet that performs a function of an accelerator or inference. In some embodiments, memory2202is DRAM which forms a special memory/cache for the special purpose compute die2203. The DRAM can be embedded DRAM (eDRAM) such as 1T1C (one transistor and one capacitor) based memories. In some embodiments, RAM2202is ferroelectric or paraelectric RAM (Fe-RAM). In some embodiments, compute die2203is specialized for applications such as Artificial Intelligence, graph processing, and algorithms for data processing. In some embodiments, compute die2203further has logic computational blocks, for example, for multipliers and buffers, a special data memory block (e.g., buffers) comprising DRAM, FeRAM, or a combination of them. In some embodiments, RAM2202has weights and inputs stored to improve the computational efficiency. The interconnects between processor2205(also referred to as special purpose processor), first RAM2204, and compute die2203are optimized for high bandwidth and low latency. The architecture ofFIG.22allows efficient packaging to lower the energy, power, or cost and provides for ultra-high bandwidth between RAM2202and compute chiplet2203of computational block2201. In some embodiments, RAM2202is partitioned to store input data (or data to be processed)2202aand weight factors2202b. In some embodiments, input data2202ais stored in a separate memory (e.g., a separate memory die) and weight factors2202bare stored in a separate memory (e.g., separate memory die). In some embodiments, computational logic or compute chiplet2203comprises a matrix multiplier, adder, concatenation logic, buffers, and combinational logic. In various embodiments, compute chiplet2203performs a multiplication operation on inputs2202aand weights2202b. In some embodiments, weights2202bare fixed weights. For example, processor2205(e.g., a graphics processor unit (GPU), field programmable gate array (FPGA) processor, application specific integrated circuit (ASIC) processor, digital signal processor (DSP), an AI processor, a central processing unit (CPU), or any other high-performance processor) computes the weights for a training model. Once the weights are computed, they are stored in memory2202. In various embodiments, the input data that is to be analyzed using a trained model is processed by computational block2201with computed weights2202bto generate an output (e.g., a classification result). In some embodiments, first RAM2204is ferroelectric or paraelectric based SRAM. For example, six transistor (6T) SRAM bit-cells having ferroelectric or paraelectric transistors are used to implement a non-volatile FeSRAM. In some embodiments, SSD2207comprises NAND flash cells. In some embodiments, SSD2207comprises NOR flash cells. In some embodiments, SSD2207comprises multi-threshold NAND flash cells. In various embodiments, the non-volatility of FeRAM is used to introduce new features such as security, functional safety, and faster reboot time of AI machine2200. The non-volatile FeRAM is a low power RAM that provides fast access to data and weights. FeRAM2204can also serve as a fast storage for computational block2201(which can be an inference die or an accelerator), which typically has low capacity and fast access requirements.
In various embodiments, FeRAM (FeDRAM or FeSRAM) includes ferroelectric or paraelectric material. The ferroelectric or paraelectric material may be in a transistor gate stack or in a capacitor of the memory. The ferroelectric material can be any suitable low voltage FE material discussed with reference to various embodiments. While embodiments here are described with reference to ferroelectric material, the embodiments are applicable to any of the nonlinear polar materials described herein. FIG.23illustrates an architecture of a computational block2300comprising a compute die stacked with a memory die, wherein the compute die includes a c-element, completion tree, and/or validity tree with a multi-input capacitive circuit with configurable threshold, in accordance with some embodiments. The architecture ofFIG.23is for a special purpose compute die where RAM memory buffers for inputs and weights are placed on die-1, and logic and optional memory buffers are placed on die-2. In some embodiments, memory die (e.g., Die 1) is positioned below a compute die (e.g., Die 2) such that a heat sink or thermal solution is adjacent to the compute die. In some embodiments, the memory die is embedded in an interposer. In some embodiments, the memory die behaves as an interposer in addition to its basic memory function. In some embodiments, the memory die is a high bandwidth memory (HBM) which comprises multiple dies of memories in a stack and a controller to control the read and write functions to the stack of memory dies. In some embodiments, the memory die comprises a first die2301to store input data and a second die2302to store weight factors. In some embodiments, the memory die is a single die that is partitioned such that first partition2301of the memory die is used to store input data and second partition2302of the memory die is used to store weights. In some embodiments, the memory die comprises DRAM. In some embodiments, the memory die comprises FE-SRAM or FE-DRAM. In some embodiments, the memory die comprises MRAM. In some embodiments, the memory die comprises SRAM. For example, memory partitions2301and2302, or memory dies2301and2302include one or more of: DRAM, FE-SRAM, FE-DRAM, SRAM, and/or MRAM. In some embodiments, the input data stored in memory partition or die2301is the data to be analyzed by a trained model with fixed weights stored in memory partition or die2302. In some embodiments, the compute die comprises ferroelectric or paraelectric logic (e.g., majority, minority, and/or threshold gates) to implement matrix multiplier2303, logic2304, and temporary buffer2305. Matrix multiplier2303performs a multiplication operation on input data ‘X’ and weights ‘W’ to generate an output ‘Y’. This output may be further processed by logic2304. In some embodiments, logic2304performs a threshold operation, pooling and drop out operations, and/or concatenation operations to complete the AI logic primitive functions. In some embodiments, the output of logic2304(e.g., processed output ‘Y’) is temporarily stored in buffer2305. In some embodiments, buffer2305is memory such as one or more of: DRAM, Fe-SRAM, Fe-DRAM, MRAM, resistive RAM (Re-RAM) and/or SRAM. In some embodiments, buffer2305is part of the memory die (e.g., Die 1). In some embodiments, buffer2305performs the function of a re-timer. In some embodiments, the output of buffer2305(e.g., processed output ‘Y’) is used to modify the weights in memory partition or die2302.
In one such embodiment, computational block2300not only operates as an inference circuitry, but also as a training circuitry to train a model. In some embodiments, matrix multiplier2303includes an array of multiplier cells, wherein the DRAMs2301and2302include arrays of memory bit-cells, respectively, wherein each multiplier cell is coupled to a corresponding memory bit-cell of DRAM2301and/or DRAM2302. In some embodiments, computational block2300comprises an interconnect fabric coupled to the array of multiplier cells such that each multiplier cell is coupled to the interconnect fabric. Architecture2300provides reduced memory access for the compute die (e.g., die 2) by providing data locality for weights, inputs, and outputs. In one example, data from and to the AI computational blocks (e.g., matrix multiplier2303) is locally processed within a same packaging unit. Architecture2300also segregates the memory and logic operations onto a memory die (e.g., Die 1) and a logic die (e.g., Die 2), respectively, allowing for optimized AI processing. Disaggregated dies allow for improved yield of the dies. A high-capacity memory process for Die 1 allows reduction of power of the external interconnects to memory, reduces cost of integration, and results in a smaller footprint. FIG.24illustrates a system-on-chip (SOC)2400that uses a c-element, completion tree, and/or validity tree with a multi-input capacitive circuit with configurable threshold, in accordance with some embodiments. SoC2400comprises memory2401having static random-access memory (SRAM) or FE based random-access memory FE-RAM, or any other suitable memory. The memory can be non-volatile (NV) or volatile memory. Memory2401may also comprise logic2403to control memory2402. For example, write and read drivers are part of logic2403. These drivers and other logic are implemented using the majority or threshold gates of various embodiments. The logic can comprise majority or threshold gates and traditional logic (e.g., CMOS based NAND, NOR etc.). SoC2400further comprises a memory I/O (input-output) interface2404. The interface may be a double-data rate (DDR) compliant interface or any other suitable interface to communicate with a processor. Processor2405of SoC2400can be a single core or multiple core processor. Processor2405can be a general-purpose processor (CPU), a digital signal processor (DSP), or an Application Specific Integrated Circuit (ASIC) processor. In some embodiments, processor2405is an artificial intelligence (AI) processor (e.g., a dedicated AI processor, a graphics processor configured as an AI processor). In various embodiments, processor2405executes instructions that are stored in memory2401. AI is a broad area of hardware and software computations where data is analyzed, classified, and then a decision is made regarding the data. For example, a model describing classification of data for a certain property or properties is trained over time with large amounts of data. The process of training a model requires large amounts of data and processing power to analyze the data. When a model is trained, weights or weight factors are modified based on outputs of the model. Once weights for a model are computed to a high confidence level (e.g., 95% or more) by repeatedly analyzing data and modifying weights to get the expected results, the model is deemed “trained.” This trained model with fixed weights is then used to make decisions about new data.
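Merely as a behavioral illustration, and not as part of this disclosure, the following Python sketch models the dataflow described above for computational block2300: a matrix multiplication of input data ‘X’ with weights ‘W’, a threshold and pooling step standing in for logic2304, and a simple weight update standing in for training. The function names, the ReLU-style threshold, the pooling width, and the learning rate are assumptions chosen only for illustration.

    import numpy as np

    def matrix_multiply(x, w):
        # Models matrix multiplier 2303: Y = X * W.
        return x @ w

    def logic_ops(y, pool_width=2):
        # Stands in for logic 2304: a threshold (ReLU-style) operation followed
        # by a simple max-pooling over groups of pool_width outputs.
        y = np.maximum(y, 0.0)  # threshold operation
        trimmed = y[: (y.size // pool_width) * pool_width]
        return trimmed.reshape(-1, pool_width).max(axis=1)

    def inference(x, w):
        # Trained model with fixed weights: weights are read, never written.
        return logic_ops(matrix_multiply(x, w))

    def training_step(x, w, target, lr=0.01):
        # Training mode: the buffered output is used to modify the weights
        # (a crude, gradient-free update, purely for illustration).
        error = matrix_multiply(x, w) - target
        return w - lr * np.outer(x, error)

    # Illustrative usage with assumed sizes.
    rng = np.random.default_rng(0)
    x = rng.normal(size=4)          # input data 'X'
    w = rng.normal(size=(4, 6))     # weight factors 'W'
    w = training_step(x, w, target=np.zeros(6))
    print(inference(x, w))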
Training a model and then applying the trained model for new data is a hardware intensive activity. In some embodiments, the AI processor has reduced latency of computing the training model and using the trained model, which reduces the power consumption of such AI processor systems. Processor2405may be coupled to a number of other chiplets that can be on the same die as SoC2400or on separate dies. These chiplets include connectivity circuitry2406, I/O controller2407, power management2408, display system2409, and peripheral connectivity2410. Connectivity2406represents hardware devices and software components for communicating with other devices. Connectivity2406may support various connectivity circuitries and standards. For example, connectivity2406may support GSM (global system for mobile communications) or variations or derivatives, CDMA (code division multiple access) or variations or derivatives, TDM (time division multiplexing) or variations or derivatives, 3rd Generation Partnership Project (3GPP) Universal Mobile Telecommunications Systems (UMTS) system or variations or derivatives, 3GPP Long-Term Evolution (LTE) system or variations or derivatives, 3GPP LTE-Advanced (LTE-A) system or variations or derivatives, Fifth Generation (5G) wireless system or variations or derivatives, 5G mobile networks system or variations or derivatives, 5G New Radio (NR) system or variations or derivatives, or other cellular service standards. In some embodiments, connectivity2406may support non-cellular standards such as WiFi. I/O controller2407represents hardware devices and software components related to interaction with a user. I/O controller2407is operable to manage hardware that is part of an audio subsystem and/or display subsystem. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of SoC2400. In some embodiments, I/O controller2407illustrates a connection point for additional devices that connect to SoC2400through which a user might interact with the system. For example, devices that can be attached to the SoC2400might include microphone devices, speaker or stereo systems, video systems or other display devices, keyboard or keypad devices, or other I/O devices for use with specific applications such as card readers or other devices. Power management2408represents hardware or software that performs power management operations, e.g., based at least in part on receiving measurements from power measurement circuitries, temperature measurement circuitries, charge level of battery, and/or any other appropriate information that may be used for power management. By using majority and threshold gates of various embodiments, non-volatility is achieved at the output of these logic gates. Power management2408may accordingly put such logic into a low power state without the worry of losing data. Power management may select a power state according to Advanced Configuration and Power Interface (ACPI) specification for one or all components of SoC2400. Display system2409represents hardware (e.g., display devices) and software (e.g., drivers) components that provide a visual and/or tactile display for a user to interact with the processor2405. In some embodiments, display system2409includes a touch screen (or touch pad) device that provides both output and input to a user. Display system2409may include a display interface, which includes the particular screen or hardware device used to provide a display to a user.
In some embodiments, the display interface includes logic separate from processor2405to perform at least some processing related to the display. Peripheral connectivity2410may represent hardware devices and/or software devices for connecting to peripheral devices such as printers, chargers, cameras, etc. In some embodiments, peripheral connectivity2410may support communication protocols, e.g., PCIe (Peripheral Component Interconnect Express), USB (Universal Serial Bus), Thunderbolt, High-Definition Multimedia Interface (HDMI), Firewire, etc. In various embodiments, SoC2400includes a coherent cache or memory-side buffer chiplet (not shown) which include ferroelectric or paraelectric memory. The coherent cache or memory-side buffer chiplet can be coupled to processor2405and/or memory2401according to the various embodiments described herein (e.g., via silicon bridge or vertical stacking). The term “device” may generally refer to an apparatus according to the context of the usage of that term. For example, a device may refer to a stack of layers or structures, a single structure or layer, a connection of various structures having active and/or passive elements, etc. Generally, a device is a three-dimensional structure with a plane along the x-y direction and a height along the z direction of an x-y-z Cartesian coordinate system. The plane of the device may also be the plane of an apparatus, which comprises the device. Throughout the specification, and in the claims, the term “connected” means a direct connection, such as electrical, mechanical, or magnetic connection between the things that are connected, without any intermediary devices. The term “coupled” means a direct or indirect connection, such as a direct electrical, mechanical, or magnetic connection between the things that are connected or an indirect connection, through one or more passive or active intermediary devices. The term “adjacent” here generally refers to a position of a thing being next to (e.g., immediately next to or close to with one or more things between them) or adjoining another thing (e.g., abutting it). The term “circuit” or “module” may refer to one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function. The term “signal” may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meaning of “a,” “an,” and “the” include plural references. The meaning of “in” includes “in” and “on.” Here, the term “analog signal” generally refers to any continuous signal for which the time varying feature (variable) of the signal is a representation of some other time varying quantity, i.e., analogous to another time varying signal. Here, the term “digital signal” generally refers to a physical signal that is a representation of a sequence of discrete values (a quantified discrete-time signal), for example of an arbitrary bit stream, or of a digitized (sampled and analog-to-digital converted) analog signal. The term “scaling” generally refers to converting a design (schematic and layout) from one process technology to another process technology and subsequently being reduced in layout area. The term “scaling” generally also refers to downsizing layout and devices within the same technology node. The term “scaling” may also refer to adjusting (e.g., slowing down or speeding up—i.e., scaling down, or scaling up respectively) of a signal frequency relative to another parameter, for example, power supply level. 
The terms “substantially,” “close,” “approximately,” “near,” and “about,” generally refer to being within +/−10% of a target value. For example, unless otherwise specified in the explicit context of their use, the terms “substantially equal,” “about equal” and “approximately equal” mean that there is no more than incidental variation among things so described. In the art, such variation is typically no more than +/−10% of a predetermined target value. Unless otherwise specified, the use of the ordinal adjectives “first,” “second,” and “third,” etc., to describe a common object, merely indicate that different instances of like objects are being referred to and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking or in any other manner. For the purposes of the present disclosure, phrases “A and/or B” and “A or B” mean (A), (B), or (A and B). For the purposes of the present disclosure, the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The terms “left,” “right,” “front,” “back,” “top,” “bottom,” “over,” “under,” and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. For example, the terms “over,” “under,” “front side,” “back side,” “top,” “bottom,” and “on” as used herein refer to a relative position of one component, structure, or material with respect to other referenced components, structures or materials within a device, where such physical relationships are noteworthy. These terms are employed herein for descriptive purposes only and predominantly within the context of a device z-axis and therefore may be relative to an orientation of a device. Hence, a first material “over” a second material in the context of a figure provided herein may also be “under” the second material if the device is oriented upside-down relative to the context of the figure provided. In the context of materials, one material disposed over or under another may be directly in contact or may have one or more intervening materials. Moreover, one material disposed between two materials may be directly in contact with the two layers or may have one or more intervening layers. In contrast, a first material “on” a second material is in direct contact with that second material. Similar distinctions are to be made in the context of component assemblies. The term “between” may be employed in the context of the z-axis, x-axis or y-axis of a device. A material that is between two other materials may be in contact with one or both of those materials, or it may be separated from both of the other two materials by one or more intervening materials. A material “between” two other materials may therefore be in contact with either of the other two materials, or it may be coupled to the other two materials through an intervening material. A device that is between two other devices may be directly connected to one or both of those devices, or it may be separated from both of the other two devices by one or more intervening devices. Here, multiple non-silicon semiconductor material layers may be stacked within a single fin structure. The multiple non-silicon semiconductor material layers may include one or more “P-type” layers that are suitable (e.g., offer higher hole mobility than silicon) for P-type transistors.
The multiple non-silicon semiconductor material layers may further include one or more “N-type” layers that are suitable (e.g., offer higher electron mobility than silicon) for N-type transistors. The multiple non-silicon semiconductor material layers may further include one or more intervening layers separating the N-type from the P-type layers. The intervening layers may be at least partially sacrificial, for example to allow one or more of a gate, source, or drain to wrap completely around a channel region of one or more of the N-type and P-type transistors. The multiple non-silicon semiconductor material layers may be fabricated, at least in part, with self-aligned techniques such that a stacked CMOS device may include both a high-mobility N-type and P-type transistor with a footprint of a single FET (field effect transistor). Here, the term “backend” generally refers to a section of a die which is opposite of a “frontend” and where an IC (integrated circuit) package couples to IC die bumps. For example, high-level metal layers (e.g., metal layer6and above in a ten-metal stack die) and corresponding vias that are closer to a die package are considered part of the backend of the die. Conversely, the term “frontend” generally refers to a section of the die that includes the active region (e.g., where transistors are fabricated) and low-level metal layers and corresponding vias that are closer to the active region (e.g., metal layer5and below in the ten-metal stack die example). Reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of “an embodiment,” “one embodiment,” or “some embodiments” are not necessarily all referring to the same embodiments. If the specification states a component, feature, structure, or characteristic “may,” “might,” or “could” be included, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to “a” or “an” element, that does not mean there is only one of the elements. If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional elements. Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment anywhere the particular features, structures, functions, or characteristics associated with the two embodiments are not mutually exclusive. While the disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications and variations of such embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. The embodiments of the disclosure are intended to embrace all such alternatives, modifications, and variations as to fall within the broad scope of the appended claims. In addition, well-known power/ground connections to integrated circuit (IC) chips and other components may or may not be shown within the presented figures, for simplicity of illustration and discussion, and so as not to obscure the disclosure. 
Further, arrangements may be shown in block diagram form to avoid obscuring the disclosure, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the present disclosure is to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that the disclosure can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting. The structures of various embodiments described herein can also be described as method of forming those structures, and method of operation of these structures. Following examples are provided that illustrate the various embodiments. The examples can be combined with other examples. As such, various embodiments can be combined with other embodiments without changing the scope of the invention. Example 1: An apparatus comprising: a first input; a second input; a third input; a control; a circuitry to adjust logic levels of the first input, the second input, and the control in a first operation mode; and a gate to receive the first input, the second input, and the third input, wherein the third input is coupled to an output of the gate, wherein the gate comprises: a first capacitor having a first terminal coupled to the first input, and a second terminal coupled to a summing node; a second capacitor having a third terminal coupled to the second input, and a fourth terminal coupled to the summing node; a third capacitor having a fifth terminal coupled to the third input, and a sixth terminal coupled to the summing node; and a device coupled to the summing node and a supply rail, wherein the device is controllable by the control, wherein the circuitry is to adjust a function of the gate in the first operation mode, and wherein the circuitry is to allow the gate to operate in accordance with the function in a second operation mode. Example 2: The apparatus of example 1, wherein the function is a majority function. Example 3: The apparatus of example 1, wherein the first capacitor, the second capacitor, and the third capacitor comprise linear dielectric material. Example 4: The apparatus of example 3, wherein the linear dielectric material includes one of: SiO2, Al2O3, Li2O, HfSiO4, Sc2O3, SrO, HfO2, ZrO2, Y2O3, Ta2O5, BaO, WO3, MoO3, or TiO2. Example 5: The apparatus of example 1, wherein the device is a pull-up device, wherein the circuitry is to set logic levels of the first input, the second input, and the third input to logic high, and the control to enable or turn on the pull-up device in the first operation mode to adjust a threshold of the gate to 2. Example 6: The apparatus of example 1, wherein the first capacitor, the second capacitor, and the third capacitor include: a linear dielectric material includes one or more of: Si, Al, Li, Hf, Sc, Sr, Zr, Y, Ta, Ba, W, Mo, or Ti; and a top electrode and a bottom electrode, wherein the linear dielectric material is between the top electrode and the bottom electrode, wherein the top electrode or the bottom electrode include one or more of: Cu, Al, Ag, Au, W, or Co. 
Example 7: The apparatus of example 1, wherein the first capacitor, the second capacitor, and the third capacitor include paraelectric material which includes one of: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, BaTiO3, La-substituted PbTiO3, lead zirconate titanate, or PMN-PT (lead magnesium niobate-lead titanate) based relaxor ferroelectrics. Example 8: The apparatus of example 1, wherein the first capacitor, the second capacitor, and the third capacitor include ferroelectric material. Example 9: The apparatus of example 1, wherein the device is a first device, wherein the supply rail is a power supply rail, wherein the control is a first control, wherein the gate comprises a second device coupled to the summing node and a ground supply rail, wherein the second device is controllable by a second control. Example 10: The apparatus of example 9, wherein, in the first operation mode, the circuitry is to adjust a threshold of the gate to 2 after the second device is enabled first, and then the second device is disabled, and then the first device is enabled, and the first input is set to logic 1, the second input is set to logic 1, and the third input is set to logic 0. Example 11: The apparatus of example 9, wherein, in the first operation mode, the circuitry is to adjust a threshold of the gate to 2 after the first device is enabled first, and then the first device is disabled, and then the second device is enabled, and the first input is set to logic 1, the second input is set to logic 0, and the third input is set to logic 0. Example 12: The apparatus of example 9, wherein the first capacitor, the second capacitor, and the third capacitor include ferroelectric material, wherein the ferroelectric material includes one or more of: Bismuth ferrite (BFO), BFO with a doping material wherein the doping material is one of Lanthanum, or elements from the lanthanide series of the periodic table; Lead zirconate titanate (PZT), or PZT with a doping material, wherein the doping material is one of La or Nb; a relaxor ferroelectric which includes one of lead magnesium niobate (PMN), lead magnesium niobate-lead titanate (PMN-PT), lead lanthanum zirconate titanate (PLZT), lead scandium niobate (PSN), Barium Titanium-Bismuth Zinc Niobium Tantalum (BT-BZNT), or Barium Titanium-Barium Strontium Titanium (BT-BST); a perovskite which includes one of: BaTiO3, PbTiO3, KNbO3, or NaTaO3; a hexagonal ferroelectric which includes one of: YMnO3, or LuFeO3; hexagonal ferroelectrics of a type h-RMnO3, where R is a rare earth element which includes one of: cerium (Ce), dysprosium (Dy), erbium (Er), europium (Eu), gadolinium (Gd), holmium (Ho), lanthanum (La), lutetium (Lu), neodymium (Nd), praseodymium (Pr), promethium (Pm), samarium (Sm), scandium (Sc), terbium (Tb), thulium (Tm), ytterbium (Yb), or yttrium (Y); Hafnium (Hf), Zirconium (Zr), Aluminum (Al), Silicon (Si), their oxides or their alloyed oxides; Hafnium oxides such as Hf1−x Ex Oy, where E can be Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y; Al(1−x)Sc(x)N, Ga(1−x)Sc(x)N, Al(1−x)Y(x)N or Al(1−x−y)Mg(x)Nb(y)N, x doped HfO2, where x includes one of: Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y, wherein ‘x’ is a fraction; Niobate type compounds LiNbO3, LiTaO3, Lithium iron Tantalum Oxy Fluoride, Barium Strontium Niobate, Sodium Barium Niobate, or Potassium strontium niobate; or an improper ferroelectric which includes one of: [PTO/STO]n or [LAO/STO]n, where ‘n’ is between 1 and 100.
Example 13: An apparatus comprising: a first input; a second input; a third input; a fourth input; a fifth input; and a gate to provide an output which is a consensus of the first input, the second input, and the third input, wherein the gate receives the fourth input and the fifth input, wherein the output is coupled to the fourth input and the fifth input, wherein the gate has an adjustable threshold. Example 14: The apparatus of example 13, wherein the output is a logic high when the first input, the second input, and the third input are logic high, wherein the output is a logic low when the first input, the second input, and the third input are logic low, wherein the output retains its logic state when at least one of the first input, the second input, or the third input is a logic 1 and at least one of the first input, the second input, or the third input is a logic 0. Example 15: The apparatus of example 13, wherein the gate comprises: a first capacitor having a first terminal coupled to the first input, and a second terminal coupled to a summing node; a second capacitor having a third terminal coupled to the second input, and a fourth terminal coupled to the summing node; a third capacitor having a fifth terminal coupled to the third input, and a sixth terminal coupled to the summing node; a fourth capacitor having a seventh terminal coupled to the fourth input and the fifth input, and an eighth terminal coupled to the summing node; a fifth capacitor having a ninth terminal coupled to the fourth input and the fifth input, and a tenth terminal coupled to the summing node; and a device coupled to the summing node and a supply rail, wherein the device is controllable by a control. Example 16: The apparatus of example 13 comprises a circuitry to adjust a function of the gate by controlling the adjustable threshold in a first operation mode, and wherein the circuitry is to allow the gate to operate in accordance with the function in a second operation mode. Example 17: A system comprising: a memory to store one or more instructions; a processor circuitry to execute the one or more instructions; and a communication device to allow the processor circuitry to communicate with another device, wherein the processor circuitry includes a consensus circuitry which comprises an apparatus according to any one of examples 1 to 12 or examples 13 to 16. Example 1a: An apparatus comprising: a first consensus circuitry to determine a first consensus between a first input and a second input, the first consensus circuitry to generate a first output which is representative of the first consensus; a second consensus circuitry to determine a second consensus between a third input and a fourth input, the second consensus circuitry to generate a second output which is representative of the second consensus; and a third consensus circuitry coupled to the first consensus circuitry and the second consensus circuitry, wherein the third consensus circuitry is to determine a third consensus between the first output and the second output, the third consensus circuitry to generate a third output which is representative of the third consensus, wherein the first consensus circuitry, the second consensus circuitry, and the third consensus circuitry comprise a first gate with a first adjustable threshold, a second gate with a second adjustable threshold, and a third gate with a third adjustable threshold, respectively.
Example 2a: The apparatus of example 1a comprising: a fourth consensus circuitry to determine a fourth consensus between a fifth input and a sixth input, the fourth consensus circuitry to generate a fourth output which is representative of the fourth consensus; a fifth consensus circuitry to determine a fifth consensus between a seventh input and an eighth input, the fifth consensus circuitry to generate a fifth output which is representative of the fifth consensus; and a sixth consensus circuitry coupled to the fourth consensus circuitry and the fifth consensus circuitry, wherein the sixth consensus circuitry is to determine a sixth consensus between the fourth output and the fifth output, the sixth consensus circuitry to generate a sixth output which is representative of the sixth consensus, wherein the fourth consensus circuitry, the fifth consensus circuitry, and the sixth consensus circuitry comprise a fourth gate with a fourth adjustable threshold, a fifth gate with a fifth adjustable threshold, and a sixth gate with a sixth adjustable threshold, respectively. Example 3a: The apparatus of example 2a comprising a seventh consensus circuitry coupled to the third consensus circuitry and the sixth consensus circuitry, wherein the seventh consensus circuitry is to determine a seventh consensus between the third output and the sixth output, the seventh consensus circuitry to generate a seventh output representative of the seventh consensus. Example 4a: The apparatus of example 3a, wherein the seventh consensus indicates a consensus of the first input, the second input, the third input, the fourth input, the fifth input, the sixth input, the seventh input, and the eighth input. Example 5a: The apparatus of example 3a, wherein the seventh consensus circuitry comprises a seventh gate with a seventh adjustable threshold. Example 6a: The apparatus of example 1a, wherein the first gate comprises: a first input node to receive the first input; a second input node to receive the second input; a third input node; a control; a circuitry to adjust logic levels of the first input, the second input, and the control in a first operation mode; and a multi-input gate to receive the first input, the second input, and the third input, wherein the third input node is coupled to an output of the multi-input gate, wherein the multi-input gate comprises: a first capacitor having a first terminal coupled to the first input node, and a second terminal coupled to a summing node; a second capacitor having a third terminal coupled to the second input node, and a fourth terminal coupled to the summing node; a third capacitor having a fifth terminal coupled to the third input node, and a sixth terminal coupled to the summing node; and a device coupled to the summing node and a supply rail, wherein the device is controllable by the control, wherein the circuitry is to adjust a function of the multi-input gate in the first operation mode, and wherein the circuitry is to allow the multi-input gate to operate in accordance with the function in a second operation mode. Example 7a: The apparatus of example 6a, wherein the function is a majority function. Example 8a: The apparatus of example 6a, wherein the first capacitor, the second capacitor, and the third capacitor comprise linear dielectric material. Example 9a: The apparatus of example 8a, wherein the linear dielectric material includes one of: SiO2, Al2O3, Li2O, HfSiO4, Sc2O3, SrO, HfO2, ZrO2, Y2O3, Ta2O5, BaO, WO3, MoO3, or TiO2.
Example 10a: The apparatus of example 6a, wherein the device is a pull-up device, wherein the circuitry is to set logic levels of the first input, the second input, and the third input to logic high, and the control to enable or turn on the pull-up device in the first operation mode to adjust a threshold of the multi-input gate to 2. Example 11a: The apparatus of example 6a, wherein the first capacitor, the second capacitor, and the third capacitor include: a linear dielectric material includes one or more of: Si, Al, Li, Hf, Sc, Sr, Zr, Y, Ta, Ba, W, Mo, or Ti; and a top electrode and a bottom electrode, wherein the linear dielectric material is between the top electrode and the bottom electrode, wherein the top electrode or the bottom electrode include one or more of: Cu, Al, Ag, Au, W, or Co. Example 12a: The apparatus of example 6a, wherein the first capacitor, the second capacitor, and the third capacitor include paraelectric material which includes one of: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, BaTiO3, La-substituted PbTiO3, lead zirconate titanate, or PMN-PT (lead magnesium niobate-lead titanate) based relaxor ferroelectrics. Example 13a: The apparatus of example 6a, wherein the first capacitor, the second capacitor, and the third capacitor include ferroelectric material. Example 14a: The apparatus of example 6a, wherein the device is a first device, wherein the supply rail is a power supply rail, wherein the control is a first control, wherein the multi-input gate comprises a second device coupled to the summing node and a ground supply rail, wherein the second device is controllable by a second control. Example 15a: The apparatus of example 14a, wherein, in the first operation mode, the circuitry is to adjust a threshold of the multi-input gate to 2 after the second device is enabled first, and then the second device is disabled, and then the first device is enabled, and the first input is set to logic 1, the second input is set to logic 1, and the third input is set to logic 0. Example 16a: The apparatus of example 14a, wherein, in the first operation mode, the circuitry is to adjust a threshold of the multi-input gate to 2 after the first device is enabled first, and then the first device is disabled, and then the second device is enabled, and the first input is set to logic 1, the second input is set to logic 0, and the third input is set to logic 0. Example 17a: The apparatus of example 14a, wherein the first capacitor, the second capacitor, and the third capacitor include ferroelectric material, wherein the ferroelectric material includes any of the ferroelectric materials discussed herein. Example 18a: An apparatus comprising: an m-input consensus circuitry comprising a first plurality of consensus circuitries coupled to generate a first output indicative of a first consensus of m number of inputs; an n-input consensus circuitry comprising a second plurality of consensus circuitries coupled to generate a second output indicative of a second consensus of n number of inputs; and a 2-input consensus circuitry coupled to the m-input consensus circuitry and the n-input consensus circuitry, wherein the 2-input consensus circuitry is to generate a third output, wherein the m-input consensus circuitry, the n-input consensus circuitry, and the 2-input consensus circuitry comprise gates with adjustable threshold. Example 19a: The apparatus of example 18a, wherein the gates with adjustable threshold are configured as majority or minority gates.
Example 20a: A system comprising: a memory to store one or more instructions; a processor circuitry to execute the one or more instructions; and a communication device to allow the processor circuitry to communicate with another device, wherein the processor circuitry includes a completion tree which comprises an apparatus according to any one of examples 1a to 17a or examples 18a to 19a. Example 1b: An apparatus comprising: a first OR gate to generate a first output which is indicative of a first OR function between a first input and a second input; a second OR gate to generate a second output which is indicative of a second OR function between a third input and a fourth input; and a first consensus circuitry to determine a first consensus between the first output and the second output, the first consensus circuitry to generate a first consensus output which is representative of the first consensus, wherein the first consensus circuitry comprises a first gate with a first adjustable threshold. Example 2b: The apparatus of example 1b further comprising: a third OR gate to generate a third output which is indicative of a third OR function between a fifth input and a sixth input; a fourth OR gate to generate a fourth output which is indicative of a fourth OR function between a seventh input and an eighth input; and a second consensus circuitry to determine a second consensus between the third output and the fourth output, the second consensus circuitry to generate a second consensus output which is representative of the second consensus, wherein the second consensus circuitry comprises a second gate with a second adjustable threshold. Example 3b: The apparatus of example 2b further comprising a third consensus circuitry coupled to the first consensus circuitry and the second consensus circuitry, wherein the third consensus circuitry is to determine a third consensus between the first consensus output and the second consensus output, the third consensus circuitry to generate a third consensus output which is representative of the third consensus. Example 4b: The apparatus of example 3b, wherein the third consensus circuitry comprises a third gate with a third adjustable threshold. Example 5b: The apparatus of example 3b, wherein the third consensus output indicates a valid state or a neutral state based on logic values of the first input, the second input, the third input, the fourth input, the fifth input, the sixth input, the seventh input, and the eighth input. Example 6b: The apparatus of example 1b, wherein the first OR gate comprises a first threshold gate with a threshold which is adjusted to function the first threshold gate as an OR gate. Example 7b: The apparatus of example 6b, wherein the threshold is adjusted to 1. 
Example 8b: The apparatus of example 1b, wherein the first consensus circuitry comprises: a first input node to receive the first input; a second input node to receive the second input; a third input node; a control; a conditioning circuitry to adjust logic levels of the first input, the second input, and the control in a first operation mode; and a multi-input gate to receive the first input, the second input, and the third input, wherein the third input node is coupled to an output of the multi-input gate, wherein the multi-input gate comprises: a first capacitor having a first terminal coupled to the first input node, and a second terminal coupled to a summing node; a second capacitor having a third terminal coupled to the second input node, and a fourth terminal coupled to the summing node; a third capacitor having a fifth terminal coupled to the third input node, and a sixth terminal coupled to the summing node; and a device coupled to the summing node and a supply rail, wherein the device is controllable by the control, wherein the conditioning circuitry is to adjust a function of the multi-input gate in the first operation mode, and wherein the conditioning circuitry is to allow the multi-input gate to operate in accordance with the function in a second operation mode. Example 9b: The apparatus of example 8b, wherein the function is a majority function. Example 10b: The apparatus of example 8b, wherein the first capacitor, the second capacitor, and the third capacitor comprise linear dielectric material, and wherein the linear dielectric material includes one of: SiO2, Al2O3, Li2O, HfSiO4, Sc2O3, SrO, HfO2, ZrO2, Y2O3, Ta2O5, BaO, WO3, MoO3, or TiO2. Example 11b: The apparatus of example 8b, wherein the device is a pull-up device, wherein the conditioning circuitry is to set logic levels of the first input, the second input, and the third input to logic high, and the control to enable or turn on the pull-up device in the first operation mode to adjust a threshold of the multi-input gate to 2. Example 12b: The apparatus of example 8b, wherein the first capacitor, the second capacitor, and the third capacitor include: a linear dielectric material includes one or more of: Si, Al, Li, Hf, Sc, Sr, Zr, Y, Ta, Ba, W, Mo, or Ti; and a top electrode and a bottom electrode, wherein the linear dielectric material is between the top electrode and the bottom electrode, wherein the top electrode or the bottom electrode include one or more of: Cu, Al, Ag, Au, W, or Co. Example 13b: The apparatus of example 8b, wherein the first capacitor, the second capacitor, and the third capacitor include paraelectric material which includes one of: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, BaTiO3, La-substituted PbTiO3, lead zirconate titanate, or PMN-PT (lead magnesium niobate-lead titanate) based relaxor ferroelectrics. Example 14b: The apparatus of example 8b, wherein the first capacitor, the second capacitor, and the third capacitor include ferroelectric material. Example 15b: The apparatus of example 8b, wherein the device is a first device, wherein the supply rail is a power supply rail, wherein the control is a first control, wherein the multi-input gate comprises a second device coupled to the summing node and a ground supply rail, wherein the second device is controllable by a second control.
Example 16b: The apparatus of example 15b, wherein, in the first operation mode, the conditioning circuitry is to adjust a threshold of the multi-input gate to 2 after the second device is enabled first, and then the second device is disabled, and then the first device is enabled, and the first input is set to logic 1, the second input is set to logic 1, and the third input is set to logic 0. Example 17b: The apparatus of example 16b, wherein, in the first operation mode, the conditioning circuitry is to adjust a threshold of the multi-input gate to 2 after the first device is enabled first, and then the first device is disabled, and then the second device is enabled, and the first input is set to logic 1, the second input is set to logic 0, and the third input is set to logic 0. Example 18b: An apparatus comprising: an m-input validity circuitry to generate a first output indicative of a first validity of m number of inputs; an n-input validity circuitry to generate a second output indicative of a second validity of n number of inputs; and a 2-input consensus circuitry coupled to the m-input validity circuitry and the n-input validity circuitry, wherein the 2-input consensus circuitry is to generate a third output, wherein the m-input validity circuitry, the n-input validity circuitry, and the 2-input consensus circuitry comprise gates with adjustable threshold. Example 19b: The apparatus of example 18b, wherein the gates with the adjustable threshold are configured as majority, minority gates, or OR gates. Example 20b: A system comprising: a memory to store one or more instructions; a processor circuitry to execute the one or more instructions; and a communication device to allow the processor circuitry to communicate with another device, wherein the processor circuitry includes a validity tree which comprises an apparatus according to any one of examples 1b to 17b, or examples 18b to 19b. Example 1c: An apparatus comprising: a first input; a second input; and a consensus circuitry coupled to the first input and the second input, wherein the consensus circuitry is to generate a consensus output which is indicative of a consensus of the first input and the second input, wherein the consensus circuitry comprises a gate to receive the first input, the second input, and a third input, wherein and the third input is coupled to an output of the gate which is the consensus output, wherein the gate comprises: a first capacitor having a first terminal coupled to the first input, and a second terminal coupled to a summing node; a second capacitor having a third terminal coupled to the second input, and a fourth terminal coupled to the summing node; and a third capacitor having a fifth terminal coupled to the third input, and a sixth terminal coupled to the summing node, wherein the first capacitor, the second capacitor, and the third capacitor are planar stacked capacitors. Example 2c: The apparatus of example 1c comprising a circuitry to adjust logic levels of the first input, the second input, and a control in a first operation mode. Example 3c: The apparatus of example 2c, wherein the gate comprises a device coupled to the summing node and a supply rail, wherein the device is controllable by the control, wherein the circuitry is to adjust a function of the gate in the first operation mode, and wherein the circuitry is to allow the gate to operate in accordance with the function in a second operation mode. Example 4c: The apparatus of example 3c, wherein the function is a majority function. 
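Examples 18b to 20b compose wider validity detection from smaller blocks: each dual-rail signal contributes an OR of its two rails, the per-group results are reduced through consensus stages, and a final 2-input consensus combines the groups. The short behavioral sketch below illustrates that composition; it models only the steady-state behavior, and all names are illustrative rather than taken from the disclosure.

```python
def validity(true_rail, false_rail):
    # Per-signal validity: either rail asserted (an OR gate, examples 1b-2b).
    return int(true_rail or false_rail)

def consensus(a, b, previous):
    # 2-input consensus: follow the inputs when they agree, else hold state.
    return a if a == b else previous

# Example 18b composition: an m-input validity block and an n-input
# validity block feed a final 2-input consensus stage.  The per-group
# reduction is shown with all() for brevity; in the disclosed tree it is
# itself built from consensus stages and therefore also holds state while
# a group is only partially valid.
tree_state = 0
group_m = [(1, 0), (0, 1)]      # both dual-rail signals of group m are valid
group_n = [(0, 1)]              # the single signal of group n is valid
valid_m = int(all(validity(t, f) for (t, f) in group_m))
valid_n = int(all(validity(t, f) for (t, f) in group_n))
tree_state = consensus(valid_m, valid_n, tree_state)
print(tree_state)               # -> 1: the tree reports overall validity
```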
Example 5c: The apparatus of example 1c, wherein the first capacitor, the second capacitor, and the third capacitor comprise linear dielectric material or paraelectric material. Example 6c: The apparatus of example 1c, wherein the gate comprises: a first metal layer extending along an x-plane; a second metal layer extending along the x-plane, wherein the second metal layer is above the first metal layer; a first via extending along a y-plane, wherein the y-plane is orthogonal to the x-plane, wherein the first via couples the first metal layer with the second metal layer; a second via extending along the y-plane, wherein the second via couples the second metal layer, wherein the second via is above the first via; a first pedestal on the first metal layer, wherein the first pedestal is laterally offset from the first via; a second pedestal on the second metal layer, wherein the second pedestal is laterally offset from the second via, wherein the summing node is coupled to the first via; a first input line extending along a z-plane, wherein the z-plane is orthogonal to the x-plane and the y-plane, wherein the first input is coupled to the first input line; and a second input line extending along the z-plane, wherein the second input is coupled to the second input line. Example 7c: The apparatus of example 6c, wherein the first capacitor comprises a first planar stack of materials including a first linear dielectric material or a first paraelectric material, wherein the first planar stack of materials has a first top electrode and a first bottom electrode, wherein the first linear dielectric material or the first paraelectric material is between the first top electrode and the first bottom electrode, wherein the first bottom electrode is on the first pedestal, wherein the first input line is on the first top electrode. Example 8c: The apparatus of example 7c, wherein the second capacitor comprises a second planar stack of materials including a second linear dielectric material or a second paraelectric material, wherein the second planar stack of materials has a second top electrode and a second bottom electrode, wherein the second linear dielectric material or the second paraelectric material is between the second top electrode of the second planar stack of materials and the second bottom electrode and the second planar stack of materials, wherein the second bottom electrode is on the second pedestal, wherein the second input line is on the second top electrode of the second planar stack of materials. Example 9c: The apparatus of example 7c, wherein the first linear dielectric material includes one of: SiO2, Al2O3, Li2O, HfSiO4, Sc2O3, SrO, HfO2, ZrO2, Y2O3, Ta2O5, BaO, WO3, MoO3, or TiO2. Example 10c: The apparatus of example 3c, wherein the device is a pull-up device coupled to the summing node and a power supply rail. Example 11c: The apparatus of example 10c, wherein the circuitry is to set logic levels of the first input, the second input, and the third input to logic high, and the control to enable or turn on the pull-up device in the first operation mode to adjust a threshold of the gate to 2. 
Example 12c: The apparatus of example 10c, wherein the pull-up device is controlled by the control, wherein voltages on the first input, the second input, and the control are set in the first operation mode to adjust a threshold of the apparatus, wherein the control is to cause the pull-up device to be off in the second operation mode, wherein the second operation mode occurs after the first operation mode. Example 13c: The apparatus of example 1c, wherein the first capacitor, the second capacitor, or the third capacitor include: a linear dielectric material includes one or more of: Si, Al, Li, Hf, Sc, Sr, Zr, Y, Ta, Ba, W, Mo, or Ti; and a top electrode and a bottom electrode, wherein the linear dielectric material is between the top electrode and the bottom electrode, wherein the top electrode or the bottom electrode include one or more of: Cu, Al, Ag, Au, W, or Co. Example 14c: The apparatus of example 1c, wherein the first capacitor, the second capacitor, or the third capacitor include paraelectric material which includes one of: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, BaTiO3, La-substituted PbTiO3, lead zirconate titanate, or PMN-PT (lead magnesium niobate-lead titanate) based relaxor ferroelectrics. Example 15c: An apparatus comprising: a first input; a second input; a third input; a fourth input; a fifth input; and a gate to provide an output which is a consensus of the first input, the second input, and the third input, wherein the output is coupled to the fourth input and the fifth input, wherein the gate has a plurality of capacitors that are planar capacitors, and wherein the planar capacitors are vertically stacked. Example 16c: The apparatus of example 15c, wherein the output is a logic high when the first input, the second input, and the third input are logic high, wherein the output is a logic low when the first input, the second input, and the third input are logic low, wherein the output retains its logic state when at least one of the first input, the second input, or the third input is a logic 1 and when the at least one of the first input, the second input, or the third input is a logic 0. Example 17c: The apparatus of example 15c, wherein the gate comprises: a first capacitor having a first terminal coupled to the first input, and a second terminal coupled to a summing node; a second capacitor having a third terminal coupled to the second input, and a fourth terminal coupled to the summing node; a third capacitor having a fifth terminal coupled to the third input, and a sixth terminal coupled to the summing node; a fourth capacitor having a seventh terminal coupled to the fourth input and the fifth input, and an eighth terminal coupled to the summing node; a fifth capacitor having a ninth terminal coupled to the fourth input and the fifth input, and a tenth terminal coupled to the summing node, wherein the first capacitor, the second capacitor, the third capacitor, the fourth capacitor, and the fifth capacitor are part of the plurality of capacitors; and a device coupled to the summing node and a supply rail, wherein the device is controllable by a control. Example 18c: The apparatus of example 15c comprises a circuitry to adjust a function of the gate by controlling the adjustable threshold in a first operation mode, and wherein the circuitry is to allow the gate to operate in accordance with the function in a second operation mode. 
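Example 16c spells out the characteristic consensus behavior: the output follows the three inputs only when they unanimously agree and otherwise retains its previous state, with the coupling of the output back into the gate providing that hysteresis. A minimal behavioral model of this truth behavior (not of the capacitor-level implementation) might look like the following; the class name is illustrative.

```python
class ConsensusGate:
    """Behavioral model of the consensus behavior in example 16c: the
    output follows the inputs only when they unanimously agree."""

    def __init__(self, initial_output=0):
        self.output = initial_output   # models the fed-back output state

    def evaluate(self, a, b, c):
        if a == b == c == 1:
            self.output = 1
        elif a == b == c == 0:
            self.output = 0
        # otherwise the output retains its previous logic state
        return self.output

gate = ConsensusGate()
assert gate.evaluate(1, 1, 1) == 1   # all inputs high -> output high
assert gate.evaluate(1, 0, 1) == 1   # inputs disagree -> state retained
assert gate.evaluate(0, 0, 0) == 0   # all inputs low -> output low
```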
Example 19c: A system comprising: a memory to store one or more instructions; a processor circuitry to execute the one or more instructions; and a communication device to allow the processor circuitry to communicate with another device, wherein the processor circuitry includes a consensus circuitry which comprises an apparatus according to any one of examples 1c to 14c, or examples 15c to 18c. Example 1d: An apparatus comprising: a first input; a second input; and a consensus circuitry coupled to the first input and the second input, wherein the consensus circuitry is to generate a consensus output which is indicative of a consensus of the first input and the second input, wherein the consensus circuitry comprises a gate to receive the first input, the second input, and a third input, wherein and the third input is coupled to an output of the gate which is the consensus output, wherein the gate comprises: a first capacitor having a first terminal coupled to the first input, and a second terminal coupled to a summing node; a second capacitor having a third terminal coupled to the second input, and a fourth terminal coupled to the summing node; and a third capacitor having a fifth terminal coupled to the third input, and a sixth terminal coupled to the summing node, wherein the first capacitor, the second capacitor, and the third capacitor are non-planar stacked capacitors. Example 2d: The apparatus of example 1d comprising a circuitry to adjust logic levels of the first input, the second input, and a control in a first operation mode. Example 3d: The apparatus of example 2d, wherein the gate comprises a device coupled to the summing node and a supply rail, wherein the device is controllable by the control, wherein the circuitry is to adjust a function of the gate in the first operation mode, and wherein the circuitry is to allow the gate to operate in accordance with the function in a second operation mode. Example 4d: The apparatus of example 3d, wherein the function is a majority function. Example 5d: The apparatus of example 1d, wherein the first capacitor, the second capacitor, and the third capacitor comprise linear dielectric material or paraelectric material. Example 6d: The apparatus of example 1d, wherein the gate comprises a via extending along a y-plane, wherein the y-plane is orthogonal to an x-plane, wherein the via couples to a first metal layer; a first input line extending along the x-plane or a z-plane, wherein the z-plane is orthogonal to the x-plane and the y-plane, wherein the first input line is on an outer portion of the first capacitor, wherein the first input line is coupled to the first input; a second input line extending along the x-plane or the z-plane, wherein the second input line is on an output portion of the second capacitor, wherein the second input line is coupled to the second input; and a first transistor coupled to the via and a supply rail, wherein: the first capacitor includes a first linear dielectric material or a first paraelectric material, wherein the first capacitor includes an electrode coupled to the via, wherein the electrode is in a middle of the first capacitor; the second capacitor includes a second linear dielectric material or a second paraelectric material, wherein the electrode passes through a middle of the second capacitor; and the third capacitor including a third linear dielectric material or a third paraelectric material, wherein the electrode passes through a middle of the third capacitor. 
Example 7d: The apparatus of example 6d, wherein the first capacitor includes: a first layer coupled to the electrode, wherein the first layer comprises metal; a second layer comprising the first linear dielectric material, wherein the second layer is around the first layer; and a third layer around the second layer, wherein the third layer comprises metal, wherein the second input line is adjacent to part of the third layer. Example 8d: The apparatus of example 7d, wherein: the first layer has a first circumference; the second layer has a second circumference; and the third layer has a third circumference, wherein the third circumference is larger than the second circumference, wherein the second circumference is larger than the first circumference. Example 9d: The apparatus of example 6d, wherein the first linear dielectric material includes one of: SiO2, Al2O3, Li2O, HfSiO4, Sc2O3, SrO, HfO2, ZrO2, Y2O3, Ta2O5, BaO, WO3, MoO3, or TiO2. Example 10d: The apparatus of example 3d, wherein the device is a pull-up device coupled to the summing node and a power supply rail. Example 11d: The apparatus of example 10d, wherein the circuitry is to set logic levels of the first input, the second input, and the third input to logic high, and the control to enable or turn on the pull-up device in the first operation mode to adjust a threshold of the gate to 2. Example 12d: The apparatus of example 10d, wherein the pull-up device is controlled by the control, wherein voltages on the first input, the second input, and the control are set in the first operation mode to adjust a threshold of the apparatus, wherein the control is to cause the pull-up device to be off in the second operation mode, wherein the second operation mode occurs after the first operation mode. Example 13d: The apparatus of example 1d, wherein the first capacitor, the second capacitor, or the third capacitor include: a linear dielectric material includes one or more of: Si, Al, Li, Hf, Sc, Sr, Zr, Y, Ta, Ba, W, Mo, or Ti; and a top electrode and a bottom electrode, wherein the linear dielectric material is between the top electrode and the bottom electrode, wherein the top electrode or the bottom electrode include one or more of: Cu, Al, Ag, Au, W, or Co. Example 14d: The apparatus of example 1d, wherein the first capacitor, the second capacitor, or the third capacitor include paraelectric material which includes one of: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, BaTiO3, La-substituted PbTiO3, lead zirconate titanate, or PMN-PT (lead magnesium niobate-lead titanate) based relaxor ferroelectrics. Example 15d: An apparatus comprising: a first input; a second input; a third input; a fourth input; a fifth input; and a gate to provide an output which is a consensus of the first input, the second input, and the third input, wherein the output is coupled to the fourth input and the fifth input, wherein the gate has a plurality of capacitors that are non-planar capacitors, wherein the non-planar capacitors are vertically stacked. 
Example 16d: The apparatus of example 15d, wherein the output is a logic high when the first input, the second input, and the third input are logic high, wherein the output is a logic low when the first input, the second input, and the third input are logic low, wherein the output retains its logic state when at least one of the first input, the second input, or the third input is a logic 1 and when the at least one of the first input, the second input, or the third input is a logic 0. Example 17d: The apparatus of example 15d, wherein the gate comprises: a first transistor, the first transistor having a source region and a drain region, and a gate, wherein the first transistor is controllable by a first control; a first via coupled to the source region; a second via coupled to the drain region; a first metal layer over the first via, the first metal layer extending along an x-plane; a third via over the first metal layer, the third via in direct connection to the first metal layer, wherein the third via extends along a y-plane, wherein the y-plane is orthogonal to an x-plane; a first non-planar stack of materials including a first linear dielectric material or a first paraelectric material, wherein the first non-planar stack of materials includes an electrode coupled to the third via, wherein the electrode is in a middle of the first non-planar stack of materials; a second non-planar stack of materials including a second linear dielectric material or a second paraelectric material, wherein the electrode passes through a middle of the second non-planar stack of materials; a first input line extending along the x-plane or a z-plane, wherein the z-plane is orthogonal to the x-plane and the y-plane, wherein the first input line is on a portion of the first non-planar stack of materials, wherein the first input is coupled to the first input line; and a second input line extending along the x-plane or the z-plane, wherein the second input line is on a portion of the second non-planar stack of materials, wherein the second input is coupled to the second input line. Example 18d: The apparatus of example 15d comprises a circuitry to adjust a function of the gate by controlling the adjustable threshold in a first operation mode, and wherein the circuitry is to allow the gate to operate in accordance with the function in a second operation mode. Example 19d: A system comprising: a memory to store one or more instructions; a processor circuitry to execute the one or more instructions; and a communication device to allow the processor circuitry to communicate with another device, wherein the processor circuitry includes a consensus circuitry which comprises an apparatus according to any one of examples 1d to 14d, or examples 15d to 18d. 
Example 1e: An apparatus comprising: a first input; a second input; and a consensus circuitry coupled to the first input and the second input, wherein the consensus circuitry is to generate a consensus output which is indicative of a consensus of the first input and the second input, wherein the consensus circuitry comprises a gate to receive the first input, the second input, and a third input, wherein and the third input is coupled to an output of the gate which is the consensus output, wherein the gate comprises: a first capacitor having a first terminal coupled to the first input, and a second terminal coupled to a summing node, wherein the first capacitor includes a first ferroelectric material; a second capacitor having a third terminal coupled to the second input, and a fourth terminal coupled to the summing node, wherein the second capacitor includes a second ferroelectric material; and a third capacitor having a fifth terminal coupled to the third input, and a sixth terminal coupled to the summing node, wherein the third capacitor includes a third ferroelectric material, wherein the first capacitor, the second capacitor, and the third capacitor are planar stacked capacitors. Example 2e: The apparatus of example 1e comprising a circuitry to adjust logic levels of the first input, the second input, and a control in a first operation mode. Example 3e: The apparatus of example 2e, wherein the gate comprises: a pull-up device coupled to the summing node and a supply rail, wherein the pull-up device is controllable by a first control, wherein the circuitry is to adjust a function of the gate in the first operation mode, and wherein the circuitry is to allow the gate to operate in accordance with the function in a second operation mode; and a pull-down device coupled to the summing node and a ground. Example 4e: The apparatus of example 3e, wherein the function is a majority function. Example 5e: The apparatus of example 1e, wherein the gate comprises: a first metal layer extending along an x-plane; a second metal layer extending along the x-plane, wherein the second metal layer is above the first metal layer; a first via extending along a y-plane, wherein the y-plane is orthogonal to the x-plane, wherein the first via couples the first metal layer with the second metal layer; a second via extending along the y-plane, wherein the second via couples the second metal layer, wherein the second via is above the first via; a first pedestal on the first metal layer, wherein the first pedestal is laterally offset from the first via; a second pedestal on the second metal layer, wherein the second pedestal is laterally offset from the second via; a summing node coupled to the first via; a first input line extending along a z-plane, wherein the z-plane is orthogonal to the x-plane and the y-plane, wherein the first input line is coupled to the first input; and a second input line extending along the z-plane, wherein the second input line is coupled to the second input. Example 6e: The apparatus of example 5e, wherein the first capacitor comprises a first planar stack of materials including the first ferroelectric material, wherein the first planar stack of materials has a first top electrode and a first bottom electrode, wherein the first ferroelectric material is between the first top electrode and the first bottom electrode, wherein the first bottom electrode is on the first pedestal, wherein the first input line is on the first top electrode. 
Example 7e: The apparatus of example 6e, wherein the second capacitor comprises a second planar stack of materials including the second ferroelectric material, wherein the second planar stack of materials has a second top electrode and a second bottom electrode, wherein the second ferroelectric material is between the second top electrode of the second planar stack of materials and the second bottom electrode and the second planar stack of materials, wherein the second bottom electrode is on the second pedestal, wherein the second input line is on the second top electrode of the second planar stack of materials. Example 8e: The apparatus of example 7e, wherein the first ferroelectric material or the second ferroelectric material includes any of the ferroelectric materials discussed herein. Example 9e: The apparatus of example 6e, wherein the first top electrode and the first bottom electrode of the first planar stack of materials includes one or more of: Cu, Al, Ag, Au, W, or Co. Example 10e: The apparatus of example 3e, wherein the pull-up device is controlled by a first control, wherein the pull-down device is controlled by a second control, wherein voltages of the first input, the second input, the first control, and the second control are set in a first operation mode to adjust a threshold of the apparatus, wherein the first control is to cause the pull-up device to be off in a second operation mode, wherein the second control is to wherein the second operation mode occurs after the first operation mode. Example 11e: An apparatus comprising: a first input; a second input; a third input; a fourth input; a fifth input; and a gate to provide an output which is a consensus of the first input, the second input, and the third input, wherein the output is coupled to the fourth input and the fifth input, wherein the gate has a plurality of capacitors that are planar capacitors having ferroelectric material, wherein the planar capacitors are vertically stacked. Example 12e: The apparatus of example 11e, wherein the output is a logic high when the first input, the second input, and the third input are logic high, wherein the output is a logic low when the first input, the second input, and the third input are logic low, wherein the output retains its logic state when at least one of the first input, the second input, or the third input is a logic 1 and when the at least one of the first input, the second input, or the third input is a logic 0. 
Example 13e: The apparatus of example 11e, wherein the gate comprises: a first transistor, the first transistor having a first source region and a first drain region, and a first gate, wherein the first gate is controllable by a first control; a second transistor, the second transistor having a second source region and a second drain region, and a second gate, wherein the second gate is controllable by a second control; a first via is coupled to the first source region; a second via is coupled to the first drain region; a first metal layer over the first via, the first metal layer extending along an x-plane; a second etch stop layer over the first metal layer; a third via, over the first metal layer, and etched through the second etch stop layer, the third via in direct connection to the first metal layer; a second metal layer extending along the x-plane, wherein the second metal layer is above the first metal layer, wherein the second metal layer couples the third via; an interlayer dielectric between the first metal layer and the second metal layer; a first pedestal filled with metal, wherein the first pedestal is coupled to the second metal layer; a first plurality of layers to form a first planar capacitor, wherein the first plurality of layers includes a first ferroelectric dielectric material, wherein a first layer of the first plurality of layers is in contact with a top portion of the first pedestal, wherein a second layer of the first plurality of layers is coupled to a first input line, wherein the first input line is coupled to the first input; a fourth via in direct connection to the second metal layer; a third metal layer over the fourth via, wherein the first plurality of layers is between the second metal layer and the third metal layer; a second pedestal filled with metal, wherein the second pedestal is coupled to the third metal layer; and a second plurality of layers to form a second planar capacitor, wherein the second plurality of layers includes a second ferroelectric dielectric material, wherein a first layer of the second plurality of layers is in direct contact with a top portion of the second pedestal, wherein a second layer of the second plurality of layers is coupled to a second input line, wherein the second input line is coupled to the second input. Example 14e: The apparatus of example 11e, wherein the gate comprises: a first capacitor having a first terminal coupled to the first input, and a second terminal coupled to a summing node; a second capacitor having a third terminal coupled to the second input, and a fourth terminal coupled to the summing node; a third capacitor having a fifth terminal coupled to the third input, and a sixth terminal coupled to the summing node; a fourth capacitor having a seventh terminal coupled to the fourth input and the fifth input, and an eighth terminal coupled to the summing node; a fifth capacitor having a ninth terminal coupled to the fourth input and the fifth input, and a tenth terminal coupled to the summing node, wherein the first capacitor, the second capacitor, the third capacitor, the fourth capacitor, and the fifth capacitor are part of the plurality of capacitors; a pull-up device coupled to the summing node and a supply rail, wherein the pull-up device is controllable by a first control; and a pull-down device coupled to the summing node a ground rail, wherein the pull-down device is controllable by a second control. 
Example 15e: The apparatus of example 11e comprises a circuitry to adjust a function of the gate by controlling the adjustable threshold in a first operation mode, and wherein the circuitry is to allow the gate to operate in accordance with the function in a second operation mode. Example 16e: A system comprising: a memory to store one or more instructions; a processor circuitry to execute the one or more instructions; and a communication device to allow the processor circuitry to communicate with another device, wherein the processor circuitry includes a consensus circuitry which comprises an apparatus according to any one of examples 1e to 10e, or examples 11e to 15e. Example 1f: An apparatus comprising: a first input; a second input; and a consensus circuitry coupled to the first input and the second input, wherein the consensus circuitry is to generate a consensus output which is indicative of a consensus of the first input and the second input, wherein the consensus circuitry comprises a gate to receive the first input, the second input, and a third input, wherein and the third input is coupled to an output of the gate which is the consensus output, wherein the gate comprises: a first capacitor having a first terminal coupled to the first input, and a second terminal coupled to a summing node, wherein the first capacitor includes a first ferroelectric material; a second capacitor having a third terminal coupled to the second input, and a fourth terminal coupled to the summing node, wherein the second capacitor includes a second ferroelectric material; and a third capacitor having a fifth terminal coupled to the third input, and a sixth terminal coupled to the summing node, wherein the third capacitor includes a third ferroelectric material, wherein the first capacitor, the second capacitor, and the third capacitor are non-planar stacked capacitors. Example 2f: The apparatus of example 1f comprising a circuitry to adjust logic levels of the first input, the second input, and a control in a first operation mode. Example 3f: The apparatus of example 2f, wherein the gate comprises: a pull-up device coupled to the summing node and a supply rail, wherein the pull-up device is controllable by a first control, wherein the circuitry is to adjust a function of the gate in the first operation mode, and wherein the circuitry is to allow the gate to operate in accordance with the function in a second operation mode; and a pull-down device coupled to the summing node and a ground. Example 4f: The apparatus of example 3f, wherein the function is a majority function. Example 5f: The apparatus of example 1f, wherein the gate comprises: a via extending along a y-plane, wherein the y-plane is orthogonal to an x-plane, wherein the via couples to a first metal layer; a first input line extending along the x-plane or a z-plane, wherein the z-plane is orthogonal to the x-plane and the y-plane, wherein the first input line is on an outer portion of the first capacitor; a second input line extending along the x-plane or the z-plane, wherein the second input line is on an output portion of the second capacitor; a first transistor coupled to the via and a supply rail; and a second transistor coupled to the via and a ground, wherein: the first capacitor includes an electrode coupled to the via, wherein the electrode is in a middle of the first capacitor; the electrode passes through a middle of the second capacitor; and the electrode passes through a middle of the third capacitor. 
Example 6f: The apparatus of example 5f, wherein the first transistor is controlled by a first control, wherein the second transistor is controlled by a second control, wherein voltages on the first input line, the second input line, the first control, and the second control are set in a first operation mode to adjust a threshold of the apparatus. Example 7f: The apparatus of example 6f, wherein the first control is to cause the first transistor to be off in a second operation mode, wherein the second control is to cause the second transistor to be off in the second operation mode, wherein the second operation mode occurs after the first operation mode. Example 8f: The apparatus of example 5f, wherein the first capacitor includes: a first layer coupled to the electrode, wherein the first layer comprises metal; a second layer around the first layer, wherein the second layer comprises a first conductive oxide; a third layer comprising the first ferroelectric material, wherein the third layer is around the second layer; a fourth layer around the third layer, wherein the fourth layer comprises a second conductive oxide, wherein the fourth layer is around the third layer; and a fifth layer around the fourth layer, wherein the fifth layer comprises metal, wherein the first input line is adjacent to part of the fifth layer. Example 9f: The apparatus of example 8f, wherein: the first layer has a first circumference; the second layer has a second circumference; the third layer has a third circumference; the fourth layer has a fourth circumference; and the fifth layer has a fifth circumference, wherein the fifth circumference is larger than the fourth circumference, wherein the fourth circumference is larger than the third circumference, wherein the third circumference is larger than the second circumference, wherein the second circumference is larger than the first circumference. Example 10f: The apparatus of example 1f, wherein the first ferroelectric material or the second ferroelectric material includes any one of the ferroelectric materials discussed herein. Example 11f: The apparatus of example 6f, wherein the electrode includes one or more of: Cu, Al, Ag, Au, W, or Co. Example 12f: The apparatus of example 3f, wherein the pull-up device is controlled by a first control, wherein the pull-down device is controlled by a second control, wherein voltages of the first input, the second input, the first control, and the second control are set in a first operation mode to adjust a threshold of the apparatus, wherein the first control is to cause the pull-up device to be off in a second operation mode, wherein the second control is to wherein the second operation mode occurs after the first operation mode. Example 13f: An apparatus comprising: a first input; a second input; a third input; a fourth input; a fifth input; and a gate to provide an output which is a consensus of the first input, the second input, and the third input, wherein the output is coupled to the fourth input and the fifth input, wherein the gate has a plurality of capacitors that are non-planar capacitors having ferroelectric material, wherein the non-planar capacitors are vertically stacked. 
Example 14f: The apparatus of example 13f, wherein the output is a logic high when the first input, the second input, and the third input are logic high, wherein the output is a logic low when the first input, the second input, and the third input are logic low, wherein the output retains its logic state when at least one of the first input, the second input, or the third input is a logic 1 and when the at least one of the first input, the second input, or the third input is a logic 0. Example 15f: The apparatus of example 13f, wherein the gate comprises: a first transistor, the first transistor having a source region and a drain region, and a gate, wherein the first transistor is controllable by a first control; a first via coupled to the source region; a second via coupled to the drain region; a first metal layer over the first via, the first metal layer extending along an x-plane; a third via over the first metal layer, the third via in direct connection to the first metal layer, wherein the third via extends along a y-plane, wherein the y-plane is orthogonal to an x-plane; a first non-planar stack of materials including a first ferroelectric material, wherein the first non-planar stack of materials includes an electrode coupled to the third via, wherein the electrode is in a middle of the first non-planar stack of materials; a second non-planar stack of materials including a second ferroelectric material, wherein the electrode passes through a middle of the second non-planar stack of materials; a first input line extending along the x-plane or a z-plane, wherein the z-plane is orthogonal to the x-plane and the y-plane, wherein the first input line is on a portion of the first non-planar stack of materials, wherein the first input is coupled to the first input line; and a second input line extending along the x-plane or the z-plane, wherein the second input line is on a portion of the second non-planar stack of materials, wherein the second input is coupled to the second input line. Example 16f: The apparatus of example 13f comprises a circuitry to adjust a function of the gate by controlling the adjustable threshold in a first operation mode, and wherein the circuitry is to allow the gate to operate in accordance with the function in a second operation mode. Example 17f: A system comprising: a memory to store one or more instructions; a processor circuitry to execute the one or more instructions; and a communication device to allow the processor circuitry to communicate with another device, wherein the processor circuitry includes a consensus circuitry which comprises an apparatus according to any of the examples 1f to 12f, and examples 13f to 16f. 
Example 1g: An apparatus comprising: a first input; a second input; a third input; and a gate to receive the first input, the second input, and the third input, wherein and the third input is coupled to an output of the gate, wherein the output is a consensus of the first input and the second input, wherein the gate comprises: a first capacitor having a first terminal connected to the first input, and a second terminal coupled to a summing node, wherein the first capacitor comprises a first nonlinear polar material; a second capacitor having a third terminal connected to the second input, and a fourth terminal coupled to the summing node, wherein the first capacitor comprises a second nonlinear polar material; a third capacitor having a fifth terminal connected to the third input, and a sixth terminal coupled to the summing node, wherein the third capacitor comprises a third nonlinear polar material; and a device connected to the summing node and a supply rail, wherein the device has a gate terminal controllable by a control separate from the summing node. Example 2g: The apparatus of example 1g, wherein first capacitor, the second capacitor, and the third capacitor are configured such that a voltage on the summing node is to reduce static leakage through the device. Example 3g: The apparatus of example 2g, wherein the voltage on the summing node is close to rail-to-rail. Example 4g: The apparatus of example 1g, wherein the first nonlinear polar material includes paraelectric material which includes one of: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, BaTiO3, La-substituted PbTiO3, lead zirconate titanate, or PMN-PT (lead magnesium niobate-lead titanate) based relaxor ferroelectrics. Example 5g: The apparatus of example 1g, wherein the first nonlinear polar material comprises a first ferroelectric material which includes any of the ferroelectric materials discussed herein. Example 6g: The apparatus of example 1g, wherein the gate is to perform a majority or a minority function of the first input, the second input, and the third input. Example 7g: The apparatus of example 1g, wherein the device is turned on in a reset mode, and wherein the device turned off in an evaluation mode separate from the reset mode. Example 8g: An apparatus comprising: a first input; a second input; a third input; a fourth input; a fifth input; and a gate to provide an output which is a consensus of the first input, the second input, and the third input, wherein the output is coupled to the fourth input and the fifth input, wherein the gate includes a plurality of capacitors that are coupled to the first input, the second input, the third input, the fourth input, and the fifth input, and wherein the plurality of capacitors comprises nonlinear polar material, wherein the gate includes a device connected to the gate and controllable by a control disconnected from the plurality of capacitors. Example 9g: The apparatus of example 8g, wherein the plurality of capacitors is configured such that a voltage on a summing node is to reduce static leakage through the device, wherein the plurality of capacitors is connected to the summing node. Example 10g: The apparatus of example 9g, wherein the voltage on the summing node is close to rail-to-rail. 
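Examples 1g to 14g also describe a two-phase mode of operation in which the device tied to the summing node is turned on in a reset mode to pre-condition the node and turned off in a separate evaluation mode, during which the capacitors alone determine the output. The toy model below sketches those two phases under the assumption that the summing node can be abstracted as a single accumulated value; it is not a circuit-level description, and the names are illustrative.

```python
class TwoPhaseMajorityGate:
    """Reset/evaluate sketch of the gate described in examples 1g to 7g."""

    def __init__(self, threshold=2):
        self.threshold = threshold      # programmed majority threshold
        self.summing_node = 0.0

    def reset(self, level=0.0):
        # Reset mode: the device connected to the supply rail is on and
        # drives the summing node to a known starting level (example 7g).
        self.summing_node = level

    def evaluate(self, a, b, c):
        # Evaluation mode: the device is off; the capacitors couple the
        # inputs onto the summing node, and the output reflects whether
        # the node reaches the programmed threshold.
        self.summing_node += a + b + c
        return int(self.summing_node >= self.threshold)

gate = TwoPhaseMajorityGate()
gate.reset()
print(gate.evaluate(1, 1, 0))    # -> 1, a majority of the three inputs
```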
Example 11g: The apparatus of example 8g, wherein the nonlinear polar material includes paraelectric material which includes one of: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, BaTiO3, La-substituted PbTiO3, lead zirconate titanate, or PMN-PT (lead magnesium niobate-lead titanate) based relaxor ferroelectrics. Example 12g: The apparatus of example 8g, wherein the nonlinear polar material includes ferroelectric material. Example 13g: The apparatus of example 8g, wherein the gate is to perform a majority or a minority function of the first input, the second input, the third input, the fourth input, and the fifth input. Example 14g: The apparatus of example 8g, wherein the device is turned on in a reset mode, and wherein the device turned off in an evaluation mode separate from the reset mode. Example 15g: A system comprising: a memory to store one or more instructions; a processor circuitry to execute the one or more instructions; and a communication device to allow the processor circuitry to communicate with another device, wherein the processor circuitry includes a consensus circuitry which comprises an apparatus according to any one of the examples 1g to 7g, or examples 8g to 14g. An abstract is provided that will allow the reader to ascertain the nature and gist of the technical disclosure. The abstract is submitted with the understanding that it will not be used to limit the scope or meaning of the claims. The following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment. | 226,311 |
11863185 | DETAILED DESCRIPTION OF EMBODIMENTS FIG.1is a diagram schematically illustrating a semiconductor integrated circuit device including an oscillator circuit according to an embodiment. An oscillator circuit100according to an embodiment may be included in semiconductor integrated circuit devices such as a timing controller (T-CON) of a display device, a source driver IC (SDIC: Source Driver Integrated Circuit), an application processor (AP) of a smartphone, and a central processing unit (CPU) of a computer. In other words, the oscillator circuit100may be embedded in a semiconductor integrated circuit device. The oscillator circuit100may output a clock signal necessary for the operation of the semiconductor integrated circuit device as above. Herein, the semiconductor integrated circuit device may further include a digital circuit10which operates in synchronization with the clock signal. The semiconductor integrated circuit device may further include a case20which protects the oscillator circuit100and the digital circuit10. Herein, the material of the case20may be an epoxy molding compound, and the case20may be formed in a molding process which is one of the post-process of semiconductors. When the digital circuit10operates, heat may be generated in the semiconductor integrated circuit device. In general, since the oscillator circuit100is sensitive to temperature changes, the oscillator circuit100may output a clock signal having a frequency error due to heat generated in the semiconductor integrated circuit device. Herein, the frequency error may mean a deviation between the actual output frequency of the clock signal and a preset target frequency. In an embodiment, the oscillator circuit100may correct a frequency error of a clock signal according to a change in a temperature of the semiconductor integrated circuit device during operation of the semiconductor integrated circuit device. Specifically, as illustrated inFIG.2, the oscillator circuit100may include a temperature sensor110, an error correction circuit120, and an oscillator130. The temperature sensor110may sense an internal temperature of the semiconductor integrated circuit device and output a temperature sensing value. Herein, the temperature sensor110may generate an analog signal corresponding to the internal temperature. When the temperature sensor110includes an analog-digital converter (ADC) circuit, the temperature sensor110may convert an analog signal corresponding to an internal temperature into a digital signal and output the same as a temperature sensing value. When the temperature sensor110does not include an ADC circuit, the temperature sensor110may output an analog signal corresponding to an internal temperature as a temperature sensing value. In addition, the oscillator circuit100may further include the ADC circuit for converting the temperature sensing value into a digital signal. The temperature sensor110may sense an internal temperature of the semiconductor integrated circuit device at regular intervals and output a temperature sensing value of the corresponding interval. For example, the temperature sensor110may sense the internal temperature of the semiconductor integrated circuit device every 0.3 seconds and output the temperature sensing value of the corresponding interval. This temperature sensor110may be disposed adjacent to one side of the oscillator130to be described later. Thus, the temperature sensor110may sense the temperature around the oscillator130which directly affects the oscillator130. 
In an embodiment, the temperature sensor110may include a Proportional To Absolute Temperature (PTAT) circuit including a transistor, for example, a Bipolar Junction Transistor (BJT). The temperature sensor110may include resistance temperature detectors whose resistance value changes as an internal temperature of the semiconductor integrated circuit device changes. When the temperature sensor110includes an ADC circuit and a PTAT circuit, the temperature sensor110may generate IPTAT or VPTAT (analog signal) corresponding to an internal temperature, and convert the IPTAT or VPTAT into a digital signal and output the same as a temperature sensing value. Herein, the temperature sensor110may directly output a temperature sensing value without correction. The temperature sensor110may correct a temperature sensing value and output the corrected temperature sensing value. When the semiconductor integrated circuit device operates, the semiconductor integrated circuit device consumes power. In addition, the thermal resistance changes according to the power consumption of the semiconductor integrated circuit device, and the temperature sensor110may output a temperature sensing value which is different from the actual internal temperature due to the change in the thermal resistance. Accordingly, the temperature sensor110may correct a temperature sensing value by reflecting the change in thermal resistance according to the power consumption of the semiconductor integrated circuit device, and output the corrected temperature sensing value. The error correction circuit120stores a first error correction value for correcting a frequency error of a clock signal when an internal temperature of the semiconductor integrated circuit device is room temperature and also stores a second error correction value for correcting the frequency error of the clock signal when the internal temperature of the semiconductor integrated circuit device is a high temperature. Here, the room temperature may be a first temperature between 25° C. and 35° C. and the high temperature may be a second temperature between 85° C. and 95° C. In an embodiment, the first error correction value and the second error correction value may be bias set values which are control values for setting a bias for adjusting an output frequency of a clock signal in the oscillator130. In other words, the first error correction value and the second error correction value may be a first bias set value and a second bias set value. The first bias set value and the second bias set value may include a first control code for coarse adjustment of the output frequency and a second control code for fine adjustment of the output frequency. For example, when the output frequency of the clock signal has a megahertz (Mhz) value, the first control code may be a code for adjusting the megahertz (Mhz) band of the output frequency. In addition, the second control code may be a code for adjusting the kilohertz (Khz) band of the output frequency. The first control code and the second control code may respectively comprise a plurality of bits. For example, each of the first control code and the second control code may be a 4-bit combination or the first control code may be a 4-bit combination, whereas the second control code may be a 6-bit combination. 
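The bias set value described above therefore splits into a coarse code that places the output frequency in the right megahertz band and a fine code that trims within the band. A small sketch of how such a two-field code might be packed and interpreted is given below; the bit widths (4-bit coarse, 6-bit fine) follow one of the combinations mentioned above, while the base frequency and the per-step frequency increments are purely illustrative assumptions.

```python
COARSE_BITS = 4              # first control code: MHz-band adjustment
FINE_BITS = 6                # second control code: kHz trim within the band
COARSE_STEP_HZ = 1_000_000   # illustrative assumption: 1 MHz per coarse step
FINE_STEP_HZ = 20_000        # illustrative assumption: 20 kHz per fine step

def pack_bias_set_value(coarse, fine):
    """Pack the coarse and fine control codes into one register value."""
    assert 0 <= coarse < (1 << COARSE_BITS)
    assert 0 <= fine < (1 << FINE_BITS)
    return (coarse << FINE_BITS) | fine

def nominal_frequency(base_hz, bias_set_value):
    """Map a bias set value to a nominal output frequency (sketch only)."""
    fine = bias_set_value & ((1 << FINE_BITS) - 1)
    coarse = bias_set_value >> FINE_BITS
    return base_hz + coarse * COARSE_STEP_HZ + fine * FINE_STEP_HZ

code = pack_bias_set_value(coarse=3, fine=17)
print(nominal_frequency(base_hz=24_000_000, bias_set_value=code))
```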
The first error correction value and the second error correction value may be resistor capacitor (RC) set values which are control values for setting one or more of a resistor value and a capacitor value for adjusting the output frequency of the clock signal in the oscillator130. In other words, the first error correction value and the second error correction value may be a first RC set value and a second RC set value. The error correction circuit120may generate an error correction value by using the temperature sensing value input from the temperature sensor110and the pre-stored first and second error correction values. Specifically, the error correction circuit120may estimate error correction values corresponding to the temperature sensing value by performing interpolation using the temperature sensing value, the first and the second error correction values. In other words, the error correction circuit120may generate error correction values corresponding to the temperature sensing value by using the interpolation. Here, the error correction circuit120may generate an error correction value by performing linear interpolation using the following equation:

f(x) = y1 + ((y2 - y1)/(x2 - x1)) × (x - x1)   [Equation 1]

In Equation 1, y1 may mean a first temperature value corresponding to room temperature, x1 may mean a first error correction value, y2 may mean a second temperature value corresponding to a high temperature, x2 may mean a second error correction value, f(x) may mean a temperature sensing value input from the temperature sensor110, and x may mean an error correction value corresponding to the temperature sensing value. For example, inFIG.9, when the first temperature value is 30° C., the second temperature value is 90° C., the temperature sensing value is 45° C., and the error correction value is the bias set value, the error correction circuit120may generate a bias set value of which the second control code is 152 by performing linear interpolation using the first error correction value (135 inFIG.9), the second error correction value (187 inFIG.9), and the temperature sensing value (45° C.). Herein, since the first control code of the bias set value is a control code for coarse adjustment of the output frequency, the error correction circuit120may perform linear interpolation using only the second control code for fine adjustment. The error correction circuit120may deliver the error correction value generated through the interpolation to the oscillator130. In other words, the error correction circuit120may output a digital signal corresponding to the error correction value and transmit the same to the oscillator130. The error correction circuit120may also generate an error correction value by performing extrapolation using the first error correction value, the second error correction value, and the temperature sensing value. In other words, when the temperature corresponding to the temperature sensing value is higher than room temperature (e.g., 30° C.) and lower than a high temperature (e.g., 90° C.), the error correction circuit120may perform interpolation using the first error correction value, the second error correction value, and the temperature sensing value. In addition, when the temperature corresponding to the temperature sensing value is lower than room temperature or higher than a high temperature, the error correction circuit120may perform extrapolation using the first error correction value, the second error correction value, and the temperature sensing value.
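Equation 1 can be solved for x, the error correction value, given the sensed temperature f(x) and the two calibration pairs (x1, y1) and (x2, y2). A direct translation into code is shown below; the calibration codes used in the demonstration are illustrative placeholders rather than the values of FIG. 9.

```python
def estimate_correction(temp_sensed, temp_room, code_room, temp_hot, code_hot):
    """Solve Equation 1 for the error correction value x.

    temp_room / code_room correspond to (y1, x1), temp_hot / code_hot to
    (y2, x2), and temp_sensed is f(x), the value reported by the
    temperature sensor.
    """
    slope = (code_hot - code_room) / (temp_hot - temp_room)
    return code_room + slope * (temp_sensed - temp_room)

# Illustrative calibration points (assumed values, not taken from FIG. 9):
# a second control code of 140 at 30 deg C and 200 at 90 deg C.
print(round(estimate_correction(45.0, 30.0, 140, 90.0, 200)))  # -> 155
```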
The error correction circuit120may generate an error correction value corresponding to the temperature sensing value whenever the temperature sensor110periodically outputs the temperature sensing value. The aforementioned error correction circuit120may include a one time programmable (OTP) memory410and a correction value estimation circuit420as illustrated inFIG.4. The OTP memory410may store a first error correction value and a second error correction value. In addition, the OTP memory410may further store a first temperature value corresponding to room temperature and a second temperature value corresponding to a high temperature. The OTP memory410may include one or more of an electrical fuse (eFuse) type OTP memory, a programmable read-only memory (PROM), and an electrically programmable ROM (EPROM). The correction value estimation circuit420may receive a temperature sensing value. In addition, the correction value estimation circuit420may extract the first error correction value and the second error correction value from the OTP memory410. The correction value estimation circuit420may compare the first temperature value corresponding to room temperature with the temperature sensing value, and may compare the second temperature value corresponding to a high temperature with the temperature sensing value. Hence, the correction value estimation circuit420may check a magnitude relationship between the first temperature value and the temperature sensing value and a magnitude relationship between the second temperature value and the temperature sensing value. When the temperature corresponding to the temperature sensing value is higher than the first temperature value (room temperature) and lower than the second temperature value (high temperature), the correction value estimation circuit420may generate an error correction value corresponding to the temperature sensing value by performing linear interpolation using the first error correction value, the second error correction value, and the temperature sensing value. When the temperature corresponding to the temperature sensing value is lower than the first temperature value (room temperature) or higher than the second temperature value (high temperature), the correction value estimation circuit420may generate an error correction value corresponding to the temperature sensing value by performing extrapolation using the first error correction value, the second error correction value, and the temperature sensing value. As described above, the correction value estimation circuit420may generate error correction values by estimation using the interpolation or the extrapolation. The oscillator130may output a clock signal necessary for the operation of the semiconductor integrated circuit device. In an embodiment, the oscillator130may set a bias or set one or more of a resistance value and a capacitor value according to an error correction value delivered from the oscillator circuit100when outputting a clock signal. In other words, the oscillator130may set one or more of a bias, a resistance value, and a capacitor value according to the error correction value. Hence, it is possible to correct the frequency error of the clock signal according to the internal temperature of the semiconductor integrated circuit device. Herein, the bias may be any one of a bias current and a bias voltage.
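Functionally, the correction value estimation circuit reads the two stored calibration pairs out of the OTP memory, compares the incoming temperature sensing value against the stored temperature values, and applies the same linear relationship whether it is interpolating between the two points or extrapolating beyond them. A small software sketch of that decision flow follows; the class, field, and function names are hypothetical, and the calibration codes are illustrative rather than values from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class OtpCalibration:
    temp_room: float    # first temperature value, e.g. 30 deg C
    code_room: float    # first error correction value
    temp_hot: float     # second temperature value, e.g. 90 deg C
    code_hot: float     # second error correction value

def estimate(otp, temp_sensed):
    """Return the estimated error correction value and whether it was
    obtained by interpolation or extrapolation."""
    slope = (otp.code_hot - otp.code_room) / (otp.temp_hot - otp.temp_room)
    value = otp.code_room + slope * (temp_sensed - otp.temp_room)
    if otp.temp_room <= temp_sensed <= otp.temp_hot:
        return value, "interpolation"
    return value, "extrapolation"

otp = OtpCalibration(temp_room=30.0, code_room=140.0,
                     temp_hot=90.0, code_hot=200.0)
print(estimate(otp, 45.0))   # between the stored temperatures -> interpolation
print(estimate(otp, 20.0))   # below room temperature -> extrapolation
```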
When the oscillator130sets one or more of a resistance value and a capacitor value according to the error correction value, the oscillator130may include a resistor capacitor (RC) oscillator (seeFIG.10) including one or more of a resistor (R) and a capacitor (C), or a relaxation oscillator (seeFIG.11). In addition, the RC oscillator or relaxation oscillator may include one or more of a variable resistor and a variable capacitor. InFIG.10, ‘Rf’ means a feedback resistance, ‘amp’ means an amplifier and Voutmeans an output voltage. InFIG.11, ‘V−’ means a negative voltage, ‘V+’ means a positive voltage, ‘VDD’ means a positive power, ‘VSS’ means a negative power and ‘Vout’ means an output voltage. When the oscillator130sets a bias according to the error correction value, the oscillator130may include a voltage controlled oscillator (VCO) or a current controlled oscillator (ICO). From the foregoing, the configuration in which the temperature sensor110is included in the oscillator circuit100has been described. In other words, the configuration in which the temperature sensor110is embedded in the semiconductor integrated circuit device has been described. However, an embodiment is not limited thereto, and the temperature sensor110may not be included in the oscillator circuit100as illustrated inFIG.3. In other words, an external temperature sensor310separate from the semiconductor integrated circuit device may exist. The external temperature sensor310may be disposed adjacent to the semiconductor integrated circuit device. In addition, the external temperature sensor310may sense an ambient temperature of the case20of the semiconductor integrated circuit device and output a temperature sensing value. The temperature sensing value output from the external temperature sensor310may be delivered to the error correction circuit120of the oscillator circuit100through a communication interface connected between the external temperature sensor310and the semiconductor integrated circuit device. Herein, the ambient temperature of the case20may be similar to the internal temperature of the semiconductor integrated circuit device. InFIG.3, the error correction circuit120may generate an error correction value corresponding to a temperature sensing value by performing interpolation or extrapolation using the temperature sensing value delivered from the external temperature sensor310and the pre-stored first error correction value and second error correction value. In addition, inFIG.3, the oscillator130may correct the frequency error of the clock signal using the error correction value. In an embodiment, the error correction circuit120may store a first error correction value and a second error correction value in a wafer test process of the semiconductor integrated circuit device. Specifically, in the wafer test process of the semiconductor integrated circuit device, the error correction circuit120may be electrically connected to a probe card512of a test device510as illustrated inFIG.5. In addition, the oscillator130may also be electrically connected to the probe card512. When the oscillator circuit100includes the temperature sensor110, the probe card512and the temperature sensor110may also be electrically connected. AlthoughFIG.5illustrates a configuration in which the probe card512is connected to one semiconductor integrated circuit device, in reality, the probe card512may be electrically connected to a plurality of semiconductor integrated circuit devices designed on a wafer. 
In a state in which the probe card512and the semiconductor integrated circuit device are electrically connected, a tester514may store the first error correction value in the OTP memory410of the error correction circuit120through the process illustrated inFIG.6. Referring toFIG.6, in the wafer test process, the tester514may deliver an initial set value to the oscillator130through the probe card512(S610). Herein, the initial set value may be an initial bias set value or an RC initial set value. When the initial set value is the initial bias set value, the initial set value may include a first control code for coarse adjustment and a second control code for fine adjustment of the output frequency. The tester514may operate the oscillator130at room temperature (S620). Herein, the tester514may adjust the temperature of a jig (not shown) on which the wafer is seated to make the surface temperature of the wafer to a room temperature. For example, the room temperature may be 30° C., and the tester514may adjust the temperature of the jig (not shown) to make the surface temperature of the wafer to 30° C. When the oscillator circuit100includes the temperature sensor110, the surface temperature of the wafer may be known by the tester514checking the temperature value sensed by the temperature sensor110. When the oscillator circuit100does not include the temperature sensor110, the surface temperature of the wafer may be sensed by the test device510. The tester514may check the frequency of the clock signal output by the oscillator130under room temperatures, that is, the output frequency of the clock signal (S630). In addition, the tester514may compare the output frequency of the clock signal with the first target frequency (S640). Herein, the first target frequency may mean an ideal frequency of the clock signal output from the oscillator130under room temperatures. When the output frequency of the clock signal is different from the first target frequency in stage S640, the tester514may adjust the set value of the oscillator130one or more times until the output frequency of the clock signal coincides with the first target frequency (S650). When the output frequency of the clock signal and the first target frequency coincide in stage S640, the tester514designates a set value at a corresponding timing as the first error correction value and store the same in the OTP memory410of the oscillator circuit100(S660). Herein, the tester514may further store the first temperature value corresponding to room temperature in the OTP memory410. The tester514may store the second error correction value in the OTP memory410of the error correction circuit120through the process illustrated inFIG.7. Herein, the probe card512and the semiconductor integrated circuit device are electrically connected. Referring toFIG.7, in the wafer test process, the tester514may deliver an initial set value to the oscillator130through the probe card512(S710). In addition, the tester514may operate the oscillator130under a high temperature (S720). Herein, the tester514may adjust the temperature of a jig (not shown) on which the wafer is seated to make the surface temperature of the wafer to a high temperature. For example, the high temperature may be 90° C., and the tester514may adjust the temperature of the jig (not shown) to make the surface temperature of the wafer to 90° C. The tester514may check the frequency of the clock signal output by the oscillator130under a high temperature, that is, the output frequency of the clock signal (S730). 
In addition, the tester514may compare the output frequency of the clock signal with the second target frequency (S740). Herein, the second target frequency may mean an ideal frequency of the clock signal output from the oscillator130under a high temperature. Under a high temperature condition, the operation of the digital circuit10included in the semiconductor integrated circuit device may deteriorate, or losses in the digital circuit10may occur. In this connection, by setting the second target frequency to a frequency lower than the first target frequency, loss or deterioration of operation of the digital circuit10may be compensated. For example, when the first target frequency is 20 MHz, the second target frequency may be 19.4 MHz. When the output frequency of the clock signal is different from the second target frequency in stage S740, the tester514may adjust the set value of the oscillator130one or more times until the output frequency of the clock signal coincides with the second target frequency (S750). When the output frequency of the clock signal and the second target frequency coincide in stage S740, the tester514designates a set value at a corresponding timing as the second error correction value and stores the same in the OTP memory410of the oscillator circuit100(S760). Herein, the tester514may further store the second temperature value corresponding to a high temperature in the OTP memory410. In an embodiment, the tester514may perform the process ofFIG.7after performing the process ofFIG.6. Conversely, after the process ofFIG.7is performed, the process ofFIG.6may be performed. As described above, in the wafer test process, the first error correction value and the second error correction value stored in the OTP memory410are not changed when the semiconductor integrated circuit device is used. As described above, the oscillator circuit100stores an error correction value under room temperature and an error correction value under a high temperature, and generates an error correction value according to a temperature of the semiconductor integrated circuit device through extrapolation or interpolation using two pre-stored error correction values. Hence, the oscillator circuit100may operate insensitively to temperature changes. In addition, since the oscillator circuit100generates an error correction value through interpolation or extrapolation, it is not necessary to store error correction values corresponding to a plurality of temperatures in the oscillator circuit100. Hereinafter, the process of correcting the frequency error of the clock signal in the oscillator circuit100will be described. FIG.8is a flowchart illustrating an operation process of an oscillator circuit according to an embodiment. Referring toFIG.8, during operation of the semiconductor integrated circuit device, the oscillator circuit100may sense the temperature of the semiconductor integrated circuit device to generate a temperature sensing value (S810). Herein, the temperature of the semiconductor integrated circuit device may be an internal temperature of the case20or an ambient temperature of the case20. In addition, the oscillator circuit100may extract the first error correction value and the second error correction value stored in the OTP memory410, which is an internal memory (S820).
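The wafer-test trimming that produces these stored values (the FIG.6and FIG.7flows described above) can be summarized with the sketch below before returning to the runtime flow ofFIG.8. The helper functions, the fixed step size, and the tolerance check are illustrative assumptions rather than the patent's test procedure; in particular, the polarity of each adjustment step depends on how the set value maps to output frequency in a given oscillator.

```c
#include <math.h>
#include <stdint.h>

/* Hypothetical tester-side helpers; not part of the patent's disclosure. */
extern void  oscillator_apply_set_value(uint16_t set_value);
extern float measure_output_frequency_hz(void);
extern void  otp_store_correction(uint16_t set_value, float temp_c);

/* Trim the oscillator at one temperature point (room or high temperature):
 * apply a set value, measure the output frequency, step the set value until
 * the target is reached, then store the final value as an error correction
 * value in the OTP memory along with the temperature value. */
static uint16_t trim_at_temperature(uint16_t initial_set_value,
                                    float target_hz, float tolerance_hz,
                                    float temp_c)
{
    uint16_t set_value = initial_set_value;
    for (;;) {
        oscillator_apply_set_value(set_value);
        float f = measure_output_frequency_hz();
        if (fabsf(f - target_hz) <= tolerance_hz)
            break;                               /* output coincides with target */
        set_value += (f < target_hz) ? 1 : -1;   /* step polarity is device-dependent */
    }
    otp_store_correction(set_value, temp_c);
    return set_value;
}
```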
Herein, the first error correction value may be a bias set value or RC set value for correcting a frequency error of the clock signal when the temperature of the semiconductor integrated circuit device is room temperature, and the second error correction value may be a bias set value or RC set value for correcting the frequency error of the clock signal when the temperature of the semiconductor integrated circuit device is a high temperature higher than room temperature. The oscillator circuit100may generate an error correction value corresponding to the temperature sensing value using the first error correction value, the second error correction value, and the temperature sensing value (S830). Thereafter, the oscillator circuit100may correct the frequency error of the clock signal according to the internal temperature by setting a bias according to the error correction value or by setting one or more of a resistance value and a capacitor value (S840). The oscillator circuit100may repeat stages S810to S840at regular intervals until the power of the semiconductor integrated circuit device is turned off (S850). When the temperature corresponding to the temperature sensing value is higher than the room temperature and lower than the high temperature in stage S830, the oscillator circuit100may generate the error correction value corresponding the temperature sensing value by performing linear interpolation using the first error correction value, the second error correction value, and the temperature sensing value. When the temperature corresponding to the temperature sensing value in stage S830is lower than the room temperature or higher than the high temperature, the oscillator circuit100may generate the error correction value corresponding the temperature sensing value by performing extrapolation using the first error correction value, the second error correction value, and the temperature sensing value. Hereinafter, an example in which the oscillator circuit100is applied to a source driver IC of a display device will be described. FIG.12is a diagram for an example in which an oscillator circuit according to an embodiment is applied to a source driver IC. Referring toFIG.12, the semiconductor integrated circuit device according to an embodiment may be a source driver IC1200, the oscillator circuit100may be a clock recovery circuit1210(Clock Data Recovery), and the digital circuit may be a serial-parallel conversion circuit1220included in the source driver IC1200, a shift register circuit (not shown), or the like. The clock recovery circuit1210may receive a communication signal (CED, Clock Embedded Data) from a timing controller (not shown), and may recover a communication clock CLK included in the communication signal CED. The serial-to-parallel conversion circuit1220may receive a communication signal from a timing controller (not shown). The serial-to-parallel conversion circuit1220may convert serial data included in the communication signal into parallel data by using the communication clock CLK recovered by the clock recovery circuit1210. Herein, serial data and parallel data may include image data. FIG.13is a diagram for explaining the configuration of a clock recovery circuit including an oscillator circuit according to an embodiment. Referring toFIG.13, the clock recovery circuit1210may include the temperature sensor110, the error correction circuit120, and the oscillator130. 
In addition, the clock recovery circuit1210may further include a divider1310, a phase detector1320, a charge pump1330, and a loop filter1340. The divider1310may divide the output frequency of the clock signal OSC_CLK output from the oscillator130by a certain ratio. Hence, the divider1310may output the feedback signal FEB_CLK having a frequency obtained by dividing the output frequency of the clock signal OSC_CLK by a certain ratio. The phase detector1320detects a phase difference between the input signal IN_CLK and the feedback signal FEB_CLK having a frequency obtained by dividing the communication frequency, which is the frequency of the communication signal CED, by a certain ratio, and outputs an Up signal or a Down signal therefor. When the phase difference between the input signal IN_CLK and the feedback signal FEB_CLK is reduced, the frequency and pulse width of outputting an Up signal or a Down signal from the phase detector1320may be reduced. The charge pump1330operates to charge or discharge charges in the capacitor of the loop filter1340according to the pulse width of the Up signal or the Down signal of the phase detector1320. Herein, the charge pump1330may charge electric charges to the capacitor of the loop filter1340when it is an Up signal, and discharge electric charges from the capacitor of the loop filter1340when it is a Down signal. The loop filter1340may increase or decrease a control voltage Vcby charging or discharging electric charges to and from the capacitor by the charge pump1330. In addition, the loop filter1340may output a control signal having a control voltage Vc. Herein, the loop filter1340may remove unnecessary components, such as harmonics, from an Up signal or a Down signal. InFIG.13, the oscillator130may include a bias generation circuit132and a controlled oscillator134, VCO/ICO. The bias generation circuit132of the oscillator130may receive an error correction value from the error correction circuit120. In addition, the bias generation circuit132may set the size of a bias according to the error correction value. Here, the bias may be a bias current Idcor a bias voltage Vdc. The controlled oscillator134may receive a bias whose size is set according to the error correction value from the bias generation circuit132. Hence, the controlled oscillator134may correct a frequency error of the clock signal OSC_CLK according to the internal temperature of the source driver IC1200. The controlled oscillator134outputting the clock signal OSC_CLK having the frequency error corrected may adjust the phase of the clock signal OSC_CLK using the control voltage Vc. When the phase difference between the input signal IN_CLK and the feedback signal FEB_CLK is eliminated through phase adjustment of the clock signal OSC_CLK, the clock recovery circuit1210may recover the communication clock CLK. In other words, the clock recovery circuit1210may recover the communication clock CLK using the clock signal OSC_CLK for which the frequency error is corrected. | 30,737 |
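As a rough behavioral model only, and with made-up gains and names that do not come from the disclosure, the division of labor described above, where the error correction value sets the nominal bias of the controlled oscillator134while the loop's control voltage Vctrims the frequency and phase around that point, might be captured as follows.

```c
#include <stdint.h>

/* Toy model of the corrected oscillator: the error correction code sets the
 * bias (coarse, temperature-compensated frequency), and the PLL control
 * voltage Vc only adjusts the residual around that operating point. */
typedef struct {
    float bias_ua_per_code; /* hypothetical bias-DAC gain                  */
    float hz_per_ua;        /* hypothetical current-to-frequency gain      */
    float hz_per_volt;      /* hypothetical oscillator gain for Vc         */
} cdr_model_t;

static float osc_output_frequency(const cdr_model_t *m,
                                  uint16_t error_correction_code, float vc)
{
    float bias_ua = m->bias_ua_per_code * (float)error_correction_code;
    return m->hz_per_ua * bias_ua + m->hz_per_volt * vc;
}
```

Even in this toy form the point of the split is visible: the temperature-based correction moves the center frequency through the bias, so the phase-locked loop only has to absorb the remaining error with Vc.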
11863186 | DETAILED DESCRIPTION This disclosure describes circuits and techniques for identifying potential problems with control signals for power switches. More specifically, this disclosure describes the use of registers, e.g., volatile or non-volatile storage elements, configured to count the rising and/or falling edges of pulse modulation (PM) signals within driver circuits or other control circuits. The PM signals may comprise so-called pulse width modulation (PWM) signals, or other types of modulation signals, such as pulse frequency modulation signals, pulse duration modulation signals, pulse density modulation signals, or other types of modulation signals used for the control of power switches. A processor, for example, may provide PM control signals to a driver circuit, and the driver circuit may generate and output PM drive signals to a power switch, wherein the PM drive signals are based on the PM control signals. By counting the edges of PM signals (e.g., rising and falling edges), undesirable changes or errors in the PM signals may be identified. For example, if the counts associated with PM control signals do not match the counts associated with PM drive signals, this may indicate a problem with one or more circuit elements within a driver circuit. In some cases, registers may be used to track the PM signals in many different circuit locations within a power switch system. In some cases, counter mismatch may be identified (e.g., by reading and comparing the content of two or more different registers) after an actual circuit failure, in order to help identify or diagnose the cause of the circuit failure. In other cases, counter mismatch may be identified within a driver circuit as a potential indicator of PM signaling problems that could lead to a future circuit failure. Thus, in some cases, a driver circuit may be configured to react to counter mismatch, such as by disabling operation, limiting operation of a power switch, or sending an alert to the processor. Accordingly, in some cases, the driver circuit or the processor that sends the PM control signals to the driver circuit may be configured to immediately react to counter mismatch. In other examples, however, the data stored in registers may be stored for use or analysis after device failure, e.g., for read-out by a technician, in order to help identify the cause of the device failure or the location within the circuit where PM signals may have been compromised. Analysis of circuit failures has shown that undesirable problems can sometimes manifest in PM signals within a power switch circuit system. PM signaling problems may be caused for a variety of reasons, such as circuit problems, circuit layout problems, poor circuit design, circuit noise, problems with a printed circuit board (PCB), circuit substrate issues, the positioning of circuit elements within a circuit system, or other reasons. Circuit problems may also be caused by aging effects of circuit elements, excessive heat in the circuit, or possibly environmental exposure. For these and other reasons, it is often desirable to monitor circuit parameters in one or more circuit locations. For example, aging of DC-link capacitors or aging of ceramic capacitors for signal filters can cause issues with PM signals. Moreover, overly hot electrons in a MOSFET trench bottom can sometimes lead to undesirable increases in switching speed.
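A minimal sketch of the edge-counting and comparison idea, assuming a periodically sampled PM signal level, an 8-bit wrapping register, and illustrative names that are not taken from the disclosure:

```c
#include <stdbool.h>
#include <stdint.h>

/* Edge-counting register: each rising and falling edge of a sampled PM
 * signal increments an 8-bit count that simply wraps (overflows) rather
 * than saturating. */
typedef struct {
    bool    last_level;  /* previously sampled PM signal level              */
    uint8_t count;       /* wrapping 8-bit count of rising and falling edges */
} pm_edge_counter_t;

static void pm_edge_counter_sample(pm_edge_counter_t *c, bool level)
{
    if (level != c->last_level) {  /* rising or falling edge detected */
        c->count++;                /* uint8_t wraps naturally at 256  */
        c->last_level = level;
    }
}

/* Compare two counter snapshots taken at the same quiet moment; because both
 * wrap at the same width, any nonzero modular difference indicates a lost or
 * extra edge somewhere between the two circuit locations. */
static bool pm_counts_match(uint8_t count_a, uint8_t count_b)
{
    return (uint8_t)(count_a - count_b) == 0u;
}
```

In a hardware implementation the edge detection would be performed by the register logic itself rather than by polling, but the counting and comparison behavior is the same.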
Backchannel communication within a circuit may be desirable to communicate circuit information, circuit signals, or circuit operational parameters over a galvanic isolation barrier. In many situations, for example, driver circuits may include a galvanic isolation barrier that galvanically isolates a low-voltage domain associated with the processor from a high-voltage domain associated with the power switch. In such systems, backchannel communication may facilitate communication across the galvanic isolation barrier, such as by using optical signals, data communication over a secondary transformer, communication over a primary transformer when PM signals are disabled, or other techniques. Real-time back-channel communication is sometimes desirable in driver circuits. Real-time backchannel communication may refer to backchannel communication that is synchronous with PM signaling. Backchannel communication and especially real-time backchannel communication, however, can be costly from a circuit standpoint, often requiring additional circuit pins or elements to facilitate such communication over the galvanic isolation barrier. In many cases, microcontrollers are positioned on different circuit boards than power switch gate driver circuits, in which case large connectors may be needed. Circuit packages are also pinning limited. More pins in circuit packages may result in a higher pin pitch, which may require higher cleanliness requirements in circuit production lines. Backchannels may also require insulation in some circuits, which further increases circuit costs. Moreover, in situations where a forward channel exhibits circuit noise, the backchannel may exhibit the same problems. Filtering can sometimes make it challenging for a backchannel to accurately identify or communicate circuit events in real time, e.g., synchronous with PM signaling. In some examples, rather than real-time gate monitoring over a galvanic isolation barrier, this disclosure implements a circuit tracking scheme that counts and records circuit events for later readout or analysis, e.g., when PM signals are disabled or possibly after a circuit has failed. FIG.1is a block diagram of a system100that comprises a driver circuit102configured to control a power switch circuit104according to this disclosure. Power switch circuit104may comprise a power transistor. In the example ofFIG.1, the power transistor within power switch circuit104may comprise an insulated gate bipolar transistor (IGBT), or a metal-oxide-semiconductor field effect transistor (MOSFET). The MOSFET may be formed in silicon, in which case the MOSFET may be called a silicon MOSFET. Alternatively, the MOSFET may be formed in another semiconductor material, such as silicon carbide (SiC) or gallium nitride (GaN), in which case the MOSFET may be called a SiC MOSFET or a GaN MOSFET. Driver circuit102may comprise a galvanic isolation barrier101that separates a first voltage domain (e.g., a low voltage ‘LV’ domain) from a second voltage domain (e.g., a high voltage ‘HV’ domain). Galvanic isolation barrier101, for example, can be implemented with one or more coiled transformers, one or more coreless transformers, one or more capacitors, or other elements that are capable of galvanically isolating two different voltage domains within driver circuit102. Driver circuit102may comprise an input pin112configured to receive PM control signals from a processor106.
Driver circuit102may also comprise an output pin114galvanically isolated from the input pin112, wherein the driver circuit is configured to deliver PM drive signals from the output pin114to a power switch (e.g., power switch circuit104) to control ON/OFF switching of the power switch. An output register120of driver circuit102may be configured to store counts associated with the PM drive signals. In this way, driver circuit102can store at least a partial history of the PM drive signals applied to power switch circuit104, e.g., for later readout. In some examples, a plurality of registers may be used throughout system100to track PM signals at different circuit locations, which can facilitate comparisons amongst the PM signals at the different locations. In the example ofFIG.1, driver circuit102includes both a first register, i.e., output register120, configured to store counts of PM drive signals, and a second register, i.e., an input register122, configured to store counts of PM control signals received from processor106. Moreover, processor106may comprise yet another register, i.e., processor register124configured to store counts of the PM control signals sent by processor106. By comparing the contents of two or more of registers120,122, or124, counter mismatch can be identified to indicate a potential PM signaling problem within system100. As noted above, in some cases, counter mismatch may be identified (e.g., by reading and comparing the content of two or more different registers of system100) after an actual circuit failure, in order to help identify or diagnose the cause of the circuit failure. In other cases, counter mismatch may be identified within driver circuit102, e.g., by periodically comparing the content of output register120and input register122, as a potential indicator of PM signaling problems that could lead to a future circuit failure. Thus, in some cases, driver circuit102may be configured to react to counter mismatch, such as by disabling operation or limiting operation of power switch circuit104, or by communicating an alert to processor106. In other examples, however, the data stored in registers120,122, and124may be stored for use or analysis after device failure, e.g., for read-out by a technician, in order to help identify the cause of the device failure or the location within the circuit where PM signals may have been compromised. In general, registers120,122, and124may each comprise volatile or non-volatile memory or other storage elements, and the memory may be readable by processor106. Output register120, in some examples, may include a back-channel connection across galvanic isolation barrier101to facilitate readout by processor106. In some examples according to this disclosure, backchannel communication may be used to facilitate readout of output register120, but this backchannel communication may be non-real-time in the sense that the communication is slower than PM signaling and non-synchronous with PM signaling, which can help avoid complexities and challenges associated with real-time back channel communication. Input register122and processor register124may be connected to and readable by processor106. In some examples, output register120and input register122of driver circuit102comprise non-volatile memory that is readable by processor106when the PM drive signals are disabled.
In some examples, each of registers120,122, and/or124may comprise a so-called overflow register that is configured to store counts in an overflowing manner, possibly storing least significant bits. Overflow registers may be useful to help limit the size of the registers and can provide an indicator of mismatch in counts between the registers without requiring an actual count of millions of switching events. Overflow registers may be configured to store N bits of data in a first in first out (FIFO) manner. 8-bit registers may be sufficient to achieve tracking of counter mismatch, e.g., storing least significant bits. In some examples, N may represent any integer greater than 3 and less than 17. The registers, e.g., each of registers120,122, and124, may comprise counters that count a number of rising edges and falling edges of the PM signals. In some implementations, the counters could be configured to count only the rising edges or only the falling edges, but counts of both rising and falling edges are usually desirable. FIG.2is an illustration of four graphs showing different exemplary signals associated with power switch control. PM signals22may comprise PWM control signals sent from a microcontroller. PM signals24may comprise the corresponding PWM control signals received by a driver circuit. PM signals22and24are similar (and have the same number of rising and falling edges) meaning that no signaling problem exists. The details ofFIG.2and other examples of this disclosure are generally described with regard to power switches that are normally in an OFF state, where a gate voltage turns the power switch ON. Of course, the same principles of this disclosure may also be used for drivers of power switches that are normally in an ON state, where a gate voltage turns the power switch OFF. PM signals26may comprise PWM drive signals within a driver circuit, e.g., those sent from the driver circuit to a gate of a power switch. PM signals26and24are similar (and have the same number of rising and falling edges) meaning that no signaling problem exists. A slight delay may exist between PM signals26relative to PM signals24due to signal delay through the driver circuit. Signal28may comprise the voltage drop over the power switch, e.g., the gate to emitter voltage, caused when PM signal26is applied to the gate of the power switch. Signal28generally corresponds to signal26, but signal28may include slopes in the turn-on and turn-off of the power switch, and may also include so-called "Miller" plateaus associated with power switch operation. The four graphs inFIG.2may generally represent signals associated with normal switch operation, without any signaling problems. FIG.3is another illustration of four graphs showing different exemplary signals associated with power switch control. PM signals32may comprise PWM control signals sent from a microcontroller. PM signals34may comprise the corresponding PWM control signals received by a driver circuit. PM signals32and34are dissimilar (and have different numbers of rising and falling edges) meaning that a signaling problem may exist, e.g., due to glitch301. PM signals36may comprise PWM drive signals within a driver circuit, e.g., those sent from the driver circuit to a gate of a power switch. PM signals36and34are similar (and have the same number of rising and falling edges) meaning that glitch302is present. A slight delay may exist between PM signals36relative to PM signals34due to signal delay through the driver circuit.
Signal38may comprise the voltage drop over the power switch, e.g., the gate to emitter voltage, caused when PM signal36is applied to the gate of the power switch. Signal38may include slopes in the turn-on and turn-off of the power switch and may also include so-called "Miller" plateaus associated with power switch operation. Because of glitches301and302, signal38may include undesirable signal artifact303, which can put undesirable stress on the power switch. The four graphs inFIG.3may generally represent signals associated with switch operation, in the presence of a signaling problem, possibly of unknown cause. Thus, by tracking and counting edges of PM signals, counter mismatch may be used to identify the existence of glitches301and302. The signal monitoring and counting techniques of this disclosure may have benefits relative to direct gate monitoring of a power switch. In this case, the root cause of errors can be more easily identified, and the relative independence of different signals can be compared. In some examples, such counters can be used merely for debugging purposes, without use in the field, although the use of counters to track circuit performance in the field is often desirable according to this disclosure. Counting registers may be very inexpensive to implement within a gate driver circuit system, and in some cases, memory may already be available for other reasons at one or more of the different circuit locations. Power switch circuits may fail for reasons unrelated to the power switch or the gate driver circuits. Failures, for example, can be caused by circuit noise or undesirable circuit layout, and customers may experience circuit failures without evidence that the PWM signals sent to the power switch were actually correct. Such situations can be frustrating to customers and to circuit suppliers where circuits fail without a way to identify the cause of failure. According to this disclosure, gate driver circuits and other circuits associated with power switch control (such as the microcontroller that sends control signals to the gate driver circuit) may include pulse counters, e.g., pulse count registers, to track and store indications of rising and falling edges of PM signals. A microcontroller may count each PWM pulse at the microcontroller output, and a gate driver may similarly count each PWM pulse at the gate driver input. Moreover, the gate driver may count each PWM pulse at the gate driver output and possibly at other locations, such as on a gate clamp pin. The count registers may be readable by the microcontroller, such as when PM signals are disabled. PM signals may be viewed as being disabled any time PM signals are not being sent or when the driver is not enabled, such as via an enable signal on a separate pin. In some examples, a processor may read out a DUMP failure report as part of a failure analysis. Readout of count registers may be performed at any time the driver is not active, such as at stop operation (e.g., Enable=Low) or possibly during down time when PM signals are not being sent or received by the driver circuit. In some examples, counter mismatch can be used by circuits or technicians to help pinpoint the cause of errors or the cause of device failure. FIG.4is a circuit diagram of a system that comprises a driver circuit402configured to control a power switch circuit403according to this disclosure.
Driver circuit402may comprise a galvanic isolation barrier430that separates a first voltage domain (e.g., a low voltage domain associated with processor401) from a second voltage domain (e.g., a high voltage domain associated with power switch circuit403). Galvanic isolation barrier430, for example, can be implemented with one or more coiled transformers, one or more coreless transformers, one or more capacitors, or other elements that are capable of galvanically isolating two different voltage domains within driver circuit402. Driver circuit402may comprise an input pin configured to receive PM control signals (PWM_in) from a processor401. One or more input elements410, such as amplifiers, filters, or other signal conditioning components, may filter, amplify, or otherwise condition the received input, e.g., to remove unwanted noise. Driver circuit402may also comprise an output pin galvanically isolated from the input pin, wherein the driver circuit is configured to deliver PM drive signals (PWM_out) from the output pin to a gate of a power switch circuit403to control ON/OFF switching of the power switch. One or more output elements420, such as amplifiers, filters, or other signal conditioning components, may filter, amplify, or otherwise condition the signals, e.g., to remove unwanted noise. A gate resistor405may be included between driver circuit402and power switch circuit403. An output counter (C3) may comprise a storage register configured to store counts associated with the PM drive signals. In this way, driver circuit402can store at least a partial history of the PM drive signals applied to power switch circuit403, e.g., for later readout. The system ofFIG.4may include a plurality of counters to track PM signals at different circuit locations, which can facilitate comparisons amongst the PM signals at the different locations. In the example ofFIG.4, driver circuit402includes both an output counter (C3) configured to store counts of PM drive signals, and an input counter (C2), i.e., an input register configured to store counts of PM control signals received from processor401. Moreover, processor401may comprise yet another register, i.e., processor counter (C1) configured to store counts of the PM control signals sent by processor401. By comparing the contents of two or more of counters C1, C2and/or C3, counter mismatch can be identified to indicate a potential PM signaling problem within the system. FIG.5is another illustration of four graphs showing different exemplary signals associated with power switch control, and counts associated with some of the graphs.FIG.5is similar toFIG.2, butFIG.5also shows counters C1, C2, and C3, which may correspond to counters C1, C2, and C3ofFIG.4. As can be seen inFIG.5, the counts for C1, C2, and C3are all the same, i.e., count=n, for each of C1, C2, and C3. In this case, there are no signaling problems within the system. FIG.6is another illustration of four graphs showing different exemplary signals associated with power switch control, and counts associated with some of the graphs.FIG.6is similar toFIG.5, except inFIG.6, the PWM_out signal is terminated early, e.g., due to a desaturation (DESAT) event. Accordingly, the Vge signal is also terminated early (as shown in bold) due to the DESAT event. The counters shown inFIG.6, i.e., C1, C2, and C3, which may correspond to counters C1, C2, and C3ofFIG.4, are all in sync, similar toFIG.5.
In other words, as can be seen inFIG.6, the counts for C1, C2, and C3are all the same, i.e., count=n, for each of C1, C2, and C3. In this case, there are no signaling problems within the system. The DESAT event does not affect the counts, but merely the timing of when the counts may occur. FIG.7is another illustration of four graphs showing different exemplary signals associated with power switch control, and counts associated with some of the graphs.FIG.7is similar toFIG.3, butFIG.7also shows counters C1, C2, and C3, which may correspond to counters C1, C2, and C3ofFIG.4. As can be seen inFIG.7, the counts for C1are different than the counts for C2and C3, i.e., count=n for C1and count=n+x for C2and C3. In this case, there is a signaling problem within the system. The glitches at positions701and702cause extra counts by C2and C3, and result in unwanted artifact in Vge over the power switch, as shown at703. FIG.8is a block diagram of a system800that comprises a driver circuit802configured to control a power switch circuit804according to this disclosure. Driver circuit802may comprise a galvanic isolation barrier801that separates a first voltage domain (e.g., a low voltage ‘LV’ domain) from a second voltage domain (e.g., a high voltage ‘HV’ domain). Galvanic isolation barrier801, for example, can be implemented with one or more coiled transformers, one or more coreless transformers, one or more capacitors, or other elements that are capable of galvanically isolating two different voltage domains within driver circuit802. Driver circuit802may comprise an input pin812configured to receive PM control signals from a processor806. Driver circuit802may also comprise an output pin814galvanically isolated from the input pin812, wherein the driver circuit is configured to deliver PM drive signals from the output pin814to a power switch (e.g., power switch circuit804) to control ON/OFF switching of the power switch within power switch circuit804. An output register820of driver circuit802may be configured to store counts associated with the PM drive signals. In this way, driver circuit802can store at least a partial history of the PM drive signals applied to power switch circuit804. In the example ofFIG.8, output register820is associated with a corresponding "shadow" output register825. Whereas output register820is located in the high voltage domain, shadow output register825is located in the low voltage domain. Shadow output register825may store a shadow of output register820, and shadow output register825may be updated with the content of output register820via backchannel850when driver circuit802is disabled or when PM signals are inactive or disabled. In some examples, driver circuit802may include an enable pin configured to receive an enable or disable signal from processor806or from another system-level component. When driver circuit802is disabled or when PM signals are inactive or disabled, shadow output register825can be updated to store the content of output register820. This allows for easy access to the count of PM drive signals being output by driver circuit802insofar as shadow output register825may be located in the same voltage domain as processor806and may be readable by processor806. InFIG.8, backchannel850is illustrated as being inside driver circuit802, but in other examples, backchannel850may also be external relative to driver circuit802.
In some examples, shadow output register825can be viewed as a first register, input register822can be viewed as a second register, and output register820can be viewed as a third register. Shadow output register825is located in a first voltage domain (i.e., the LV domain) and configured to store a shadow of output register820when the PM drive signals are disabled. Input register822is galvanically isolated from output register820, and input register822is located in the first voltage domain (i.e., the LV domain) and output register820is located in a second voltage domain (i.e., the HV domain). PM drive signals may be enabled or disabled via an enable pin826. Enable pin826may be configured to receive enable or disable signals from a processor or another circuit. Alternatively, PM drive signals may also be enabled via a driver reset signal or possibly via software running on processor806. In any case, shadow output register825may be updated with the contents of output register820in response to the PM drive signals being disabled. Processor806can then read the contents of input register822and shadow output register825to determine if counter mismatch exists. The processor can also compare the contents of input register822and shadow output register825with that of processor register824to identify any mismatch. In some examples, processor806may cause driver circuit802to disable operation of power switch circuit804in response to mismatch among registers, but in other cases, mismatch among registers may be identified after failure of driver circuit802or power switch circuit804in order to allow technicians to identify the cause of the failure. In some examples, a plurality of registers may be used throughout system800to track PM signals at different circuit locations, which can facilitate comparisons amongst the PM signals at the different locations. In the example ofFIG.8, driver circuit802includes both a first register, i.e., shadow output register825configured to store counts of PM drive signals, and a second register, i.e., an input register822configured to store counts of PM control signals received from processor806. Driver circuit802also includes a third register, i.e., output register820. Moreover, processor806may comprise yet another register, i.e., processor register824configured to store counts of the PM control signals sent by processor806. By comparing the contents of two or more of registers825,822, or824(all of which are located in the same voltage domain), counter mismatch can be identified to indicate a potential PM signaling problem within system800. Again, in some cases, counter mismatch may be identified (e.g., by reading and comparing the content of two or more different registers of system800) after an actual circuit failure, in order to help identify or diagnose the cause of the circuit failure. In other cases, counter mismatch may be identified within driver circuit802, e.g., by periodically comparing the content of shadow output register825and input register822when PM drive signals are disabled, as a potential indicator of PM signaling problems that could lead to a future circuit failure. Thus, in some cases, driver circuit802or processor806may be configured to react to counter mismatch, such as by disabling operation or limiting operation of power switch circuit804.
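The shadow-register scheme just described might be modeled in firmware terms as follows; the backchannel helper, the structure layout, and the comparison policy are assumptions for illustration, not the patent's interface. Because the counters are equal-width wrapping registers, direct equality of snapshots taken while the PM drive signals are disabled is a wrap-safe mismatch check.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical backchannel transfer across the galvanic isolation barrier;
 * assumed usable only while PM drive signals are disabled. */
extern uint8_t backchannel_read_hv_output_count(void);

typedef struct {
    uint8_t input_count;    /* LV-side count of PM control signal edges  */
    uint8_t shadow_output;  /* LV-side shadow of the HV output register  */
} driver_lv_registers_t;

/* On disable: refresh the shadow register from the HV domain, then compare
 * the LV-domain copies (and the processor's own count) to detect mismatch. */
static bool check_on_disable(driver_lv_registers_t *regs,
                             uint8_t processor_count)
{
    regs->shadow_output = backchannel_read_hv_output_count();
    return (regs->input_count == processor_count) &&
           (regs->shadow_output == regs->input_count);
}
```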
In other examples, however, the data stored in registers825,822, and824may be stored for use or analysis after device failure, e.g., for read-out by a technician, in order to help identify the cause of the device failure or the location within the circuit where PM signals may have been compromised. In general, registers820,822,824, and825may each comprise volatile or non-volatile memory or other storage elements. Some or all of registers820,822,824, and825may be readable by processor806. Output register820, in some examples, may include a back-channel connection across galvanic isolation barrier801to facilitate readout by processor806. Registers822,825, and824may be connected to and readable by processor806. In some examples, output register820may be unreadable by processor806directly, but shadow output register825can periodically store the contents of output register820so that the data is accessible to processor806, e.g., when PM signals are disabled. In some examples, shadow output register825and input register822of driver circuit802comprise non-volatile memory that is readable by processor806when the PM drive signals are disabled. In some examples, each of registers820,822,824, and/or825may comprise a so-called overflow register that is configured to store counts in an overflowing manner, possibly storing least significant bits. Overflow registers may be useful to help limit the size of the registers and can provide an indicator of mismatch in counts between the registers without requiring an actual count of millions of switching events. Overflow registers may be configured to store N bits of data in a first in first out (FIFO) manner. 8-bit registers may be sufficient to achieve tracking of counter mismatch, e.g., storing least significant bits. In some examples, N may represent any integer greater than 3 and less than 17. The registers, e.g., each of registers820,822,824, and825, may comprise counters that count a number of rising edges and falling edges of the PM signals. In some implementations, the counters could be configured to count only the rising edges or only the falling edges. However, counts of both rising and falling edges are desirable in many situations. Registers located in the low voltage domain may comprise non-volatile memory that is readable by the processor when the PM drive signals are disabled. Registers in the high voltage domain may comprise non-volatile memory that is updated to a corresponding shadow register in the low voltage domain when the PM drive signals are disabled. Again, in some examples, driver circuit802may be configured to disable the PM drive signals in response to identifying a mismatch between the shadow output register825and input register822, following an update to shadow output register825with the contents of output register820. In other examples, register readout may occur after device failure, e.g., by a technician, in order to help diagnose the cause of the device failure. FIG.9is another circuit diagram of a system according to this disclosure. The system ofFIG.9comprises a first driver circuit902A configured to control a high side power switch circuit903A according to this disclosure. Moreover, in the system ofFIG.9, a second driver circuit902B is configured to control a low side power switch circuit903B.
High side power switch circuit903A and low side power switch circuit903B may form a half bridge configured to deliver power at a switch node904positioned between high side power switch circuit903A and low side power switch circuit903B. Driver circuits902A,902B may each comprise a galvanic isolation barrier905A,905B that separates a first voltage domain from a second voltage domain. Galvanic isolation barriers905A,905B, for example, can be implemented with one or more coiled transformers, one or more coreless transformers, one or more capacitors, or other elements that are capable of galvanically isolating two different voltage domains within driver circuits902A,902B. Driver circuits902A,902B may each comprise an input pin configured to receive PM control signals (PWM_in) from a processor901. One or more input elements, such as amplifiers, filters, or other signal conditioning components, may filter, amplify, or otherwise condition the received input, e.g., to remove unwanted noise. Driver circuits902A,902B may also each comprise an output pin galvanically isolated from the respective input pin, wherein each driver circuit902A,902B is configured to deliver PM drive signals (PWM_out) from the respective output pin to a gate of the respective power switch circuit903A,903B to control ON/OFF switching of the respective power switch. One or more output elements, such as amplifiers, filters, or other signal conditioning components, may filter, amplify, or otherwise condition the output, e.g., to remove unwanted noise. Gate resistors may be included between driver circuit902A and power switch circuit903A and between driver circuit902B and power switch circuit903B. Output counters (C3and C3′) may comprise storage registers configured to store counts associated with the PM drive signals for driver902A and driver902B. In this way, driver circuits902A and902B can each store at least a partial history of the PM drive signals applied to power switch circuits903A,903B, e.g., for later readout. A plurality of counters may be used throughout the system ofFIG.9to track PM signals at different circuit locations, which can facilitate comparisons amongst the PM signals at the different locations. In the example ofFIG.9, each driver circuit includes both an output counter (C3and C3′) configured to store counts of PM drive signals, and input counters (C2_1, C2_2, C2_1′, and C2_2′), i.e., input registers configured to store counts of PM control signals received from processor901. In this case, both driver902A and driver902B may each receive the input PM control signals for both driver circuits, which can help to ensure that both power switches are not ON simultaneously. Moreover, processor901may comprise additional registers, i.e., processor counters (C1_P1and C1_P2) configured to store counts of the PM control signals sent by processor901. By comparing the contents of two or more of counters C1_P1, C2_1, and C3, counter mismatch can be identified for driver902A to indicate a potential PM signaling problem within the system. Similarly, by comparing the contents of two or more of counters C1_P2, C2_2′, and C3′, counter mismatch can be identified for driver902B to indicate a potential PM signaling problem within the system. FIG.10is another circuit diagram of a system according to this disclosure.FIG.10is similar toFIG.9in many respects. The system ofFIG.10comprises a first driver circuit1002A configured to control a high side power switch circuit1003A according to this disclosure.
Moreover, in the system ofFIG.10, a second driver circuit1002B is configured to control a low side power switch circuit1003B. High side power switch circuit1003A and low side power switch circuit1003B may form a half bridge configured to deliver power at a switch node1004positioned between high side power switch circuit1003A and low side power switch circuit1003B. Driver circuits1002A,1002B may each comprise a galvanic isolation barrier1005A,1005B that separates a first voltage domain from a second voltage domain. Galvanic isolation barriers1005A,1005B, for example, can be implemented with one or more coiled transformers, one or more coreless transformers, one or more capacitors, or other elements that are capable of galvanically isolating two different voltage domains within driver circuits1002A,1002B. Driver circuits1002A,1002B may each comprise an input pin configured to receive PM control signals (PWM_in) from a processor1001. One or more input elements, such as amplifiers, filters, or other signal conditioning components, may filter, amplify, or otherwise condition the received input, e.g., to remove unwanted noise. Driver circuits1002A,1002B may also each comprise an output pin galvanically isolated from the respective input pin, wherein each driver circuit1002A,1002B is configured to deliver PM drive signals (PWM_out) from the respective output pin to a gate of the respective power switch circuit1003A,1003B to control ON/OFF switching of the respective power switch. One or more output elements, such as amplifiers, filters, or other signal conditioning components, may filter, amplify, or otherwise condition the output, e.g., to remove unwanted noise. Gate resistors may be included between driver circuit1002A and power switch circuit1003A and between driver circuit1002B and power switch circuit1003B. Output counters (C3and C3′) may comprise storage registers configured to store counts associated with the PM drive signals for driver1002A and driver1002B. In this way, driver circuits1002A and1002B can each store at least a partial history of the PM drive signals applied to power switch circuits1003A,1003B, e.g., for later readout. A plurality of counters may be used throughout the system ofFIG.10to track PM signals at different circuit locations, which can facilitate comparisons amongst the PM signals at the different locations. The counters C1_P1, C1_P2, C2_1, C2_2, C2_1′, C2_2′, C3, and C3′ are similar to those shown inFIG.9and may operate in the manner explained above in relation toFIG.9. In addition, driver circuits1002A and1002B may each include a so-called interlock counter. For example, driver circuit1002A includes counter C_Interlock, and driver circuit1002B includes counter C_Interlock′. The interlock counters may be configured to count instances where the high side PM signals and the low side PM signals are both simultaneously high. The details of this disclosure are generally described with regard to power switches that are normally in an OFF state, where a gate voltage turns the power switch ON. Of course, interlock counters may also be configured to count the ON state of power switches that are normally in the ON state, where the gate voltage turns the power switch OFF. FIG.11Ais a depiction of high side and low side PM signals, which may correspond to PM control signals or PM drive signals. As shown inFIG.11A, high side and low side PM signals are generally complementary in the sense that high side is ON when low side is OFF and low side is ON when high side is OFF.
Sometimes, however, problems can occur in which both high side and low side PM signals are ON simultaneously, e.g., possibly for short periods.FIG.11Billustrates a possible situation where high side and low side PM signals are ON simultaneously (as shown at positions1101and1102). Interlock counters (C_Interlock and C_Interlock′) may be configured to count instances where the high side PM signals and the low side PM signals are both simultaneously high, such as instances shown at locations1101and1102. Interlock counters (C_Interlock and C_Interlock′) may include signal logic to identify these situations in the PM control signals. Alternatively, in some cases, interlock circuitry may already exist within the driver circuit to ensure that PM drive signals do not simultaneously activate both power switch circuits1003A,1003B. If interlock circuitry already exists in the driver circuits, then interlock counters (C_Interlock and C_Interlock′) may be configured to count instances where the interlock circuitry is activated. In addition to the counting of PM control signals and PM drive signals, counting instances of interlock or instances where PM control signals overlap, such as shown at locations1101and1102ofFIG.11B, can be helpful for diagnosing circuit problems in driver circuits1002A and1002B. FIG.12is another circuit diagram of a system consistent with this disclosure. The system shown inFIG.12is similar to that ofFIG.4in many respects. The system ofFIG.12includes a driver circuit1202configured to control a power switch circuit1203according to this disclosure. Driver circuit1202may comprise a galvanic isolation barrier1230that separates a first voltage domain (e.g., a low voltage domain associated with processor1201) from a second voltage domain (e.g., a high voltage domain associated with power switch circuit1203). Galvanic isolation barrier1230, for example, can be implemented with one or more coiled transformers, one or more coreless transformers, one or more capacitors, or other elements that are capable of galvanically isolating two different voltage domains within driver circuit1202. Driver circuit1202may comprise an input pin configured to receive PM control signals (PWM_in) from a processor1201. One or more input elements, such as amplifiers, filters, or other signal conditioning components, may filter, amplify, or otherwise condition the received input, e.g., to remove unwanted noise. Driver circuit1202may also comprise an output pin galvanically isolated from the input pin, wherein the driver circuit is configured to deliver PM drive signals (PWM_out) from the output pin to a gate of a power switch circuit1203to control ON/OFF switching of the power switch. One or more output elements, such as amplifiers, filters, or other signal conditioning components, may filter, amplify, or otherwise condition the signals, e.g., to remove unwanted noise. A gate resistor may be included between driver circuit1202and power switch circuit1203. An output counter (C3) may comprise a storage register configured to store counts associated with the PM drive signals. In this way, driver circuit1202can store at least a partial history of the PM drive signals applied to power switch circuit1203, e.g., for later readout. Similar to other examples, in the example shown inFIG.12, a plurality of counters may be used throughout the system to track PM signals at different circuit locations, which can facilitate comparisons amongst the PM signals at the different locations.
In the example ofFIG.12, driver circuit1202includes both an output counter (C3) configured to store counts of PM drive signals, and an input counter (C2), i.e., an input register configured to store counts of PM control signals received from processor1201. Moreover, processor1201may comprise yet another register, i.e., processor counter (C1) configured to store counts of the PM control signals sent by processor1201. Counters C1, C2, and C3may comprise storage registers, such as described herein. As further shown inFIG.12, driver1202may include an additional counter C4, i.e., a fourth register within the system configured to store counts associated with PM signals on a gate clamp pin of driver circuit1202. Some driver circuits, for example, include a gate clamp pin for monitoring the gate to emitter voltage “Vge” over power switch1203. Such driver designs present an opportunity to implement counter C4to count rising and falling edges associated with the PM signals on the gate clamp pin. As with other examples herein, by comparing the contents of two or more of counters C1, C2, C3and/or C4, counter mismatch can be identified to indicate a potential PM signaling problem within the system. Counter C4is another example of a desirable location for a counter to track PM signals within the system. FIG.13is another circuit diagram of a system consistent with this disclosure. The system shown inFIG.13is similar to that ofFIG.4in many respects. The system ofFIG.13includes a driver circuit1302configured to control a power switch circuit1303according to this disclosure. Driver circuit1302may comprise a galvanic isolation barrier1330that separates a first voltage domain (e.g., a low voltage domain associated with processor1301) from a second voltage domain (e.g., a high voltage domain associated with power switch circuit1303). Galvanic isolation barrier1330, for example, can be implemented with one or more coiled transformers, one or more coreless transformers, one or more capacitors, or other elements that are capable of galvanically isolating two different voltage domains within driver circuit1302. Driver circuit1302may comprise an input pin configured to receive PM control signals (PWM_in) from a processor1301. One or more input elements, such as amplifiers, filters, or other signal conditioning components may filter, amplify, or otherwise condition the received input, e.g., to remove unwanted noise. Driver circuit1302may also comprise an output pin galvanically isolated from the input pin, wherein the driver circuit is configured to deliver PM drive signals (PWM_out) from the output pin to a gate of a power switch circuit1303to control ON/OFF switching of the power switch. One or more output elements such as amplifiers, filters, or other signal conditioning components may filter, amplify, or otherwise condition the signals, e.g., to remove unwanted noise. A gate resistor may be included between driver circuit1302and power switch circuit1303. An output counter (C3) may comprise a storage register configured to store counts associated with the PM drive signals. In this way, driver circuit1302can store at least a partial history of the PM drive signals applied to power switch circuit1303, e.g., for later readout. In the example ofFIG.13, output counter C3is implemented close to galvanic isolation barrier1330, e.g., as far as possible from power switch1303, which can help avoid problems or damage to output counter C3in the event of device failure.
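The comparison of counters C1, C2, C3, and C4 mentioned above for FIG. 12 can be sketched behaviorally as follows. This is an illustrative assumption about how mismatch localization might be organized (the function name and the diagnostic messages are hypothetical, not from the disclosure).

```python
def locate_pm_mismatch(c1, c2, c3, c4=None):
    """Compare counts from the processor counter (C1), driver input
    counter (C2), driver output counter (C3) and, optionally, the gate
    clamp counter (C4) to flag and roughly localize a PM signaling problem."""
    if c1 != c2:
        return "Mismatch between C1 and C2: check the processor-to-driver input path"
    if c2 != c3:
        return "Mismatch between C2 and C3: check the driver circuit itself"
    if c4 is not None and c3 != c4:
        return "Mismatch between C3 and C4: check the gate path to the power switch"
    return "No counter mismatch: PM signaling appears OK"


print(locate_pm_mismatch(200, 200, 198, 198))  # flags the driver circuit
```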
Of course, it is also possible to implement output counter C3shown inFIG.4in combination with output counter C3shown inFIG.13, possibly along with other counters as described herein. In any case,FIG.13shows a possible alternative (or addition) for the location of output counter C3. Counter registers within a power switch control and driver system, such as described here, may be implemented in volatile or non-volatile memory. Non-volatile memory may be more desirable, as it allows for readout even if there are supply problems that limit power to the counters. In some examples, PM signals are counted at locations as close as possible to the output stage. In other examples, counter registers can be implemented in a die as far as possible from the output stage to limit damage in case of power switch failures. In some examples, a driver circuit may be configured to copy output registers from a high voltage side to corresponding shadow registers on a low voltage side, to enable readouts after HV supply failure. Counters may be long enough to store a desired number of n pulses, which may vary for different scenarios. Counters between 4 and 16 bits may have sufficient capacity for counting PM signals, and overflow counting as well as tracking of least significant bits may be used to store data in the counters. Readout of counters may occur during a PWM OFF mode, e.g., when a gate driver enable pin is low, during a driver reset, or in other situations where PM signals are disabled. In some examples, a microcontroller may be configured to read out and compare high-side and low-side switch counters from high side and low side driver circuits. In some examples, input counters on a driver circuit (e.g., C2) may be implemented after input filters (e.g., directly at the signal transfer from LV to HV). In some examples, counter registers may be analyzed manually by an engineer (e.g., via a dump report) for circuit failure analysis. This can ensure that the techniques do not cause false circuit failures. In other cases, however, it may be desirable for driver circuits to react to counter mismatch, which can improve safety but could cause a false circuit failure if mismatch is identified while the circuit is still in good operational shape. Half-bridge configurations may use gate driver pulse counting of interlock features. Only one switch can be turned on, in this case, to prevent bridge shoot-through, and if interlock is used to prevent simultaneous turn on, this event can be counted for later analysis. In some examples, C2counters may be used for counting pulses for both the high side and low side for identifying situations of interlock. Many gate drivers include a gate clamp pin, in which case a C4counting register may be desirable on the gate clamp pin. Counter mismatch may be the result of a signal error, but counter mismatch can also be caused by slow switching that cannot properly react to short pulses. Accordingly, in some cases, the circuits and techniques of this disclosure can be beneficial for a control unit to identify whether a pulse pattern is too fast for a power switch. In some examples, one or more C3counters may be located directly after the LV/HV communication line, so that the counted signal is close to a digital signal, whereas at the output stage the signal is a slow ramp. Counter data transfer or readout may be implemented in many ways. As noted, in some cases, a driver circuit or processor may be configured to react to counter mismatch. In other cases, however, there is no active data transfer to the processor.
In this case, the registers are just readable. In any case, a microprocessor (or technician) can decide when to read the counter registers. Readout during PWM operation has the risk that not all registers are in sync (data integrity). Readout during PWM stop (e.g., driver enable signal is Low, a desaturation event, a reset, or other situations where PM signals are disabled) may help to ensure data integrity. Alternatively, a continuous data transfer may have the advantage that, after a failure, there is a higher likelihood that data is captured and stored, but this may have a higher cost and complexity, with limited data integrity improvement beyond the situation of PWM stop operation. There may also be challenges in defining which pulses to count within a driver circuit. In general, the system may be configured to count only those pulses which the gate driver itself should interpret as a signal. Thus, in some examples, glitches in the input and output stages, e.g., transitions on the order of 1 nanosecond (e.g., less than 5 nanoseconds), may be ignored and not counted. FIG.14is a flow diagram from the perspective of a gate driver circuit.FIG.14will be described from the perspective of driver circuit102ofFIG.1, although other driver circuits may perform similar techniques. As shown inFIG.14, in controlling a power switch, a driver circuit102may be configured to receive PM control signals from a processor106via an input pin112of driver circuit102(1401). In some cases, processor106includes a processor register124configured to count edges of the PM control signals that are being sent to driver circuit102. Input register122of driver circuit102is configured to count edges of the PM control signals (1402). Based on the PM control signals, driver circuit102is configured to generate PM drive signals (1403) and drive a power switch within power switch circuit104based on the PM drive signals (1404). In particular, driver circuit102may deliver PM drive signals from an output pin114of driver circuit102to the power switch within power switch circuit104to control ON/OFF switching of the power switch. According to this disclosure, driver circuit102also includes an output register120configured to count edges of the PM drive signals (1405). In other words, output register120is configured to store counts associated with the PM drive signals. In some examples, such as shown inFIG.8, an output register820may reside on a high voltage side of driver circuit802, and driver circuit802may have a corresponding shadow output register825on the low voltage side. Shadow output register825, for example, may get periodically updated with the content of output register820when PM signals are disabled in driver circuit802. This can facilitate easy readout of input register822and shadow output register825by processor806so that count comparisons can be made, e.g., for circuit analysis purposes. FIG.15is a flow diagram showing one example of the analysis of PM count registers according to this disclosure. The process ofFIG.15, for example, may be performed after circuit or device failure to help diagnose the cause of the failure. As shown inFIG.15, a technician may obtain count values from two or more different registers associated with a power switch system (1501). The technician may compare first count values associated with PM drive signals with second count values associated with PM control signals (1502) to identify whether there is a mismatch between the counters (1503).
If there is no mismatch (no branch of1503), then the PM signaling is OK (1504). However, if there is a mismatch between the counters (yes branch of1503), this mismatch may indicate a signaling problem (1505). Moreover, if there are three or more counters located in specific locations of the driver circuit system, then the location of the mismatch may help pinpoint the location where circuit problems occurred. Thus, a method of analyzing operation of a power switch may comprise comparing first count values associated with PM drive signals associated with a driver circuit with second count values associated with PM control signals from a processor, and identifying a potential problem with operation of the power switch based on the comparison indicating a mismatch between the first count values and the second count values. The method may include comparing the first count values associated with the PM drive signals from the driver circuit with the second count values and with third count values, wherein the third count values correspond to gate clamp signals associated with a gate clamp pin of the driver circuit. The following clauses may illustrate one or more aspects of the disclosure. Clause 1—A driver circuit configured to control a power switch, the driver circuit comprising: an input pin configured to receive PM control signals from a processor; an output pin galvanically isolated from the input pin, wherein the driver circuit is configured to deliver PM drive signals from the output pin to the power switch to control ON/OFF switching of the power switch; and a register configured to store counts associated with the PM drive signals. Clause 2—The driver circuit of clause 1, wherein the register comprises a non-volatile memory that is readable by the processor. Clause 3—The driver circuit of clause 1 or 2, wherein the register comprises a first register, the driver circuit further comprising a second register configured to store counts associated with the PM control signals. Clause 4—The driver circuit of clause 3, further comprising a third register configured to store counts associated with the PM drive signals, wherein the second register is galvanically isolated from the third register, wherein the second register is located in a first voltage domain and the third register is located in a second voltage domain, and wherein the first register is a shadow register located in the first voltage domain and configured to store a shadow of the third register when the PM drive signals are disabled. Clause 5—The driver circuit of clause 3 or 4, wherein the first register and the second register comprise non-volatile memory that is readable by the processor when the PM drive signals are disabled. Clause 6—The driver circuit of any of clauses 1-5, further comprising a gate clamp pin and a gate clamp register (e.g., a fourth register) configured to store counts associated with PM signals on the gate clamp pin. Clause 7—The driver circuit of any of clauses 3-6, wherein the driver circuit is configured to disable the PM drive signals in response to identifying a mismatch between the first register and the second register. Clause 8—The driver circuit of any of clauses 1-7, wherein the register comprises an overflow register that is configured to store N bits of data, wherein N is an integer greater than 3 and less than 17. Clause 9—The driver circuit of any of clauses 1-8, wherein the counts identify a number of rising edges and falling edges of the PM drive signals.
Clause 10—A method of controlling a power switch, the method comprising: receiving PM control signals from a processor via an input pin of a driver circuit; delivering PM drive signals from an output pin of the driver circuit to the power switch to control ON/OFF switching of the power switch; and storing counts associated with the PM drive signals in a register associated with the driver circuit. Clause 11—The method of clause 10, wherein the register comprises a first register, the method further comprising: storing counts associated with the PM control signals in a second register. Clause 12—The method of clause 11, wherein a third register is galvanically isolated from the second register, wherein the second register is located in a first voltage domain and the third register is located in a second voltage domain, the method further comprising: storing a shadow of the third register in the first register in response to disabling the PM drive signals, wherein the first register is a shadow register associated with the third register and the first register is located in the first voltage domain. Clause 13—The method of any of clauses 10-12, further comprising storing counts associated with PM signals on a gate clamp pin in a gate clamp register (e.g., a fourth register). Clause 14—The method of any of clauses 11-13, further comprising enabling readout of the first register and the second register in response to disabling the PM drive signals. Clause 15—A system comprising: a processor; a power switch; and a driver circuit configured to control the power switch, the driver circuit comprising: an input pin configured to receive pulse modulation (PM) control signals from the processor; an output pin galvanically isolated from the input pin, wherein the driver circuit is configured to deliver PM drive signals from the output pin to the power switch to control ON/OFF switching of the power switch; and a register configured to store counts associated with the PM drive signals, wherein the register is readable by the processor. Clause 16—The system of clause 15, wherein the register comprises a first register, the driver circuit further comprising a second register configured to store counts associated with the PM control signals. Clause 17—The system of clause 16, wherein the first register comprises a shadow register located in a first voltage domain that stores a shadow of a third register located in a second voltage domain in response to the PM drive signals being disabled, and wherein the first register and the second register comprise non-volatile memory that is readable by the processor in response to the PM drive signals being disabled. Clause 18—The system of any of clauses 15-17, wherein the driver circuit further comprises a gate clamp pin and a gate clamp register (e.g., a fourth register) configured to store counts associated with PM signals on the gate clamp pin. Clause 19—The system of any of clauses 15-18, wherein the register comprises a driver register and wherein the processor includes a processor register configured to store counts associated with the PM control signals.
Clause 20—The system of any of clauses 15-19, wherein the driver circuit comprises a first driver circuit and the power switch comprises a high side power switch, the system further comprising: a low side power switch, wherein the low side power switch and the high side power switch form a half-bridge circuit; and a second driver circuit configured to control the low side power switch, the second driver circuit comprising: a low side input pin configured to receive low side PM control signals from the processor; a low side output pin galvanically isolated from the low side input pin, wherein the second driver circuit is configured to deliver low side PM drive signals from the low side output pin to the low side power switch to control ON/OFF switching of the low side power switch; and a low side register configured to store counts associated with the low side PM drive signals, wherein the low side register is readable by the processor. Clause 21—The system of clause 20, further comprising an interlock register configured to store counts associated with instances where the high side PM control signals and the low side PM control signals are both simultaneously high. Clause 22—A method of analyzing operation of a power switch, the method comprising: comparing first count values associated with pulse modulation (PM) drive signals associated with a driver circuit with second count values associated with PM control signals from a processor; and identifying a potential problem with operation of the power switch based on the comparison indicating a mismatch between the first count values and the second count values. Clause 23—The method of clause 22, wherein comparing comprises: comparing the first count values associated with the PM drive signals from the driver circuit with the second count values and with third count values wherein the third count values correspond to gate clamp signals associated with a gate clamp pin of the driver circuit. Various aspects have been described in this disclosure. These and other aspects are within the scope of the following claims. | 60,511 |
11863187 | DETAILED DESCRIPTION Aspects of the present disclosure relate to a D-type wholly dissimilar high-speed static set-reset flip-flop. A typical differential circuit supplies both inverted and non-inverted signals to subsequent logic. However, most flip-flops generate a single-ended output, and an inverted signal for the generated single-ended output may be generated using an additional inverter. For example, the typical differential circuit may include two separate paths: one path is to generate a non-inverted signal, and the other path is to generate an inverted signal. The inverted and the non-inverted signals may each follow a different output path. The other path may include the additional inverter. The additional inverter used for generating the inverted signal carries a speed penalty and places the two signals out of alignment. There may be a delay (one inverter) between the generated inverted and non-inverted signals. This delay may result in a phase difference between the inverted and non-inverted signals. Embodiments described herein generate an inverted signal and a non-inverted signal at the same time such that there is no phase difference between the inverted and the non-inverted signal. In some aspects, a data path is substantially similar between the inverted signal and the non-inverted signal. That is, the number of devices in the data path of the inverted signal and the number of devices in the data path of the non-inverted signal are close to each other. In some aspects, the data path may be from a clock input to an output (i.e., to the inverted signal and to the non-inverted signal). Technical advantages of the present disclosure include, but are not limited to, improvement in the performance of an integrated circuit by equalizing worst-case delays. The simultaneous (or without phase difference) generation of inverted and non-inverted signals is beneficial to combinational elements, such as decoders and multiplexers, where a phase difference between the inverted and non-inverted signals may cause glitches. By way of a non-limiting example, differential flip-flops are a reasonable choice for such decoders and multiplexers. In some embodiments, in the integrated circuit, two different but substantially similar circuit paths may be used to generate both the inverted signal and the non-inverted signal. In the circuit path, a cross-coupled circuit (e.g., p-transistors as pull-up devices) may be used to achieve high energy efficiency. By way of a non-limiting example, the integrated circuit may be related to 5 nm and/or 7 nm FINFET technology. Though various embodiments in the present disclosure are described using a D-type flip-flop, the embodiments may be practiced using other types of flip-flops as well. In some embodiments, a data signal may be provided to a first path of the integrated circuit to generate the non-inverted signal and an inverse data signal may be provided to a second path of the integrated circuit to generate the inverted signal. In some aspects, the data signal and the inverse data signal may be provided simultaneously to the first path and to the second path, respectively. For example, the data signal and the inverse data signal may be provided at a rising edge of a clock cycle to the first path and the second path, respectively. In some aspects, the differential circuit may provide set/reset capability while maintaining no phase difference between the inverted signal and the non-inverted signal.
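Before the figure-by-figure description, a conceptual behavioral sketch (Python, not part of the disclosure and not the patented transistor-level circuit; class and attribute names are hypothetical) may clarify the dual-path idea: the data signal and its inverse drive two identical master-slave paths, so Q and QN become valid together at the rising clock edge with no extra inverter delay on QN.

```python
class DifferentialDFF:
    """Conceptual model of the dual-path flip-flop: D drives one
    master-slave path and DB drives an identical path, so the slave
    outputs Q and QN update simultaneously on the rising clock edge."""

    def __init__(self):
        self.master_q = 0      # master latch of the data path
        self.master_qn = 1     # master latch of the inverse-data path
        self.q = 0             # slave output (non-inverted)
        self.qn = 1            # slave output (inverted)
        self._last_clk = 0

    def step(self, d, db, clk):
        if clk == 0:
            # Clock low: both master latches are transparent.
            self.master_q, self.master_qn = d, db
        elif self._last_clk == 0:
            # Rising edge: both slave latches update together.
            self.q, self.qn = self.master_q, self.master_qn
        self._last_clk = clk
        return self.q, self.qn


ff = DifferentialDFF()
ff.step(d=1, db=0, clk=0)          # master latches capture D and DB
print(ff.step(d=1, db=0, clk=1))   # (1, 0) -- Q and QN valid together
```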
The differential circuit may include a feedback circuit to pass a set/reset signal from the first path to the second path of the integrated circuit. FIG.1is a schematic of a circuit diagram100for differential flip-flops according to aspects of the disclosure. As shown inFIG.1, the path from data input to output for both the inverted and the non-inverted signal is identical (labelled path A and path B inFIG.1). An inverted data input is used to generate the inverted output signal. As a result, when the data and its inverted data signals are gated through the flip-flops, the generated non-inverted and inverted signals may not have a phase difference. The inverted and the non-inverted signals are generated simultaneously. That is, the inverted and the non-inverted signals are available concurrently (at the same time). Circuit100includes a data master block102, a data master-slave switch104, a data slave block106, a master cross-coupled feedback108, a slave cross-coupled feedback110, an inverse data master block112, an inverse master slave switch114, and an inverse slave block116. Data master block102may comprise a data input line that is configured to receive a data signal. Inverse data master block112may comprise an inverse data input line that is configured to receive an inverse of the data signal. Data slave block106is configured to generate an output data signal during a rising edge of a clock cycle. Inverse slave block116is configured to generate the inverse of the output data signal at the rising edge of the clock cycle. Data master block102may include a tri-state buffer118and a buffer120. In some aspects, tri-state buffer118may be a tri-state inverter. Switch104includes a buffer122. Buffer122may be controlled by a clock signal and the inverse of the clock signal. The inverse signal may be generated using a single stage inverter, a two-stage inverter, and the like or using an external circuit (e.g., a two-phase clock signal from the external circuit). Data slave block106includes a buffer124. Master cross-coupled feedback108is coupled between an output of buffer118and the output of buffer134. Master cross-coupled feedback108includes an inverter126and an inverter128. Slave cross-coupled feedback110is coupled between the output of data master slave switch104and the output of inverse data master slave switch114. Slave cross-coupled feedback110includes an inverter130and an inverter132. Inverse data master block112includes a buffer134and a buffer136. Inverse master slave switch114includes a buffer138. Inverse slave block116includes a buffer140. An output of inverse data master block112is coupled to the input of inverse master slave switch114. The output of the inverse master slave switch114is coupled to inverse slave block116. FIG.2is a schematic that shows a circuit200for generating inverted and non-inverted output signals using differential flip-flops with a set and reset functionality, according to aspects of the disclosure. Circuit200includes a data master block202, a data master-slave switch204, a data slave block206, a master cross-coupled feedback208, a slave cross-coupled feedback210, an inverse data master block212, an inverse master slave switch214, and an inverse slave block216. Data master block202may comprise a data input line that is configured to receive a data signal. Inverse data master block212is configured to receive an inverse of the data signal. Data slave block206is configured to generate an output data signal during a rising edge of a clock cycle. 
Inverse slave block216is configured to generate the inverse of the output data signal at the rising edge of the clock cycle. Master cross-coupled feedback208and slave cross-coupled feedback210pass a set/reset signal from the first path to the second path or from the second path to the first path. In some aspects, master cross-coupled feedback208and slave cross-coupled feedback210are not in the data path. Data master block202may include a buffer218and a buffer220. In some aspects, buffer220may have a set and reset functionality. Buffer220may be coupled to a set signal and to a reset signal. Switch204includes a buffer222. Buffer222may be controlled by a clock signal and the inverse of the clock signal. The inverse signal may be generated using a single stage inverter, a two-stage inverter, or the like, or using an external circuit (e.g., a two-phase clock signal from the external circuit). Master cross-coupled feedback208is coupled between an output of buffer218and the output of buffer234. Master cross-coupled feedback208includes an inverter226and an inverter228. In some aspects, inverter226may be coupled to the set signal and to the reset signal. Slave cross-coupled feedback210is coupled between the output of data master slave switch204and the output of inverse data master slave switch214. Slave cross-coupled feedback210includes an inverter230and an inverter232. Inverter232may be coupled to the set signal and to the reset signal. The master cross-coupled feedback208and the slave cross-coupled feedback210are configured to pass a set/reset signal to the other path. Inverse data master block212includes a buffer234and a buffer236. Buffer236may have a set/reset functionality. Buffer236may be coupled to the set and to the reset signal. Inverse master slave switch214includes a buffer238. Inverse slave block216includes a buffer240. An output of inverse data master block212is coupled to the input of inverse master slave switch214. The output of inverse master slave switch214is coupled to inverse slave block216. FIG.3is a schematic that shows a circuit300for generating inverted and non-inverted output signals using differential flip-flops without a set and reset functionality, in accordance with some embodiments. The path from data input to output for both the inverted and the non-inverted signal is identical. Circuit300includes a data master block302, a data master-slave switch304, a data slave block306, a master cross-coupled feedback308, a slave cross-coupled feedback310, an inverse data master block312, an inverse master slave switch314, and an inverse slave block316. As shown inFIG.3, the paths from data input to output for the inverted and the non-inverted signals are identical. That is, the number of devices in both paths is substantially the same. Data master block302may include a transistor Q1, a transistor Q2, a transistor Q3, a transistor Q4, a transistor Q5, and a transistor Q6. In some aspects, the transistors of circuit300may be metal oxide semiconductor field effect transistors (MOSFETs). In some aspects, transistor Q1, transistor Q2, and transistor Q5may be p-type transistors. Transistor Q4, transistor Q3, and transistor Q6may be n-type transistors. A data signal D is coupled to a gate of transistor Q1and to a gate of transistor Q3. The drain of transistor Q1is coupled to the source of transistor Q2. The drain of transistor Q2is coupled to the drain of transistor Q4. The source of transistor Q4is coupled to the drain of transistor Q3.
Transistor Q1and transistor Q5are coupled to a power supply (labelled Vdd inFIG.3). In some aspects, the respective gate of transistor Q5and transistor Q6are coupled to the drain of transistor Q2and transistor Q4. Data master-slave switch304includes a transistor Q7and a transistor Q8. The respective source of transistor Q7and transistor Q8are coupled to the drain of transistor Q5and transistor Q6. Transistor Q7and transistor Q8are gated by a clock signal and the inverse clock signal respectively (labelled CKM and CKMN inFIG.3). The source of transistor Q7is coupled to the source of transistor Q8. In some aspects, transistor Q7is a p-type transistor and Q8is a n-type transistor. Data slave block306includes a transistor Q9, a transistor Q10, a transistor Q11, and a transistor Q12. The drain of transistor Q9is coupled to the drain of transistor Q10. The drain of transistor Q11is coupled to the drain of transistor Q12. The gate of transistor Q11and the gate of transistor Q12are coupled to the drain of transistor Q9. The drain of transistor Q7and the drain of transistor Q8are coupled to the gate of transistor Q9and the gate of transistor Q10. The output signal may be taken from the drain of transistor Q12. In some aspects, transistor Q9and transistor Q11are p-type transistors. Transistor Q10and transistor Q12are n-type transistors. Master cross-coupled feedback308includes a transistor Q13, a transistor Q14, a transistor Q15, a transistor Q16, a transistor Q17, a transistor Q18, and a transistor Q19, a transistor Q20. In some aspects, transistor Q13, transistor Q14, transistor Q17, and transistor Q18are p-type transistors. Transistor Q15, transistor Q16, transistor Q19, and transistor Q20are n-type transistors. Transistor Q14and transistor Q15are gated by the clock signal and the inverse clock signal respectively. Transistor Q18and transistor Q19are gated by the clock signal and the inverse clock signal, respectively. The drain of transistor Q14is coupled to the gate of transistor Q17and to the gate of transistor Q20. The drain of transistor Q18is coupled to the gate of transistor Q13and to the gate of transistor Q16. Slave cross-coupled feedback310includes a transistor Q21, a transistor Q22, a transistor Q23, a transistor Q24, a transistor Q25, a transistor Q26, a transistor Q27, and a transistor Q28. In some aspects, transistor Q21, transistor Q22, transistor Q25, and transistor Q26are p-type transistors. Transistor Q23, transistor Q24, transistor Q27, and transistor Q28are n-type transistors. Transistor Q22and transistor Q27are gated by the clock signal. Transistor Q22and transistor Q26are gated by the inverse of the clock signal. The drain of transistor Q22is coupled to the gate of transistor Q25and transistor Q28. The drain of transistor Q26is coupled to the gate of transistor Q21and transistor Q24. Inverse data master block312includes a transistor Q29, a transistor Q30, a transistor Q31, a transistor Q32, a transistor Q33, and a transistor Q34. In some aspects, transistor Q29, transistor Q30, transistor Q33, are p-type transistors. Transistor Q31, transistor Q32, and transistor Q34are n-type transistors. The inverse of the data signal (labelled DB inFIG.3) is coupled to a gate of transistor Q29and to a gate of transistor Q32. The drain of transistor Q29is coupled to the source of transistor Q30. The drain of transistor Q30is coupled to the drain of transistor Q31. The source of transistor Q31is coupled to the drain of transistor Q32. 
In some aspects, the respective gate of transistor Q33and transistor Q34are coupled to the drain of transistor Q31and to the drain of transistor Q32. Inverse master slave switch314includes a transistor Q35and a transistor Q36. Transistor Q35may be a p-type transistor. Transistor Q36may be a n-type transistor. The source of transistor Q35is coupled to the source of transistor Q36and to the drain of transistor Q33. Inverse slave block316includes a transistor Q37, a transistor Q38, a transistor Q39, and a transistor Q40. Transistor Q37and transistor Q39may be p-type transistors. The drain of transistor Q37and the drain of transistor Q38are coupled to the gate of transistor Q39and the gate of transistor Q40and to the drain of transistor Q26. The output signal (i.e., the inverted signal, labelled QN inFIG.3) is from the drain of transistor Q40. In some aspects, the clock signal and the inverse clock signal may be generated using a single inverter, two inverters (e.g., U3and U4inFIG.3), or any other number of inverters. In some aspects, the clock signal and the inverse clock signal may be provided from a two-phase clock from an external circuit. As shown inFIG.3, each path from the input to the output signal for the inverted and the non-inverted signals includes identical master and slave latches. When the clock signal is low, the input data overwrites the contents of the master latch, and when the clock signal is high, the master latch overwrites the contents of the slave latch. Because the differential flip-flop design described herein is static, the clock can run at any frequency up to its maximum value. FIG.4is a schematic of a circuit400for generating inverted and non-inverted output signals using differential flip-flops with a set and reset functionality, in accordance with an embodiment of the present disclosure. The path from data input to output for both the inverted and the non-inverted signal is substantially identical. That is the number of devices in both paths are substantially the same. Circuit400includes a data master block402, a data master-slave switch404, a data slave block406, a master cross-coupled feedback408, a slave cross-coupled feedback410, an inverse data master block412, an inverse master slave switch414, and an inverse slave block416. Data master block402may include a transistor Q1, a transistor Q2, a transistor Q3, a transistor Q4, a transistor Q5, a transistor Q6, a transistor Q7, a transistor Q8, a transistor Q9, and a transistor Q10. Transistor Q1, transistor Q2, transistor Q5, transistor Q6, and transistor Q7may be p-type transistors. Transistor Q3, transistor Q4, transistor Q8, transistor Q9, and transistor Q10may be n-type transistors. A data signal (labelled D inFIG.4) is coupled to a gate of transistor Q1and to a gate of transistor Q3. The drain of transistor Q1is coupled to the source of transistor Q2. The drain of transistor Q2is coupled to the drain of transistor Q4. The source of transistor Q4is coupled to the drain of transistor Q3. In some aspects, transistor Q5and transistor Q6are coupled in parallel. Transistor Q10is coupled between the drain of transistor Q8and the source of transistor Q9. Transistor Q6and transistor Q9are gated by a set signal (labelled SDB inFIG.4). Transistor Q7and transistor Q10are gated by a reset signal (labelled RD inFIG.4). The gate of transistor Q5and the gate of transistor Q8are coupled to the drain of transistor Q2. Data master-slave switch404includes a transistor Q11and a transistor Q12. 
Transistor Q11may be a p-type transistor. Transistor Q12may be a n-type transistor. The respective source of transistor Q11and transistor Q12are coupled to the drain of transistor Q10. Transistor Q11and transistor Q12are gated by a clock signal and the inverse clock signal respectively. The source of transistor Q11is coupled to the source of transistor Q12. Data slave block406includes a transistor Q13, a transistor Q14, a transistor Q15, and a transistor Q16. In some aspects, transistor Q13and transistor Q15are p-type transistors. Transistor Q14and transistor Q16are n-type transistors. The drain of transistor Q13and the drain of transistor Q14are coupled to the gate of transistor Q15and to the gate of transistor Q16. The drain of transistor Q15is coupled to the drain of transistor Q16. Master cross-coupled feedback408includes a transistor Q17, a transistor Q18, a transistor Q19, a transistor Q20, a transistor Q21, a transistor Q22, a transistor Q23, a transistor Q24, a transistor Q25, a transistor Q26, a transistor Q27, and a transistor Q28. In some aspects, transistor Q17, transistor Q18, transistor Q20, transistor Q19, transistor Q25, and transistor Q26may be p-type transistors. Transistor Q21, transistor Q22, transistor Q23, transistor Q24, transistor Q28, and transistor Q27may be n-type transistors. Transistor Q19and transistor Q26are gated by the clock signal. Transistor Q21and transistor Q28are gated by the inverse clock signal. Transistor Q17, transistor Q24are gated by the set signal. Transistor Q20and transistor Q22are gated by the reset signal. The drain of transistor Q17is coupled to the source of transistor Q18. Transistor Q20is coupled between transistor Q17and transistor Q18. The source of transistor Q19is coupled to the drain of transistor Q18. The drain of transistor Q21is coupled to the transistor Q19. The drain of transistor Q22is coupled to the source of transistor Q21. The source of transistor Q21is coupled to the gate of transistor Q27. The drain of transistor Q23and the drain of transistor Q24are coupled to the source of transistor Q22. The gate of transistor Q25and the gate of transistor Q27are coupled to the drain of transistor Q2. The source of transistor Q26is coupled to the drain of transistor Q25. The drain of transistor Q28is coupled to the drain of transistor Q26. The drain of transistor Q27is coupled to the source of transistor Q28. Slave cross-coupled feedback410includes a transistor Q29, a transistor Q30, a transistor Q31, a transistor Q32, a transistor Q33, a transistor Q34, a transistor Q35, a transistor Q36, a transistor Q37, a transistor Q38, a transistor Q39, and a transistor Q40. In some aspects, transistor Q29, transistor Q30, transistor Q31, transistor Q32, transistor Q37, and transistor Q38are p-type transistors. In some aspects, transistor Q33, transistor Q34, transistor Q35, transistor Q36, transistor Q40, and transistor Q39are n-type transistors. Transistor Q33and transistor Q40are gated by the clock signal. Transistor Q31and transistor Q38are gated by the inverse clock signal. Transistor Q29and transistor Q36are gated by the set signal. Transistor Q32and transistor Q34are gated by the reset signal. The drain of transistor Q29is coupled to the source of transistor Q30. Transistor Q32is coupled between the source of transistor Q29and the drain of transistor Q30. The source of transistor Q31is coupled to the drain of transistor Q30. The drain of transistor Q33is coupled to the drain of transistor Q31. 
The drain of transistor Q34is coupled to the source of transistor Q33. The drain of transistor Q35and the drain of transistor Q36are coupled to the source of transistor Q34. The gate of transistor Q37and the gate of transistor Q39are coupled to the drain of transistor Q31. The source of transistor Q38is coupled to the drain of transistor Q37. The drain of transistor Q40is coupled to the drain of transistor Q38. The drain of transistor Q39is coupled to the source of transistor Q40. The drain of transistor Q40is coupled to the gate of transistor Q35and to the gate of transistor Q30. Inverse data master block412includes a transistor Q41, a transistor Q42, a transistor Q43, a transistor Q44, a transistor Q45, a transistor Q46, a transistor Q47, a transistor Q48, a transistor Q49, and a transistor Q50. In some aspects, transistor Q41, transistor Q42, transistor Q45, transistor Q46, and transistor Q47are p-type transistors. In some aspects, transistor Q44, transistor Q43, transistor Q48, transistor Q49, and transistor Q50are n-type transistors. Transistor Q41and transistor Q43are coupled to the inverse data signal at their respective gate. Transistor Q42is gated by the inverse clock signal and transistor Q44is gated by the clock signal. Transistor Q45and transistor Q50are gated by the set signal. Transistor Q47and transistor Q48are gated by the reset signal. The drain of transistor Q41is coupled to the source of transistor Q42. The drain of transistor Q42is coupled to the drain of transistor Q44. The source of transistor Q44is coupled to the drain of transistor Q43. The drain of transistor Q45is coupled to the source of transistor Q46. The gate of transistor Q46is coupled to the drain of transistor Q42. Transistor Q47is coupled between the source of transistor Q45and the drain of transistor Q46. The drain of transistor Q48is coupled to the drain of transistor Q46. The drain of transistor Q49and the drain of transistor Q50are coupled to the source of transistor Q48. Inverse master slave switch414includes a transistor Q51and a transistor Q52. In some aspects, transistor Q51may be a p-type transistor. Transistor Q52may be an n-type transistor. The source of transistor Q51is coupled to the source of transistor Q52and to the drain of transistor Q46. Inverse slave block416includes a transistor Q53, a transistor Q54, a transistor Q55, and a transistor Q56. In some aspects, transistor Q53and transistor Q55may be p-type transistors. Transistor Q54and transistor Q56may be n-type transistors. The drain of transistor Q53and the drain of transistor Q54are coupled to the gate of transistor Q55and the gate of transistor Q56. The gate of transistor Q53and the gate of transistor Q54are coupled to the drain of transistor Q53. The output signal may be taken from the drain of transistor Q55. A truth table for the proposed differential flip-flop design, as shown inFIG.4, is shown below.

TABLE 1
Truth table
    Input                          Output
D    DB   RD   SD   clk        Q    Qn   Note
1    0    0    0    Rise edge  1    0
0    1    0    0    Rise edge  0    1
1    1    —    —    —          —    —    Illegal
0    0    —    —    —          —    —    Illegal
—    —    1    —    —          0    1
—    —    0    1    —          1    0

A timing diagram for the differential flip-flop design shown inFIG.4is shown inFIG.5. As shown by trace504and trace506inFIG.5, input signals D and DB (an inverted input signal) are complementary input signals. The inverted output signal Qn and the non-inverted output signal Q, shown by trace510and trace508, respectively, are generated according to the truth table shown in Table 1. Trace502shows a clock signal. Trace512shows a reset signal. Trace514shows a set signal. An example active high reset/set is shown at about t=5.18u.
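The behavior summarized in Table 1 can also be captured in a short behavioral sketch. The Python function below is an illustrative assumption only (it assumes active-high RD/SD with reset priority, consistent with the timing diagram above); it is not a transistor-level model and the function name is hypothetical.

```python
def differential_ff_next_state(d, db, rd, sd, rising_edge, q, qn):
    """Behavioral summary of Table 1: active-high reset forces Q=0/Qn=1,
    active-high set forces Q=1/Qn=0, D and DB must be complementary,
    and otherwise Q/Qn follow D/DB on a rising clock edge."""
    if rd == 1:
        return 0, 1            # reset: Q=0, Qn=1 (D/DB are don't care)
    if sd == 1:
        return 1, 0            # set: Q=1, Qn=0
    if d == db:
        raise ValueError("Illegal input: D and DB must be complementary")
    if rising_edge:
        return d, db           # rising clock edge captures D/DB
    return q, qn               # otherwise hold the previous state


print(differential_ff_next_state(1, 0, 0, 0, True, 0, 1))   # (1, 0)
print(differential_ff_next_state(0, 1, 1, 0, False, 1, 0))  # (0, 1), reset
```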
Table 2 illustrates exemplary performance comparison data for a reference cell using an additional inverter in the data input to the inverted output signal path and a cell with the differential flip-flop design as shown inFIG.4.

TABLE 2
Exemplary results
                 Path   Average clock        Setup and
                        to output (ps)       hold (ps)
Reference cell   Q      55.18                2.47
Design           Q      52.15                1.64
Reference cell   QN     46.18                2.48
Design           QN     51.62                0.74
Loading
Input D          33 fF
Input DB         32 fF

Accordingly, the differential flip-flops with set and reset as shown inFIG.4may reduce the phase difference between the generated inverted and non-inverted output signals. Further, faster clocks may be used, and both set and reset inputs may be available for consumption. FIG.6is a flowchart of a method600for generating a signal and an inverted signal, in accordance with an embodiment of the present disclosure. In602, an inverted signal of a data input signal is generated. In some aspects, the inverted signal of the data input signal and the data input signal are available for input during a rising edge of a clock cycle. In some aspects, the inverted signal may be generated using an inverter. In604, the data input signal may be applied as an input to a first circuit path during the rising edge of the clock cycle to generate an output signal. In606, the inverted signal is applied as an input to a second circuit path during the rising edge of the clock cycle. In some aspects, the second circuit path may be identical to the first circuit path. The second circuit path may have the same number of elements (e.g., transistors) as the first circuit path. In some aspects, the second circuit path may be substantially similar to the first circuit path. For example, the first circuit path and the second circuit path may have an identical number of latches. In some aspects, the first circuit path may comprise a first master latch and a first slave latch. The second circuit path may also comprise a second master latch and a second slave latch. In some aspects, a set signal or a reset signal may be received at the first circuit path. The set signal or reset signal may be passed to the second circuit path through a feedback circuit. FIG.7illustrates an example set of processes700used during the design, verification, and fabrication of an article of manufacture such as an integrated circuit to transform and verify design data and instructions that represent the integrated circuit. Each of these processes can be structured and enabled as multiple modules or operations. The term ‘EDA’ signifies the term ‘Electronic Design Automation.’ These processes start with the creation of a product idea710with information supplied by a designer, information which is transformed to create an article of manufacture that uses a set of EDA processes712. When the design is finalized, the design is taped-out734, which is when artwork (e.g., geometric patterns) for the integrated circuit is sent to a fabrication facility to manufacture the mask set, which is then used to manufacture the integrated circuit. After tape-out, a semiconductor die is fabricated 736 and packaging and assembly processes738are performed to produce the finished integrated circuit740. Specifications for a circuit or electronic structure may range from low-level transistor material layouts to high-level description languages. A high level of representation may be used to design circuits and systems, using a hardware description language (‘HDL’) such as VHDL, Verilog, SystemVerilog, SystemC, MyHDL or OpenVera.
The HDL description can be transformed to a logic-level register transfer level (‘RTL’) description, a gate-level description, a layout-level description, or a mask-level description. Each lower representation level that is a more detailed description adds more useful detail into the design description, for example, more details for the modules that include the description. The lower levels of representation that are more detailed descriptions can be generated by a computer, derived from a design library, or created by another design automation process. An example of a specification language at a lower level of representation for specifying more detailed descriptions is SPICE, which is used for detailed descriptions of circuits with many analog components. Descriptions at each level of representation are enabled for use by the corresponding systems of that layer (e.g., a formal verification system). A design process may use a sequence depicted inFIG.7. The processes described may be enabled by EDA products (or EDA systems). During system design714, functionality of an integrated circuit to be manufactured is specified. The design may be optimized for desired characteristics such as power consumption, performance, area (physical and/or lines of code), and reduction of costs, etc. Partitioning of the design into different types of modules or components can occur at this stage. During logic design and functional verification716, modules or components in the circuit are specified in one or more description languages and the specification is checked for functional accuracy. For example, the components of the circuit may be verified to generate outputs that match the requirements of the specification of the circuit or system being designed. Functional verification may use simulators and other programs such as testbench generators, static HDL checkers, and formal verifiers. In some embodiments, special systems of components referred to as ‘emulators’ or ‘prototyping systems’ are used to speed up the functional verification. During synthesis and design for test718, HDL code is transformed to a netlist. In some embodiments, a netlist may be a graph structure where edges of the graph structure represent components of a circuit and where the nodes of the graph structure represent how the components are interconnected. Both the HDL code and the netlist are hierarchical articles of manufacture that can be used by an EDA product to verify that the integrated circuit, when manufactured, performs according to the specified design. The netlist can be optimized for a target semiconductor manufacturing technology. Additionally, the finished integrated circuit may be tested to verify that the integrated circuit satisfies the requirements of the specification. During netlist verification720, the netlist is checked for compliance with timing constraints and for correspondence with the HDL code. During design planning722, an overall floor plan for the integrated circuit is constructed and analyzed for timing and top-level routing. During layout or physical implementation724, physical placement (positioning of circuit components such as transistors or capacitors) and routing (connection of the circuit components by multiple conductors) occurs, and the selection of cells from a library to enable specific logic functions can be performed.
As used herein, the term ‘cell’ may specify a set of transistors, other components, and interconnections that provides a Boolean logic function (e.g., AND, OR, NOT, XOR) or a storage function (such as a flip-flop or latch). As used herein, a circuit ‘block’ may refer to two or more cells. Both a cell and a circuit block can be referred to as a module or component and are enabled as both physical structures and in simulations. Parameters are specified for selected cells (based on ‘standard cells’) such as size and made accessible in a database for use by EDA products. During analysis and extraction726, the circuit function is verified at the layout level, which permits refinement of the layout design. During physical verification728, the layout design is checked to ensure that manufacturing constraints are correct, such as DRC constraints, electrical constraints, lithographic constraints, and that circuitry function matches the HDL design specification. During resolution enhancement730, the geometry of the layout is transformed to improve how the circuit design is manufactured. During tape-out, data is created to be used (after lithographic enhancements are applied if appropriate) for production of lithography masks. During mask data preparation732, the ‘tape-out’ data is used to produce lithography masks that are used to produce finished integrated circuits. A storage subsystem of a computer system (such as computer system800ofFIG.8) may be used to store the programs and data structures that are used by some or all of the EDA products described herein, and products used for development of cells for the library and for physical and logical design that use the library. FIG.8illustrates an example machine of a computer system800within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative implementations, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine may operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system800includes a processing device802, a main memory804(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), a static memory806(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device818, which communicate with each other via a bus830. Processing device802represents one or more processors such as a microprocessor, a central processing unit, or the like. 
More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device802may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device802may be configured to execute instructions826for performing the operations and steps described herein. The computer system800may further include a network interface device808to communicate over the network820. The computer system800also may include a video display unit810(e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device812(e.g., a keyboard), a cursor control device814(e.g., a mouse), a graphics processing unit822, a signal generation device816(e.g., a speaker), a video processing unit828, and an audio processing unit832. The data storage device818may include a machine-readable storage medium824(also known as a non-transitory computer-readable medium) on which is stored one or more sets of instructions826or software embodying any one or more of the methodologies or functions described herein. The instructions826may also reside, completely or at least partially, within the main memory804and/or within the processing device802during execution thereof by the computer system800, the main memory804and the processing device802also constituting machine-readable storage media. In some implementations, the instructions826include instructions to implement functionality corresponding to the present disclosure. While the machine-readable storage medium824is shown in an example implementation to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine and the processing device802to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm may be a sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Such quantities may take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. Such signals may be referred to as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the present disclosure, it is appreciated that throughout the description, certain terms refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage devices. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various other systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein. The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc. In the foregoing disclosure, implementations of the disclosure have been described with reference to specific example implementations thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of implementations of the disclosure as set forth in the following claims. Where the disclosure refers to some elements in the singular tense, more than one element can be depicted in the figures and like elements are labeled with like numerals. The disclosure and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 41,737 |
11863188 | DETAILED DESCRIPTION Hereinafter, various embodiments of the inventive concept are described with reference to the accompanying drawings. FIG.1is a circuit diagram for describing a flip-flop circuit1according to an example embodiment. Referring toFIG.1, the flip-flop circuit1may receive a data signal D and a clock signal CK, and may output an output signal Q. The flip-flop circuit1may include a master latch circuit10, a slave latch circuit20, an output inverter30, and a control signal generation circuit40. The master latch circuit10may transmit the data signal D to a second node DI based on a first control signal nCK and a second control signal bCK. The master latch circuit10may include a first tri-state inverter11, a first inverter12, and a second tri-state inverter13. The first tri-state inverter11may invert the data signal D and transmit an inverted signal with respect to the data signal D to a first node DN based on the first control signal nCK and the second control signal bCK. For example, when the first control signal nCK has a first logic level (e.g., a logic high level), the first tri-state inverter11may be in an active state in which the first tri-state inverter11operates as an inverter with respect to a data signal D having the first logic level. That is, the first tri-state inverter11may invert the data signal D having the first logic level and transmit the inverted signal to the first node DN. Thus, a signal of the first node DN may have a second logic level (e.g., a logic low level). In the disclosure, a first logic level may refer to a logic high level, and a second logic level may refer to a logic low level. When the first control signal nCK has the second logic level, the first tri-state inverter11may be in an inactive state or a floating state in which the first tri-state inverter11does not operate as an inverter with respect to the data signal D having the first logic level. That is, when the first control signal nCK has the second logic level, the first tri-state inverter11may not transmit the data signal D having the first logic level to the first node DN. When the second control signal bCK has the second logic level, the first tri-state inverter11may be in an active state in which the first tri-state inverter11operates as an inverter with respect to a data signal D having the second logic level. That is, the first tri-state inverter11may invert the data signal D having the second logic level and transmit the inverted signal to the first node DN. Thus, a signal of the first node DN may have the first logic level. When the second control signal bCK has the first logic level, the first tri-state inverter11may be in an inactive state or a floating state in which the first tri-state inverter11does not operate as an inverter with respect to the data signal D having the second logic level. That is, when the second control signal bCK has the first logic level, the first tri-state inverter11may not transmit the data signal D having the second logic level to the first node DN. The first inverter12may invert the signal of the first node DN and transmit the inverted signal to the second node DI. A signal of the second node DI may have the same logic level as the data signal D. The second tri-state inverter13may invert the signal of the second node DI based on the first control signal nCK and the second control signal bCK and transmit the inverted signal to the first node DN. 
For example, when the second control signal bCK has the first logic level, the second tri-state inverter13may be in an active state in which the second tri-state inverter13operates as an inverter with respect to a signal of the second node DI having the first logic level. That is, the second tri-state inverter13may invert the signal of the second node DI having the first logic level and transmit the inverted signal to the first node DN. Thus, a signal of the first node DN may have the second logic level. When the second control signal bCK has the second logic level, the second tri-state inverter13may be in an inactive state or a floating state in which the second tri-state inverter13does not operate as an inverter with respect to the signal of the second node DI having the first logic level. That is, when the second control signal bCK has the second logic level, the second tri-state inverter13may not transmit the signal of the second node DI having the first logic level to the first node DN. When the first control signal nCK has the second logic level, the second tri-state inverter13may be in an active state in which the second tri-state inverter13operates as an inverter with respect to a signal of the second node DI having the second logic level. That is, the second tri-state inverter13may invert the signal of the second node DI having the second logic level and transmit the inverted signal to the first node DN. Thus, a signal of the first node DN may have the first logic level. When the first control signal nCK has the first logic level, the second tri-state inverter13may be in an inactive state or a floating state in which the second tri-state inverter13does not operate as an inverter with respect to the signal of the second node DI having the second logic level. That is, when the first control signal nCK has the first logic level, the second tri-state inverter13may not transmit the signal of the second node DI having the second logic level to the first node DN. When the second tri-state inverter13is in the active state, the first inverter12and the second tri-state inverter13may operate as a latch circuit for maintaining a signal level of the first node DN and the second node DI. The slave latch circuit20may include a third tri-state inverter21, a second inverter22, and a fourth tri-state inverter23. The third tri-state inverter21may invert the signal of the second node DI based on the first control signal nCK and the second control signal bCK and transmit the inverted signal to a third node QN. The operation of the third tri-state inverter21may be the same as the operation of the second tri-state inverter13described above. The second inverter22may invert a signal of the third node QN and transmit the inverted signal to a fourth node QI. The operation of the second inverter22may be the same as the operation of the first inverter12described above. The fourth tri-state inverter23may invert a signal of a fourth node QI based on the first control signal nCK and the second control signal bCK and transmit the inverted signal to the third node QN. The operation of the fourth tri-state inverter23may be the same as the operation of the first tri-state inverter11described above. The output inverter30may invert a signal of the third node QN to generate an output signal Q. The control signal generation circuit40may receive a clock signal CK and the signal of the first node DN and generate the first control signal nCK and the second control signal bCK. 
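The active-state and floating-state conditions described above for the first through fourth tri-state inverters11,13,21, and23may be summarized in a short behavioral sketch before the control signal generation circuit40is described in detail. The Python fragment below is an illustrative assumption only, not the transistor-level implementation: the function names are hypothetical, 1 and 0 stand for the first and second logic levels, and None stands for a floating output whose node keeps its previous value.

def tri_state_type_a(d, nck, bck):
    # First and fourth tri-state inverters 11 and 23: the inverter is active
    # for a high input while nCK is high, and active for a low input while
    # bCK is low; otherwise the output floats and the node keeps its value.
    if d == 1 and nck == 1:
        return 0
    if d == 0 and bck == 0:
        return 1
    return None

def tri_state_type_b(d, nck, bck):
    # Second and third tri-state inverters 13 and 21: active for a high input
    # while bCK is high, and active for a low input while nCK is low.
    if d == 1 and bck == 1:
        return 0
    if d == 0 and nck == 0:
        return 1
    return None

# With nCK = 1 and bCK = 0 (clock signal CK low), type-A inverters drive
# their outputs and type-B inverters float.
print(tri_state_type_a(1, nck=1, bck=0))   # -> 0
print(tri_state_type_b(1, nck=1, bck=0))   # -> None (floating)

In this sketch, the type-A behavior corresponds to the first and fourth tri-state inverters11and23, and the type-B behavior corresponds to the second and third tri-state inverters13and21.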
The control signal generation circuit40may include a third inverter41and a NOR circuit42. The third inverter41may generate the first control signal nCK by inverting the clock signal CK. The NOR circuit42may generate the second control signal bCK by performing a NOR operation on the signal of the first node DN and the first control signal nCK. Thus, the second control signal bCK may have the first logic level, only when the signal of the first node DN has the second logic level, and the first control signal nCK has the second logic level. That is, while the clock signal CK is toggled, a section in which the second control signal bCK has the first logic level may be decreased, and thus, currents consumed by the first through fourth tri-state inverters11,13,21, and23according to the second control signal bCK may be reduced. The flip-flop circuit1according to an example embodiment may generate the first control signal nCK and the second control signal bCK based on the first node DN and the clock signal CK, and thus, may perform a flip-flop circuit operation consuming low power. FIGS.2A through2Dare circuit diagrams for describing operations of the flip-flop circuit1according to an example embodiment.FIG.2Adescribes the operation of the flip-flop circuit1when a data signal D has a first logic level, and a clock signal CK has a second logic level,FIG.2Bdescribes the operation of the flip-flop circuit1when the data signal D has the first logic level, and the clock signal CK transitions to the first logic level,FIG.2Cdescribes the operation of the flip-flop circuit1when the data signal D has the second logic level, and the clock signal CK has the second logic level, andFIG.2Ddescribes the operation of the flip-flop circuit1when the data signal D has the second logic level, and the clock signal CK transitions to the first logic level. InFIGS.2A through2D, the first logic level may be indicated as “1,” and the second logic level may be indicated as “0.” Referring toFIG.2A, when the clock signal CK has the second logic level, a logic level of a first control signal nCK may be the first logic level via the third inverter41, and a logic level of a second control signal bCK may be the second logic level via the NOR circuit42. When the logic level of the first control signal nCK is the first logic level, and the logic level of the second control signal bCK is the second logic level, the first tri-state inverter11and the fourth tri-state inverter23may be in an active state in which the first tri-state inverter11and the fourth tri-state inverter23operate as inverters regardless of a logic level of the data signal D. The second tri-state inverter13and the third tri-state inverter21may be in an inactive state or a floating state in which the second tri-state inverter13and the third tri-state inverter21do not operate as inverters regardless of the logic level of the data signal D. The first tri-state inverter11may transmit an inverted signal of the data signal D to the first node DN, and thus, a logic level of a signal of the first node DN may be the second logic level. The first inverter12may transmit an inverted signal of the signal of the first node DN to the second node DI, and thus, a logic level of a signal of the second node DI may be the first logic level. The fourth tri-state inverter23may provide an inverted signal of a signal of the fourth node QI to the third node QN. 
Thus, the second inverter22and the fourth tri-state inverter23may perform a latch operation for maintaining logic levels of signals of the third node QN and the fourth node QI. The output inverter30may perform a hold operation for maintaining a logic level of an output signal Q as a logic level of a prior output signal Q− by inverting the signal of the third node QN. Referring toFIG.2B, when the clock signal CK transitions to the first logic level, the first control signal nCK may have the second logic level via the third inverter41. When the first control signal nCK has the second logic level, the first tri-state inverter11may be in an inactive state with respect to a data signal D having the first logic level, and thus, the logic level of the signal of the first node DN may be maintained as the second logic level. The first control signal nCK may have the second logic level, and the signal of the first node DN may have the second logic level, and thus, the second control signal bCK may have the first logic level via the NOR circuit42. When the logic level of the first control signal nCK is the second logic level, and the logic level of the second control signal bCK is the first logic level, the first tri-state inverter11and the fourth tri-state inverter23may be in an inactive state or a floating state in which the first tri-state inverter11and the fourth tri-state inverter23do not operate as inverters regardless of the logic level of the data signal D. The second tri-state inverter13and the third tri-state inverter21may be in an active state in which the second tri-state inverter13and the third tri-state inverter21operate as inverters regardless of the logic level of the data signal D. The first inverter12and the second tri-state inverter13may perform a latch operation for maintaining the logic levels of the signals of the first node DN and the second node DI. The third tri-state inverter21may invert the signal of the second node DI and transmit the inverted signal to the third node QN, and thus, a logic level of a signal of the third node QN may be the second logic level. The output inverter30may generate the output signal Q by inverting the signal of the third node QN, and thus, a logic level of the output signal Q may be the first logic level. Referring toFIGS.2A and2B, when the data signal D has the first logic level, the output signal Q may have the first logic level by being synchronized to the clock signal CK at a timing at which the clock signal CK transitions from the second logic level to the first logic level. Referring toFIG.2C, when the clock signal CK has the second logic level, a logic level of the first control signal nCK may be the first logic level, and a logic level of the second control signal bCK may be the second logic level, and thus, the same operation asFIG.2Amay be performed. For example, the first tri-state inverter11may be in an active state with respect to a data signal D having the second logic level. Thus, a logic level of a signal of the first node DN may be the first logic level. The first inverter12may invert the signal of the first node DN and transmit the inverted signal to the second node DI, and thus, a logic level of the second node DI may be the second logic level. The logic level of the output signal Q may be maintained as a logic level of the prior output signal Q− via the second inverter22, the fourth tri-state inverter23, and the output inverter30. 
That is, when the clock signal CK has the second logic level, the flip-flop circuit1may perform a hold operation for maintaining the logic level of the output signal Q as the logic level of the prior output signal Q−. The prior output signal Q− may indicate a logic level of the output signal Q determined by a prior active edge of the clock signal CK. Referring toFIG.2D, when the logic level of the clock signal CK transitions to the first logic level, the logic level of the first control signal nCK may transition to the second logic level via the third inverter41. When the logic level of the first control signal nCK is the second logic level, the second tri-state inverter13may be in an active state with respect to a signal of the second node DI having the second logic level. Thus, via a latch structure formed by the second tri-state inverter13and the first inverter12, the logic level of the signal of the first node DN may be maintained as the first logic level, and the logic level of the signal of the second node DI may be maintained as the second logic level. Because the logic level of the signal of the first node DN is the first logic level, and the logic level of the first control signal nCK is the second logic level, the logic level of the second control signal bCK may be maintained as the second logic level via the NOR circuit42. Because the logic level of the first control signal nCK is the second logic level, the second tri-state inverter13and the third tri-state inverter21may operate as inverters with respect to a signal having the second logic level. Because the third tri-state inverter21may invert the signal of the second node DI having the second logic level and transmit the inverted signal to the third node QN, a signal of the third node QN may have the first logic level. Because the logic level of the second control signal bCK is the second logic level, the first tri-state inverter11and the fourth tri-state inverter23may operate as inverters with respect to a signal having the second logic level. Thus, via a latch structure formed by the second inverter22and the fourth tri-state inverter23, a logic level of the signal of the third node QN may be maintained as the first logic level, and a logic level of a signal of the fourth node QI may be maintained as the second logic level. Because the output inverter30may generate the output signal Q by inverting the signal of the third node QN, the logic level of the output signal Q may be the second logic level. Referring toFIGS.2C and2D, when the data signal D has the second logic level, the output signal Q may have the second logic level by being synchronized to the clock signal CK at a timing at which the clock signal CK transitions from the second logic level to the first logic level. Also, referring toFIGS.2A through2D, the second control signal bCK may have the first logic level, only when the logic level of the clock signal CK is the first logic level, and the logic level of the data signal D is the first logic level. Thus, the power consumed by the first through fourth tri-state inverters11,13,21, and23according to the second control signal bCK may be reduced. FIGS.3A through3Care views for describing an operation of a normal flip-flop circuit1-2.FIG.3Ais a circuit diagram of the normal flip-flop circuit1-2,FIG.3Bis a diagram for describing control signals generated by a clock buffer40-2included in the normal flip-flop circuit1-2, andFIG.3Cis a detailed circuit diagram of the normal flip-flop circuit1-2. 
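Before turning to the normal flip-flop circuit1-2ofFIGS.3A through3C, the behavior described with reference toFIGS.2A through2Dmay be summarized in a minimal behavioral model. The Python sketch below assumes ideal, zero-delay gates, uses 1 for the first logic level and 0 for the second logic level, and uses class and method names that are illustrative assumptions rather than part of the disclosed circuit.

class FlipFlop1:
    # Behavioral sketch of the flip-flop circuit 1 of FIG.1, not the
    # transistor-level implementation.
    def __init__(self):
        self.dn, self.di = 0, 0    # first node DN, second node DI
        self.qn, self.qi = 1, 0    # third node QN, fourth node QI

    def evaluate(self, d, ck):
        nck = 1 - ck                                   # third inverter 41
        bck = 1 if (self.dn == 0 and nck == 0) else 0  # NOR circuit 42
        if ck == 0:
            # Master latch transparent (FIGS.2A and 2C); slave latch holds.
            self.dn, self.di = 1 - d, d
        else:
            # Slave latch transparent (FIGS.2B and 2D); master latch holds.
            self.qn, self.qi = 1 - self.di, self.di
        q = 1 - self.qn                                # output inverter 30
        return q, nck, bck

ff = FlipFlop1()
print(ff.evaluate(d=0, ck=0))   # FIG.2C: hold
print(ff.evaluate(d=0, ck=1))   # FIG.2D: Q captures 0, bCK stays 0
print(ff.evaluate(d=1, ck=0))   # FIG.2A: hold (Q remains 0)
print(ff.evaluate(d=1, ck=1))   # FIG.2B: Q captures 1, bCK goes to 1

Tracing the FIG.2C-to-2D and FIG.2A-to-2B sequences with this sketch reproduces the hold and capture behavior described above and shows that the second control signal bCK is driven to the first logic level only while both the clock signal CK and the data signal D have the first logic level.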
Referring toFIG.3A, the normal flip-flop circuit1-2may include the clock buffer40-2. The clock buffer40-2may receive a clock signal CK and generate a first control signal nCK and a second control signal bCK. The clock buffer40-2may include a first clock inverter41-2and a second clock inverter42-2. The first clock inverter41-2may generate the first control signal nCK by inverting the clock signal CK, and the second clock inverter42-2may generate the second control signal bCK by inverting the first control signal nCK. Referring toFIG.3B, when the clock signal CK transitions from a first logic level (e.g., a logic high level) to a second logic level (e.g., a logic low level), a logic level of each of the first control signal nCK and the second control signal bCK may also transition from one state to another. For example, when the clock signal CK transitions from high to low, the logic level of the first control signal nCK may transition from low to high, and the logic level of the second control signal bCK may transition from high to low. A data signal D may be maintained as a logic high level from a time prior to a setup time Tsetup based on a descending edge of the clock signal CK. A timing at which the logic level of the first control signal nCK transitions may be delayed compared to a timing at which the logic level of the clock signal CK transitions. Also, a timing at which the logic level of the second control signal bCK transitions may be further delayed relative to the timing at which the logic level of the first control signal nCK transitions. The timings at which the logic levels transition may be different from each other, and thus, there may be a section at which both of the first control signal nCK and the second control signal bCK are recognized as the first logic level. For example, inFIG.3B, at a first time point t1, both of the first control signal nCK and the second control signal bCK may be recognized as the first logic level. FIG.3Cis the circuit diagram of the normal flip-flop circuit1-2before and after a first time point t1illustrated inFIG.3B. Referring toFIG.3C, the second tri-state inverter13may include first and second N-type transistors N11and N12and first and second P-type transistors P11and P12. The third tri-state inverter21may include third and fourth N-type transistors N13and N14and third and fourth P-type transistors P13and P14. The clock signal CK has the second logic level at the first time point t1, and thus, the normal flip-flop circuit1-2may have to perform a hold operation for maintaining an output signal Q as a prior output signal Q−, that is, the second logic level. However, the logic level of the second control signal bCK at the first time point t1is the first logic level, and thus, the fourth N-type transistor N14included in the third tri-state inverter21may be turned on, and the third node QN may be discharged. Thus, a signal of the third node QN may transition to the second logic level, and the output signal Q may transition to the first logic level. That is, at the first time point t1, the normal flip-flop circuit1-2may fail to hold the prior output signal Q− as the output signal Q. FIG.4is a timing diagram for describing a flip-flop circuit according to an example embodiment. Referring toFIG.4, a data signal D may be maintained as a first logic level for a predetermined time period from a time prior to a setup time Tsetup based on a time point at which a clock signal CK transitions from the first logic level to a second logic level. 
The data signal D may have the second logic level before being maintained as the first logic level. That is, the timing diagram ofFIG.4illustrates a sequential process of the operation of the flip-flop circuit1illustrated inFIG.2Dand the operation of the flip-flop circuit1illustrated inFIG.2A. Referring toFIG.4, even when the clock signal CK transitions from the first logic level to the second logic level, the second control signal bCK may not transition and may be maintained as the second logic level. Thus, the situation illustrated inFIG.3B, in which both of the first control signal nCK and the second control signal bCK are recognized as the first logic level, may not occur. FIG.5is a circuit diagram for describing the flip-flop circuit1performing a hold operation, according to an example embodiment. Referring toFIGS.2D and4, when the clock signal CK has the first logic level, and the data signal D has the second logic level, the logic level of the third node QN may be the first logic level, and a logic level of the output signal Q may be the second logic level. Referring toFIGS.2A,4, and5, when the clock signal CK transitions from the first logic level to the second logic level, the output signal Q may be maintained as the logic level of the prior output signal Q−. That is, the logic level of the output signal Q may be maintained as the second logic level. Also, the logic level of the second control signal bCK may be maintained as the second logic level, and thus, the fourth N-type transistor N14may not be turned on, unlike the normal flip-flop circuit1-2ofFIG.3C. Here, because the third node QN may not be discharged, the flip-flop circuit1according to an embodiment may hold the prior output signal Q− as the output signal Q at the first time point t1. FIG.6is a circuit diagram for describing a flip-flop circuit1-3according to an example embodiment. Referring toFIG.6, the flip-flop circuit1-3may be a scan flip-flop circuit configured to receive a data signal D, a scan input signal SI, and a scan enable signal SE and output an output signal Q according to a first control signal nCK and a second control signal bCK. The flip-flop circuit1-3may include a scan inverter100. The scan inverter100may receive the scan enable signal SE and invert the scan enable signal SE to generate an inverted scan enable signal nSE. The scan enable signal SE may indicate a first operation mode or a second operation mode according to a logic level. For example, when the scan enable signal SE has a second logic level, the scan enable signal SE may indicate the first operation mode, and when the scan enable signal SE has a first logic level, the scan enable signal SE may indicate the second operation mode. For example, the first operation mode may be a normal operation mode in which data transmission is performed, and the second operation mode may be a scan test mode in which a test operation is performed. However, the one or more embodiments are not limited thereto, and the first operation mode and the second operation mode may be variously configured. In some embodiments, the first operation mode may be a scan test mode, and the second operation mode may be a normal operation mode. When the scan enable signal SE indicates the normal operation mode, the flip-flop circuit1-3may perform a normal operation of latching the data signal D and providing the output signal Q. 
When the scan enable signal SE indicates the scan test mode, the flip-flop circuit1-3may perform a scan test operation of latching the scan input signal SI and providing the output signal Q. The flip-flop circuit1-3may additionally include an input selection circuit14. The input selection circuit14may select one of the data signal D and the scan input signal SI as an input signal according to the scan enable signal SE and the inverted scan enable signal nSE. The input selection circuit14may invert the selected input signal based on the first control signal nCK or the second control signal bCK and transmit the inverted signal to the first node DN. When the scan enable signal SE has the second logic level, the input selection circuit14may operate as a tri-state inverter (for example, the first tri-state inverter11ofFIG.5) inverting the data signal D based on the first control signal nCK and the second control signal bCK. FIG.7is a circuit diagram for describing a flip-flop circuit1-4according to an example embodiment. Referring toFIG.7, the flip-flop circuit1-4may include a second tri-state inverter13-2and a third tri-state inverter21-2. Unlike the second tri-state inverter13ofFIG.6, the second tri-state inverter13-2may be realized as three transistors. For example, the second tri-state inverter13-2may include the second N-type transistor N12, and the first and second P-type transistors P11and P12from among the transistors included in the second tri-state inverter13ofFIG.6. That is, the first N-type transistor N11from among the transistors included in the second tri-state inverter13ofFIG.6may be omitted according to this embodiment. InFIG.6, a gate terminal of the first N-type transistor N11may be connected to the second node DI. InFIG.7, a gate terminal of the second P-type transistor P12may be connected to the second node DI. Referring back toFIG.2B, when the logic level of the clock signal CK is the first logic level, and the logic level of the data signal D is the first logic level, the logic level of the second node DI may be the first logic level, and the logic level of the first node DN may be the second logic level. That is, the second tri-state inverter13may operate as an inverter. Also, when the logic level of the clock signal CK is the first logic level, and the logic level of the data signal D is the first logic level, the logic level of the second control signal bCK may be the first logic level. Referring toFIG.7, the second N-type transistor N12may be turned on according to the second control signal bCK having the first logic level, and thus, a logic level of the first node DN may be maintained as the second logic level. That is, even when the first N-type transistor N11is omitted, the second tri-state inverter13-2ofFIG.7may operate as an inverter. Unlike the third tri-state inverter21ofFIG.6, the third tri-state inverter21-2may be realized with only three transistors. For example, the third tri-state inverter21-2may include the fourth N-type transistor N14, and the third and fourth P-type transistors P13and P14from among the transistors included in the third tri-state inverter21ofFIG.6. That is, the third N-type transistor N13from among the transistors included in the third tri-state inverter21ofFIG.6may be omitted according to this embodiment. InFIG.6, the third N-type transistor N13may have a gate terminal connected to the second node DI. InFIG.7, a gate terminal of the fourth P-type transistor P14may be connected to the second node DI. 
Referring back toFIG.2B, when the logic level of the clock signal CK is the first logic level, and the logic level of the data signal D is the first logic level, the logic level of the second node DI may be the first logic level, and the logic level of the first node DN may be the second logic level. That is, the third tri-state inverter21may operate as an inverter. Also, when the logic level of the clock signal CK is the first logic level, and the logic level of the data signal D is the first logic level, the logic level of the second control signal bCK may be the first logic level. Referring toFIG.7, the fourth N-type transistor N14may be turned on according to the second control signal bCK having the first logic level, and thus, a logic level of the third node QN may be maintained as the second logic level. That is, even when the third N-type transistor N13is omitted, the third tri-state inverter21-2ofFIG.7may operate as an inverter. The flip-flop circuit1-4according to an example embodiment may realize the second and third tri-state inverters13-2and21-2by using fewer transistors, and therefore, may realize high integration. In some embodiments, the second or third tri-state inverter13-2or21-2included in the flip-flop circuit1-4ofFIG.7may be substituted by one or more components of the flip-flop circuit1ofFIG.5. FIG.8is a circuit diagram for describing a flip-flop circuit1-5according to an example embodiment. Referring toFIG.8, unlike the flip-flop circuit1-3ofFIG.6, the flip-flop circuit1-5may include a selection circuit14-2and a fourth tri-state inverter23-2. The selection circuit14-2may include first through fourth N-type transistors N21through N24and first through fourth P-type transistors P21through P24. An inverted scan enable signal nSE may be input to a gate terminal of the first N-type transistor N21, a data signal D may be input to a gate terminal of the second N-type transistor N22, a scan enable signal SE may be input to a gate terminal of the third N-type transistor N23, and a scan input signal SI may be input to a gate terminal of the fourth N-type transistor N24. The first through fourth N-type transistors N21through N24may form a pull-down portion14-3, and an end of the pull-down portion14-3may be connected to the first node DN, and the other end of the pull-down portion14-3may be connected to a first internal node M. A data signal D may be input to a gate terminal of the first P-type transistor P21, a scan enable signal SE may be input to a gate terminal of the second P-type transistor P22, a scan input signal SI may be input to a gate terminal of the third P-type transistor P23, and an inverted scan enable signal nSE may be input to a gate terminal of the fourth P-type transistor P24. The first through fourth P-type transistors P21through P24may form a pull-up portion14-4, and an end of the pull-up portion14-4may be connected to the first node DN, and the other end of the pull-up portion14-4may be connected to a second internal node N. The fourth tri-state inverter23-2may include fifth and sixth N-type transistors N25and N26and fifth and sixth P-type transistors P25and P26. A gate terminal of the fifth N-type transistor N25may receive a first control signal nCK, a source terminal of the fifth N-type transistor N25may be connected to a negative power node, and a drain terminal of the fifth N-type transistor N25may be connected to the first internal node M. 
A gate terminal of the sixth N-type transistor N26may be connected to the fourth node QI, a source terminal of the sixth N-type transistor N26may be connected to the first internal node M, and a drain terminal of the sixth N-type transistor N26may be connected to the third node QN. A gate terminal of the fifth P-type transistor P25may be connected to the fourth node QI, a source terminal of the fifth P-type transistor P25may be connected to the second internal node N, and a drain terminal of the fifth P-type transistor P25may be connected to the third node QN. A gate terminal of the sixth P-type transistor P26may receive a second control signal bCK, a source terminal of the sixth P-type transistor P26may be connected to the positive power node, and a drain terminal of the sixth P-type transistor P26may be connected to the second internal node N. When a logic level of the second control signal bCK is a second logic level, the pull-up portion14-4may be connected to the positive power node, may invert one of the data signal D and the scan input signal SI, and transmit the inverted signal to the first node DN. When a logic level of the first control signal nCK is a first logic level, the pull-down portion14-3may be connected to the negative power node, may invert one of the data signal D and the scan input signal SI, and transmit the inverted signal to the first node DN. That is, the selection circuit14-2and the fourth tri-state inverter23-2may share the same positive power node and negative power node, and thus, the structure of a power delivery network for providing power to the flip-flop circuit1-5may be simplified. In some embodiments, the selection circuit14-2and the fourth tri-state inverter23-2included in the flip-flop circuit1-5ofFIG.8may be substituted by one or more components of the flip-flop circuit1-3ofFIG.6and/or the flip-flop circuit1-4ofFIG.7. FIG.9is a circuit diagram for describing a flip-flop circuit1-6according to an example embodiment. Referring toFIG.9, the flip-flop circuit1-6may further include a conductive line path Path. The conductive line path Path may connect a third internal node A with a fourth internal node B. The third internal node A may be formed between the first P-type transistor P21and the second P-type transistor P22serially connected with each other. The fourth internal node B may be formed between the first N-type transistor N21and the second N-type transistor N22serially connected with each other. In a normal operation mode, a scan enable signal SE may have a second logic level, and an inverted scan enable signal nSE may have a first logic level. Thus, both of the first N-type transistor N21and the second P-type transistor P22may be turned on. Due to on-resistance of the first N-type transistor N21and the second P-type transistor P22, a speed at which the data signal D is transmitted to the first node DN in the normal operation mode may be reduced. According to an example embodiment of the inventive concept, the conductive line path Path may have a resistance lower than the on-resistance of the first N-type transistor N21and the second P-type transistor P22. Thus, effects of the on-resistance of the first N-type transistor N21and the second P-type transistor P22on the data transmission path may be reduced via the conductive line path Path, and thus, the performance of the flip-flop circuit1-6may be improved. That is, the speed at which the data signal D is transmitted to the first node DN in the normal operation mode may be improved. 
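The multiplexing behavior of the input selection circuits ofFIGS.6,8, and9may be sketched behaviorally as follows. The Python fragment below is an illustrative assumption (ideal logic levels, with None standing for a floating node, and a hypothetical function name), not the transistor-level circuit; it models only the selection between the data signal D and the scan input signal SI together with the nCK/bCK gating described above.

def input_selection(d, si, se, nck, bck):
    # Returns the value driven onto the first node DN, or None if both the
    # pull-up and pull-down paths are off and the node is left floating.
    x = si if se == 1 else d          # SE selects D (normal) or SI (scan)
    if x == 1 and nck == 1:
        return 0                      # pull-down portion conducts
    if x == 0 and bck == 0:
        return 1                      # pull-up portion conducts
    return None

# Normal operation mode with the clock signal CK low (nCK = 1, bCK = 0):
print(input_selection(d=1, si=0, se=0, nck=1, bck=0))   # -> 0 (DN = ~D)
# Scan test mode with the clock signal CK low:
print(input_selection(d=1, si=0, se=1, nck=1, bck=0))   # -> 1 (DN = ~SI)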
In some embodiments, a selection circuit14-3included in the flip-flop circuit1-6ofFIG.9may be substituted by one or more components of the flip-flop circuit1-3ofFIG.6or the flip-flop circuit1-4ofFIG.7. FIG.10is a circuit diagram for describing a flip-flop circuit1-7according to an example embodiment. Referring toFIG.10, the flip-flop circuit1-7may include a selection circuit14-4, unlike the flip-flop circuit1-6ofFIG.9. The selection circuit14-4may include a delay circuit15and an inversion circuit16. The delay circuit15may receive a scan enable signal SE and a scan input signal SI. The delay circuit15may include a NAND circuit15-1and a fourth inverter15-2. When the scan enable signal SE has a logic low level, that is, the flip-flop circuit1-7is in a normal operation mode, the delay circuit15may output a signal having the logic low level. When the scan enable signal SE has a logic high level, that is, the flip-flop circuit1-7is in a scan test mode, the delay circuit15may receive a scan input signal SI and output a delayed scan input signal dSI. The delayed scan input signal dSI may be input to the inversion circuit16. The inversion circuit16may transmit one of the delayed scan input signal dSI and a data signal D to the first node DN according to an inverted scan enable signal nSE. A timing at which the delayed scan input signal dSI is provided to the inversion circuit16may be delayed compared to a timing at which the positive power node or the negative power node is connected to the inversion circuit16through the second internal node N or the first internal node M. Thus, via the delay circuit15, a hold time period during which the scan input signal SI has to be maintained after a clock signal CK transitions may be secured, and thus, the performance of a scan test operation using the flip-flop circuit1-7may be improved. In some embodiments, the selection circuit14-4included in the flip-flop circuit1-7ofFIG.10may be selectively included in the flip-flop circuit1-5ofFIG.8or the flip-flop circuit1-6ofFIG.9. FIG.11is a circuit diagram for describing a flip-flop circuit1-8according to an example embodiment. Referring toFIG.11, the flip-flop circuit1-8may include a NOR circuit42-3. The NOR circuit42-3may be an embodiment of the NOR circuit42included in the flip-flop circuit1-4ofFIG.7. The NOR circuit42-3may include fifth and sixth N-type transistors N15and N16, and a fifth P-type transistor P15. A first control signal nCK may be input to a gate terminal of the fifth N-type transistor N15, a negative power node may be connected to a source terminal of the fifth N-type transistor N15, and a node generating a second control signal bCK may be connected to a drain terminal of the fifth N-type transistor N15. A gate terminal of the sixth N-type transistor N16may be connected to the first node DN, a source terminal of the sixth N-type transistor N16may be connected to the negative power node, and a drain terminal of the sixth N-type transistor N16may be connected to the node generating the second control signal bCK. The first control signal nCK may be input to a gate terminal of the fifth P-type transistor P15, the second node DI may be connected to a source terminal of the fifth P-type transistor P15, and the node generating the second control signal bCK may be connected to a drain terminal of the fifth P-type transistor P15. A general two-input NOR circuit may be realized as four transistors. However, the NOR circuit42-3according to an example embodiment may be realized as three transistors. 
The NOR circuit42included in the flip-flop circuit1ofFIG.1may generate the second control signal bCK having the first logic level, when a signal of the first node DN has the second logic level, and the first control signal nCK has the second logic level. Referring toFIGS.2B,2D, and11, when the first control signal nCK has the second logic level, and the signal of the first node DN has the second logic level, the second control signal bCK may have the first logic level. That is, the NOR circuit42-3ofFIG.11may operate in the same way as the NOR circuit42ofFIG.1. The flip-flop circuit1-8according to an example embodiment may realize the NOR circuit42-3by using fewer transistors, and thus, may realize high integration. In some embodiments, the NOR circuit42-3included in the flip-flop circuit1-8ofFIG.11may be substituted by one or more components of the flip-flop circuits1, and1-3through1-7ofFIGS.1,2A through2D, and5through10. FIG.12is a circuit diagram for describing a flip-flop circuit1-9according to an example embodiment. Referring toFIG.12, the flip-flop circuit1-9may include the second and third tri-state inverters13-2and21-2included in the flip-flop circuit1-4ofFIG.7, the selection circuit14-2and the fourth tri-state inverter23-2included in the flip-flop circuit1-5ofFIG.8, and the NOR circuit42-3included in the flip-flop circuit1-8ofFIG.11. The flip-flop circuit1-9according to an example embodiment may realize the second and third tri-state inverters13-2and21-2and the NOR circuit42-3by using fewer transistors, and thus, may realize high integration. Also, the flip-flop circuit1-9may include the selection circuit14-2and the fourth tri-state inverter23-2sharing the same positive power node and the same negative power node, and thus, the structure of a power delivery network for providing power to the flip-flop circuit1-9may be simplified. FIG.13is a circuit diagram for describing a flip-flop circuit1-10according to an example embodiment. Referring toFIG.13, the flip-flop circuit1-10may include a NOR circuit12-2and a control signal generation circuit40-3. According to an embodiment, the NOR circuit12-2may replace the first inverter12ofFIG.6. For example, the NOR circuit12-2may receive a signal of the first node DN and a reset signal Reset and transmit a result of a NOR operation on the signal of the first node DN and the reset signal Reset to the second node DI. That is, when the reset signal Reset has a first logic level, a signal of the second node DI may have a second logic level regardless of a data signal D. The control signal generation circuit40-3may include a NOR circuit41-2. The NOR circuit41-2may be arranged at the location of the third inverter41ofFIG.6. For example, the NOR circuit41-2may receive a clock signal CK and a reset signal Reset and generate a result of a NOR operation on the clock signal CK and the reset signal Reset as a first control signal nCK. That is, when the reset signal Reset has the first logic level, a logic level of the first control signal nCK may be maintained as the second logic level. When the reset signal Reset has the first logic level, both of the signal of the second node DI and the first control signal nCK have the second logic level, and thus, the third node QN may have the first logic level via the third tri-state inverter21, and an output signal Q may be reset as the second logic level via the output inverter30. 
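The reset path ofFIG.13may likewise be sketched behaviorally. In the Python fragment below (an illustrative sketch assuming ideal gates; the helper names are not part of the disclosure), asserting the reset signal Reset forces the second node DI and the first control signal nCK low, so the third tri-state inverter21drives the third node QN high and the output inverter30resets the output signal Q to the second logic level, regardless of the data signal D or the clock signal CK.

def nor(a, b):
    return 1 if (a == 0 and b == 0) else 0

def reset_path(dn, ck, reset):
    di = nor(dn, reset)    # NOR circuit 12-2 in place of the first inverter 12
    nck = nor(ck, reset)   # NOR circuit 41-2 in place of the third inverter 41
    # With DI and nCK both low, the third tri-state inverter 21 drives QN high.
    qn = 1 if (di == 0 and nck == 0) else None
    q = None if qn is None else 1 - qn               # output inverter 30
    return di, nck, qn, q

# Asserting Reset forces Q to the second logic level for any DN and CK.
for dn in (0, 1):
    for ck in (0, 1):
        print(reset_path(dn, ck, reset=1))           # -> (0, 0, 1, 0)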
In some embodiments, the NOR circuits12-2and41-2included in the flip-flop circuit1-10ofFIG.13may be substituted by one or more components of the flip-flop circuits1and1-3through1-9ofFIGS.1,2A through2D, and5through12. FIG.14is a circuit diagram for describing a flip-flop circuit2according to an example embodiment. Referring toFIG.14, the flip-flop circuit2according to an example embodiment may include a master latch circuit210, a slave latch circuit220, an input inverter230, an output inverter240, and a control signal generation circuit250. The input inverter230may invert a data signal D and transmit the inverted signal to the first node DN. The master latch circuit210may include a first tri-state inverter211, a first inverter212, and a second tri-state inverter213. The first tri-state inverter211may invert a signal of the first node DN based on a first control signal nCK and a second control signal bCK and transmit the inverted signal to the second node DI. The operation of the first tri-state inverter211may be the same as the operation of the first tri-state inverter11ofFIG.1. The first inverter212may invert a signal of the second node DI and transmit the inverted signal to the third node DB. The second tri-state inverter213may invert a signal of the third node DB based on the first control signal nCK and the second control signal bCK and transmit the inverted signal to the second node DI. The operation of the second tri-state inverter213may be the same as the operation of the second tri-state inverter13ofFIG.1. The slave latch circuit220may include a third tri-state inverter221, a second inverter222, and a fourth tri-state inverter223. The third tri-state inverter221may invert the signal of the third node DB based on the first control signal nCK and the second control signal bCK and transmit the inverted signal to the fourth node QI. The operation of the third tri-state inverter221may be the same as the operation of the third tri-state inverter21ofFIG.1. The second inverter222may invert a signal of the fourth node QI and transmit the inverted signal to a fifth node QN. The fourth tri-state inverter223may invert a signal of the fifth node QN based on the first control signal nCK and the second control signal bCK and transmit the inverted signal to the fourth node QI. The output inverter240may generate an output signal Q by inverting the signal of the fifth node QN. The control signal generation circuit250may generate the first control signal nCK and the second control signal bCK, based on the signal of the first node DN, the signal of the fifth node QN, the signal of the second node DI, and a clock signal CK. The control signal generation circuit250may include a NAND circuit251, a third inverter252, an AND circuit253, and a NOR circuit254. The NAND circuit251may perform a NAND operation on the signal of the first node DN and the signal of the fifth node QN to generate a signal of a sixth node ND. The third inverter252may receive the clock signal CK and invert the clock signal CK to generate the first control signal nCK. The AND circuit253may perform an AND operation on the first control signal nCK and the signal of the sixth node ND to generate a signal of a seventh node NQ. The NOR circuit254may perform a NOR operation on the signal of the second node DI and the signal of the seventh node NQ to generate the second control signal bCK. 
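The control signal generation circuit250ofFIG.14may be summarized by the following behavioral sketch, assuming ideal gates and using 1 and 0 for the first and second logic levels (the function name is an assumption made for illustration). The AND circuit253followed by the NOR circuit254computes the and-or-invert function that the AOI21 circuits ofFIGS.17through21later realize as a single gate.

def control_signals(ck, dn, di, qn):
    nd = 0 if (dn == 1 and qn == 1) else 1       # NAND circuit 251
    nck = 1 - ck                                 # third inverter 252
    nq = 1 if (nck == 1 and nd == 1) else 0      # AND circuit 253
    bck = 1 if (di == 0 and nq == 0) else 0      # NOR circuit 254
    return nck, bck

# FIG.15A (D = 1, CK = 0, so DN = 0 and DI = 1): bCK stays low.
print(control_signals(ck=0, dn=0, di=1, qn=0))   # -> (1, 0)
# FIG.15D (D = 0, CK = 1, so DN = 1, DI = 0, QN = 1): bCK is driven high.
print(control_signals(ck=1, dn=1, di=0, qn=1))   # -> (0, 1)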
The flip-flop circuit2according to an example embodiment may generate the second control signal bCK in synchronization with the first node DN, the second node DI, and the fifth node QN, and thus, compared to when the second control signal bCK is generated by simply inverting the first control signal nCK, the number of times of toggling of the second control signal bCK may be reduced. Thus, the flip-flop circuit2may consume less power. FIGS.15A through15Dare circuit diagrams for describing operations of the flip-flop circuit2according to an example embodiment.FIG.15Adescribes the operation of the flip-flop circuit2when a data signal D has a first logic level, and a clock signal CK has a second logic level,FIG.15Bdescribes the operation of the flip-flop circuit2when the data signal D has the first logic level, and the clock signal CK transitions to the first logic level,FIG.15Cdescribes the operation of the flip-flop circuit2when the data signal D has the second logic level, and the clock signal CK has the second logic level, andFIG.15Ddescribes the operation of the flip-flop circuit2when the data signal D has the second logic level, and the clock signal CK transitions to the first logic level. InFIGS.15A through15D, the first logic level may be indicated as “1,” and the second logic level may be indicated as “0.” Referring toFIG.15A, a logic level of the first node DN may be the second logic level via the input inverter230. Thus, a logic level of the sixth node ND may be the first logic level via the NAND circuit251. When the clock signal CK has the second logic level, a logic level of the first control signal nCK may be the first logic level via the third inverter252. A logic level of the seventh node NQ may be the first logic level via the AND circuit253. A logic level of the second control signal bCK may be the second logic level via the NOR circuit254. Because the logic level of the first control signal nCK is the first logic level, and the logic level of the second control signal bCK is the second logic level, the first tri-state inverter211and the fourth tri-state inverter223may be in an active state, and the second tri-state inverter213and the third tri-state inverter221may be in an inactive state. A signal of the second node DI may have the first logic level via the first tri-state inverter211. A signal of the third node DB may have the second logic level via the first inverter212. The second inverter222and the fourth tri-state inverter223may form a latch structure, and thus, the output signal Q may be maintained as the prior output signal Q−. Referring toFIG.15B, the first control signal nCK may have the second logic level via the third inverter252. A logic level of the seventh node NQ may be the second logic level via the AND circuit253. When the first control signal nCK has the second logic level, the second tri-state inverter213may operate as an inverter with respect to a signal of the third node DB having the second logic level, and thus, the logic levels of the second node DI and the third node DB may be maintained via a latch structure formed by the first inverter212and the second tri-state inverter213. Thus, a logic level of the second control signal bCK may be the second logic level via the NOR circuit254. Because the first control signal nCK and the second control signal bCK may have the second logic level, the first through fourth tri-state inverters211,213,221, and223may operate as inverters with respect to input signals having the second logic level. 
Thus, a signal of the fourth node QI may have the first logic level via the third tri-state inverter221. A signal of the fifth node QN may have the second logic level via the second inverter222. The second inverter222and the fourth tri-state inverter223may form a latch structure, and the logic levels of the fourth node QI and the fifth node QN may be maintained. The output signal Q may have the first logic level via the output inverter240. Referring toFIGS.15A and15B, in a case in which the data signal D has the first logic level, the output signal Q may have the first logic level by being synchronized to the clock signal CK at a timing at which the clock signal CK transitions from the second logic level to the first logic level. Referring toFIG.15C, when the clock signal CK has the second logic level, the first control signal nCK may have the first logic level via the third inverter252. Thus, the first tri-state inverter211may operate as an inverter with respect to a signal of the first node DN having the first logic level. A signal of the second node DI may have the second logic level via the first tri-state inverter211, and a signal of the third node DB may have the first logic level via the first inverter212. Signals of the sixth node ND and the seventh node NQ may have the same logic level as the output signal Q via the NAND circuit251and the AND circuit253. The second control signal bCK may have the same logic level as a signal of the fifth node QN via the NOR circuit254. When the signal of the fifth node QN, that is, the second control signal bCK, has the first logic level, the second tri-state inverter213and the first inverter212may perform a latch operation, and the logic levels of the second node DI and the third node DB may be maintained. Also, the third tri-state inverter221may operate as an inverter with respect to a signal of the third node DB having the first logic level. Thus, a logic level of the fourth node QI may be the second logic level via the third tri-state inverter221. A logic level of the fifth node QN may be the first logic level via the second inverter222. The output signal Q may be held as the second logic level via the output inverter240. When the signal of the fifth node QN, that is, the second control signal bCK, has the second logic level, the fourth tri-state inverter223may operate as an inverter with respect to the signal of the fifth node QN having the second logic level. Thus, the fourth tri-state inverter223and the second inverter222may perform a latch operation, and the logic levels of the fourth node QI and the fifth node QN may be maintained. The output signal Q may be held as the first logic level via the output inverter240. Referring toFIG.15D, when the logic level of the clock signal CK transitions to the first logic level, the logic level of the first control signal nCK may transition to the second logic level. A signal of the seventh node NQ may have the second logic level via the AND circuit253. Because the logic level of the signal of the second node DI is the second logic level, a logic level of the second control signal bCK may be the first logic level via the NOR circuit254. Thus, the first tri-state inverter211and the fourth tri-state inverter223may be in an inactive state, and the second tri-state inverter213and the third tri-state inverter221may be in an active state. A logic level of the fourth node QI may be the second logic level via the third tri-state inverter221. 
A logic level of the fifth node QN may be the first logic level via the second inverter222. A logic level of the output signal Q may be the second logic level via the output inverter240. Referring toFIGS.15C and15D, in a case in which the data signal D has the second logic level, the output signal Q may have the second logic level by being synchronized to the clock signal CK at a timing at which the clock signal CK transitions from the second logic level to the first logic level. Referring toFIG.15C, the second control signal bCK may be determined according to the logic level of the fifth node QN. That is, compared to when the second control signal bCK is generated as a delayed signal of the clock signal CK, the number of times of toggling of the second control signal bCK may be reduced, and thus, the flip-flop circuit2may perform a low-power-consumption operation. FIG.16is a circuit diagram for describing a flip-flop circuit2-2according to an example embodiment. Referring toFIG.16, the flip-flop circuit2-2may include a selection circuit230-2and a first tri-state inverter211-2. The selection circuit230-2may select one of a data signal D and a scan input signal SI according to a scan enable signal SE and an inverted scan enable signal nSE, may invert the selected signal, and may provide the inverted signal to the first node DN. The first tri-state inverter211-2may include a first N-type transistor N41and a first P-type transistor P41. The first N-type transistor N41may have a gate terminal connected to the first node DN, a source terminal connected to the first internal node M, and a drain terminal connected to the second node DI. The first P-type transistor P41may have a gate terminal connected to the first node DN, a source terminal connected to the second internal node N, and a drain terminal connected to the second node DI. A structure of the fourth tri-state inverter223may be the same as the structure of the fourth tri-state inverter23-2illustrated inFIG.8. The first tri-state inverter211-2may share a positive power node and a negative power node with the fourth tri-state inverter223, and thus, the structure of a power delivery network for providing power to the flip-flop circuit2-2may be simplified. In some embodiments, the selection circuit230-2and the first tri-state inverter211-2included in the flip-flop circuit2-2ofFIG.16may be substituted by one or more components of the flip-flop circuit2ofFIG.15. FIG.17is a circuit diagram for describing a flip-flop circuit2-3according to an example embodiment. Referring toFIG.17, the flip-flop circuit2-3may include an and-or-inverter (AOI)21circuit255. The AOI21circuit255may perform the functions of the AND circuit253and the NOR circuit254ofFIG.14. The AOI21circuit255may include first through third N-type transistors N51through N53and first through third P-type transistors P51through P53. The first N-type transistor N51may have a gate terminal connected to the sixth node ND, a source terminal connected to a negative power node, and a drain terminal connected to a source terminal of the second N-type transistor N52. The second N-type transistor N52may have a gate terminal receiving a first control signal nCK, a source terminal connected to the drain terminal of the first N-type transistor N51, and a drain terminal connected to a node generating the second control signal bCK. 
The third N-type transistor N53may have a gate terminal connected to the second node DI, a source terminal connected to the negative power node, and a drain terminal connected to the node generating the second control signal bCK. The first P-type transistor P51may have a gate terminal receiving a first control signal nCK, a source terminal connected to a drain terminal of the third P-type transistor P53, and a drain terminal connected to the node generating the second control signal bCK. The second P-type transistor P52may have a gate terminal connected to the sixth node ND, a drain terminal connected to the node generating the second control signal bCK, and a source terminal connected to the drain terminal of the third P-type transistor P53. The third P-type transistor P53may have a gate terminal connected to the second node DI, the drain terminal connected to the source terminals of the first P-type transistor P51and the second P-type transistor P52, and a source terminal connected to a positive power node. Referring toFIGS.15C,15D, and17, when a logic level of the sixth node ND is the second logic level, a logic level of the second node DI may always be the second logic level. That is, when the second P-type transistor P52is turned on, the third P-type transistor P53may also be turned on, and thus, the second P-type transistor P52and the third P-type transistor P53may share the positive power node. A power node exclusively connected to the second P-type transistor P52may be omitted, and thus, the structure of a power delivery network for providing power may be simplified. FIG.18is a circuit diagram for describing a flip-flop circuit2-4according to an example embodiment. Referring toFIG.18, the flip-flop circuit2-4may include a control signal generation circuit250-3. The control signal generation circuit250-3may include an AOI21circuit255-2. The AOI21circuit255-2may include the first through third N-type transistors N51through N53and the first and second P-type transistors P51and P52. Compared with the AOI21circuit255ofFIG.17, the third P-type transistor P53may be omitted. The first P-type transistor P51may have a gate terminal receiving a first control signal nCK, a drain terminal connected to a node generating a second control signal bCK, and a source terminal connected to the third node DB. The second P-type transistor P52may have a gate terminal connected to the sixth node ND, a drain terminal connected to the node generating the second control signal bCK, and a source terminal connected to the third node DB. Referring toFIGS.15C and15D, when a logic level of the second node DI is the second logic level, a logic level of the third node DB may always be the first logic level. Thus, even when the third P-type transistor P53ofFIG.17is omitted, and the source terminals of the first and second P-type transistors P51and P52are connected to the third node DB, the AOI21circuit255-2ofFIG.18may perform the same function as the AOI21circuit255ofFIG.17. The AOI21circuit255-2may be realized by using fewer transistors, and thus, the flip-flop circuit2-4may have improved integration. FIG.19is a circuit diagram for describing a flip-flop circuit2-5according to an example embodiment. Referring toFIG.19, the flip-flop circuit2-5may include a control signal generation circuit250-4. The control signal generation circuit250-4may include an AOI21circuit255-3. 
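As an illustration of the reduction described for FIG. 18, the following switch-level Python sketch treats the transistors as ideal on/off switches (an assumption-laden model, not the patent circuit) and checks that sourcing the pull-up transistors P51 and P52 from the third node DB reproduces the behavior of the full AOI21 circuit 255 of FIG. 17 whenever the inverter invariant DB = NOT DI holds.

def bck_fig17(nd, nck, di):
    pull_down = (nd and nck) or di                  # N51/N52 in series, N53
    pull_up = (not di) and ((not nck) or (not nd))  # P53 gating P51 in parallel with P52
    assert bool(pull_up) != bool(pull_down)         # the two networks are complementary
    return 0 if pull_down else 1

def bck_fig18(nd, nck, di, db):
    pull_down = (nd and nck) or di                  # same N-type network
    # P51/P52 now source from node DB instead of the positive power node:
    pull_up = db and ((not nck) or (not nd))
    return 0 if pull_down else 1 if pull_up else None   # None = node left floating

# With DB = NOT DI (first inverter 212), both versions resolve bCK identically
# for every input combination.
for nd in (0, 1):
    for nck in (0, 1):
        for di in (0, 1):
            assert bck_fig17(nd, nck, di) == bck_fig18(nd, nck, di, db=1 - di)
print("FIG. 18 pull-up matches FIG. 17 whenever DB = NOT DI")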
Compared to the AOI circuit255ofFIG.17, the second P-type transistor P52may be connected to a positive power node which is different from a positive power node connected to the third P-type transistor P53. The positive power node may be connected to each of the second P-type transistor P52and the third P-type transistor P53, and thus, the signal stability of a power delivery network may be improved. FIG.20is a circuit diagram for describing a flip-flop circuit2-6according to an example embodiment. Referring toFIG.20, the flip-flop circuit2-6may include a second tri-state inverter213-2and a control signal generation circuit250-5. The second tri-state inverter213-2may include fourth and fifth N-type transistors N54and N55and fourth and fifth P-type transistors P54and P55. The fourth N-type transistor N54may have a gate terminal receiving a second control signal bCK, a source terminal connected to a negative power node, and a drain terminal connected to a source terminal of the fifth N-type transistor N55. The fifth N-type transistor N55may have a gate terminal connected to the third node DB, the source terminal connected to the drain terminal of the fourth N-type transistor N54, and a drain terminal connected to the second node DI. The fourth P-type transistor P54may have a gate terminal connected to the third node DB, a drain terminal connected to the second node DI, and a source terminal connected a fifth internal node A. The fifth P-type transistor P55may have a gate terminal receiving a first control signal nCK, a drain terminal connected to the fifth internal node A, and a source terminal connected to a positive power node. The control signal generation circuit250-5may include an AOI21circuit255-4. The AOI21circuit255-4may include the second and third P-type transistors P52and P53. The second P-type transistor P52may have a gate terminal connected to the sixth node ND, a drain terminal connected to a node generating the second control signal bCK, and a source terminal connected to the fifth internal node A. The third P-type transistor P53may have a gate terminal connected to the second node DI, a drain terminal connected to the node generating the second control signal bCK, and a source terminal connected to the fifth internal node A. Unlike the AOI21circuit255-3ofFIG.19, in the AOI21circuit255-4, the second and third P-type transistors P52and P53may share the positive power node with the second tri-state inverter213-2. That is, when a logic level of the first control signal nCK is a second logic level, a logic level of the fifth internal node A may be a first logic level, and thus, it may be understood that the positive power node may be connected to the second and third P-type transistors P52and P53. That is, because the second tri-state inverter213-2and the AOI21circuit255-4may share the same positive power node, the structure of a power delivery network for providing power to the flip-flop circuit2-6may be simplified. FIG.21is a circuit diagram for describing a flip-flop circuit2-7according to an example embodiment. Referring toFIG.21, the flip-flop circuit2-7may include a control signal generation circuit250-4, unlike the flip-flop circuit2-6ofFIG.20. The control signal generation circuit250-4may include an AOI21circuit255-5. Unlike the AOI21circuit255-4ofFIG.20, the second P-type transistor P52included in the AOI21circuit255-5may have a source terminal not connected to the fifth internal node A and connected to an additional positive power node. 
When a logic level of a first control signal nCK is a second logic level, a logic level of the fifth internal node A may be the first logic level via the second tri-state inverter213-2. That is, it may be understood that a positive power node may be connected to a source terminal of the third P-type transistor P53. Thus, the AOI21circuit255-5may operate in the same way as the AOI21circuit255-3ofFIG.19. Because the second tri-state inverter213-2and the third P-type transistor P53may share the same positive power node, and the second P-type transistor P52may be connected to an additional positive power node, a power delivery network for providing power to the flip-flop circuit2-7may have various structures. FIG.22is a circuit diagram for describing a flip-flop circuit3according to an example embodiment. Referring toFIG.22, the flip-flop circuit3may include a control signal generation circuit260, unlike the flip-flop circuit2ofFIG.14. The control signal generation circuit260may receive a signal of the first node DN, a signal of the second node DI, a signal of the fourth node QI, and a clock signal CK and generate a first control signal nCK and a second control signal bCK. The control signal generation circuit260may include a fifth inverter261, a sixth inverter262, an OR circuit263, an AND circuit264, and a NOR circuit265. The fifth inverter261may invert the signal of the first node DN and transmit the inverted signal to the sixth node ND. The sixth inverter262may invert the clock signal CK to generate the first control signal nCK. The OR circuit263may perform an OR operation on a signal of the sixth node ND and a signal of the fourth node QI and transmit a signal generated by the OR operation to the seventh node NQ. The AND circuit264may perform an AND operation on the first control signal nCK and a signal of the seventh node NQ and transmit a signal generated by the AND operation to an eighth node NB. The NOR circuit265may perform a NOR operation on a signal of the eighth node NB and a signal of the second node DI to generate the second control signal bCK. The flip-flop circuit3according to an example embodiment may generate the second control signal bCK in synchronization with the first node DN, the second node DI, and the fourth node QI. Thus, compared to when the second control signal bCK is generated by simply inverting the first control signal nCK, the number of times of toggling of the second control signal bCK may be reduced. Thus, the flip-flop circuit3may consume less power. FIGS.23A through23Dare circuit diagrams for describing operations of the flip-flop circuit3according to an example embodiment.FIG.23Adescribes the operation of the flip-flop circuit3when a data signal D has a first logic level, and a clock signal CK has a second logic level,FIG.23Bdescribes the operation of the flip-flop circuit3when the data signal D has the first logic level, and the clock signal CK is transited to the first logic level,FIG.23Cdescribes the operation of the flip-flop circuit3when the data signal D has the second logic level, and the clock signal CK has the second logic level, andFIG.23Ddescribes the operation of the flip-flop circuit3when the data signal D has the second logic level, and the clock signal CK is transited to the first logic level.
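To make the low-power argument concrete, the following Python sketch models the gate chain just listed (inverters 261 and 262, OR circuit 263, AND circuit 264, NOR circuit 265) and counts how often bCK toggles for a slowly changing data input compared with a bCK obtained by simply re-inverting nCK. The latch dynamics are abstracted (the second node DI is treated as transparent while CK is at the second logic level and held otherwise, and the fourth node QI as the registered output), so this is an illustrative model rather than the disclosed circuit.

def count_toggles(data_stream):
    di = qi = 0
    gated = plain = 0
    prev_gated = prev_plain = None
    for d in data_stream:
        for ck in (0, 1):                          # low half-cycle, then high
            if ck == 0:
                di = d                             # master node transparent
            else:
                qi = di                            # output follows the sampled value
            nck = 1 - ck                           # sixth inverter 262
            nd = d                                 # ND = NOT DN, with DN = NOT D
            nb = 1 if (nck and (nd or qi)) else 0  # OR circuit 263 then AND circuit 264
            bck = 0 if (nb or di) else 1           # NOR circuit 265
            ref = ck                               # plain scheme: bCK = NOT nCK = CK
            if prev_gated is not None and bck != prev_gated:
                gated += 1
            if prev_plain is not None and ref != prev_plain:
                plain += 1
            prev_gated, prev_plain = bck, ref
    return gated, plain

# A data input that changes only once: the gated bCK toggles a handful of
# times while the plain re-inverted clock toggles every half-cycle.
print(count_toggles([1] * 8 + [0] * 8))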
InFIGS.23A through23D, the first logic level may be indicated as “1,” and the second logic level may be indicated as “0.” Referring toFIG.23A, the clock signal CK may have the second logic level, and thus, the first control signal nCK may have the first logic level via the sixth inverter262. A logic level of the first node DN may be the second logic level via the input inverter230. A logic level of the sixth node ND may be the first logic level via the fifth inverter261. A logic level of the seventh node NQ may be the first logic level via the OR circuit263. A logic level of the eighth node NB may be the first logic level via the AND circuit264. A logic level of the second control signal bCK may be the second logic level via the NOR circuit265. Because the logic level of the first control signal nCK may be the first logic level, and the logic level of the second control signal bCK may be the second logic level, the first and fourth tri-state inverters211and223may be in an active state, and the second and third tri-state inverters213and221may be in an inactive state. Thus, a logic level of the second node DI may be the first logic level via the first tri-state inverter211, and a logic level of the third node DB may be the second logic level via the first inverter212. The second inverter222and the fourth tri-state inverter223may perform a latch operation, and thus, logic levels of the fourth node QI and the fifth node QN may be maintained. An output signal Q may be maintained as a logic level of a prior output signal Q− via the output inverter240. Referring toFIG.23B, the clock signal CK may have the first logic level, and thus, the first control signal nCK may have the second logic level via the sixth inverter262. The eighth node NB may have the second logic level via the AND circuit264. A logic level of the second node DI may be the first logic level before the clock signal CK transitions, and thus, a logic level of the second control signal bCK may be the second logic level via the NOR circuit265. A logic level of the first node DN may be the second logic level via the input inverter230. When the logic level of the second control signal bCK is the second logic level, the first through fourth tri-state inverters211,213,221, and223may operate as inverters with respect to an input signal having the second logic level. Thus, the logic level of the second node DI may be the first logic level. A logic level of the third node DB may be the second logic level via the first inverter212. A logic level of the fourth node QI may be the first logic level via the third tri-state inverter221. A logic level of the fifth node QN may be the second logic level via the second inverter222. The output signal Q may have the first logic level via the output inverter240. Referring toFIGS.23A and23B, in a case in which the data signal D has the first logic level, the output signal Q may have the first logic level by being synchronized to the clock signal CK at a timing at which the clock signal CK transitions from the first logic level to the second logic level. Referring toFIG.23C, the clock signal CK may have the second logic level, and thus, the first control signal nCK may have the first logic level via the sixth inverter262. The first tri-state inverter211may operate as an inverter with respect to a signal of the first node DN having the first logic level. Thus, a signal of the second node DI may have the second logic level. 
A signal of the third node DB may have the first logic level via the first inverter212. A signal of the first node DN may have the first logic level via the input inverter230. A signal of the sixth node ND may have the second logic level via the fifth inverter261. The seventh node NQ may have the same logic level as the fourth node QI via the OR circuit263. The eighth node NB may have the same logic level as the fourth node QI via the AND circuit264. A logic level of the second control signal bCK may be the same as a logic level of a signal of the fifth node QN via the NOR circuit265. When the second control signal bCK, that is, the signal of the fifth node QN, has the first logic level, the second and third tri-state inverters213and221may operate as inverters with respect to a signal of the third node DB. Thus, a signal of the fourth node QI may have the second logic level. The signal of the fifth node QN may have the first logic level via the second inverter222. The output signal Q may be maintained as the second logic level via the output inverter240. When the second control signal bCK, that is, the signal of the fifth node QN, has the second logic level, the fourth tri-state inverter223may operate as an inverter with respect to the fifth node QN. Thus, via a latch operation of the fourth tri-state inverter223and the second inverter222, the logic levels of the fourth node QI and the fifth node QN may be maintained, and a logic level of the output signal Q may be maintained as a logic level of a prior output signal Q− via the output inverter240. Referring toFIG.23D, the clock signal CK may transition to the first logic level, and thus, the first control signal nCK may have the second logic level via the sixth inverter262. A logic level of the eighth node NB may be the second logic level via the AND circuit264. A signal of the second node DI may have the second logic level before the clock signal CK is transited, and thus, a logic level of the second control signal bCK may be the first logic level via the NOR circuit265. A logic level of the first control signal nCK may be the second logic level, and the logic level of the second control signal bCK may be the first logic level, and thus, the second and third tri-state inverters213and221may be in an active state. Thus, logic levels of the second node DI and the third node DB may be maintained via the second tri-state inverter213and the first inverter212. A logic level of the fourth node QI may be the second logic level via the third tri-state inverter221. A logic level of the fifth node QN may be the first logic level via the second inverter222. A logic level of the output signal Q may be the second logic level via the output inverter240. Referring toFIGS.23C and23D, in a case in which the data signal D has the second logic level, the output signal Q may have the second logic level by being synchronized to the clock signal CK at a timing at which the clock signal CK transitions from the first logic level to the second logic level. FIG.24is a circuit diagram for describing a flip-flop circuit3-2according to an example embodiment. Referring toFIG.24, unlike the flip-flop circuit3ofFIG.22, the flip-flop circuit3-2may include the selection circuit230-2and the first tri-state inverter211-2. The selection circuit230-2and the first tri-state inverter211-2are described above with reference toFIG.16, and thus, may not be described again. FIG.25is a circuit diagram for describing a flip-flop circuit3-3according to an example embodiment.
Referring toFIG.25, the flip-flop circuit3-3may include an or-and-or-inverter (OAOI) circuit266, unlike the flip-flop circuit3-2ofFIG.24. The OAOI circuit266may perform the functions of the OR circuit263, the AND circuit264, and the NOR circuit265ofFIG.24. The OAOI circuit266may include first through fourth N-type transistors N61through N64and first through fourth P-type transistors P61through P64. The first N-type transistor N61may have a gate terminal connected to the sixth node ND, a source terminal connected to a negative power node, and a drain terminal connected to a source terminal of the third N-type transistor N63. The second N-type transistor N62may have a gate terminal connected to the fourth node QI, a source terminal connected to the negative power node, and a drain terminal connected to the source terminal of the third N-type transistor N63. The third N-type transistor N63may have a gate terminal receiving a first control signal nCK, the source terminal commonly connected to the drain terminals of the first and second N-type transistors N61and N62, and a drain terminal connected to a node generating a second control signal bCK. The fourth N-type transistor N64may have a gate terminal connected to the second node DI, a source terminal connected to the negative power node, and a drain terminal connected to the node generating the second control signal bCK. The first P-type transistor P61may have a gate terminal receiving the first control signal nCK, a source terminal connected to a drain terminal of the fourth P-type transistor P64, and a drain terminal connected to the node generating the second control signal bCK. The second P-type transistor P62may have a gate terminal connected to the fourth node QI, a source terminal connected to a drain terminal of the third P-type transistor P63, and a drain terminal connected to the node generating the second control signal bCK. The third P-type transistor P63may have a gate terminal connected to the sixth node ND, a source terminal connected to the drain terminal of the fourth P-type transistor P64, and the drain terminal connected to the source terminal of the second P-type transistor P62. The fourth P-type transistor P64may have a gate terminal connected to the second node DI, a source terminal connected to a positive power node, and the drain terminal commonly connected to the source terminals of the first and third P-type transistors P61and P63. FIG.26is a circuit diagram for describing a flip-flop circuit3-4according to an example embodiment. Referring toFIG.26, the flip-flop circuit3-4may include a control signal generation circuit260-3, unlike the flip-flop circuit3-3ofFIG.25. The control signal generation circuit260-3may include an OAOI circuit266-2. Unlike the OAOI circuit266ofFIG.25, in the OAOI circuit266-2, the source terminal of the third P-type transistor P63may be connected to an additional positive power node, rather than the drain terminal of the fourth P-type transistor P64. The positive power node may be connected to each of the third P-type transistor P63and the fourth P-type transistor P64, and thus, the signal stability of a power delivery network may be improved. FIG.27is a circuit diagram for describing a flip-flop circuit3-5according to an example embodiment. Referring toFIG.27, the flip-flop circuit3-5may include a control signal generation circuit260-4, unlike the flip-flop circuit3-3ofFIG.25. The control signal generation circuit260-4may include an OAOI circuit266-3.
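The transistor arrangement just described can be checked against the gate-level function it replaces. In the following switch-level Python sketch (transistors idealized as on/off switches, illustrative only), the N-type network pulls the bCK node low for DI OR (nCK AND (ND OR QI)), the P-type network is verified to be its complement, and the resulting output matches the OR circuit 263 / AND circuit 264 / NOR circuit 265 chain of FIG. 24.

def oaoi_266(nd, qi, nck, di):
    # N64, and N63 stacked over the parallel pair N61/N62:
    pull_down = di or (nck and (nd or qi))
    # P64 feeding P61 in parallel with the series pair P63-P62:
    pull_up = (not di) and ((not nck) or ((not nd) and (not qi)))
    assert bool(pull_up) != bool(pull_down)   # exactly one network conducts
    return 0 if pull_down else 1

for nd in (0, 1):
    for qi in (0, 1):
        for nck in (0, 1):
            for di in (0, 1):
                nq = 1 if (nd or qi) else 0            # OR circuit 263
                nb = 1 if (nck and nq) else 0          # AND circuit 264
                expected = 1 if not (nb or di) else 0  # NOR circuit 265
                assert oaoi_266(nd, qi, nck, di) == expected
print("OAOI network matches the OR/AND/NOR gate chain")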
Unlike the OAOI circuit266-2ofFIG.26, the OAOI circuit266-3may only include the first P-type transistor P61, excluding the second through fourth P-type transistors P62through P64inFIG.26. Referring toFIGS.23D and26, when the first control signal nCK has a second logic level, and a logic level of the second node DI is the second logic level, logic levels of both of the fourth node QI and the sixth node ND may be the second logic level. Thus, in the OAOI circuit266-2ofFIG.26, the second and third P-type transistors P62and P63and the first and fourth P-type transistors P61and P64may simultaneously pre-charge the second control signal bCK. Thus, in the OAOI circuit266-3ofFIG.27, the second and third P-type transistors P62and P63may be removed to improve integration. Also, when the second node DI has the second logic level, the third node DB may always have the first logic level via the first inverter212. Thus, in the OAOI circuit266-3according to an example embodiment, the fourth P-type transistor P64having the gate terminal connected to the second node DI may be removed, and the first P-type transistor P61having the source terminal connected to the third node DB may be provided, to provide high integration. FIG.28is a circuit diagram for describing a flip-flop circuit3-6according to an example embodiment. Referring toFIG.28, the flip-flop circuit3-6may include a control signal generation circuit260-5, unlike the flip-flop circuit3-3ofFIG.25. The control signal generation circuit260-5may include an OAOI circuit266-4. Compared with the OAOI circuit266ofFIG.25, the first P-type transistor P61of the OAOI circuit266-4may have a gate terminal connected to the second node DI and a source terminal connected to a sixth internal node X. The sixth internal node X may be an internal node of the second tri-state inverter213. For example, the second tri-state inverter213may include the fifth P-type transistor P55having the gate terminal receiving the first control signal nCK, the source terminal connected to the positive power node, and the drain terminal connected to the sixth internal node X. FIG.29is a diagram for describing a multi-bit flip-flop circuit1000according to an example embodiment. Referring toFIG.29, the multi-bit flip-flop circuit1000may receive first and second data signals D1and D2and, according to a clock signal CK, may output first and second output signals Q1and Q2. Embodiments are not limited thereto, and the multi-bit flip-flop circuit1000may receive a plurality of data signals and, according to the clock signal CK, may output a plurality of output signals. The multi-bit flip-flop circuit1000may include a first flip-flop circuit (FF1)1100and a second flip-flop circuit (FF2)1200. The first flip-flop circuit1100may include a first master latch circuit (ML1)1110, a first slave latch circuit (SL1)1120, a first output inverter (INV1)1130, and a first control signal generation circuit (CSGC1)1140. The first flip-flop circuit1100may further include an input inverter inverting a first data signal D1. The first master latch circuit1110may include at least one component of the master latch circuits described above, and the first slave latch circuit1120may include at least one component of the slave latch circuits described above. The first control signal generation circuit1140may include at least one component of the control signal generation circuits described above.
The second flip-flop circuit1200may include a second master latch circuit (ML2)1210, a second slave latch circuit (SL2)1220, a second output inverter (INV2)1230, and a second control signal generation circuit (CSGC2)1240. The second flip-flop circuit1200may further include an input inverter inverting a second data signal D2. The second master latch circuit1210may include at least one component of the master latch circuits described above, and the second slave latch circuit1220may include at least one component of the slave latch circuits described above. The second control signal generation circuit1240may include at least one component of the control signal generation circuits described above. The first control signal generation circuit1140may generate a control signal bCK1, based on a signal of an internal node (for example, the first node DN ofFIG.1) of the first master latch circuit1110, an internal node (for example, the fifth node QN ofFIG.14) of the first slave latch circuit1120, or a node (for example, the third node DB ofFIG.27) between the first master latch circuit1110and the first slave latch circuit1120, and an inverted clock signal nCK. The second control signal generation circuit1240may generate a control signal bCK2, based on a signal of an internal node (for example, the first node DN ofFIG.1) of the second master latch circuit1210, an internal node (for example, the fifth node QN ofFIG.14) of the second slave latch circuit1220, or a node (for example, the third node DB ofFIG.27) between the second master latch circuit1210and the second slave latch circuit1220, and an inverted clock signal nCK. While the inventive concept has been particularly shown and described with reference to embodiments thereof, it will be understood that various changes in form and details may be made therein without departing from the spirit and scope of the following claims.
11863189 | DETAILED DESCRIPTION The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. Specific examples of components, values, operations, materials, arrangements, or the like, are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. Other components, values, operations, materials, arrangements, or the like, are contemplated. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Further, spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may likewise be interpreted accordingly. An input buffer circuit couples an input voltage signal from a transmitting circuit in a first power domain to a receiving circuit in a second power domain. The input buffer circuit receives the input voltage signal and generates an output voltage signal. The input buffer circuit has a lower threshold voltage and an upper threshold voltage. In some embodiments, a first enabling signal is generated based a comparison between the input voltage signal and the upper threshold voltage, and a second enabling signal is generated based a comparison between the input voltage signal and the lower threshold voltage. In some embodiments, based on the first enabling signal and the second enabling signal, the output voltage signal changes between a first voltage level and a second voltage level. When the input voltage signal rises and consecutively crosses the lower threshold voltage and then the upper threshold voltage, the output voltage signal changes from the first voltage level to the second voltage level. When the input voltage signal falls and consecutively crosses the upper threshold voltage and then the lower threshold voltage, the output voltage signal changes from the second voltage level to the first voltage level. The difference between the upper threshold voltage and the lower threshold voltage is the hysteresis window. If the upper threshold voltage and the lower threshold voltage are determined by the supply voltages in the second power domain for the receiving circuit and uncorrelated to the supply voltages in the first power domain for the transmitting circuit, then the ratio between the hysteresis window and the supply voltage difference in the first power domain will decrease as the supply voltage difference in the first power domain increases. 
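The two-threshold behavior described above can be summarized in a small behavioral model. The following Python sketch is illustrative only; the threshold and supply values are made-up examples rather than values taken from the embodiments.

VTL, VTH = 0.7, 1.1            # hypothetical thresholds; VTH - VTL is the hysteresis window
VSS_OUT, VDDL_OUT = 0.0, 1.8   # hypothetical output levels in the second power domain

def update_output(vin, previous):
    if vin > VTH:
        return VDDL_OUT        # a rising input has crossed the upper threshold
    if vin < VTL:
        return VSS_OUT         # a falling input has crossed the lower threshold
    return previous            # inside the window: keep the previous level

out, trace = VSS_OUT, []
for step in range(34):
    vin = 3.3 * (step / 16 if step <= 16 else (33 - step) / 16)   # ramp up, then down
    new = update_output(vin, out)
    if new != out:
        trace.append((round(vin, 2), new))
    out = new
print(trace)   # one switch above VTH while rising, one below VTL while falling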
A supply voltage difference in a power domain is the voltage difference between an upper supply voltage and a lower supply voltage in the power domain. In some embodiments of the present disclosure, each of the upper threshold voltage in an upper threshold circuit (for generating the first enabling signal) and the lower threshold voltage in a lower threshold circuit (for generating the second enabling signal) is adjusted to enlarge the hysteresis window if the supply voltage difference in the first power domain increases. Additionally, in some embodiments, the hysteresis window degradation due to process, voltage, and temperature variations is further reduced by the improved implementation of the upper threshold circuit and the lower threshold circuit, as compared with some alternative designs of input buffer circuits without similar threshold circuits. FIG.1Ais a schematic diagram of an input buffer circuit100having two threshold circuits and one control circuit, in accordance with some embodiments. The input buffer circuit100includes an upper threshold circuit110, a lower threshold circuit120, and a control circuit130. The upper threshold circuit110and the lower threshold circuit120are configured to receive an input voltage signal PAD at an input node102of the input buffer circuit100. The upper threshold circuit110is configured to generate a first enabling signal ENupbased on a comparison between the input voltage signal PAD and the upper threshold voltage VTH. In some embodiments, if the input voltage signal PAD is larger than the upper threshold voltage VTH, then the first enabling signal ENupgenerated by the upper threshold circuit110is set to be logic TRUE. The lower threshold circuit120is configured to generate a second enabling signal ENdnbased on a comparison between the input voltage signal PAD and the lower threshold voltage VTL. In some embodiments, if the input voltage signal PAD is smaller than the lower threshold voltage VTL, then the second enabling signal ENdngenerated by the lower threshold circuit120is set to be logic TRUE. The first enabling signal ENupand the second enabling signal ENdnare coupled to the control circuit130. The control circuit130is configured to generate an output voltage signal Vout at the output node108of the control circuit130based on the first enabling signal ENupand the second enabling signal ENdn. In operation, the input voltage signal PAD at the input node102is provided by electronic circuits in a first power domain, and the output voltage signal Vout generated by the input buffer circuit100at the output node108is coupled to electronic circuits in a second power domain. In some embodiments, the electronic circuits in the first power domain are connected between the power supply voltages VDDH and VSS, and the electronic circuits in the second power domain are connected between the power supply voltages VDDL and VSS. In some embodiments, the power supply voltage VSS is connected to the common ground, and the power supply voltage VDDH in the first power domain is 2.5V (or 3.3V) while the power supply voltage VDDL in the second power domain is 1.8V. In some embodiments, the power supply voltage VSS is connected to common ground, and the power supply voltage VDDH in the first power domain is 1.8V (or 2.5V) while the power supply voltage VDDL in the second power domain is 1.2V. In some embodiments, the power supply voltage VDDH is higher than 3.3V. In some embodiments, the power supply voltage VDDL is smaller than 1.2 V. 
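The motivation for adjusting the thresholds can be illustrated numerically. In the following sketch the specific voltages and the proportional scaling rule are hypothetical examples, not values prescribed by the embodiments; the point is only that a fixed window shrinks relative to the first-domain swing while a window that tracks VDDH keeps a constant ratio.

fixed_vtl, fixed_vth = 0.7, 1.1              # hypothetical thresholds fixed to the second domain
for vddh in (1.8, 2.5, 3.3):
    fixed_ratio = (fixed_vth - fixed_vtl) / vddh
    scaled_vtl, scaled_vth = 0.4 * vddh, 0.6 * vddh   # hypothetical thresholds tracking VDDH
    scaled_ratio = (scaled_vth - scaled_vtl) / vddh
    print(f"VDDH={vddh}V  fixed window ratio={fixed_ratio:.2f}  "
          f"scaled window ratio={scaled_ratio:.2f}")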
Other examples of the power supply voltage VDDH and the power supply voltage VDDL are within the contemplated scope of the present disclosure. During operation, the voltage levels of the input voltage signal PAD at the input node102of the input buffer circuit100generally are in a range from VSS to VDDH, and the output voltage signal Vout generated at the output node108of the input buffer circuit100is in a range from VSS to VDDL. FIG.1Bis a diagram of waveforms of the input voltage signal PAD at the input node102and the output voltage signal Vout at the output node108of the input buffer circuit100, in accordance with some embodiments. InFIG.1B, the input voltage signal PAD rises from voltage VSS to voltage VDDH and stays at the voltage VDDH for some time; then the input voltage signal PAD falls from voltage VDDH to voltage VSS. As the input voltage signal PAD is rising from voltage VSS and changing towards voltage VDDH, the output voltage signal Vout changes from voltage VSS to voltage VDDL at time t+, when the input voltage signal PAD crosses an upper threshold voltage VTH. The output voltage signal Vout stays at voltage VDDL, while the input voltage signal PAD reaches voltage VDDH and stays at voltage VDDH. The output voltage signal Vout changes from voltage VDDL to voltage VSS at time t−, when the input voltage signal PAD crosses a lower threshold voltage VTL, as the input voltage signal PAD is falling from voltage VDDH and changing towards voltage VSS. Because the signal waveform of the output voltage signal Vout changes within the range from VSS to VDDL, the output voltage signal Vout is a more suitable signal for the electronic circuits in the second power domain powered by the power supply voltages VDDL and VSS. In contrast, if the input voltage signal PAD is directly coupled to the electronic circuits in the second power domain, the peak voltage (such as VDDH) of the input voltage signal PAD may exceed the maximum durable voltage of the electronic circuits in the second power domain. FIG.1Cis a schematic diagram of an input buffer circuit100ofFIG.1Ahaving the control circuit implemented with switches, in accordance with some embodiments. InFIG.1C, the upper threshold circuit110includes a high-side tracker112and an upper threshold detector114. The high-side tracker112is configured to generate a tracking-up signal PADUP based on the input voltage signal PAD. The upper threshold detector114is configured to receive the tracking-up signal PADUP from the high-side tracker112and to set a logic level of the first enabling signal ENupbased on the tracking-up signal PADUP. InFIG.1C, the lower threshold circuit120includes a low-side tracker122and a lower threshold detector124. The low-side tracker122is configured to generate a tracking-down signal PADDN based on the input voltage signal PAD. The lower threshold detector124is configured to receive the tracking-down signal PADDN from the low-side tracker122and to set a logic level of the second enabling signal ENdnbased on the tracking-down signal PADDN. InFIG.1C, the control circuit130includes a first switch131, a second switch132, and a regenerative circuit135. The first switch131is electrically connected between the upper supply voltage VDDH and a buffer output node BufOut. The second switch132is electrically connected between the buffer output node BufOut and the lower supply voltage VSS. The regenerative circuit135is electrically coupled to the buffer output node BufOut.
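The switching action of the control circuit 130, together with the holding function of the regenerative circuit 135 that is elaborated in the following paragraphs, can be summarized by the small truth-table sketch below (supply values hypothetical). Because the upper threshold voltage VTH is higher than the lower threshold voltage VTL, the two enabling signals are not asserted at the same time for a single-valued input.

VDDH, VSS = 3.3, 0.0   # hypothetical supply values for the first power domain

def buf_out(en_up, en_dn, held):
    if en_up:
        return VDDH    # first switch 131 connects BufOut to VDDH
    if en_dn:
        return VSS     # second switch 132 connects BufOut to VSS
    return held        # both switches open: the regenerative circuit holds the node

assert buf_out(False, True, VDDH) == VSS     # input below the lower threshold
assert buf_out(False, False, VSS) == VSS     # inside the window: previous level held
assert buf_out(True, False, VSS) == VDDH     # input above the upper threshold
assert buf_out(False, False, VDDH) == VDDH   # inside the window again: level held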
The regenerative circuit135maintains the voltage at the buffer output node BufOut when both the first switch131and the second switch132are at the disconnected state. The first switch131is controlled by the first enabling signal ENupreceived from the upper threshold detector114. The second switch132is controlled by the second enabling signal ENdnreceived from the lower threshold detector124. The buffer output node BufOut is coupled to the input terminal of the level shifter140. The level shifter140is connected to the power supply VDDH in the first power domain and the power supply VDDL in the second power domain. The voltage signal at the input terminal of the level shifter140is within the voltage range of the first power domain, but the voltage signal at the output terminal of the level shifter140is within the voltage range of the second power domain. The output terminal of the level shifter140is the output node108of the input buffer circuit100. The output voltage signal Vout at the output node108of the input buffer circuit100is within the voltage range of the second power domain. The operation of the input buffer circuit100inFIG.1Cis described with reference toFIGS.2A-2F. In some embodiments, the output voltage signal Vout at the output node108in the second power domain (having supply voltages VDDL and VSS) is further coupled to the electronic components in a third power domain (having supply voltages VCC and VSS) through the level shifter160. The level shifter160is connected to the power supply VDDL in the second power domain and the power supply VCC in the third power domain. The voltage signal at the input terminal of the level shifter160is within the voltage range of the second power domain, but the voltage signal at the output terminal of the level shifter160is within the voltage range of the third power domain. In response to the input voltage signal PAD at the input node102of the input buffer circuit100, the voltage signal CoreOut is generated at the output terminal of the level shifter160by the input buffer circuit100and the level shifter160. FIGS.2A-2Fare diagrams of waveforms of signals at the input node, the output node, and various other nodes in the input buffer circuit100ofFIG.1C, in accordance with some embodiments.FIG.2Ais the waveform of the input voltage signal PAD at the input node102.FIG.2Bis the waveform of the tracking-up signal PADUP at the output of the high-side tracker112.FIG.2Cis the waveform of the tracking-down signal PADDN at the output of the low-side tracker122.FIG.2Dis the waveform of the first enabling signal ENupat the output of the upper threshold detector114.FIG.2Eis the waveform of the second enabling signal ENdnat the output of the lower threshold detector124.FIG.2Fis the waveform of the voltage at the buffer output node BufOut. In the example waveform ofFIG.2A, during the time period from ta to tb, the input voltage signal PAD rises from voltage VSS to voltage VDDH. During the time period from tb to tc, the input voltage signal PAD remains at the voltage VDDH. During the time period from tc to td, the input voltage signal PAD falls from voltage VDDH to voltage VSS. During the time period from ta to tb when the input voltage signal PAD is rising, the input voltage signal PAD crosses the lower threshold voltage VTLat time t1and crosses the upper threshold voltage VTHat time t2.
During the time period from tc to td when the input voltage signal PAD is falling, the input voltage signal PAD crosses the upper threshold voltage VTHat time t3and crosses the lower threshold voltage VTLat time t4. The tracking-up signal PADUP at the output of the high-side tracker112follows the signal received at the input of the high-side tracker112if the signal received at the input is larger than a predetermined lower limiting voltage (such as VSSH), and the tracking-up signal PADUP is maintained at the predetermined lower limiting voltage (such as VSSH) if the signal received at the input is smaller than or equal to the predetermined lower limiting voltage. In the example waveform ofFIG.2B, the tracking-up signal PADUP is maintained at voltage VSSH until time t1(which is the time at which the input voltage signal PAD rises above the lower threshold voltage VTL). Then, the tracking-up signal PADUP follows the input voltage signal PAD from time t1to time t4(which is the time at which the input voltage signal PAD falls below the lower threshold voltage VTL). The tracking-up signal PADUP is again maintained at voltage VSSH after time t4. The tracking-down signal PADDN at the output of the low-side tracker122follows the signal received at the input of the low-side tracker122if the signal received at the input is smaller than a predetermined upper limiting voltage (such as VDDL), and the tracking-down signal PADDN is maintained at the predetermined upper limiting voltage (such as VDDL) if the signal received at the input is larger than or equal to the predetermined upper limiting voltage. In the example waveform ofFIG.2C, the tracking-down signal PADDN follows the input voltage signal PAD until time t2(which is the time at which the input voltage signal PAD rises above upper the threshold voltage VTH). Then, the tracking-down signal PADDN is maintained at voltage VDDL from time t2to time t3(which is the time at which the input voltage signal PAD falls below the upper threshold voltage VTH). The tracking-down signal PADDN again follows the input voltage signal PAD after time t3. In the example waveform ofFIG.2D, the first enabling signal ENupat the output of the upper threshold detector114is determined by comparing the tracking-up signal PADUP (received from the high-side tracker112) with the upper threshold voltage VTH. Before time t2, the tracking-up signal PADUP inFIG.2Bis below the upper threshold voltage VTH, and the first enabling signal ENupinFIG.2Dis at logic FALSE. From time t2to time t3, the tracking-up signal PADUP inFIG.2Bis above the upper threshold voltage VTH, and the first enabling signal ENupinFIG.2Dis at logic TRUE. After time t3, the tracking-up signal PADUP inFIG.2Bis again below the upper threshold voltage VTH, and the first enabling signal ENupinFIG.2Dis at logic FALSE. In the example waveform ofFIG.2E, the second enabling signal ENdnat the output of the lower threshold detector124is determined by comparing the tracking-down signal PADDN (received from the low-side tracker122) with the lower threshold voltage VTL. Before time t1, the tracking-down signal PADDN inFIG.2Cis below the lower threshold voltage VTL, and the second enabling signal ENdninFIG.2Eis at logic TRUE. From time t1to time t4, the tracking-down signal PADDN inFIG.2Cis above the lower threshold voltage VTL, and the second enabling signal ENdninFIG.2Eis at logic FALSE. 
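An idealized model of the trackers and detectors reads directly off the waveforms just described: the high-side tracker clamps PADUP at VSSH from below, the low-side tracker clamps PADDN at VDDL from above, and the detectors compare the clamped copies with VTH and VTL. In the Python sketch below all numeric values are hypothetical, and VSSH is taken approximately equal to VTL and VDDL approximately equal to VTH, as implied by the placement of times t1 and t2 in FIGS. 2B and 2C.

VSSH, VDDL = 0.7, 1.8   # hypothetical limiting voltages of the trackers
VTL, VTH = 0.7, 1.8     # hypothetical detector thresholds

def enables(pad):
    pad_up = max(pad, VSSH)   # tracking-up signal PADUP (FIG. 2B)
    pad_dn = min(pad, VDDL)   # tracking-down signal PADDN (FIG. 2C)
    en_up = pad_up > VTH      # first enabling signal ENup (FIG. 2D)
    en_dn = pad_dn < VTL      # second enabling signal ENdn (FIG. 2E)
    return en_up, en_dn

assert enables(0.0) == (False, True)    # before t1
assert enables(1.2) == (False, False)   # between t1 and t2
assert enables(2.5) == (True, False)    # between t2 and t3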
After time t4, the tracking-down signal PADDN inFIG.2Cis again below the lower threshold voltage VTL, and the second enabling signal ENdninFIG.2Eis at logic TRUE. In the examples embodiments ofFIG.1C, the first enabling signal ENupfrom the upper threshold detector114controls the first switch131, and the second enabling signal ENdnreceived from the lower threshold detector124controls the second switch132. When the first enabling signal ENupis at logic TRUE, the first switch131is at the connected state, which connects the buffer output node BufOut with the upper supply voltage VDDH. When the second enabling signal ENdnis at logic TRUE, the second switch132is at the connected state, which connects the buffer output node BufOut with the lower supply voltage VSS. In the example waveform ofFIG.2F, the voltage at the buffer output node BufOut inFIG.1Cdepends upon the first enabling signal ENupinFIG.2Dand the second enabling signal ENdninFIG.2E. Before time t1, because the first enabling signal ENupis at logic FALSE and the second enabling signal ENdnis at logic TRUE, the buffer output node BufOut is not connected to the upper supply voltage VDDH through the first switch131but the buffer output node BufOut is connected to the lower supply voltage VSS through the second switch132. Consequently, the voltage at the buffer output node BufOut is at the lower supply voltage VSS. From time t1to time t2, because the first enabling signal ENupis at logic FALSE and the second enabling signal ENdnis at logic FALSE, the buffer output node BufOut is not connected to the upper supply voltage VDDH through the first switch131and the buffer output node BufOut is also not connected to the lower supply voltage VSS through the second switch132. Consequently, during time t1to time t2, the voltage at the buffer output node BufOut is still at the lower supply voltage VSS, because the voltage at the buffer output node BufOut at time t1is maintained until time t2by the regenerative circuit135when each of the first switch131and the second switch132is not at the connecting state. From time t2to time t3, because the first enabling signal ENupis at logic TRUE and the second enabling signal ENdnis at logic FALSE, the buffer output node BufOut is connected to the upper supply voltage VDDH through the first switch131but the buffer output node BufOut is not connected to the lower supply voltage VSS through the second switch132. Consequently, the voltage at the buffer output node BufOut is at the upper supply voltage VDDH. From time t3to time t4, because the first enabling signal ENupis at logic FALSE and the second enabling signal ENdnis at logic FALSE, the buffer output node BufOut is not connected to upper supply voltage VDDH through the first switch131and the buffer output node BufOut is also not connected to the lower supply voltage VSS through the second switch132. Consequently, during time t3to time t4, the voltage at the buffer output node BufOut is still at the upper supply voltage VDDH, because the voltage at the buffer output node BufOut at time t3is maintained until time t4by the regenerative circuit135when each of the first switch131and the second switch132is not at the connecting state. 
After time t4, because the first enabling signal ENupis at logic FALSE and the second enabling signal ENdnis at logic TRUE, the buffer output node BufOut is not connected to the upper supply voltage VDDH through the first switch131but the buffer output node BufOut is connected to the lower supply voltage VSS through the second switch132. Consequently, the voltage at the buffer output node BufOut is at the lower supply voltage VSS. In the example waveform ofFIG.2F, the voltage at the buffer output node BufOut changes from the lower supply voltage VSS to the upper supply voltage VDDH at time t+ (which is same as time t2), when the input voltage signal PAD crosses the upper threshold voltage VTHas the input voltage signal PAD is rising. The voltage at the buffer output node BufOut changes from the upper supply voltage VDDH to the lower supply voltage VSS at time t− (which is same as time t4), when the input voltage signal PAD crosses a lower threshold voltage VTLas the input voltage signal PAD is falling. FIG.3is a schematic diagram of an input buffer circuit300having two threshold circuits and one control circuit, in accordance with some embodiments. Similar to the input buffer circuit100inFIG.1C, the input buffer circuit300inFIG.3also includes an upper threshold circuit110, a lower threshold circuit120, and a control circuit130. The upper threshold circuit110includes a high-side tracker112and an upper threshold detector114. The upper threshold detector114is coupled between the high-side tracker112and the first switch131in the control circuit130. The lower threshold circuit120includes a low-side tracker122and a lower threshold detector124. The lower threshold detector124is coupled between the low-side tracker122and the second switch132in the control circuit130. InFIG.3, the input buffer circuit300inFIG.3also includes switches331and332, inverters341and342, and also the tracker circuits351and352for forming a regenerative circuit coupled between the buffer output node BufOut and the output node108. InFIG.3, the high-side tracker112is implemented with the PMOS transistors M1-M2and M5-M6. The PMOS transistors M1and M2are serially connected between the node of the tracking-up signal PADUP and the input node102. The gates of the PMOS transistors M1and M2are connected to the supply voltage VSSH. The PMOS transistor M5and M6are serially connected between the node of the tracking-up signal PADUP and the supply voltage VSSH. The gate of the PMOS transistor M5is connected to the input node102. The gate of the PMOS transistor M6is connected to the supply voltage VSS. InFIG.3, the upper threshold detector114is implemented with the PMOS transistor M9and the NMOS transistor M10serially connected between the supply voltage VDDH and the supply voltage VSSH. The first switch131is implemented with the PMOS transistors M13and M14serially connected between the supply voltage VDDH and the buffer output node BufOut. The gate of the PMOS transistor M14is connected to the supply voltage VSSH, and the gate of the PMOS transistor M13connected to the output node of the upper threshold detector114. InFIG.3, the low-side tracker122is implemented with the NMOS transistors M3-M4and M7-M8. The NMOS transistors M3and M4are serially connected between the input node102and the node of the tracking-down signal PADDN. The gates of the NMOS transistors M3and M4are connected to the supply voltage VDDL. 
The NMOS transistors M7and M8are serially connected between the node of the supply voltage VDDL and the node of the tracking-down signal PADDN. The gate of the NMOS transistor M7is connected to the supply voltage VDDH. The gate of the NMOS transistor M8is connected to the node of the tracking-down signal PADDN. InFIG.3, the lower threshold detector124is implemented with the PMOS transistor M11and the NMOS transistor M12serially connected between the supply voltage VDDL and the supply voltage VSS. The second switch132is implemented with the NMOS transistors M15and M16serially connected between the buffer output node BufOut and the supply voltage VSS. The gate of the NMOS transistor M15is connected to the supply voltage VDDL, and the gate of the NMOS transistor M16is connected to the output node of the lower threshold detector124. The input buffer circuit300inFIG.3has a regenerative circuit coupled between the buffer output node BufOut and the output node108. In the regenerative circuit, each of the switches331and332is implemented with two transistors. Specifically, the switch331is implemented with the PMOS transistors M17and M18serially connected between the supply voltage VDDH and the buffer output node BufOut. The gate of the PMOS transistor M18is connected to the supply voltage VSSH, and the gate of the PMOS transistor M17is connected to the output node of the inverter341. The switch332is implemented with the NMOS transistors M19and M20serially connected between the buffer output node BufOut and the supply voltage VSS. The gate of the NMOS transistor M19is connected to the supply voltage VDDL, and the gate of the NMOS transistor M20is connected to the output node of the inverter342. For the regenerative circuit inFIG.3, the tracker circuit351is implemented with the PMOS transistors M21-M22and M23-M24. The PMOS transistor M21and M22are serially connected between the input node of the inverter341and the buffer output node BufOut. The gates of the PMOS transistor M21and M22are connected to the supply voltage VSSH. The PMOS transistors M23and M24are serially connected between the node of the input node of the inverter341and the supply voltage VSSH. The gate of the PMOS transistor M23is connected to the buffer output node BufOut. The gate of the PMOS transistor M24is connected to the supply voltage VSS. For the regenerative circuit inFIG.3, the tracker circuit352is implemented with the NMOS transistors M25-M26and M27-M28. The NMOS transistors M25and M26are serially connected between the node of the buffer output node BufOut and the input node of the inverter342. The gates of the NMOS transistor M25and M26are connected to the supply voltage VDDL. The NMOS transistors M27and M28are serially connected between the node of supply voltage VDDL and the input node of the inverter342. The gate of the NMOS transistor M27is connected to the supply voltage VDDH. The gate of the NMOS transistor M28is connected to the buffer output node BufOut. FIGS.4A-4Bare circuit diagrams of the inverters341and342in the input buffer circuit300inFIG.3, in accordance with some embodiments. In some embodiments, as shown inFIG.4A, the inverter341is implemented with a PMOS transistor and an NMOS transistor serially connected between the supply voltage VDDH and the supply voltage VSSH, and the inverter342is implemented with a PMOS transistor and an NMOS transistor serially connected between the supply voltage VDDL and the supply voltage VSS. 
In some alternative embodiments, as shown inFIG.4B, the inverter341is also implemented with a PMOS transistor and an NMOS transistor serially connected between the supply voltage VDDH and the supply voltage VSSH. InFIG.4B, however, the inverter342is implemented with one PMOS transistor and two NMOS transistors serially connected between the supply voltage VDDL and the supply voltage VSS. The gate of the NMOS transistor MFis connected to the power supply VDDH. Adding the NMOS transistor MFto the inverter342enables compensation for the threshold voltage increasing effect on the NMOS transistor in the inverter342. With the inverter342inFIG.4B, less degradation of the hysteresis window is created for the input buffer circuit300. In operation, if an input voltage signal PAD as shown inFIG.2Ais coupled to the input node102of the input buffer circuit300inFIG.3, the voltage at the buffer output node BufOut of the input buffer circuit300has the waveform as shown inFIG.2F. Before time t1, the tracking-up signal PADUP at the output of the high-side tracker112as induced by the input voltage signal PAD is below the upper threshold voltage VTHof the upper threshold detector114; consequently, the voltage VENupat the output of the upper threshold detector114is at voltage VDDH, which drives the PMOS transistor M13into the non-conducting state. The first enabling signal ENupas represented by the voltage VENupis logic FALSE for the purpose of controlling the first switch131, and the first switch131is driven into the disconnected state. Additionally, the tracking-down signal PADDN at the output of the low-side tracker122as induced by the input voltage signal PAD is below the lower threshold voltage VTLof the lower threshold detector124; consequently, the voltage VENdnat the output of the lower threshold detector124is at voltage VDDL, which drives the NMOS transistor M16into the conducting state. The second enabling signal ENdnas represented by the voltage VENdnis logic TRUE for the purpose of controlling the second switch132, and the second switch132is driven into the connected state. Furthermore, when the buffer output node BufOut is at voltage VSS which is lower than the lower limiting voltage (i.e., VSSH) of the tracker circuit351, the voltage at the input node of the inverter341is maintained at the lower limiting voltage (i.e., VSSH). The voltage VSSH at the input node of the inverter341causes the output node of inverter341to rise towards a voltage HIGH level that drives the PMOS transistor M17into the non-conducting state. That is, due to voltage VSSH at the input node of the inverter341, the switch331is driven into the disconnected state. Additionally, when the buffer output node BufOut is at voltage VSS which is lower than the upper limiting voltage (i.e., VDDL) of the tracker circuit352, the voltage at the input node of the inverter342follows the voltage at the buffer output node BufOut and is also at voltage VSS. The voltage VSS at the input node of the inverter342causes the voltage at the output node of inverter342to drive the NMOS transistor M20into the conducting state. That is, due to voltage VSS at the input node of the inverter342, the switch332is driven into the connected state. Before time t1, as shown inFIG.2F, the buffer output node BufOut is at voltage VSS, when each of the first switch131and the switch331is at the disconnected state but each of the second switch132and the switch332is at the connected state.
From time t1to time t2, the tracking-up signal PADUP at the output of the high-side tracker112as induced by the input voltage signal PAD is below the upper threshold voltage VTHof the upper threshold detector114; consequently, the voltage VENupat the output of the upper threshold detector114is at voltage VDDH, which drives the PMOS transistor M13into the non-conducting state. The first enabling signal ENupas represented by the voltage VENupis logic FALSE for the purpose of controlling the first switch131, and the first switch131is driven into the disconnected state. Additionally, the tracking-down signal PADDN at the output of the low-side tracker122as induced by the input voltage signal PAD is above the lower threshold voltage VTLof the lower threshold detector124; consequently, the voltage VENdnat the output of the lower threshold detector124is at voltage VSS, which drives the NMOS transistor M16into the non-conducting state. The second enabling signal ENdnas represented by the voltage VENdnis logic FALSE for the purpose of controlling the second switch132, and the second switch132is at the disconnected state. At the time following time t1, even though the second switch132is changed from the connected state to the disconnected state, the buffer output node BufOut is maintained at voltage VSS, because the switch332is still at the connected state for maintaining the voltage. From time t1to time t2, as shown inFIG.2F, the buffer output node BufOut is at voltage VSS, when each of the first switch131, the switch331, and second switch132is at the disconnected state but the switch332is at the connected state. The voltage at the buffer output node BufOut is maintained by the switch332in the regenerative circuit135. At time t2, the tracking-up signal PADUP at the output of the high-side tracker112as induced by the input voltage signal PAD rises above the upper threshold voltage VTHof the upper threshold detector114; consequently, the voltage VENupat the output of the upper threshold detector114is at voltage VSSH, which drives the PMOS transistor M13into the conducting state. The first enabling signal ENupas represented by the voltage VENupis logic TRUE for the purpose of controlling the first switch131, and the first switch131is driven into the connected state. Additionally, at time t2, the tracking-down signal PADDN at the output of the low-side tracker122as induced by the input voltage signal PAD is above the lower threshold voltage VTLof the lower threshold detector124; consequently, the voltage VENdnat the output of the lower threshold detector124is at voltage VSS, which drives the NMOS transistor M16into the non-conducting state. The second enabling signal ENdnas represented by the voltage VENdnis logic FALSE for the purpose of controlling the second switch132, and the second switch132is driven into the disconnected state. After time t2, when the first switch131is driven into the connected state, the voltage at the buffer output node BufOut starts to rise from voltage VSS. The voltage at the input node of the inverter342follows the buffer output node BufOut until the buffer output node BufOut reaches voltage VDDL. The voltage at the input node of the inverter342is maintained at voltage VDDL when the buffer output node BufOut rises above voltage VDDL. The voltage VDDL at the input node of the inverter342causes the output node of inverter342to fall towards a voltage LOW level that drives the NMOS transistor M20into the non-conducting state. 
That is, due to the voltage VDDL at the input node of the inverter342, the switch332is driven into the disconnected state, which causes the voltage at the buffer output node BufOut to rise further because of the connection established between the buffer output node BufOut and the supply voltage VDDH by the inverter341. When the buffer output node BufOut rises above the voltage VSSH, the voltage at the input node of the inverter341continues to follow the voltage at the buffer output node BufOut until the voltage at the buffer output node BufOut reaches voltage VDDH. The voltage VDDH at the input node of the inverter341causes the voltage at the output node of the inverter341to drive the PMOS transistor M17into the conducting state. That is, the switch331is driven into the connected state. From time t2to time t3, as shown inFIG.2F, the buffer output node BufOut is at the voltage VDDH, when each of the first switch131and the switch331is at the connected state and each of the second switch132and the switch332is at the disconnected state. From time t3to time t4, the tracking-up signal PADUP at the output of the high-side tracker112as induced by the input voltage signal PAD is below the upper threshold voltage VTHof the upper threshold detector114; consequently, the voltage VENupat the output of the upper threshold detector114is at voltage VDDH, which drives the PMOS transistor M13into the non-conducting state. The first enabling signal ENupas represented by the voltage VENupis logic FALSE for the purpose of controlling the first switch131, and the first switch131is driven into the disconnected state. Additionally, the tracking-down signal PADDN at the output of the low-side tracker122as induced by the input voltage signal PAD is above the lower threshold voltage VTLof the lower threshold detector124; consequently, the voltage VENdnat the output of the lower threshold detector124is at voltage VSS, which drives the NMOS transistor M16into the non-conducting state. The second enabling signal ENdnas represented by the voltage VENdnis logic FALSE for the purpose of controlling the second switch132, and the second switch132is at the disconnected state. At the time following time t3, even though the first switch131is changed from the connected state to the disconnected state, the buffer output node BufOut is maintained at voltage VDDH, because the switch331is still at the connected state for maintaining the voltage. From time t3to time t4, as shown inFIG.2F, the buffer output node BufOut is at voltage VDDH, when each of the first switch131, second switch132, and switch332is at the disconnected state but the switch331is at the connected state. The voltage at the buffer output node BufOut is maintained by the switch331in the regenerative circuit135. At time t4, the tracking-up signal PADUP at the output of the high-side tracker112as induced by the input voltage signal PAD is below the upper threshold voltage VTHof the upper threshold detector114; consequently, the voltage VENupat the output of the upper threshold detector114is at voltage VDDH, which drives the PMOS transistor M13into the non-conducting state. The first enabling signal ENupas represented by the voltage VENupis logic FALSE for the purpose of controlling the first switch131, and the first switch131is driven into the disconnected state. 
Additionally, at time t4, the tracking-down signal PADDN at the output of the low-side tracker122as induced by the input voltage signal PAD falls below the lower threshold voltage VTLof the lower threshold detector124; consequently, the voltage VENdnat the output of the lower threshold detector124is at voltage VDDL, which drives the NMOS transistor M16into the conducting state. The second enabling signal ENdnas represented by the voltage VENdnis logic TRUE for the purpose of controlling the second switch132, and the second switch132is driven into the connected state. After time t4, when the second switch132is changed from the disconnected state to the connected state, the buffer output node BufOut starts to fall from voltage VDDH. The voltage at the input node of the inverter341follows the buffer output node BufOut until the buffer output node BufOut reaches voltage VSSH. The voltage VSSH at the input node of the inverter341causes the output node of inverter341to rise towards a voltage HIGH level that drives the PMOS transistor M17into the non-conducting state. That is, due to the voltage VSSH at the input node of the inverter341, the switch331is driven into the disconnected state, which causes the voltage at the buffer output node BufOut to fall further because of the connection established between the buffer output node BufOut and the supply voltage VSS by the inverter342. When the voltage at the buffer output node BufOut falls below the voltage VDDL, the voltage at the input node of the inverter342continues to follow the voltage at the buffer output node BufOut until the voltage at the buffer output node BufOut reaches voltage VSS. The voltage VSS at the input node of the inverter342causes the voltage at the output node of the inverter342to drive the NMOS transistor M20into the conducting state. That is, the switch332is driven into the connected state. After time t4, as shown inFIG.2F, the buffer output node BufOut is at voltage VSS, when each of the first switch131and the switch331is at the disconnected state but each of the second switch132and the switch332is at the connected state. Waveforms of signals at various circuit nodes inFIG.3are depicted inFIGS.5A-5D.FIG.5Ais the waveform of the input voltage signal PAD at the input node102of the input buffer circuit300, in accordance with some embodiments. The waveform of the input voltage signal PAD inFIG.5Ais the same as the waveform inFIG.2A.FIG.5Bis the waveform of the voltage at the buffer output node BufOut in the input buffer circuit300, in accordance with some embodiments. The waveform of the voltage at the buffer output node BufOut is the same as the waveform inFIG.2F.FIG.5Cis the waveform of the output voltage signal Vout at the output node108of the input buffer circuit100, in accordance with some embodiments.FIG.5Dis the waveform of the voltage signal CoreOut at the output node of the level shifter160, in accordance with some embodiments. InFIGS.5A-5D, as the input voltage signal PAD rises, the input voltage signal PAD crosses the lower threshold voltage VTLat time t1and crosses the upper threshold voltage VTHat time t2. The voltage at the buffer output node BufOut inFIG.5Bchanges from the lower supply voltage VSS to the upper supply voltage VDDH at time t2, which is identified as time t+ inFIGS.5B-5D. As the input voltage signal PAD falls, the input voltage signal PAD crosses the upper threshold voltage VTHat time t3and crosses the lower threshold voltage VTLat time t4. 
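The switching sequence described from time t1through time t4amounts to hysteresis at the buffer output node BufOut: BufOut is driven to voltage VDDH only after the input rises above the upper threshold voltage VTH, is driven to voltage VSS only after the input falls below the lower threshold voltage VTL, and is otherwise held by the regenerative circuit135. The Python sketch below is a minimal behavioral model of that hysteresis (illustrative only, not part of the patent disclosure; the threshold and supply values are assumed).

# Illustrative hysteresis model of the buffer output node BufOut.
# The threshold and supply values below are assumed for the example only.
VDDH, VSS = 1.8, 0.0
VTH, VTL = 1.2, 0.6   # assumed upper and lower threshold voltages

def next_bufout(pad, bufout):
    if pad > VTH:
        return VDDH    # first switch131 connects BufOut to the supply voltage VDDH
    if pad < VTL:
        return VSS     # second switch132 connects BufOut to the supply voltage VSS
    return bufout      # regenerative circuit135 holds the previous level

bufout = VSS
for pad in (0.2, 0.7, 1.0, 1.5, 1.0, 0.7, 0.3):   # rising then falling input
    bufout = next_bufout(pad, bufout)
    print(f"PAD={pad:.1f}  BufOut={bufout:.1f}")

The printout shows BufOut switching to VDDH only once PAD exceeds VTH (time t2) and back to VSS only once PAD drops below VTL (time t4), matching the waveform ofFIG.2F.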
The voltage at the buffer output node BufOut inFIG.5Bchanges from the upper supply voltage VDDH to the lower supply voltage VSS at time t4, which is identified as time t− inFIGS.5B-5D. InFIG.3, the buffer output node BufOut is connected to the input node of the tracker circuit352and the output node108is connected to the output node of the tracker circuit352. The output voltage signal Vout follows the voltage at the buffer output node BufOut if the voltage at the buffer output node BufOut is smaller than voltage VDDL, but the output voltage signal Vout is maintained at voltage VDDL if the voltage at the buffer output node BufOut is larger than or equal to voltage VDDL. Consequently, the output voltage signal Vout inFIG.5Cchanges from voltage VSS to voltage VDDL at time t+, and changes from voltage VDDL to voltage VSS at time t−. InFIG.3, the output voltage signal Vout at the output node108is further coupled to the level shifter160, and the voltage signal CoreOut is generated at the output terminal of the level shifter160from the output voltage signal Vout. Because of the level shifter160, the voltage signal CoreOut inFIG.5Dchanges from voltage VSS to voltage VCC at time t+, and changes from voltage VCC to voltage VSS at time t−. The upper threshold voltage VTHand the lower threshold voltage VTLas shown inFIG.5Aare correspondingly determined by the upper threshold detector114and the lower threshold detector124(inFIG.1CandFIG.3). InFIG.3, the upper threshold voltage VTHis determined by the threshold voltage of the upper threshold detector114. When the upper threshold detector114is implemented with the PMOS transistor M9and the NMOS transistor M10serially connected between the supply voltage VDDH and the supply voltage VSSH, the threshold voltage of the upper threshold detector114is related to the driving strengths of the PMOS transistor M9and the NMOS transistor M10. The supply voltage VDDH is an upper supply voltage for the upper threshold detector114, and the supply voltage VSSH is an intermediate lower supply voltage for the upper threshold detector114. In some embodiments, if the driving strength of the PMOS transistor M9is equal to the driving strength of the NMOS transistor M10, the threshold voltage of the upper threshold detector114(and the upper threshold voltage VTHof the input buffer circuit300) is equal to (VDDH+VSSH)/2. In some embodiments, the upper threshold voltage VTHof the input buffer circuit300is adjusted by changing the driving strength of the PMOS transistor M9, the driving strength of the NMOS transistor M10, the supply voltage VDDH, and/or the supply voltage VSSH. InFIG.3, the lower threshold voltage VTLis determined by the threshold voltage of the lower threshold detector124. When the lower threshold detector124is implemented with the PMOS transistor M11and the NMOS transistor M12serially connected between the supply voltage VDDL and the supply voltage VSS, the threshold voltage of the lower threshold detector124is related to the driving strengths of the PMOS transistor M11and the NMOS transistor M12. The supply voltage VDDL is an intermediate upper supply voltage for the lower threshold detector124, and the supply voltage VSS is a lower supply voltage for the lower threshold detector124. 
In some embodiments, if the driving strength of the PMOS transistor M11is equal to the driving strength of the NMOS transistor M12, the threshold voltage of the lower threshold detector124(and the lower threshold voltage VTLof the input buffer circuit300) is equal to (VDDL+VSS)/2. In some embodiments, the lower threshold voltage VTLof the input buffer circuit300is adjusted by changing the driving strength of the PMOS transistor M11, the driving strength of the NMOS transistor M12, the supply voltage VDDL, and/or the supply voltage VSS. FIG.6is a flowchart of a method600of generating an output voltage signal from an input voltage signal, in accordance with some embodiments. The sequence in which the operations of method600are depicted inFIG.6is for illustration only; the operations of method600are capable of being executed in sequences that differ from that depicted inFIG.6. It is understood that additional operations may be performed before, during, and/or after the method600depicted inFIG.6, and that some other processes may only be briefly described herein. In operation605of method600, the input voltage signal is detected. In the embodiments as shown inFIG.1A, the input voltage signal PAD at an input node102is coupled to the upper threshold circuit110and the lower threshold circuit120. In the embodiments as shown inFIG.1C, the input voltage signal PAD at an input node102is coupled to the high-side tracker112in the upper threshold circuit110and coupled to the low-side tracker122in the lower threshold circuit120. In operation610of method600, a first enabling signal is generated based on a comparison between the input voltage signal and an upper threshold voltage. In the embodiments as shown inFIG.1A, the input voltage signal PAD at an input node102of the input buffer circuit100is coupled to the upper threshold circuit110, and the upper threshold circuit110is configured to generate a first enabling signal ENupbased on a comparison between an input voltage signal PAD and the upper threshold voltage VTH. In operation620of method600, a second enabling signal is generated based on a comparison between the input voltage signal and a lower threshold voltage. In the embodiments as shown inFIG.1A, the input voltage signal PAD at an input node102of the input buffer circuit100is coupled to the lower threshold circuit120, and the lower threshold circuit120is configured to generate a second enabling signal ENdnbased on a comparison between the input voltage signal PAD and the lower threshold voltage VTL. In operation630of method600, a decision maker determines whether the first enabling signal and the second enabling signal have changed logic level consecutively. In operation640of method600, a decision maker determines whether the second enabling signal changes logic level before the first enabling signal does. In the embodiments as shown inFIG.1C, the combination of the first switch131, the second switch132, and the regenerative circuit135determines whether the first enabling signal ENupand the second enabling signal ENdnhave changed logic level consecutively. The combination of the first switch131, the second switch132, and the regenerative circuit135also determines whether the second enabling signal changes logic level before the first enabling signal does. If the second enabling signal changes logic level before the first enabling signal does, then, in operation642of method600, the output voltage signal is changed from the lower level to the higher level. 
On the other hand, if the first enabling signal changes logic level before the second enabling signal does, then, in operation648of method600, the output voltage signal is changed from the higher level to the lower level. In the embodiments as shown inFIG.1C, as shown inFIGS.2D-2E, after the second enabling signal ENdnis changed from logic TRUE to logic FALSE at time t1, when the first enabling signal ENupis changed from logic FALSE to logic TRUE at time t2, the voltage at the buffer output node BufOut is changed from the lower supply voltage VSS to the upper supply voltage VDDH at time t2. Correspondingly, the output voltage inFIG.5Bis changed from the lower supply voltage VSS to the upper supply voltage VDDH at time t2. On the other hand, as shown inFIGS.2D-2E, after the first enabling signal ENupis changed from logic TRUE to logic FALSE at time t3, when the second enabling signal ENdnis changed from logic FALSE to logic TRUE at time t4, the voltage at the buffer output node BufOut is changed from the upper supply voltage VDDH to the lower supply voltage VSS at time t4. Correspondingly, the output voltage inFIG.5Bis changed from the upper supply voltage VDDH to the lower supply voltage VSS at time t4. An aspect of the present disclosure relates to an integrated circuit. The integrated circuit includes an upper threshold circuit, a lower threshold circuit, and a control circuit. The upper threshold circuit is configured to set a logic level of a first enabling signal based on comparing an input voltage signal with an upper threshold voltage. The lower threshold circuit is configured to set a logic level of a second enabling signal based on comparing the input voltage signal with a lower threshold voltage. The control circuit is configured to change an output voltage signal from a first voltage level to a second voltage level when the logic level of the first enabling signal and the logic level of the second enabling signal are changed consecutively. Another aspect of the present disclosure relates to a method. The method includes generating a first enabling signal based on comparing an input voltage signal with an upper threshold voltage, and generating a second enabling signal based on comparing the input voltage signal with a lower threshold voltage. The method also includes changing an output voltage signal from a first voltage level to a second voltage level when each of the first enabling signal and the second enabling signal changes a logical level consecutively. Still another aspect of the present disclosure relates to an integrated circuit. The integrated circuit includes an upper threshold circuit, a lower threshold circuit, a first switch, and a second switch. The upper threshold circuit is configured to set a logic level of a first enabling signal based on comparing an input voltage signal with an upper threshold voltage. The lower threshold circuit is configured to set a logic level of a second enabling signal based on comparing the input voltage signal with a lower threshold voltage. The first switch is electrically connected between an upper supply voltage and a buffer output node and configured to receive the first enabling signal from the upper threshold circuit. The second switch is electrically connected between the buffer output node and a lower supply voltage and configured to receive the second enabling signal from the lower threshold circuit. 
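Operations630through648above reduce to an ordering rule: the output voltage signal changes level only after the two enabling signals change logic level consecutively, and the direction of the change depends on which enabling signal changed first. The Python sketch below is a minimal, illustrative model of that decision flow (not part of the patent disclosure); the supply values are assumed, and the thresholds are taken as the equal-driving-strength midpoint expressions given above forFIG.3.

# Illustrative model of method600: ENup/ENdn generation and the
# consecutive-change decision. Supply values are assumed for the example.
VDDH, VSSH, VDDL, VSS = 1.8, 0.9, 0.9, 0.0
VTH = (VDDH + VSSH) / 2   # upper threshold for equal driving strengths
VTL = (VDDL + VSS) / 2    # lower threshold for equal driving strengths

def run(pad_samples):
    out = 0                          # 0 = lower level, 1 = higher level
    prev_up, prev_dn = False, True   # enabling signals for a low input voltage
    last_changed = None              # which enabling signal changed most recently
    for pad in pad_samples:
        en_up = pad > VTH            # operation610: compare PAD with VTH
        en_dn = pad < VTL            # operation620: compare PAD with VTL
        up_changed = en_up != prev_up
        dn_changed = en_dn != prev_dn
        if up_changed and last_changed == "dn":
            out = 1                  # operation642: ENdn changed first, output rises
        if dn_changed and last_changed == "up":
            out = 0                  # operation648: ENup changed first, output falls
        if up_changed:
            last_changed = "up"
        if dn_changed:
            last_changed = "dn"
        prev_up, prev_dn = en_up, en_dn
        print(f"PAD={pad:.2f}  ENup={int(en_up)}  ENdn={int(en_dn)}  out={out}")

run([0.2, 0.7, 1.0, 1.5, 1.0, 0.7, 0.2])   # rising then falling input, as inFIG.5A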
The foregoing outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure. | 52,247 |
11863190 | It will be recognized that some or all of the figures are schematic representations for purposes of illustration. The figures are provided for the purpose of illustrating one or more embodiments with the explicit understanding that they will not be used to limit the scope or the meaning of the claims. DETAILED DESCRIPTION Referring generally to the figures, the systems and methods relate to circuits and techniques for generating data outputs utilizing shared clock-activated transistors, particularly in very-large-scale integration (VLSI) systems such as central processing units (CPUs), graphics processing units (GPUs), system-on-a-chip (SOC), and internet of things (IoT) devices. In many highly pipelined microprocessor systems, flip-flops are used in each pipeline stage to divide the processing logic for higher performance gains. Low power has become a dominant design priority with the rise of the mobile and battery-operated ecosystem. The chief contributor to the power consumption of a digital system is the clock network, namely the large number of transistors that can be coupled to the clock network and are driven by the clock signal. That is, the large number of clock-coupled transistors inside a flip-flop burdens the clock signal. For example, a conventional flip-flop can have up to 12 transistors coupled to the clock signal (referred to herein as "clock-activated transistors" or "clock-coupled transistors"). Accordingly, many digital systems like CPUs or SOCs can have upwards of one hundred thousand to over a million flip-flops, and the clock network in the digital systems can account for up to forty percent (40%) of the total power consumption. Additionally, with hundreds of thousands to millions of flip-flops used within a digital system, the large number of clock-activated transistors can force the clock network to use larger clock buffers to adequately drive the conventional flip-flops, thereby increasing the size of the clock trees. Thus, reducing the number of clock-activated transistors per flip-flop or per compound circuit reduces the burden on the clock network and thus reduces the total power consumption of the digital system. Furthermore, reducing total power consumption of clock networks by incorporating shared clock-activated transistors improves flip-flop circuit architectures while (1) avoiding (or reducing) contentions within the digital system, (2) reducing voltage drops across nodes of the digital system, and (3) restricting toggling of internal nodes when the main input signal is constant. Accordingly, the circuits and methods described herein disclose a compound sequential circuit architecture for sharing clock-activated transistors (sometimes referred to as "clock-coupled transistors") across N number of flip-flops. That is, instead of each flip-flop within the N-bit array including clock-coupled transistors, each flip-flop includes clock terminals to electrically couple to other flip-flops. In this manner, multiple flip-flops can be chained together within an N-bit array controlled by a few clock-activated transistors. For comparison purposes, an array of 8 conventional flip-flops can have 96 clock-activated transistors, whereas an 8-bit array of the present circuits and methods has three clock-activated transistors. 
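The clock-load saving scales with the width of the array: a conventional N-bit bank presents on the order of 12·N clock-coupled transistors to the clock network, whereas the compound architecture presents only three regardless of N. The short Python sketch below tabulates that comparison (illustrative only, using the per-flip-flop count of up to 12 cited above).

# Illustrative comparison of clock-network transistor load:
# conventional flip-flops versus the shared-clock compound architecture.
CLOCKED_PER_CONVENTIONAL_FF = 12   # up to 12 clock-coupled transistors per flip-flop
SHARED_CLOCKED_TRANSISTORS = 3     # first, second, and third clock-activated transistors

for n in (4, 8, 16, 32):
    conventional = n * CLOCKED_PER_CONVENTIONAL_FF
    print(f"N={n:2d} flip-flops: conventional clock load = {conventional:3d} transistors, "
          f"compound array clock load = {SHARED_CLOCKED_TRANSISTORS} transistors")

For N=8 this reproduces the 96-versus-three comparison given above.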
In this particular example and as described herein, the operation of the 8-bit flip-flop array is (1) fully-static, (2) contention-free while utilizing single phase clocking without using local clock buffers, and (3) such that all internal nodes of the 8-bit flip-flop array do not toggle when all the inputs are constant. In general, a Metal-Oxide Semiconductor Field-Effect Transistor (planar MOSFET) describes a type of transconductance (or transconductive) device that may be used in modern VLSI systems. Planar MOSFETs (referred to hereafter as "MOSFETs") are designed as one of two basic types, n-channel and p-channel. N-channel MOSFETs open a conductive path between the source and drain when a positive voltage greater than the device's threshold voltage (VT) is applied from the gate to the source. P-channel MOSFETs open a conductive path when a voltage greater than the device's threshold voltage is applied from the source to the gate. Complementary MOSFET (CMOS) describes a circuit designed with a mix of n-channel and p-channel MOSFETs. In CMOS designs, n-channel and p-channel devices may be arranged such that a second level on the gate of a MOSFET turns an n-channel device on (e.g., opens a conductive path), and turns a p-channel MOSFET off (e.g., closes the conductive path). Conversely, a first level on the gate of a MOSFET turns a p-channel on and an n-channel off. It should be understood that while CMOS logic is used in the examples, any suitable digital logic process may be used for the circuits and methods described herein. Furthermore, all drawings depict n-channel and p-channel MOSFETs as three-terminal devices including a drain, gate, and source unless stated otherwise. Moreover, the fourth terminal, being the body substrate, is assumed to be coupled to low-power supply for n-channel and high-power supply for p-channel unless stated otherwise. Notwithstanding planar MOSFET technology, the following FIGS. can be applied to FinFET transistor technologies or any suitable 3D vertical transistors such as FinFET, GAAFET (Gate All Around), and FeFET (ferroelectric). Additionally, aspects of the present disclosure address problems in existing flip-flop circuit architectures by providing an improved power consumption technique utilizing shared clock-activated transistors that may reduce the transistor load on the clock network by up to eighty percent (80%). Furthermore, aspects of the present disclosure incorporate techniques that enable flip-flop circuits to remain fully-static and operate contention free. In addition to reducing the loading on the clock network, aspects of the present disclosure also avoid dependency on the sizing relationships of transistors, thereby allowing the various circuits described herein to be insensitive to process variation (e.g., attributes of transistors such as length, width, and oxide thickness). The present implementations will now be described in detail with reference to the drawings, which are provided as illustrative examples of the implementations so as to enable those skilled in the art to practice the implementations and alternatives apparent to those skilled in the art. Notably, the figures and examples below are not meant to limit the scope of the present implementations to a single implementation. Other implementations are possible by way of interchange of some or all of the described or illustrated elements. 
Moreover, where certain elements of the present implementations can be partially or fully implemented using known components, only those portions of such known components that are necessary for an understanding of the present implementations will be described, and detailed descriptions of other portions of such known components will be omitted so as not to obscure the present implementations. In the present specification, an implementation showing a singular component should not be considered limiting; rather, the present disclosure is intended to encompass other implementations including a plurality of the same component, and vice-versa, unless explicitly stated otherwise herein. Moreover, applicants do not intend for any term in the specification or claims to be ascribed an uncommon or special meaning unless explicitly set forth as such. Further, the present implementations encompass present and future known equivalents to the known components referred to herein by way of illustration. Referring toFIG.1, a block diagram illustrating a compound sequential circuit architecture100(hereafter referred to herein as “circuit architecture100”), according to an illustrative implementation. In some implementations, the circuit architecture100includes an N-bit array of flip-flops electrically coupled to each other via three shared clock nodes. Each flip-flop (FF1121, FF2122, FF3123. . . FFN124) can include three clock terminals (SP1, SP2, SN1), an input signal (D1, D2, D3 . . . DN), and an output signal (Q1, Q2, Q3 . . . QN). It should be understood that N is the number of flip-flops in the array, and is an integer number greater than one. In some implementations, the first clock terminal SP1 is electrically coupled to the first shared clock node131, the second clock terminal SP2 is electrically coupled to the second shared clock node132, and the third clock terminal SN1 is electrically coupled to the third shared clock node133. In some implementations, each of the shared clock nodes can be electrically coupled to a clock-activated transistor. For example, the first shared clock node131is electrically coupled to a first clock-activated transistor111, the second shared clock node132is electrically coupled to a second clock-activated transistor112, and the third shared clock node133is electrically coupled to a third clock-activated transistor113. As shown, the first clock-activated transistor111and second clock-activated transistor112can be electrically coupled to high-power supply, while the third clock-activated transistor113can be electrically coupled to low-power supply. Each of the clock-activated transistors can be driven by a main clock signal101. In some implementations, the N-bit array of flip-flops has at most three clock-activated transistors electrically coupled to the main clock signal101. In particular, as shown with reference toFIGS.3-6and8-15, none of the flip-flops include a clock-activated transistors (e.g., FF1121-FFN124). Referring toFIG.2, a block diagram illustrating the compound sequential circuit architecture100ofFIG.1used in a pipeline system, according to an illustrative implementation. In general,FIG.2depicts how the compound sequential circuit architecture100can be applied to a digital system, such as a microprocessor. For example, a microprocessor includes pipeline stages each separated by flip-flops to hold the intermediate results. In this example, there are four flip-flops (N=4) in the N-bit array. 
Flip-flops FF1211-FF4214can be conventional flip-flops each having ten clock-activated transistors as depicted by conventional flip-flops210. In particular, flip-flop FF1211samples a data bit at input signal D1 originating from logic unit 1 (LOGIC1) and generates an output signal Q1 that is an input to logic unit 2 (LOGIC2). The output of logic unit 2 is the input signal D2 of flip-flop FF2212which generates an output signal Q2 used elsewhere in the digital system. A logic unit 3 (LOGIC3) provides the input signal D3 to flip-flop FF3213which generates an output signal Q3 used elsewhere in the digital system. Finally, a logic unit 4 (LOGIC4) provides the input signal D4 to flip-flop FF4214which generates an output signal Q4 used elsewhere in the digital system. All the flip-flops (211-214) can be controlled by a main clock signal101(CLK). Since each conventional flip-flop (e.g., FF1211-FF4214) may have ten clock-activated transistors, the total transistor load on the main clock signal101can be 40 transistors. Furthermore,FIG.2shows how conventional flip-flops210can be replaced by the 4-bit compound sequential circuit220using the compound sequential circuit architecture100ofFIG.1, which contains the same four input signals (D1-D4) and the same four output signals (Q1-Q4). Input signals D1-D4 come from the same sources and output signals Q1-Q4 have the same destinations as conventional flip-flops210and can be re-mapped onto the 4-bit compound sequential circuit220, which also is controlled by main clock signal101. However, instead of 40 transistors burdening the main clock signal101, only three transistor loads are presented to main clock signal101in the case of the 4-bit compound sequential circuit220. Referring now toFIG.3, a block diagram illustrating a flip-flop circuit301of the compound sequential circuit architecture100ofFIG.1, according to an illustrative implementation. In general,FIG.3discloses the composition of a flip-flop that can be found in compound sequential circuit architecture100. Flip-flop circuit301includes a first sequence detector310, a second sequence detector320, a first buffer311, a second buffer321, a first keeper sub-circuit312, a second keeper sub-circuit322, and a latch330. In some implementations, the first buffer311is within the first sequence detector310, and the second buffer321is within the second sequence detector320. In some implementations, the buffers within the sequence detectors described herein could be a separate circuit and/or system external to the sequence detectors. In some implementations, latch330can be a differential latch including an input sub-circuit or side, connected to signal ACT1313, called the first differential sub-circuit (sometimes referred to as a "side"), and an inverted input sub-circuit or side, connected to signal ACT2323, called the second differential sub-circuit. Latch330also can include differential outputs including a first output Y442and a second output Yn443(sometimes referred to as an "inverted second output Yn443"). First output Y442and second output Yn443have opposite polarities. In some implementations, the first sequence detector310is electrically coupled to the first differential sub-circuit of latch330, and the second sequence detector320is electrically coupled to the second differential sub-circuit of latch330. Furthermore, the first sequence detector310is electrically coupled to the first keeper sub-circuit312, and the second sequence detector320is electrically coupled to the second keeper sub-circuit322. 
In some implementations, the input signal D440can be electrically coupled to the second sequence detector320and to the second keeper sub-circuit322, whereas the inverted input signal Dn441can be electrically coupled to the first sequence detector310and to the first keeper sub-circuit312. As shown, the first output Y442of latch330is electrically coupled to the first sequence detector310and to the first keeper sub-circuit312. Additionally, as shown, the second output Yn443of latch330is electrically coupled to the second sequence detector320and to the second keeper sub-circuit322. It should be understood that the inputs of the first keeper sub-circuit312have the same logic polarity as the inputs of the first sequence detector310, and furthermore, the inputs of the second keeper sub-circuit322have the same logic polarity as the inputs of the second sequence detector320. Output signal Q446is the output of the exemplary flip-flop and is electrically coupled to latch330. As described herein, no clock-activated transistors are shown within flip-flop circuit301. Instead, the flip-flop circuit301is electrically coupled to a first clock terminal SP1451, a second clock terminal SP2452, and a third clock terminal SN1453, each of which is shared across flip-flop circuits. In some implementations, the first buffer311is electrically coupled to the second buffer321via the first clock terminal SP1451, and latch330is electrically coupled to the second clock terminal SP2452and to the third clock terminal SN1453. In some implementations, the third keeper sub-circuit within latch330(shown inFIG.4) is electrically coupled to the second clock terminal SP2452. In some implementations, the first keeper sub-circuit312is electrically coupled to the second keeper sub-circuit322via the third clock terminal SN1453, and the first differential sub-circuit of latch330is electrically coupled to the second differential sub-circuit of latch330also via the third clock terminal SN1453. Consequently, the first and second differential sub-circuits of latch330are electrically coupled to the first keeper sub-circuit312and the second keeper sub-circuit322. In some implementations, the first output Y442and inverted input signal Dn441are electrically coupled to the first sequence detector310, and the second output Yn443and input signal D440are electrically coupled to the second sequence detector320. As a result, both sequence detectors (310and320) can monitor the current state (Y and Yn) from latch330and the next state (D and Dn) from input signal D440every clock cycle. If the next state and the current state are at the same level, latch330is in storage mode and maintains the output signal Q446at its current level. In some implementations, monitoring can include passively comparing the input D against the output Y for changes, and, when the input changes and becomes the opposite polarity of output Y, the corresponding sequence detector activates the ACT1 or ACT2 signal (e.g., the sequence detector monitors the input signal for changes). Thus, sequences (1) D=1, Q=1 and (2) D=0, Q=0 instruct both sequence detectors310and320to (1) de-assert signal ACT1313and (2) de-assert signal ACT2323(de-assert refers to deactivating, e.g., logic 0=0V=first level), while (3) latch330remains in storage mode retaining the level of output signal446(Q). 
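The storage and detection conditions outlined above, and elaborated in the following paragraphs, can be summarized in a few lines: when D and Q are at the same level, both sequence detectors stay idle and the latch holds its state; D=0 with Q=1 enables the first sequence detector (signal ACT1), and D=1 with Q=0 enables the second sequence detector (signal ACT2), in each case only while the main clock signal is at the first level. The Python sketch below is an illustrative model of that truth table (not part of the patent disclosure), with levels encoded as 0 for the first level and 1 for the second level.

# Illustrative model of the sequence-detector truth table summarized in FIG.3.
def sequence_detect(d, q, clk):
    """Return (ACT1, ACT2, mode) for input D, output Q, and main clock CLK."""
    if d == q:
        return 0, 0, "storage"           # latch330 retains the output level
    if clk == 0 and d == 0 and q == 1:
        return 1, 0, "first detector"    # ACT1 asserted while CLK is at the first level
    if clk == 0 and d == 1 and q == 0:
        return 0, 1, "second detector"   # ACT2 asserted while CLK is at the first level
    return 0, 0, "held by keeper"        # input changed while CLK is at the second level

for d, q, clk in [(1, 1, 0), (0, 0, 0), (0, 1, 0), (1, 0, 0), (0, 1, 1), (1, 0, 1)]:
    act1, act2, mode = sequence_detect(d, q, clk)
    print(f"D={d} Q={q} CLK={clk} -> ACT1={act1} ACT2={act2} ({mode})")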
When the first output Y442is at a second level (e.g., high) and input signal D440transitions to a first level (e.g., low), the first sequence detector310is enabled and asserts signal ACT1313(e.g., second level) when the main clock signal101is at a first level. Signal ACT1313also enables the first differential sub-circuit of latch330. Thus, detection of sequence D=0 and Q=1 causes the first sequence detector310to be enabled and in turn enables latch330. When the first sequence detector310is enabled, signal ACT2323remains de-asserted (e.g., first level), and the second sequence detector320remains disabled. When the second output Yn443is at a second level and input signal D440transitions to a second level, the second sequence detector320is enabled to assert signal ACT2323when the main clock signal101is at a first level. Signal ACT2323also enables the second differential sub-circuit of latch330. Thus, detection of sequence D=1 and Q=0 causes the second sequence detector320to be enabled and in turn enables latch330. When the second sequence detector320is enabled, signal ACT1313remains de-asserted and the first sequence detector remains disabled. It should be understood that the two sequence detectors cannot be enabled simultaneously and, furthermore, either the first or the second sequence detector can assert signal ACT1313or ACT2323, respectively, when the main clock signal101is at a first level. The truth table shown inFIG.3summarizes the sequences to enable each sequence detector and when latch330remains in storage mode. Referring toFIG.4, a circuit diagram illustrating a flip-flop circuit400ofFIG.3of the compound sequential circuit architecture100ofFIG.1, according to an illustrative implementation. In some implementations, the first sequence detector310(with reference toFIG.3) includes transistors403,404,405,406and the first buffer311which includes transistors401and402, where transistor402is a first pull-down transistor of the first buffer311. In some implementations, the second sequence detector320includes transistors415,416,417,418and the second buffer321which includes transistors414and413, where transistor413is the second pull-down transistor of the second buffer321. In some implementations, the first keeper sub-circuit312includes transistors407,408,409, and inverter431and the second keeper sub-circuit322includes transistors410,411,412, and inverter432. In some implementations, latch330includes transistors430,420,425,424,421,429,419,428,423,422,426, and427. The first differential sub-circuit of latch330includes transistors430and420while the second differential sub-circuit of latch330includes transistors429and419. Transistor430is the first reduction transistor, and transistor429is the second reduction transistor. Transistors426and427represent components of the third keeper sub-circuit within latch330. Input signal D440is electrically coupled to the input of inverter433to generate the inverted input signal Dn441. The second output Yn443of latch330is electrically coupled to the input of inverter434to generate the output signal Q446. Inverter434is an isolation buffer to isolate the output signal446from the internal nodes of flip-flop circuit400. Generally referring to the connectivity of flip-flop circuit400, in some implementations, in the first sequence detector310, the drain terminals of transistors403,404, and405are electrically coupled to node n1444. The source terminals of transistors403and404are electrically coupled to high-power supply. 
The source terminal of transistor405is electrically coupled to the drain terminal of transistor406. The source terminal of transistor406is electrically coupled to low-power supply. The gate terminals of transistors403and405are electrically coupled to the inverted input signal Dn441while the gate terminals of transistors404and406are electrically coupled to the first output Y442of latch330. Node n1444is electrically coupled to the gate terminals of transistors401and402, and node n1444is also the input to the first buffer311. The source terminal of transistor401is electrically coupled to the first clock terminal SP1451. The drain terminals of transistors401and402are electrically coupled to signal ACT1313which is also the output of the first buffer311. The source terminal of transistor402is electrically coupled to low-power supply. With reference to the second sequence detector320, the drain terminals of transistors415,416, and417are electrically coupled to node n2445. The source terminals of transistors415and416are electrically coupled to high-power supply. The source terminal of transistor417is electrically coupled to the drain terminal of transistor418. The source terminal of transistor418is electrically coupled to low-power supply. The gate terminals of transistors416and417are electrically coupled to the input signal D440, while the gate terminals of transistors415and418are electrically coupled to the second output Yn443of latch330. Node n2445is electrically coupled to the gate terminals of transistors414and413, and node n2445is also the input to the second buffer321. The source terminal of transistor414is electrically coupled to the first clock terminal SP1451. The drain terminals of transistors414and413are electrically coupled to signal ACT2323which is also the output of the second buffer321. The source terminal of transistor413is electrically coupled to low-power supply. With reference to the first keeper sub-circuit312, signal ACT1313is electrically coupled to both the input of inverter431and the drain terminal of transistor409. The gate terminal of transistor409is electrically coupled to the output of inverter431, and the source terminal of transistor409is electrically coupled to the drain terminal of transistor408. The gate terminal of transistor408is electrically coupled to the inverted input signal Dn441, and the source terminal of transistor408is electrically coupled to the drain terminal of transistor407. The gate terminal of transistor407is electrically coupled to the first output Y442, and the source terminal of transistor407is electrically coupled to the third clock terminal SN1453. Transistors409,408, and407form a three stack series-coupled pull-down network. It should be understood that the transistor ordering within the stack can be interchanged without impacting functionality or performance. With reference to the second keeper sub-circuit322, signal ACT2323is electrically coupled to both the input of inverter432and the drain terminal of transistor410. The gate terminal of transistor410is electrically coupled to the output of inverter432, and the source terminal of transistor410is electrically coupled to the drain terminal of transistor411. The gate terminal of transistor411is electrically coupled to the input signal D440, and the source terminal of transistor411is electrically coupled to the drain terminal of transistor412. 
The gate terminal of transistor412is electrically coupled to the second output Yn443, and the source terminal of transistor412is electrically coupled to the third clock terminal SN1453. In this manner, the first and second keeper sub-circuits are electrically coupled to each other via the third clock terminal SN1453. Transistors410,411, and412also form a three stack series-coupled pull-down network. It should be understood that the transistor ordering within this stack also can be interchanged without impacting functionality or performance. With reference to latch330, transistors430and420of the first differential sub-circuit are electrically coupled in-series. The drain terminal of transistor430is electrically coupled to the first output Y442, and the gate terminal of transistor430is electrically coupled to high-power supply. The gate terminal of transistor420is electrically coupled to signal ACT1313, and the source terminal of transistor420is electrically coupled to the third clock terminal SN1453. Transistors429and419of the second differential sub-circuit are electrically coupled in-series. The drain terminal of transistor429is electrically coupled to the second output Yn443, and the gate terminal of transistor429is electrically coupled to high-power supply. The gate terminal of transistor419is electrically coupled to signal ACT2323, and the source terminal of transistor419is electrically coupled to the third clock terminal SN1453. In this manner, the first and second differential sub-circuits of latch330also are electrically coupled to the first and second keeper sub-circuits via the third clock terminal SN1453. Transistors424and421form a cross-coupled pair with transistors423and422. The gate terminals of transistors424and421are electrically coupled to the drain terminals (also known as second output Yn443) of transistors423and422. Likewise, the gate terminals of transistors423and422are electrically coupled to the drain terminals (also known as first output Y442) of transistors424and421. The source terminals of transistors421and422are electrically coupled to low-power supply. The source terminal of transistor424is electrically coupled to the drain terminals of transistors425and426. The gate terminal of transistor425is electrically coupled to signal ACT1313, and the gate terminal of transistor426is electrically coupled to node n1444. The source terminal of transistor423is electrically coupled to the drain terminals of transistors428and427. The gate terminal of transistor428is electrically coupled to signal ACT2323, and the gate terminal of transistor427is electrically coupled to node n2445. The source terminals of transistors425and428are electrically coupled to high-power supply while the source terminals of transistors426and427are electrically coupled to the second clock terminal SP2452. Now referring to the operation of flip-flop circuit400. As shown in the truth table ofFIG.3when both input signal D440and output signal Q446are at the same level, latch330is in a storage mode state. In some implementations, transistors425,424, and421form an inverter-like cross-coupled pair with transistors428,423, and422to retain the current levels of first output Y442and second output Yn443. For example, when input signal D440is constant, both the first and second sequence detectors are disabled and transistors402and413remain activated. Transistors425and428are activated in response to de-asserting both signals ACT1313and ACT2323. 
If first output Y442is at a first level and second output Yn443is at a second level, transistors423and421are activated. Transistor421maintains the first output Y442at a first level every clock cycle, and transistors423and428maintain the second output Yn443at a second level every clock cycle. It should be understood that as signal ACT1313is held at a first level by transistor402, transistor409is activated; however, a deactivated transistor407prevents the first keeper sub-circuit312from activating. Likewise, as signal ACT2323is held at a first level by transistor413, transistor410is activated; however, a deactivated transistor411prevents the second keeper sub-circuit322from activating. Conversely if a first output Y442is at a second level and a second output Yn443is at a first level, transistors424and422are activated. Transistor422maintains a second output Yn443at a first level every clock cycle, and transistors424and425maintain a first output Y442at a second level every clock cycle. In this first and second output combination, transistor408is responsible for deactivating the first keeper sub-circuit312while transistor412is responsible for deactivating the second keeper sub-circuit322. If the sequence D=0 and Q=1 is detected (e.g., input signal D440transitions from a second level to a first level and output signal Q446is at a second level) while the main clock signal101is at a first level, transistor405is activated to discharge node n1444to low-power supply. Transistor401is activated to provide a conductive path to high-power supply via first clock-activated transistor111and signal ACT1313is asserted to activate transistor420and thereby, enables the first differential sub-circuit of latch330. Meanwhile, node n2445remains at a second level continuing to activate transistor413and thereby, de-asserted signal ACT2323disables the second differential sub-circuit of latch330. When main clock signal101transitions to a second level, the third clock-activated transistor113discharges the third clock terminal SN1453to pull-down first output Y442to low-power supply. Transistors423and428are activated to charge second output Yn443to a second level. Isolation buffer434generates the new output signal Q446at a first level. When the first output Y442is at a same level as the input signal D440(e.g., first level), transistor404is activated to charge node n1444. Transistor402is activated to de-assert signal ACT1313, and transistor420is deactivated, disabling the first differential sub-circuit of latch330during a main clock signal101being at a second level. With both first and second differential sub-circuits disabled, latch330enters into the storage mode state (described herein). In some implementations, the detection of sequence D=0 and Q=1 to enable the first sequence detector310is accomplished by series-coupled transistors405and406. If the sequence D=1 and Q=0 is detected (e.g., input signal D440transitions from a first level to a second level and output signal Q446is at a first level) while main clock signal101is at a first level, transistor417is activated to discharge node n2445to low-power supply. Transistor414is activated to provide a conductive path to high-power supply via the first clock-activated transistor111and signal ACT2323is asserted to activate transistor419and thereby, enables the second differential sub-circuit of latch330. 
Meanwhile, node n1444remains at a second level continuing to activate transistor402and thereby, de-asserted signal ACT1313disables the first differential sub-circuit of latch330. When main clock signal101transitions to a second level, the third clock-activated transistor113discharges the third clock terminal SN1453to pull-down second output Yn443to low-power supply. Transistors424and425are activated to charge first output Y442to a second level. Isolation buffer434generates the new output signal Q446at a second level. When the first output Y442is at a same level as the input signal D440(e.g., second level), transistor415is activated to charge node n2445. Transistor413is activated to de-assert signal ACT2323, and transistor419is deactivated, disabling the second differential sub-circuit of latch330during main clock signal101at a second level. In some implementations, with both the first and second differential sub-circuits disabled, latch330enters into the storage mode state as previously described. The detection of sequence D=1 and Q=0 to enable the second sequence detector320is accomplished by series-coupled transistors417and418. When the input signal D440is constant across many clock cycles, transistors402and413are continuously activated to de-assert signals ACT1313and ACT2323, and thereby both sequence detectors remain inactive. When there is no input activity, all the internal nodes of flip-flop circuit400do not toggle and therefore the flip-flop circuit400does not consume any dynamic power. However, should the input signal D440change while main clock signal101is at a second level, either the first keeper sub-circuit312or the second keeper sub-circuit322is activated depending on which sequence is detected. Only either the first or second keeper sub-circuit can be activated while main clock signal101is at a second level. Furthermore, if the first keeper sub-circuit312is activated, transistor402(also known as first pull-down transistor) of the first buffer311must be deactivated. Activation of the first keeper sub-circuit312automatically deactivates the first pull-down transistor402of first buffer311because changes to the inverted input signal Dn441result in discharging node n1444to a first level. Furthermore, if transistor pair407/408is activated, transistor401of the first buffer311is also activated. Conversely, if the second keeper sub-circuit322is activated, transistor413(also known as second pull-down transistor) of the second buffer321must be deactivated. Activation of the second keeper sub-circuit322automatically deactivates the second pull-down transistor413of second buffer321because changes to the input signal D440result in discharging node n2445to a first level. Furthermore, if transistor pair411/412is activated, transistor414of the second buffer321is also activated. In other words, it is the responsibility of the first or second keeper sub-circuit to maintain signals ACT1313or ACT2323, respectively, at a first level when input signal D440changes while main clock signal101is at a second level. Thus, this compound sequential circuit architecture100does not allow both the first and second keeper sub-circuits to be activated simultaneously and requires the corresponding transistor in the pull-down network of the first or second buffer to be deactivated upon activation of the first or second keeper sub-circuit. These two rules prevent contention at the three clock terminals when multiple flip-flops are electrically coupled together within the N-bit array. 
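Taken together, the sequence detection while the main clock signal101is at the first level and the latch update when the main clock signal101transitions to the second level amount to the externally visible behavior of a positive-edge-triggered flip-flop: the output takes the value of the input at each rising edge of the main clock and holds it otherwise. The Python sketch below is a minimal end-to-end behavioral model of one flip-flop of the compound sequential circuit (illustrative only; it abstracts away the internal ACT1313, ACT2323, and keeper nodes).

# Illustrative end-to-end behavioral model of one flip-flop in the
# compound sequential circuit: Q follows D on each rising clock edge.
def simulate(d_sequence, clk_sequence, q_init=0):
    q, prev_clk = q_init, 0
    trace = []
    for d, clk in zip(d_sequence, clk_sequence):
        if clk == 1 and prev_clk == 0:   # main clock transitions to the second level
            q = d                        # latch330 resolves to the sampled input
        trace.append((d, clk, q))
        prev_clk = clk
    return trace

d_bits   = [0, 1, 1, 1, 0, 0, 1, 1]
clk_bits = [0, 0, 1, 0, 0, 1, 0, 1]
for d, clk, q in simulate(d_bits, clk_bits):
    print(f"D={d} CLK={clk} -> Q={q}")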
The following example provides further details on the operation of the first keeper sub-circuit312. If sequence D=0 and Q=1 is detected while main clock signal101is at a second level, transistor402is deactivated by node n1444discharging to low-power supply. Transistor401is activated; however, no conductive path to high-power supply exists at signal ACT1313while main clock signal101is at a second level. Therefore, signal ACT1313is maintained at a first level by three series-coupled transistors409,408, and407. Third clock terminal SN1453is already discharged to low-power supply by the third clock-activated transistor113. Transistor409is activated by the output of inverter431at a second level. Transistor408is activated by inverted input signal Dn441at a second level. Transistor407is activated by the first output Y442at a second level. Therefore, all three series-coupled transistors409,408, and407are activated to maintain a conductive path to low-power supply for signal ACT1313while main clock signal101is at a second level. Furthermore, transistor420remains deactivated such that the first differential sub-circuit of latch330also remains disabled. Signal ACT2323also remains de-asserted and thus, transistor419is deactivated such that the second differential sub-circuit of latch330remains disabled. Despite transistor410being activated by signal ACT2323at a first level, transistors411and412are deactivated by their corresponding inputs and thus, the second keeper sub-circuit322remains deactivated. Accordingly, it is shown that when the first keeper sub-circuit312is activated, the first and second differential sub-circuits and second keeper sub-circuit322are disabled. This behavior of the first keeper sub-circuit312prevents contention at the three clock terminals when multiple flip-flops are electrically coupled together within the N-bit array. When main clock signal101transitions from a second level to a first level, the third clock-activated transistor113is deactivated and the conductive path to low-power supply of the first keeper sub-circuit312is interrupted. A conductive path to high-power supply is established via transistor401and first clock-activated transistor111to charge signal ACT1313to a second level without contention from the first keeper sub-circuit312. As the second differential sub-circuit of latch330and the second keeper sub-circuit322are disabled, the second output Yn443and signal ACT2323are decoupled from signal ACT1313and therefore, they cannot interfere with the transition of signal ACT1313to a second level. When signal ACT1313is at a second level, transistor409is deactivated to complete the deactivation of the first keeper sub-circuit312. The following example provides further details on the operation of the second keeper sub-circuit322. If sequence D=1 and Q=0 is detected while main clock signal101is at a second level, transistor413is deactivated by node n2445discharging to low-power supply. Transistor414is activated; however, no conductive path to high-power supply exists at signal ACT2323while main clock signal101is at a second level. Therefore, signal ACT2323is maintained at a first level by three series-coupled transistors410,411, and412. Third clock terminal SN1453is already discharged to low-power supply by the third clock-activated transistor113. Transistor410is activated by the output of inverter432at a second level. Transistor411is activated by input signal D440at a second level. Transistor412is activated by the second output Yn443at a second level. 
Therefore, all three series-coupled transistors410,411, and412are activated to maintain a conductive path to low-power supply for signal ACT2323while main clock signal101is at a second level. Furthermore, transistor419remains deactivated such that the second differential sub-circuit of latch330also remains disabled. Signal ACT1313also remains de-asserted and thus, transistor420is deactivated, such that the first differential sub-circuit of latch330remains disabled. Despite transistor409being activated by signal ACT1313at a first level, transistors408and407are deactivated by their corresponding inputs and thus, the first keeper sub-circuit312remains deactivated. Accordingly, it is shown that when the second keeper sub-circuit322is activated, the first and second differential sub-circuits and first keeper sub-circuit312are disabled. This behavior of the second keeper sub-circuit322is fundamental to preventing contention at the three clock terminals when multiple flip-flops are electrically coupled together within the N-bit array. When main clock signal101transitions from a second level to a first level, the third clock-activated transistor113is deactivated and the conductive path to low-power supply of the second keeper sub-circuit322is interrupted. A conductive path to high-power supply is established via transistor414and first clock-activated transistor111to charge signal ACT2323to a second level without contention from the second keeper sub-circuit322. As the first differential sub-circuit of latch330and the first keeper sub-circuit312are disabled, the first output Y442and signal ACT1313are decoupled from signal ACT2323and therefore, they cannot interfere with the transition of signal ACT2323to a second level. When signal ACT2323is at a second level, transistor410is deactivated to complete the deactivation of the second keeper sub-circuit322. In some implementations, either the first or second sequence detector can assert signal ACT1313or ACT2323, respectively, during main clock signal101at a first level in response to any changes in the input signal D440because the pull-up networks of the first and second buffers are clock-gated by the first clock-activated transistor111. The assertion of ACT1313or ACT2323deactivates transistor425or428, respectively, and interrupts the conductive path to high-power supply at either first output Y442or second output Yn443. To avoid first output Y442or second output Yn443from being in a floating state while main clock signal101is at a first level, a third keeper sub-circuit including transistors426and427is implemented within latch330to restore a conductive path to high-power supply via the second clock-activated transistor112. If sequence D=0 and Q=1 is detected by the first sequence detector310and signal ACT1313is asserted to deactivate transistor425, node n1444, after being discharged to low-power supply by transistors405and406, activates transistor426. As transistor424is still activated, transistor426and second clock-activated transistor112provide an alternative path to high-power supply at first output Y442. Likewise, if sequence D=1 and Q=0 is detected by the second sequence detector320and signal ACT2323is asserted to deactivate transistor428, node n2445, after being discharged to low-power supply by transistors417and418, activates transistor427. As transistor423is still activated, transistor427and second clock-activated transistor112provide an alternative path to high-power supply at second output Yn443. 
Thus, the third keeper sub-circuit ensures and maintains fully static operation of latch330. When the main clock signal101transitions to a second level for latch330to latch the changed input data bit, either first output Y442or second output Yn443is discharged to low-power supply based on the detected sequence. As the output signal Q446settles to the same level as input signal D440, either node n1444or node n2445is charged to a second level, thereby, deactivating the third keeper sub-circuit. This deactivation of the third keeper sub-circuit during main clock signal101at a second level avoids contention among flip-flops when multiple flip-flops are electrically coupled within the N-bit array. In some implementations, another method for avoiding contention is not permitting transistors426and427to be activated simultaneously. In some implementations, the flip-flop circuit400can include a plurality of electrical relationships. For example, a first electrical relationship of the plurality of electrical relationships is when the first differential sub-circuit is activated, the second differential sub-circuit is deactivated, the first keeper sub-circuit is deactivated, and the second keeper sub-circuit is deactivated. In another example, a second electrical relationship of the plurality of electrical relationships is when the second differential sub-circuit is activated, the first differential sub-circuit is deactivated, the first keeper sub-circuit is deactivated, and the second keeper sub-circuit is deactivated. In yet another example, a third electrical relationship of the plurality of electrical relationships is when the first keeper sub-circuit is activated, the second keeper sub-circuit is deactivated, the first differential sub-circuit is deactivated, the second differential sub-circuit is deactivated, and a first pull-down transistor of the first buffer is deactivated. In yet another example, a fourth electrical relationship of the plurality of electrical relationships is when the second keeper sub-circuit is activated, the first keeper sub-circuit is deactivated, the first differential sub-circuit is deactivated, and the second differential sub-circuit is deactivated, and a second pull-down transistor of the second buffer is deactivated. In yet another example, a fifth electrical relationship of the plurality of electrical relationships is when the first differential sub-circuit of the latch is activated when the main clock signal is at the first level, and when the second differential sub-circuit of the latch is activated when the main clock signal is at the first level. In yet another example, a sixth electrical relationship of the plurality of electrical relationships is when the first keeper sub-circuit is activated when the main clock signal is at the second level, and when the second keeper sub-circuit is activated only when the main clock signal is at the second level. In yet another example, a seventh electrical relationship of the plurality of electrical relationships is when the second output of the latch is the inverted polarity of the first output of the latch. In some implementations, the plurality of flip-flops are electrically coupled to at most three clock-activated transistors, where each source terminal of the first clock-activated transistor, the second clock-activated transistor, and the third clock-activated transistor is electrically coupled to at least one of a high-power supply or a low-power supply. 
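Taken together, these relationships amount to a set of mutual-exclusion invariants over the sub-circuit states. The Python sketch below restates them as runnable assertions; the dictionary keys and the encoding of "activated" as True are assumptions introduced only for illustration.

# Illustrative invariant check over a snapshot of one flip-flop's state.
# Keys are hypothetical; True means the named sub-circuit is activated.
def check_relationships(s):
    diff1, diff2 = s["diff1"], s["diff2"]
    keep1, keep2 = s["keep1"], s["keep2"]
    clk = s["clk"]  # 0 = first level, 1 = second level

    # Relationships 1 and 2: an active differential sub-circuit excludes the
    # other differential sub-circuit and both keeper sub-circuits.
    if diff1 or diff2:
        assert not (diff1 and diff2)
        assert not keep1 and not keep2
        assert clk == 0   # relationship 5: only during the first clock level

    # Relationships 3 and 4: an active keeper sub-circuit excludes the other
    # keeper sub-circuit and both differential sub-circuits.
    if keep1 or keep2:
        assert not (keep1 and keep2)
        assert not diff1 and not diff2
        assert clk == 1   # relationship 6: only during the second clock level

check_relationships({"diff1": True, "diff2": False,
                     "keep1": False, "keep2": False, "clk": 0})
check_relationships({"diff1": False, "diff2": False,
                     "keep1": True, "keep2": False, "clk": 1})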
In some implementations, the first differential sub-circuit transistors420and430of the latch includes a first reduction transistor430configured to reduce a first charge sharing between the first output of the latch (Y442) and the third shared clock node (SN1453), and wherein the second differential sub-circuit transistors419and429of the latch includes a second reduction transistor429configured to reduce a second charge sharing between the second output of the latch (Yn443) and the third shared clock node (SN1453). In some implementations, the flip-flop circuit400includes a first delay electrical element1437and1438(e.g., shown with reference toFIG.14) positioned between the first output (Y442) of the latch and the first sequence detector and a second delay electrical element1435and1436(e.g., shown with reference toFIG.14) positioned between the second output of the latch (Yn443) and the second sequence detector. In some implementations, the third keeper sub-circuit transistors426and427of the latch electrically couples to the first sequence detector and the second sequence detector. In particular, the third keeper sub-circuit is activated when the input signal (D440) is different (e.g., of opposite logic polarity) than the first output (Y442) of the latch and deactivated when the input signal is at same level as the first output of the latch. In some implementations, the third keeper sub-circuit is electrically coupled to the second shared clock node (SP2452), and the third keeper sub-circuit is further electrically coupled to the first keeper sub-circuit and the second keeper sub-circuit. In some implementations, the first keeper sub-circuit is gated by the first output (Y442) of the latch and an inverted input signal (Dn441), and the second keeper sub-circuit is gated by the second output (Yn443) of the latch and an input signal (D440). In some implementations, the first sequence detector is configured to activate a first differential sub-circuit of the latch in response to (1) the first output (Y442) of the latch being at a second level, (2) the input signal (D440) being at a first level, and (3) a main clock signal101being at the first level. Furthermore, the first differential sub-circuit of the latch is deactivated when the main clock signal101is at the second level. In some implementations, the second sequence detector is configured to activate a second differential sub-circuit of the latch in response to (1) the first output (Y442) of the latch being at a first level, (2) the input signal (D440) being at a second level, and (3) a main clock signal101being at the first level. Furthermore, the second differential sub-circuit of the latch is deactivated when the main clock signal101is at the second level. In some implementations, the first differential sub-circuit of the latch and second differential sub-circuit of the latch are electrically coupled to the third shared clock node (SN1453). In some implementations, the inputs of the first keeper sub-circuit have the same logic polarities compared to the inputs of the first sequence detector, and the inputs of the second keeper sub-circuit have the same logic polarities compared to the inputs of the second sequence detector. In some implementations, the first keeper sub-circuit is activated in response to (1) the first output of the latch (Y442) being at a second level, (2) the input signal (D440) being at a first level, and (3) a main clock signal101being at the second level. 
In some implementations, the second keeper sub-circuit is activated in response to (1) the first output (Y442) of the latch being at a first level, (2) the input signal (D440) being at a second level, and (3) a main clock signal101being at the second level. Furthermore, both the first keeper sub-circuit and the first sequence detector receive the inverted input signal (Dn441) and the first output (Y442) of the latch as inputs, and both the second keeper sub-circuit and the second sequence detector receive the input signal (D440) and the second output (Yn443) of the latch as inputs. Referring toFIG.5, a circuit diagram illustrating another flip-flop circuit500ofFIG.3of the compound sequential circuit architecture100ofFIG.1, according to an illustrative implementation. In some implementations, the third keeper sub-circuit also can be activated and deactivated by either the first keeper sub-circuit312or the second keeper sub-circuit322, in contrast toFIG.4where the third keeper sub-circuit of flip-flop circuit400is activated and deactivated by either the first sequence detector310or the second sequence detector320. Flip-flop circuit500depicts that the inputs of the third keeper sub-circuit are electrically coupled to the outputs of inverters431and432. In this manner, the third keeper sub-circuit is electrically coupled to the first and second keeper sub-circuits. The gate terminal of transistor426is electrically coupled to the output of inverter431at node n4548, and the gate terminal of transistor427is electrically coupled to the output of inverter432at node n3547. In this configuration, the activation of the third keeper sub-circuit is delayed until main clock signal101transitions to a first level even if the input signal D440changes during main clock signal101at a second level. Additionally, transistors425and428cannot be enabled simultaneously with the activation of the third keeper sub-circuit, which further avoids contention. It should be understood that the flip-flop configuration ofFIG.5has equivalent functionality and performance as flip-flop circuit400. Referring toFIGS.6A-6C, circuit diagrams illustrating contention avoidance of flip-flop circuits ofFIG.4of the compound sequential circuit architecture100ofFIG.1, according to an illustrative implementation.FIG.6Adepicts the interaction at the first shared clock node131for two flip-flops electrically coupled via their first clock terminals SP1451.FIG.6Bdepicts the interaction at the second shared clock node132for two flip-flops electrically coupled via their second clock terminals SP2452.FIG.6Cdepicts the interaction at the third shared clock node133for two flip-flops electrically coupled via their third clock terminals SN1453. Therefore, the contention-free operation of compound sequential circuit architecture100is discussed using a 4-bit array (N=4). In some implementations, activation of the first sequence detector310asserts signal ACT1313to activate the first activation transistor420in order to pull-down first output Y442upon the rising edge of main clock signal101. Thus, first output Y442is at a second level before activation of first sequence detector310. Likewise, activation of the second sequence detector320asserts signal ACT2323to activate the second activation transistor419in order to pull-down second output Yn443upon the rising edge of main clock signal101. Thus, second output Yn443is at a second level before activation of second sequence detector320. 
With reference to the second shared clock node132and second clock terminal SP2452, flip-flop FF1221is electrically coupled to flip-flop FF2222via second shared clock node132as shown inFIG.6B. In some implementations, each flip-flop has a second clock terminal SP2452that also is electrically coupled to the second shared clock node132, and is further electrically coupled to a third keeper sub-circuit within latch330. The third keeper sub-circuit is activated in response to a change in the input signal of the associated flip-flop, and is deactivated when the output signal of the associated flip-flop is at the same level as that input signal. For example, the first output Y442of flip-flop FF1221is at a second level, and second output Yn443of flip-flop FF1221is at a first level. The first output Y442of flip-flop FF2222is at a first level and second output Yn443of flip-flop FF2222is at a second level. Input signal D1 of flip-flop FF1221changes from a second level to a first level while main clock signal101is at a second level, and transistors405and406are activated to pull-down node n1444to activate transistor426of the third keeper sub-circuit within flip-flop FF1221. Transistor427within flip-flop FF1221remains inactive as the second sequence detector320senses no change in the input signal D1 relative to the second output Yn443. Transistor425remains active during this period and thus a low-impedance path to high-power supply via transistors426and425exists at the second shared clock node132. Conversely, input signal D2 of flip-flop FF2222changes from a first level to a second level while main clock signal101is at a second level, and transistors417and418are activated to pull-down node n2445to activate transistor427of the third keeper sub-circuit within flip-flop FF2222. Transistor426within flip-flop FF2222remains inactive as the first sequence detector310senses no change in the input signal D2 relative to the first output Y442. As transistor423remains active as part of the pull-up network to maintain second output Yn443at a second level, the conductive path to high-power supply via transistors425and426of FF1221and transistors427and423of FF2222serves as an additional pull-up network to maintain second output Yn443at a second level for flip-flop FF2222. Therefore, no contention exists between flip-flops FF1221and FF2222as the voltage polarities are the same throughout this conductive path. In some implementations, while input signal D1 is at a first level and as main clock signal101transitions from a second level to a first level, signal ACT1313is asserted, transistor420is activated, and transistor425is deactivated to interrupt the aforementioned conductive path to high-power supply. Furthermore, while main clock signal101is at a first level, signal ACT2323of flip-flop FF2222is asserted to activate transistor419and deactivate transistor428. With transistor428of FF2222and transistor425of FF1221deactivated, no conductive path to high-power supply exists by the time main clock signal101transitions from a first level to a second level. Thus, second output Yn443of FF2222and first output Y442of FF1221can discharge to low-power supply without any contention. 
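The analysis just given for the second shared clock node, and the analogous analyses below for the other shared nodes, reduce to one condition: whenever two flip-flops simultaneously provide conductive paths to the same shared node, those paths must terminate at the same supply. The Python sketch below restates that check; the "hi"/"lo"/None drive encoding is an assumption used only for illustration.

# Illustrative contention check for one shared clock node. Each flip-flop
# contributes a drive: "hi" (path to high-power supply), "lo" (path to
# low-power supply), or None (isolated from the node).
def node_has_contention(drives):
    active = {d for d in drives if d is not None}
    # Contention exists only if both supplies are driven onto the node.
    return active == {"hi", "lo"}

# Second shared clock node example above: FF1 and FF2 both provide pull-up
# paths while the main clock is at a second level, so no contention.
assert not node_has_contention(["hi", "hi", None, None])

# A hypothetical violation, which the activation rules are designed to prevent.
assert node_has_contention(["hi", "lo", None])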
In some implementations, while main clock signal101is at a second level and output signals Q1 of FF1221and Q2 of FF2222settle to the same level as D1 and D2, respectively, transistor426of FF1221and transistor427of FF2222are deactivated to completely isolate the second clock terminal SP2452of each flip-flop from the other flip-flops within the 4-bit array to avoid any contention. With reference to the third shared clock node133and third clock terminal SN1453, flip-flop FF1221is electrically coupled to flip-flop FF2222via third shared clock node133as shown inFIG.6C. Each flip-flop has a third clock terminal SN1453that also is electrically coupled to the third shared clock node133. Third clock terminal SN1453is further sub-divided into two linkages: a first linkage electrically couples the first and second differential sub-circuits of latch330to SN1453, while the second linkage electrically couples the first keeper sub-circuit312and second keeper sub-circuit322to SN1453. For example, with reference to the first linkage, the first output Y442of flip-flop FF1221is at a second level and second output Yn443of flip-flop FF1221is at a first level. The first output Y442of flip-flop FF2222is at a first level and second output Yn443of flip-flop FF2222is at a second level. Input signal D1 of flip-flop FF1221changes from a second level to a first level while main clock signal101is at a first level. Signal ACT1313is asserted to activate transistor420and thereby, enabling the first differential sub-circuit of latch330. Since input D1 is at the same level as second output Yn443, signal ACT2323remains de-asserted, and the second differential sub-circuit of latch330remains disabled. Conversely, input signal D2 of flip-flop FF2222changes from a first level to a second level while main clock signal101is at a first level. Signal ACT2323is asserted to activate transistor419and thereby, enabling the second differential sub-circuit of latch330. Since input D2 is at the same level as first output Y442, signal ACT1313remains de-asserted, and the first differential sub-circuit of latch330remains disabled. As the first differential sub-circuit of FF1221and the second differential sub-circuit of FF2222are enabled, a conductive path between flip-flops FF1221and FF2222via the third shared clock node133is formed. This conductive path electrically couples first output Y442of FF1221to second output Yn443of FF2222. However, as the voltage polarities (e.g., a second level) of both outputs are the same, no contention occurs. Therefore, the flip-flops electrically coupled via third shared clock node133within the 4-bit array do not exhibit contention at the third clock terminal SN1453. Prior to the main clock signal101transitioning to a first level, transistors420and419are deactivated as signals ACT1313and ACT2323are de-asserted due to the absence of a conductive path to high-power supply via the pull-up network of the first buffer311and second buffer321. This ensures that at most one of the first and second differential sub-circuits is enabled when the main clock signal101is at a first level. Therefore, first output Y442and second output Yn443of one flip-flop are isolated from the first output Y442and second output Yn443of another flip-flop. By the time main clock signal101transitions from a second level to a first level, even though the third keeper sub-circuit may be activated, no contention occurs at the first linkage of the third clock terminal SN1453.
In another example, with reference to the second linkage, the first keeper sub-circuit312and the second keeper sub-circuit322are electrically coupled also to the third clock terminal SN1453within each flip-flop. Only either the first keeper sub-circuit312or second keeper sub-circuit322can be activated in each flip-flop when the main clock signal101is at a second level. Since both keeper sub-circuits are deactivated while the main clock signal101is at a first level, the first keeper sub-circuit312and second keeper sub-circuit322do not pose a risk of contention among flip-flops at the second linkage of the third clock terminal SN1453. With reference to the first shared clock node131and first clock terminal SP1451, flip-flop FF1221is electrically coupled to flip-flop FF2222via first shared clock node131as shown inFIG.6A. Each flip-flop has a first clock terminal SP1451that also is electrically coupled to the first shared clock node131. As mentioned in the second linkage discussion, either the first keeper sub-circuit312or second keeper sub-circuit322can be active when the main clock signal101is at a second level. As input signal D1 of flip-flop FF1221changes from a second level to a first level during main clock signal101at a second level, the first keeper sub-circuit312is activated to maintain signal ACT1313at a first level. Furthermore, transistor401of the first buffer311is activated. Likewise, as input signal D2 of flip-flop FF2222changes from a first level to a second level during main clock signal101at a second level, the second keeper sub-circuit322is activated to maintain signal ACT2323at a first level. Also, transistor414of the second buffer321is activated. A conductive path to low-power supply is provided by transistors407/408/409/401from flip-flop FF1221and transistors414/410/411/412from flip-flop FF2222via first clock terminal SP1451and third clock-activated transistor113. Since signal ACT1313of flip-flop FF1221and signal ACT2323of flip-flop FF2222are both at a first level while this conductive path to low-power supply is enabled, no contention occurs at the first clock terminal SP1451. When main clock signal101transitions to a first level, the third clock-activated transistor113is deactivated to interrupt the pull-down paths of the first keeper sub-circuit312from FF1221and second keeper sub-circuit322from FF2222. Furthermore, the first and second differential sub-circuits of latch330are disabled in both FF1221and FF2222prior to main clock signal101transitioning to a first level. Therefore, signals ACT1313from FF1221and ACT2323from FF2222can transition to a second level without any contention from third clock terminal SN1453. FIG.7is a block diagram illustrating another flip-flop circuit701of the compound sequential circuit architecture100ofFIG.1, according to an illustrative implementation. In contrast toFIGS.3-4where the architecture of flip-flop circuit400employs a first sequence detector310to control the first differential sub-circuit and a second sequence detector320to control the second differential sub-circuit of latch330,FIG.7illustrates a block diagram of an embodiment that employs only one sequence detector to control both differential sub-circuits of latch330. The block diagram depicts flip-flop circuit701to include a sequence detector710, a buffer711, a first keeper sub-circuit712, and a latch730. As shown, buffer711is part of sequence detector710. 
In some implementations, latch730is also a differential latch with a first differential sub-circuit and a second differential sub-circuit. In particular, the first differential sub-circuit receives the inverted input signal Dn841and the second differential sub-circuit receives the input signal D840. Latch730can also include a first output Y842and second output Yn843. In some implementations, the sequence detector710is electrically coupled to the first and second differential sub-circuits of latch730, and further electrically coupled to the first keeper sub-circuit712. Additionally, as shown, the input signal D840is electrically coupled to (1) the second differential sub-circuit of latch730, (2) the sequence detector710, (3) the first keeper sub-circuit712, and (4) a second keeper sub-circuit within latch730. Moreover, as shown, the inverted input signal Dn841is electrically coupled to (1) the first differential sub-circuit of latch730, (2) the sequence detector710, (3) the first keeper sub-circuit712, and (4) the second keeper sub-circuit within latch730. In some implementations, the first output Y842and second output Yn843are electrically coupled to the sequence detector710and to the first keeper sub-circuit712, and output signal Q846is the output of flip-flop circuit701and is electrically coupled to latch730. Referring now toFIG.8, a circuit diagram illustrating a flip-flop ofFIG.7of the compound sequential circuit architecture ofFIG.1, according to an illustrative implementation. In general,FIG.8discloses a flip-flop circuit800to implement the architecture illustrated inFIG.7. Additionally, the flip-flop circuit800similarly does not include any clock-activated transistors but instead includes a first clock terminal SP1851, a second clock terminal SP2852, and a third clock terminal SN1853. In some implementations, buffer711is electrically coupled to the first clock terminal SP1851and latch730is electrically coupled to second clock terminal SP2852and to the third clock terminal SN1853. The second keeper sub-circuit within latch730is electrically coupled to second clock terminal SP2852and the first keeper sub-circuit712is electrically coupled to the third clock terminal SN1853. The first differential sub-circuit of latch730is electrically coupled to the second differential sub-circuit of latch730via third clock terminal SN1853. Consequently, the first and second differential sub-circuits of latch730are electrically coupled to the first keeper sub-circuit712. As the first output Y842, second output Yn843, and input signal D840are electrically coupled to the sequence detector710, the current state from latch730and the next state from input signal D840can be monitored every clock cycle. If the next state and the current state are at the same level, latch730is in storage mode and maintains the output signal Q846at its current level. Hence sequences D=1, Q=1 and D=0, Q=0 instruct sequence detector710to de-assert signal ACT3713, and latch730enters into storage mode, retaining the level of output signal Q846. For example, if first output Y842is at a second level and input signal D840transitions to a first level, or if first output Y842is at a first level and input signal D840transitions to a second level, sequence detector710is enabled to assert signal ACT3713(e.g., second level) when the main clock signal101is at a first level. Thus, sequence detector710responds to both sequences D=0, Q=1 and D=1, Q=0.
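The truth table of FIG.7 referenced above can be paraphrased in a few lines. The sketch below is a behavioral paraphrase only, assuming that a "first level" maps to logic 0 and a "second level" to logic 1; the function name is hypothetical.

# Behavioral paraphrase of the FIG.7 truth table (illustrative only).
# Returns what the detected sequence enables while the main clock signal is
# at a first level.
def latch_mode(d, q):
    if d == q:
        return "storage"  # ACT3 de-asserted; output signal Q is retained
    if d == 0 and q == 1:
        return "first differential sub-circuit"   # Y pulled low on the rising edge
    return "second differential sub-circuit"      # D=1, Q=0: Yn pulled low

assert latch_mode(0, 0) == "storage"
assert latch_mode(1, 1) == "storage"
assert latch_mode(0, 1) == "first differential sub-circuit"
assert latch_mode(1, 0) == "second differential sub-circuit"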
If sequence D=0, Q=1 is detected, the first differential sub-circuit of latch730is enabled, and if sequence D=1, Q=0 is detected, the second differential sub-circuit of latch730is enabled. Upon detection of either sequence, the sequence detector710can assert signal ACT3713when the main clock signal101is at a first level. Each branch of the first and second differential sub-circuits includes an activation transistor that can be enabled by signal ACT3713. The first and second differential sub-circuits of latch730also are input-gated, and the level of input signal D840determines which differential sub-circuit is enabled. It should be understood that an equivalent AND function is applied to the input signal D840and signal ACT3713in determining whether the first or the second differential sub-circuit is enabled. The truth table shown inFIG.7summarizes the operating mode of latch730based on which sequence is detected. The circuit elements included in flip-flop circuit800are now described. The sequence detector710includes transistors803,804,805, and806and buffer711, which includes transistors801and802. Transistor802is the pull-down transistor of buffer711. The first keeper sub-circuit712includes transistors807,808,809,810,811, and inverter827. Latch730includes transistors812-826. Transistors812and817form the first differential sub-circuit of latch730, and transistors813and814form the second differential sub-circuit of latch730. Transistors812and813are the two activation transistors. Transistors824and825are components of the second keeper sub-circuit within latch730. Input signal D840is electrically coupled to the input of inverter828to generate the inverted input signal Dn841. The second output Yn843of latch730is electrically coupled to the input of inverter829to generate the output signal Q846. Inverter829is an isolation buffer to isolate the output signal Q846from the internal nodes of flip-flop circuit800. The connectivity of flip-flop circuit800is now described. For the sequence detector710, the drain terminals of transistors803and804are electrically coupled to the input signal D840. The drain terminals of transistors805and806are electrically coupled to the inverted input signal Dn841. The source terminals of transistors803,804,805, and806are electrically coupled to node n5849. The gate terminals of transistors804and805are electrically coupled to the second output Yn843, and the gate terminals of transistors803and806are electrically coupled to the first output Y842. The gate terminals of transistors801and802are electrically coupled also to node n5849, and the drain terminals are electrically coupled to signal ACT3713. Node n5849and signal ACT3713are the input and output of buffer711, respectively. The source terminal of transistor801is electrically coupled to the first clock terminal SP1851, and the source terminal of transistor802is electrically coupled to low-power supply. With reference to the first keeper sub-circuit712, signal ACT3713is electrically coupled to the input of inverter827and also electrically coupled to the drain terminal of transistor807. The gate terminal of transistor807is electrically coupled to the output of inverter827, and the source terminal is electrically coupled to the drain terminals of transistors808and811. The source terminal of transistor808is electrically coupled to the drain terminal of transistor809.
The gate terminal of transistor808is electrically coupled to the input signal D840, and the gate terminal of transistor809is electrically coupled to the second output Yn843. The source terminal of transistor811is electrically coupled to the drain terminal of transistor810. The gate terminal of transistor811is electrically coupled to the inverted input signal Dn841, and the gate terminal of transistor810is electrically coupled to the first output Y842. The source terminals of transistors809and810are electrically coupled to the third clock terminal SN1853. It should be understood that the order of transistors808and809can be interchanged, and the order of transistors811and810can be interchanged without impacting functionality or performance. Likewise, transistor807also can be reordered within the pull-down stack of first keeper sub-circuit712. With reference to latch730, transistors817and812of the first differential sub-circuit are electrically coupled in-series. The drain terminal of transistor817is electrically coupled to the first output Y842, and the gate terminals of transistors817,823, and824are electrically coupled to the inverted input signal Dn841. The gate terminal of transistor812is electrically coupled to signal ACT3713, and the source terminal is electrically coupled to the third clock terminal SN1853. Transistors814and813of the second differential sub-circuit are electrically coupled in-series. The drain terminal of transistor814is electrically coupled to the second output Yn843, and the gate terminals of transistors814,826, and825are electrically coupled to the input signal D840. The gate terminal of transistor813is electrically coupled to signal ACT3713, and the source terminal is electrically coupled to the third clock terminal SN1853. In this manner, the first and second differential sub-circuits of latch730also are electrically coupled to the first keeper sub-circuit via the third clock terminal SN1853. Transistors819and816form a cross-coupled pair with transistors820and815. The gate terminals of transistors819and816are electrically coupled to the drain terminals (also referred to herein as “second output Yn843”) of transistors820and815. Likewise, the gate terminals of transistors820and815are electrically coupled to the drain terminals (also known as first output Y842) of transistors819and816. The source terminals of transistors816and815are electrically coupled to low-power supply, and the source terminals of transistors819and820are electrically coupled to the drain terminal of transistor822. The source terminal of transistor822is electrically coupled to high-power supply, and the gate terminal is electrically coupled to signal ACT3713. The drain terminal of transistor818is electrically coupled to the first output Y842, and the gate terminal is electrically coupled to second output Yn843. The drain terminal of transistor821is electrically coupled to the second output Yn843, and the gate terminal is electrically coupled to the first output Y842. The source terminal of transistor821is electrically coupled to second intermediate node n7851, and the source terminal of transistor818is electrically coupled to first intermediate node n6850. The drain terminals of transistors824and825from the second keeper sub-circuit are electrically cross-coupled to second intermediate node n7851and first intermediate node n6850, respectively. The source terminals of transistors824and825are electrically coupled to the second clock terminal SP2852.
The drain terminal of transistor823is electrically coupled to first intermediate node n6850, and the drain terminal of transistor826is electrically coupled to second intermediate node n7851. The source terminals of transistors823and826are electrically coupled to high-power supply. The operation of flip-flop circuit800is described now in detail. As shown in the truth table ofFIG.7when both input signal D840and output signal Q846are at the same level, latch730is in storage mode. Transistors823,818, and816form an inverter-like cross-coupled pair with transistors826,821, and815to retain the current levels of first output Y842and second output Yn843. For example, when input signal D840is constant, sequence detector710charges node n5849to a second level to activate transistor802. Signal ACT3713is de-asserted (e.g., first level) to disable the first and second differential sub-circuits of latch730by deactivating transistors812and813. If first output Y842is at a first level and second output Yn843is at a second level, transistors816and821are activated. The first output Y842at a first level implies that input signal D840is also at a first level and thus, transistor826is activated. Transistor816maintains first output Y842at a first level every clock cycle, and transistors821and826maintain second output Yn843at a second level every clock cycle. Additionally, second output Yn843is maintained at a second level further by transistors820and822because transistor820mirrors transistor821, and transistor822is activated by signal ACT3713at a first level. It should be understood that as signal ACT3713is held at a first level by transistor802, transistor807of the first keeper sub-circuit712is activated; however, transistors808and810are deactivated to prevent the pull-down network from activation. Conversely if first output Y842is at a second level and second output Yn843is at a first level, transistors815and818are activated. The first output Y842at a second level implies that input signal D840is also at a second level and thus, transistor823is activated. Transistor815maintains second output Yn843at a first level every clock cycle, and transistors818and823maintain first output Y842at a second level every clock cycle. Additionally, first output Y842is maintained at a second level further by transistors819and822because transistor819mirrors transistor818, and transistor822is activated by signal ACT3713at a first level. In this first and second output combination, transistors809and811are deactivated to prevent the pull-down network of the first keeper sub-circuit712from activation. For example, if the sequence D=0 and Q=1 is detected (e.g., input signal D840transitions from a second level to a first level and output signal Q846is at a second level) while main clock signal101is at a first level, transistors803and804are activated to discharge node n5849to low-power supply. Transistor801is activated to provide a conductive path to high-power supply via first clock-activated transistor111, and signal ACT3713is asserted to activate the two activation transistors812and813. Since input signal D840is at a first level, transistor817is activated while transistor814is deactivated and thereby, the first differential sub-circuit of latch730is enabled and the second differential sub-circuit of latch730is disabled. When main clock signal101transitions to a second level, the third clock-activated transistor113discharges the third clock terminal SN1853to pull-down first output Y842to low-power supply. 
Transistors821and826are activated to charge second output Yn843to a second level. Isolation buffer829generates the new output signal Q846at a first level. When the first output Y842is at same level as the input signal D840(e.g., first level), transistors805and806of the sequence detector710are activated to charge node n5849to a second level, thereby activating the pull-down transistor802of buffer711. Signal ACT3713is de-asserted to deactivate the two activation transistors812and813, thereby, disabling both the first and second differential sub-circuits of latch730while main clock signal101is at a second level. Latch730enters into the storage mode state as described herein. In another example, if the sequence D=1 and Q=0 is detected (e.g., input signal D840transitions from a first level to a second level and output signal Q846is at a first level) while main clock signal101is at a first level, transistors805and806are activated to discharge node n5849to low-power supply. Transistor801is activated to provide a conductive path to high-power supply via first clock-activated transistor111, and signal ACT3713is asserted to activate the two activation transistors812and813. Since input signal D840is at a second level, transistor814is activated while transistor817is deactivated and thereby, the second differential sub-circuit of latch730is enabled and the first differential sub-circuit of latch730is disabled. When main clock signal101transitions to a second level, the third clock-activated transistor113discharges the third clock terminal SN1853to pull-down second output Yn843to low-power supply. Transistors818and823are activated to charge first output Y842to a second level. Isolation buffer829generates the new output signal Q846at a second level. When the first output Y842is at same level as the input signal D840(e.g., second level), transistors803and804of the sequence detector710are activated to charge node n5849to a second level, thereby activating the pull-down transistor802of buffer711. Signal ACT3713is de-asserted to deactivate the two activation transistors812and813, thereby, disabling both the first and second differential sub-circuits of latch730while main clock signal101is at a second level. Latch730enters into storage mode as previously described. When input signal D840is constant across many clock cycles, transistor802is continuously activated to de-assert signal ACT3713, and thereby both first and second differential sub-circuits of latch730are disabled. When there is no input activity, all the internal nodes of flip-flop circuit800do not toggle, and flip-flop circuit800does not consume any dynamic power. However, should the input signal D840change while main clock signal101is at a second level, the first keeper sub-circuit712is activated. Activation of the first keeper sub-circuit712automatically deactivates the pull-down transistor802of buffer711because changes to the input signal D840result in discharging node n5849to a first level. Furthermore, if either transistor pair808/809or810/811is activated, transistor801of buffer711is also activated. Consequently, signal ACT3713is maintained at a first level by only the first keeper sub-circuit712if input signal D840changes while main clock signal101is at a second level. 
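The data-activity property described above, namely that the internal nodes do not toggle and no dynamic power is drawn when the input is constant, can be illustrated with a small behavioral model. This is a functional abstraction rather than the transistor-level circuit; the class name and the toggle counter are hypothetical, with a "first level" mapped to logic 0 and a "second level" to logic 1.

# Functional abstraction: ACT3 is asserted during the first clock level only
# when D differs from Q, and Q captures D on the rising edge of the clock.
class FlipFlopModel:
    def __init__(self, q=0):
        self.q = q
        self.act3 = 0
        self.toggles = 0          # counts transitions of the internal ACT3 node

    def low_phase(self, d):       # main clock signal at a first level
        new_act3 = 1 if d != self.q else 0
        self.toggles += int(new_act3 != self.act3)
        self.act3 = new_act3

    def rising_edge(self, d):     # main clock signal transitions to a second level
        if self.act3:             # one differential sub-circuit was enabled
            self.q = d            # the latch captures the new data bit
        self.toggles += int(self.act3 != 0)
        self.act3 = 0             # detector de-asserts once Q settles to D

ff = FlipFlopModel(q=1)
for _ in range(8):                # constant input: no internal activity
    ff.low_phase(1)
    ff.rising_edge(1)
assert ff.q == 1 and ff.toggles == 0

ff.low_phase(0)                   # input changes: ACT3 asserts and Q updates
ff.rising_edge(0)
assert ff.q == 0 and ff.toggles > 0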
In some implementations, because the architecture of flip-flop circuit800requires the pull-down transistor of buffer711to be deactivated upon activation of the first keeper sub-circuit712, contention at the three clock terminals is avoided when multiple flip-flops are electrically coupled together within the N-bit array. The following example provides further details on the operation of the first keeper sub-circuit712. For example, if sequence D=0, Q=1 or D=1, Q=0 is detected while main clock signal101is at a second level, transistor802is deactivated as node n5849is at a first level. Transistor801is activated; however, no conductive path to high-power supply exists at signal ACT3713while main clock signal101is at a second level. Therefore, signal ACT3713is maintained at a first level by the pull-down network of transistors807-811. Third clock terminal SN1853is already discharged to low-power supply by the third clock-activated transistor113. Transistor807is activated by the output of inverter827at a second level. For sequence D=0 and Q=1, transistors810and811are activated. For sequence D=1 and Q=0, transistors808and809are activated. Signal ACT3713is maintained at a first level by either transistors807/808/809or transistors807/811/810when main clock signal101is at a second level. As signal ACT3713is de-asserted, activation transistors812and813remain deactivated to disable both the first and second differential sub-circuits of latch730. Hence it is shown that when the first keeper sub-circuit712is activated, the first and second differential sub-circuits must be disabled. This behavior of the first keeper sub-circuit712is fundamental to preventing contention at the three clock terminals when multiple flip-flops are electrically coupled together within the N-bit array. In some implementations, when main clock signal101transitions from a second level to a first level, the third clock-activated transistor113is deactivated and the conductive path to low-power supply of the first keeper sub-circuit712is interrupted. A conductive path to high-power supply is established via transistor801and first clock-activated transistor111to charge signal ACT3713to a second level without contention from the first keeper sub-circuit712. As both first and second differential sub-circuits of latch730are disabled initially, first output Y842and second output Yn843are decoupled from signal ACT3713, and thus, they do not interfere with the transition of signal ACT3713to a second level, which deactivates transistor807to complete the deactivation of first keeper sub-circuit712. Deactivating transistor807is sufficient to deactivate the first keeper sub-circuit712despite transistors808/809or transistors810/811being activated. In some implementations, should input signal D840and inverted input signal Dn841change while main clock signal101is at a first level, the first and second outputs of latch730must remain stable for flip-flop circuit800to have a fully static operation. If first output Y842is at a first level and second output Yn843is at a second level, transistors821and826are activated to maintain second output Yn843at a second level. If input signal D840changes from a first level to a second level during clock at a first level, transistor826is deactivated, and the conductive path to high-power supply at second output Yn843is interrupted. 
To avoid a floating second output Yn843, a second keeper sub-circuit including transistors824and825is implemented within latch730to restore a conductive path to high-power supply via the second clock-activated transistor112. The drain terminals of transistors824and825are cross-coupled to intermediate nodes of the two pull-up branches, one of which is the interrupted path, to form an alternative conductive path to high-power supply. The drain terminal of transistor824is electrically coupled to the second intermediate node n7851while the gate terminal is electrically coupled to the inverted input signal Dn841. The drain terminal of transistor825is electrically coupled to the first intermediate node n6850while the gate terminal is electrically coupled to input signal D840. As transistor826is deactivated, transistor824is activated to form the alternative path to high-power supply via transistors821,824, and112for second output Yn843to remain at a second level. Conversely, if first output Y842is at a second level and second output Yn843is at a first level and input signal D840changes from a second level to a first level during clock at a first level, transistor825is activated to form an alternative path to high-power supply via transistors818,825, and112for first output Y842to remain at a second level. The second keeper sub-circuit ensures a fully static operation for latch730should the input signal D840change while main clock signal101is at a first level. Likewise, in some implementations, should input signal D840and inverted input signal Dn841change while main clock signal101is at a second level, the first and second outputs of latch730must remain stable for flip-flop circuit800to have a fully static operation. If first output Y842is at a first level and second output Yn843is at a second level, transistors821and826are activated to maintain second output Yn843at a second level. Transistor816maintains first output Y842at a first level as signal ACT3713already has deactivated activation transistors812and813to disable the first and second differential sub-circuits of latch730. If input signal D840changes from a first level to a second level, transistor826is deactivated. Because the gate terminals of transistors820and821are electrically coupled together, transistor820also is activated. As signal ACT3713is at a first level, transistor822is activated to provide an additional conductive path to high-power supply via transistors820and822for second output Yn843to remain at a second level. Conversely, if first output Y842is at a second level and second output Yn843is at a first level, transistors819and822provide an additional conductive path to high-power supply for first output Y842to remain at a second level after transistor823has been deactivated upon input signal D840changing from a second level to a first level. Transistors819,820, and822ensure a fully static operation for latch730should the input signal D840change while main clock signal101is at a second level. In some implementations, when the input signal D840has changed prior to the rising edge of clock signal101, first output Y842or second output Yn843is discharged to low-power supply based on the level of the new input signal D840. Despite the second keeper sub-circuit being activated during the discharging of first output Y842or second output Yn843, there are no contention issues from transistors824and825because the second clock-activated transistor112is deactivated.
Due to the cross-coupling of the drain terminals of transistors824and825, no conductive path to high-power supply will contend with the discharging of the first or second output node. Thus, the second keeper sub-circuit does not interfere with the pull-down networks of the first and second differential sub-circuits of latch730and is not a source that can contribute contention when multiple flip-flops are electrically coupled together within the N-bit array. The interactions between flip-flops at the shared clock nodes are now described for flip-flop circuit800. The exemplary flip-flops FF1221to FF4224in the 4-bit compound sequential circuit220ofFIG.2use the embodiment ofFIG.8in the following discussion, which details the interactions between two flip-flops at the second shared clock node132and illustrates how contention is avoided. The following example uses these conditions: the input signal D840is constant, the first output Y842is at a second level, and the main clock signal101is at a second level. Transistors825,826,821, and820are deactivated, while transistors822,823,824,818, and819are activated. Any escape route from one flip-flop to another flip-flop via second clock terminal SP2852must traverse transistors824or825. However, the following paths bridging high-power supply to second clock terminal SP2852within one flip-flop are disabled: transistors823and825; transistors822,819,818, and825; transistors826and824; and transistors822,820,821, and824. These paths are disabled due to at least one transistor being deactivated when the input signal D840is constant for each flip-flop, and therefore no contention occurs at the second clock terminal SP2852as all conductive paths from one flip-flop to another flip-flop via the second shared clock node132are disabled. For example, if the input signal D840changes within multiple flip-flops, a path from one flip-flop to another flip-flop via the second shared clock node132can exist. The following illustration discusses in detail the interaction between two flip-flops under this condition. The first output Y842of FF1221is at a second level while main clock signal101is at a second level and input signal D1 changes from a second level to a first level. Signal ACT3713is at a first level and disables the first and second differential sub-circuits of latch730within flip-flop FF1221. An escape route from high-power supply to second clock terminal SP2852is formed by transistors822,819,818, and825within flip-flop FF1221. Now the second output Yn843of FF2222is at a second level while main clock signal101is at a second level and input signal D2 changes from a first level to a second level. Signal ACT3713also is at a first level and disables the first and second differential sub-circuits of latch730within flip-flop FF2222. An escape route from high-power supply to second clock terminal SP2852is formed by transistors822,820,821, and824within flip-flop FF2222. The escape route from flip-flop FF1221electrically couples to the escape route from flip-flop FF2222via the second shared clock node132. However, both escape routes only couple the high-power supply of flip-flop FF1221to the high-power supply of flip-flop FF2222and thus, pose little (or no) risk of contention. It should be understood that whichever first output Y842node or second output Yn843node is maintained at a first level, the two pull-up transistors electrically coupled to that node are deactivated. For example, if the node of first output Y842is at a first level, transistors818and819are deactivated.
Conversely, if the node of second output Yn843is at a first level, transistors820and821are deactivated. The two inactive transistors effectively block any pull-up paths from contending with the first output Y842or second output Yn843that can be at a first level. In some implementations, as the main clock signal101transitions from a second level to a first level, signals ACT3713of both flip-flops FF1221and FF2222are asserted in response to the change in input signals D1 and D2, respectively. Transistor822becomes deactivated in both flip-flops, and second clock-activated transistor112becomes activated to complete the second keeper sub-circuit (transistors824and825) in maintaining the first output Y842of flip-flop FF1221and the second output Yn843of flip-flop FF2222at a second level. Deactivating transistor822in both flip-flops disables the only path to high-power supply. Moreover, when the main clock signal101transitions to a second level to pull-down both the first output Y842of flip-flop FF1221and the second output Yn843of flip-flop FF2222, there is no contention with any pull-up paths to high-power supply from the pull-up networks of latch730within flip-flops FF1221and FF2222because transistors822and112are deactivated. Those skilled in the art can perform a similar analysis at the first shared clock node131and third shared clock node133to show a lack of contention for flip-flop circuit800. In some implementations, to reduce voltage drop, a series-coupled transistor can be inserted between second output Yn843and third clock terminal SN1853. In particular, the presence of two activation transistors (812for the first differential sub-circuit and813for the second differential sub-circuit) can accomplish this goal by further separating the node of second output Yn843from the node of third clock terminal SN1853. Now second output Yn843shares less charge due to a smaller capacitance at the source terminal of transistor814and drain terminal of transistor813when transistor814is activated while main clock signal101is at a first level. The presence of two activation transistors812and813has the additional benefit of reducing the cross-over current between first output Y842and second output Yn843when both transistors817and814are simultaneously activated momentarily during the switching of input signal D840and inverted input signal Dn841. This cross-over current can cause a more pronounced voltage drop at the second output Yn843. In some implementations, to reduce the magnitude of a voltage drop, a second reduction transistor429is inserted between second output Yn443and transistor419. Similarly, a first reduction transistor430is inserted between first output Y442and transistor420to reduce the magnitude of the voltage droop at first output Y442when transistor420is activated in response to input signal D440changing to a first level. In some implementations, the node of second output Yn443is further separated from the node of third clock terminal SN1453, and therefore less charge sharing occurs due to a smaller capacitance at the source terminal of transistor429and drain terminal of transistor419when transistor419is activated while main clock signal101is at a first level. Hence the role of reduction transistors430and429is to reduce the voltage droop at first output Y442and second output Yn443, respectively, due to charge sharing. Both reduction transistors430and429are optional and can be implemented depending on the severity of the charge sharing for a given process technology.
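The voltage droop from charge sharing that the reduction transistors are intended to limit can be estimated with the usual charge-redistribution relation. The numbers below are hypothetical capacitance values chosen only to show the trend, assuming ideal redistribution between an output node at the supply voltage and an initially discharged parasitic capacitance.

# Illustrative charge-sharing estimate (hypothetical capacitance values).
# If an output node at VDD with capacitance c_out is connected to a discharged
# parasitic capacitance c_par, ideal charge redistribution gives:
#   V_final = VDD * c_out / (c_out + c_par)
#   droop   = VDD - V_final = VDD * c_par / (c_out + c_par)
def droop(vdd, c_out, c_par):
    return vdd * c_par / (c_out + c_par)

VDD = 0.8      # volts (hypothetical)
C_OUT = 2.0    # fF at the latch output (hypothetical)

# A reduction transistor shrinks the parasitic slice the output is exposed to.
print(droop(VDD, C_OUT, c_par=1.0))   # ~0.27 V droop without a reduction transistor
print(droop(VDD, C_OUT, c_par=0.3))   # ~0.10 V droop with a reduction transistor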
In some configurations of multiple flip-flops electrically coupled together within the N-bit array, the aggregate capacitance at the third shared clock node133is still large despite the presence of reduction transistors within each flip-flop. Under these circumstances, various embodiments are disclosed to further reduce the capacitance at the third clock terminal SN1453. In some implementations, the first sequence detector includes the first buffer and electrically couples to the first output of the latch, the second output of the latch, an input signal, and an inverted input signal. In particular, the first sequence detector is configured to (1) activate a first transistor812and a second transistor813in response to the input signal being of the opposite logic polarity compared to the first output of the latch when a main clock signal is at a first level, or (2) deactivate the first transistor and the second transistor when the main clock signal is at a second level. In some implementations, activating the first transistor and the second transistor when the main clock signal is at the first level activates the latch to capture a data bit from the input signal and to generate an output signal based on a level of the captured data bit when the main clock signal transitions from the first level to the second level. In some implementations, the first keeper sub-circuit is activated in response to a change in the input signal to the opposite logic polarity compared to the first output of the latch when a main clock signal is at a second level, and when the first keeper sub-circuit is activated, a pull-down transistor of the first buffer is deactivated, the first transistor is deactivated, and the second transistor is deactivated. In some implementations, the first keeper sub-circuit is configured to receive, as inputs, the first output of the latch, the second output of the latch, the input signal, and the inverted input signal. The latch further includes at least a first intermediate node and a second intermediate node, and at least the first transistor and the second transistor, which are configured to be activated simultaneously by the output of the first buffer, where the first transistor and the second transistor are electrically coupled to the third shared clock node. In some implementations, with reference toFIG.12, the first keeper sub-circuit includes an inverter feedback loop at a bottom-most position of a pull-down network, wherein the pull-down network includes a third transistor808, a fourth transistor809, a fifth transistor811, a sixth transistor810, and the inverter feedback loop, and wherein the inverter feedback loop includes an inverter827and a seventh transistor807, and wherein the seventh transistor of the inverter feedback loop is electrically coupled to the third shared clock node (SN1853). In some implementations, a second keeper sub-circuit is electrically coupled to an input signal and an inverted input signal, and the second keeper sub-circuit is activated in response to a change in the input signal when a main clock signal is at a first level. In some implementations, the second keeper sub-circuit is electrically coupled to the second shared clock node, and the second keeper sub-circuit is electrically cross-coupled to a first intermediate node of the latch and a second intermediate node of the latch.
Referring toFIG.9, a circuit diagram illustrating another flip-flop circuit900ofFIG.3of the compound sequential circuit architecture100ofFIG.1, according to an illustrative implementation. In some implementations, flip-flop circuit900illustrates an architecture in which the first keeper sub-circuit312and second keeper sub-circuit322are electrically coupled to a fourth clock terminal SN2954. The first differential sub-circuit and second differential sub-circuit of latch330are still electrically coupled to the third clock terminal SN1453. The first and second keeper sub-circuits are electrically decoupled from the first and second differential sub-circuits of latch330. The fourth clock terminal SN2954is electrically coupled to a fourth shared clock node of the N-bit array. Now the amount of capacitance at either the third shared clock node133or the fourth shared clock node is 2N. Furthermore, the third keeper sub-circuit includes only transistor935. The gate terminal of transistor935is electrically coupled to the main clock signal101. The drain terminal of transistor935is electrically coupled to the drain terminal of transistor425, and the source terminal of transistor935is electrically coupled to the drain terminal of transistor428. In this configuration, the main clock signal101is made available to flip-flop circuit900in order to maintain three shared clock nodes. The fourth clock terminal SN2954replaces the second clock terminal SP2452, and the fourth shared clock node replaces the second shared clock node132. A fourth clock-activated transistor replaces the second clock-activated transistor112. In some implementations, the fourth clock-activated transistor can be an N-channel transistor. Since transistor935can be of minimum size, it does not present much burden on the main clock signal101. Transistor935functions as a shunt transistor to bridge nodes n8952and n9953when main clock signal101is at a first level. For example, if input signal D440transitions to a second level while main clock signal101is at a first level, signal ACT2323is asserted to deactivate transistor428. Transistor935provides an alternative conductive path to high-power supply via transistors423,935, and425to maintain second output Yn443at a second level. The functionality and performance of flip-flop circuit900are similar to those of flip-flop circuit400. Referring toFIG.10, a circuit diagram illustrating another flip-flop circuit1000ofFIG.3of the compound sequential circuit architecture100ofFIG.1, according to an illustrative implementation. The capacitance at third clock terminal SN1453is reduced by decoupling the first and second keeper sub-circuits from the first and second differential sub-circuits of latch330. The first and second differential sub-circuits are electrically coupled only to the third clock terminal SN1453. The first and second keeper sub-circuits are electrically coupled to a fourth clock terminal SN21054. Flip-flop circuit1000includes four clock terminals and, thus, the N-bit array has four shared clock nodes. Flip-flop circuit1000is identical to flip-flop circuit400except that the former has an additional fourth clock terminal SN21054electrically coupled to a fourth shared clock node of the N-bit array. A fourth clock-activated transistor is electrically coupled to the fourth shared clock node. Now the third and fourth shared clock nodes each contain an aggregate capacitance of 2N because only two transistors are electrically coupled to each clock terminal within each flip-flop.
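The 2N figure quoted above, and the similar N and 3N figures that appear for other configurations in this description, follow from simple bookkeeping: each flip-flop contributes a fixed number of transistor terminals to a shared clock node, so the load grows linearly with the bit width N. The sketch below is a minimal illustration of that arithmetic; the function name and the nominal per-terminal capacitance value are assumptions introduced only for this example.

```python
def shared_clock_node_capacitance(transistors_per_flip_flop: int,
                                  n_bits: int,
                                  c_per_terminal_fF: float = 1.0) -> float:
    """Estimate the aggregate capacitance loading one shared clock node.

    Each flip-flop in the N-bit array contributes `transistors_per_flip_flop`
    source/drain terminals to the node; `c_per_terminal_fF` is a nominal
    per-terminal capacitance chosen only to make the example concrete.
    """
    return transistors_per_flip_flop * n_bits * c_per_terminal_fF


N = 4  # e.g., a 4-bit compound sequential circuit as referenced in this description
print(shared_clock_node_capacitance(2, N))  # "2N": two transistors per flip-flop on the node
print(shared_clock_node_capacitance(1, N))  # "N": a single transistor per flip-flop on the node
```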
It should be noted that another embodiment of flip-flop circuit1000exists by electrically coupling the gate terminal of transistor426to the output of inverter431and also electrically coupling the gate terminal of transistor427to the output of inverter432. This configuration is a hybrid implementation of flip-flop circuit500and flip-flop circuit1000with four clock terminals and four clock-activated transistors. Referring toFIG.11, a circuit diagram illustrating another flip-flop circuit1100ofFIG.7of the compound sequential circuit architecture100ofFIG.1, according to an illustrative implementation. Flip-flop circuit1100shows an implementation including only two clock terminals. Furthermore, only two shared clock nodes and two clock-activated transistors are present within the N-bit array. Unlike flip-flop circuit800, flip-flop circuit1100can include two discrete clock-activated transistors1130and1131. Only the first clock terminal SP1851and third clock terminal SN1853are implemented in this configuration as the second clock terminal SP2852has been replaced by shunt transistor1130, which functions as the second keeper sub-circuit of latch730. Shunt transistor1130is activated while main clock signal101is at a first level such that the first output Y842or second output Yn843can be maintained at a second level should the input signal D840change. For example, if first output Y842is at a second level while the clock is at a first level, an alternative conductive path to high-power supply via transistors818,1130, and826is enabled to maintain first output Y842at a second level. Conversely, if second output Yn843is at a second level while the clock is at a first level, an alternative conductive path to high-power supply via transistors821,1130and823is enabled to maintain second output Yn843at a second level. A second shunt transistor1122is implemented in parallel with shunt transistor1130to function similarly as another keeper sub-circuit should the input signal D840change while main clock signal101is at a second level. Shunt transistor1122also provides an alternative conductive path to high-power supply to maintain the first output Y842or second output Yn843at a second level while the clock is at a second level. The gate terminal of transistor1130is electrically coupled to the main clock signal101, and the gate terminal of transistor1122is electrically coupled to signal ACT3713. The drain terminals of transistors1130and1122are electrically coupled to node n6850, and the source terminals are electrically coupled to node n7851. The third clock terminal SN1853is electrically coupled to only two transistors812and813, which reduces the aggregate capacitance at the third shared clock node133when multiple flip-flops are electrically coupled within the N-bit array. The aggregate capacitance at the third shared clock node133has been reduced to 2N. The first keeper sub-circuit712includes inverter827and transistors807and1131. The first keeper sub-circuit712is implemented with fewer transistors than the configuration of flip-flop circuit800and operates in a similar manner to maintain signal ACT3713at a first level while main clock signal101is at a second level. Since the first keeper sub-circuit712is clock-gated by clock-activated transistor1131, it is activated in conjunction with the activation of transistor802, and therefore does not depend on the input signal D840to change. Transistors1130and1131can be minimum size devices to present less of a burden on the main clock signal101.
The configuration ofFIG.11can have fewer transistors than the configuration ofFIG.8, thereby occupying less silicon area and consuming less power. Referring toFIG.12, a circuit diagram illustrating another flip-flop circuit1200ofFIG.7of the compound sequential circuit architecture100ofFIG.1, according to an illustrative implementation. In some implementations, to reduce the capacitance at the third shared clock node133,FIG.12shows a configuration in which only three transistors are electrically coupled to the third clock terminal SN1853. Referring toFIG.8, inverter827and transistor807form an inverter feedback loop at the top-most position of the pull-down network within the first keeper sub-circuit712. This inverter feedback loop, including transistor807and inverter827, can also be placed at the bottom-most position of the pull-down network as shown inFIG.12. Thus, transistor807becomes the bottom-most transistor of the pull-down network, and the source terminal of transistor807is electrically coupled to the third clock terminal SN1853. The drain terminals of transistors808and811are electrically coupled to signal ACT3713, and the drain terminal of transistor807is electrically coupled to the source terminals of transistors809and810. Placement of the inverter feedback loop at the bottom-most position removes one transistor electrically coupled to the third clock terminal SN1853while retaining the same functionality asFIG.8. The aggregate capacitance at the third shared clock node133becomes 3N for this configuration when multiple flip-flops are electrically coupled together within an N-bit array. Referring toFIG.13, a circuit diagram illustrating another flip-flop circuit1300ofFIG.7of the compound sequential circuit architecture100ofFIG.1, according to an illustrative implementation. In some implementations, the capacitance at third clock terminal SN1853is reduced by decoupling the first keeper sub-circuit712from the first and second differential sub-circuits of latch730. The first and second differential sub-circuits are electrically coupled only to the third clock terminal SN1853. The first keeper sub-circuit712is electrically coupled to a fourth clock terminal SN21354. Flip-flop circuit1300includes four clock terminals and, thus, the N-bit array has four shared clock nodes. Flip-flop circuit1300is identical to flip-flop circuit1200ofFIG.12except that the former has an additional fourth clock terminal SN21354electrically coupled to a fourth shared clock node of the N-bit array. A fourth clock-activated transistor is electrically coupled to the fourth shared clock node. Flip-flop circuit1300also retains the inverter feedback loop at the bottom-most position of the pull-down network of the first keeper sub-circuit712. The source terminal of transistor807is electrically coupled to the fourth clock terminal SN21354, and thereby only one transistor contributes capacitance to the fourth shared clock node of the N-bit array. Consequently, the fourth shared clock node has N aggregate capacitance while the third shared clock node has 2N aggregate capacitance. Referring toFIG.14, a circuit diagram illustrating another flip-flop circuit1400ofFIG.3of the compound sequential circuit architecture100ofFIG.1, according to an illustrative implementation. In some implementations, flip-flop circuit1400includes a first delay element inserted between the first output Y442and the first sequence detector310and a second delay element inserted between the second output Yn443and the second sequence detector320.
The first delay element includes an inverter chain of inverters1437and1438, whereby the input of inverter1437is electrically coupled to the first output Y442and the output of inverter1438is electrically coupled to the gate terminals of transistors404and406. A second delay element includes an inverter chain of inverters1435and1436, whereby the input of inverter1435is electrically coupled to the second output Yn443and the output of inverter1436is electrically coupled to the gate terminals of transistors415and418. When signal ACT1313is enabled to activate first activation transistor420and main clock signal101transitions to a second level, first output Y442discharges to a first level. The discharging of first output Y442triggers a chain effect whereby transistor404is activated to charge node n1444to a second level, which in turn activates transistor402to disable signal ACT1313. This feedback behavior may deactivate transistor420before first output Y442fully discharges to a first level. Due to the high capacitance at third clock terminal SN1453, the discharge waveform of first output Y442may have a slow slew rate, and the discharging of first output Y442may be halted. Normally, if this occurs, transistor421completes the discharge of first output Y442. Due to process variation, the threshold to activate transistor421may not be met; therefore, a delay element including two series-coupled inverters can be inserted between the first output Y442and first sequence detector310to allow sufficient time for first output Y442to fully discharge. A similar discussion can justify the insertion of a delay element between second output Yn443and second sequence detector320. It should be noted that the insertion of delay elements to allow the first and second outputs to fully discharge is applicable to all embodiments with the circuit architecture of flip-flop circuit400. Referring toFIG.15, a circuit diagram illustrating another flip-flop circuit1500ofFIG.3of the compound sequential circuit architecture100ofFIG.1, according to an illustrative implementation. In some implementations, the first and second sequence detectors are implemented using a different logic gate that combines both the sequence detector and the buffer into a single logic gate. This configuration reduces the total transistor count but introduces three series-coupled P-channel transistors into a timing path of flip-flop circuit1500. There are two P-channel transistors from the logic gate and the first clock-activated transistor111, which together form the pull-up network that enables signal ACT1313or ACT2323. The first sequence detector310includes transistors1503,1504,1505, and1506while the second sequence detector320includes transistors1515,1516,1517, and1518. Input signal D440is electrically coupled to the gate terminals of transistors1503and1505, and second output Yn443is electrically coupled to the gate terminals of transistors1504and1506. In this manner, the first sequence detector310receives input signal D440and second output Yn443. Likewise, the second sequence detector320receives inverted input signal Dn441and first output Y442. Inverted input signal Dn441is electrically coupled to the gate terminals of transistors1516and1517, and first output Y442is electrically coupled to the gate terminals of transistors1515and1518. The inputs to the first and second keeper sub-circuits remain the same as inFIG.4. In the configuration ofFIG.15, the first sequence detector310has the complement logic polarity as the inputs of the first keeper sub-circuit312.
Likewise, the second sequence detector320has the complement logic polarity as the inputs of the second keeper sub-circuit322. The output of each logic gate including the first and second sequence detectors drives signal ACT1313and ACT2323, respectively, thereby eliminating the need for a buffer circuit. The third keeper sub-circuit is electrically coupled to the first and second keeper sub-circuits in the same manner asFIG.5. Referring toFIG.16, a block diagram illustrating a cell placement diagram1600of the compound sequential circuit architecture100ofFIG.1, according to an illustrative implementation. In some implementations,FIG.16illustrates how multiple flip-flops within an N-bit array can be arranged to create a “mega” cell for use in a standard cell library as part of an ASIC design flow. This placement diagram includes four flip-flops based on the 4-bit compound sequential circuit220ofFIG.2. The three clock-activated transistors111-113are positioned in the middle of the “mega” cell with two flip-flops FF1221and FF2222to the left and two flip-flops FF3223and FF4224to the right. This arrangement provides symmetry and balance to distribute the routing and interconnects of the main clock signal101and the shared clock nodes. A metal strip for high-power supply VDD and another metal strip for low-power supply GND are situated at the top track 1 and bottom track 14, respectively. Tracks 2-5 for the input signals D4 to D1 are below high-power supply while tracks 10-13 for the output signals Q4 to Q1 are above low-power supply. In between the input signal group and output signal group is the clock signal group: SP2452, SN1453, clk101, and SP1451at tracks 6-9. AsFIG.16is an illustrative example of a 4-bit array, larger or smaller bit arrays can be implemented with this methodology for sharing clock nodes and clock-activated transistors. Referring toFIG.17, a flowchart for a method1700of generating data outputs based on sampled data inputs, according to an illustrative implementation. Circuit architecture100, flip-flop circuit400, and flip-flop circuit800(or any other circuit described herein) can be configured to perform method1700. Further, any clock-activated transistor architecture (e.g.,100-1500) described herein can be configured to perform method1700. In broad overview of method1700, at block1710, when a clock signal is at a first level, a sequence detector is activated (1712) and a keeper sub-circuit of the latch is activated (1714). At block1720, when a clock signal is at a second level, a latch, for a duration, can be activated (1722) and a keeper sub-circuit can be activated (1724). Additional, fewer, or different operations may be performed depending on the particular implementation. In some implementations, each operation may be re-ordered, added, removed, or repeated. In general, method1700includes an N-bit array of flip-flops electrically coupled to a plurality of shared clock nodes. Each flip-flop within the N-bit array includes a buffer, a latch with first and second differential sub-circuits (e.g.,FIG.4) and a keeper sub-circuit. In some implementations, when the first differential sub-circuit is enabled, the second differential sub-circuit is disabled and the keeper sub-circuit is deactivated. When the second differential sub-circuit is enabled, the first differential sub-circuit is disabled and the keeper sub-circuit is deactivated. 
When the keeper sub-circuit is activated, the first and second differential sub-circuits are disabled and a pull-down transistor of the buffer is deactivated. In some implementations, the keeper sub-circuit is active while the main clock signal is at a second level. In some implementations, the buffer, latch, and keeper sub-circuit are electrically coupled to the plurality of shared clock nodes. Referring to method1700in more detail, at block1712, when a clock signal is at a first level and in response to a first change in an input signal that is an opposite logic polarity compared to an output of a latch, a sequence detector is activated, wherein the sequence detector activates a differential sub-circuit of a latch. At block1714, when a clock signal is at a first level and in response to a first change in an input signal that is an opposite logic polarity compared to an output of a latch, a keeper sub-circuit of the latch can be activated. In some implementations, at block1720and with reference toFIG.4, a first sequence detector is activated to activate a first differential sub-circuit of a first latch when an input signal is at a first level and an output signal is at a second level. In some implementations, a second sequence detector is activated to activate a second differential sub-circuit of the first latch when the input signal is at a second level and the output signal is at a first level. In particular, if the first sequence detector is activated, the second sequence detector is deactivated, and if the second sequence detector is activated, the first sequence detector is deactivated. In some implementations, at block1720, a keeper sub-circuit (e.g., transistors426/427) of the first latch is activated when the input signal is of opposite logic polarity to the output signal (e.g.,FIG.4). In some implementations, at block1720and with reference toFIG.8, a sequence detector is activated to activate a first transistor and a second transistor of a second latch when the input signal is of opposite logic polarity to the output signal. In some implementations, a first keeper sub-circuit (FIG.4), a second keeper sub-circuit (FIG.4), and a keeper sub-circuit (FIG.8) are deactivated. At block1722, when the clock signal is at a second level, a latch can be activated for a duration and then deactivated when the output of the latch becomes the same level as the input signal, wherein the duration starts when the clock signal goes to the second level and ends when the output of the latch becomes the same level as the input signal. At block1724, when the clock signal is at a second level and in response to a second change in the input signal that is the opposite logic polarity compared to the output of the latch, a keeper sub-circuit is activated. In some implementations, at block1720, one or more sequence detectors (e.g.,FIG.4(two sequence detectors) andFIG.8(one sequence detector)) can be deactivated. In some implementations, if the first or second differential sub-circuit of the first latch is activated, the first latch is activated and becomes deactivated after the output signal settles to a new level. For example, when the first latch becomes deactivated, the first and second differential sub-circuits are deactivated. In some implementations, if the first and second transistors of the second latch are activated, the second latch is activated and becomes deactivated after the output signal settles to a new level.
For example, when the second latch becomes deactivated, the first and second transistors are deactivated. In some implementations, if the input signal changes to a level different from that of the output signal, either the first keeper sub-circuit or the second keeper sub-circuit (e.g.,FIG.4) is activated. In particular, if the first keeper sub-circuit is activated, the first differential sub-circuit, the second differential sub-circuit, and the second keeper sub-circuit are deactivated (e.g.,FIG.4). Furthermore, if the second keeper sub-circuit is activated, the first differential sub-circuit, the second differential sub-circuit, and the first keeper sub-circuit are deactivated (e.g.,FIG.4). In some implementations, if the input signal changes to a level different from that of the output signal, a keeper sub-circuit is activated (e.g.,FIG.8, as compared to the two keeper sub-circuits ofFIG.4). While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular implementations of the systems and methods described herein. Certain features that are described in this specification in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Having now described some illustrative implementations, it is apparent that the foregoing is illustrative and not limiting, having been presented by way of example. In particular, although many of the examples presented herein involve specific combinations of method acts or system elements or components, those acts and those elements may be combined in other ways to accomplish the same objectives. Acts, elements and features discussed only in connection with one implementation are not intended to be excluded from a similar role in other implementations. The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of "including," "comprising," "having," "containing," "involving," "characterized by," "characterized in that," and variations thereof herein is meant to encompass the items listed thereafter, equivalents thereof, and additional items, as well as alternate implementations consisting of the items listed thereafter exclusively.
In one implementation, the systems and methods described herein consist of one, each combination of more than one, or all of the described elements, acts, or components. Any references to implementations or elements or acts of the systems and methods herein referred to in the singular may also embrace implementations including a plurality of these elements or components, and any references in plural to any implementation, arrangement, or element or act herein may also embrace implementations including only a single element or component. References in the singular or plural form are not intended to limit the presently disclosed systems or methods, their components, acts, or elements to single or plural configurations. References to any act or element being based on any information, act or element may include implementations where the act or element is based at least in part on any information, act, or element. Any implementation disclosed herein may be combined with any other implementation, and references to “an implementation,” “some implementations,” “an alternate implementation,” “various implementation,” “one implementation” or the like are not necessarily mutually exclusive and are intended to indicate that a particular feature, structure, or characteristic described in connection with the implementation may be included in at least one implementation. Such terms as used herein are not necessarily all referring to the same implementation. Any implementation may be combined with any other implementation, inclusively or exclusively, in any manner consistent with the aspects and implementations disclosed herein. References to “or” may be construed as inclusive so that any terms described using “or” may indicate any of a single, more than one, and all of the described terms. Where technical features in the drawings, detailed description or any claim are followed by reference signs, the reference signs have been included for the sole purpose of increasing the intelligibility of the drawings, detailed description, and claims. Accordingly, neither the reference signs nor their absence have any limiting effect on the scope of any claim elements. The systems and methods described herein may be embodied in other specific forms without departing from the characteristics thereof. Although the examples provided herein relate to systems (e.g., circuits) and methods for generating data outputs utilizing shared clock-activated transistors, the systems and methods described herein can be applied to other circuits and methods. The foregoing implementations are illustrative rather than limiting of the described systems and methods. Scope of the systems and methods described herein is thus indicated by the appended claims, rather than the foregoing description, and changes that come within the meaning and range of equivalency of the claims are embraced therein. | 123,725 |
11863191 | DETAILED DESCRIPTION Embodiments of the ramp generators described herein produce an output voltage ramp signal from two initial or preliminary voltage ramp signals. By multiplexing between the two preliminary voltage ramp signals during first and second alternating output ramp periods of the output voltage ramp signal, the ramp generator alternates between using the first preliminary voltage ramp signal to produce the output voltage ramp signal during the first output ramp periods and using the second preliminary voltage ramp signal to produce the output voltage ramp signal during the second output ramp periods. In some embodiments, the two preliminary voltage ramp signals are generated by alternatingly charging two separate (preferably identical) capacitors. The capacitors are charged alternatingly, so that individual first and second preliminary voltage ramps of the first and second preliminary voltage ramp signals, respectively, are produced alternatingly and 180 degrees out of phase with each other. Thus, the first preliminary voltage ramps of the first preliminary voltage ramp signal occur during (or overlap) the first output ramp periods of the output voltage ramp signal, and the second preliminary voltage ramps of the second preliminary voltage ramp signal occur during (or overlap) the second output ramp periods of the output voltage ramp signal. By using two capacitors and multiplexing between the preliminary voltage ramp signals generated thereby, the return or reset time of each of the individual preliminary voltage ramps within the preliminary ramp periods of the preliminary voltage ramp signals does not have to be very fast. Instead, the reset time can be relatively slow, as long as the capacitor (and, thus, the voltage level of each preliminary voltage ramp) is fully reset before the end of each preliminary ramp period. By allowing the reset time to be relatively slow, the noise or nonlinearity that is typically generated by a fast return slew can be eliminated or greatly reduced within the individual preliminary voltage ramps. Additionally, when the ramp generator switches from one preliminary voltage ramp signal to the other, the current preliminary voltage ramp of the selected preliminary voltage ramp signal is already stable and at the starting or initial voltage of the output voltage ramp signal. In this manner, the output voltage ramps generated from the preliminary voltage ramps have a very high degree of linearity. Additionally, in some embodiments, another ramp generator uses two of these output voltage ramp signals as preliminary voltage ramp signals for generating yet another output voltage ramp signal having an even higher degree of linearity. As used herein, the term “voltage ramp signal” refers to an overall signal with a voltage level that repeatedly ramps (up or down, depending on the embodiment). An “output voltage ramp signal” is, thus, the voltage ramp signal that is output by any of the ramp generators described herein. A “preliminary voltage ramp signal” or “initial voltage ramp signal,” on the other hand, is a voltage ramp signal within any of the ramp generators described herein and from which the output voltage ramp signal is generated. The output voltage ramp signal for each ramp generator is formed from two initial or preliminary voltage ramp signals. Additionally, in some embodiments ofFIGS.7-10, each preliminary voltage ramp signal is further formed from two initial voltage ramp signals. 
Each (output, preliminary or initial) voltage ramp signal includes a series of cycles or periods (i.e., “ramp periods”), each (output, preliminary or initial) ramp period having a single, individual continuous (output, preliminary or initial) “voltage ramp” that continuously ramps (up or down) from a first voltage level to a second voltage level within either the entire ramp period or at least a portion of the ramp period. A “continuous ramp period” is a ramp period within which the voltage ramp continuously ramps throughout the entire ramp period, i.e., without resetting the voltage ramp, except at the beginning or end of the period. On the other hand, some of the initial or preliminary voltage ramp signals (e.g., forFIGS.1-6) have ramp periods (“initial ramp periods” or “preliminary ramp periods”) that include a first portion (a “continuous ramp portion”) within which the initial or preliminary voltage ramp continuously ramps (i.e., from a first voltage level to a second voltage level) and a second portion (a “non-ramp portion,” “flat portion” or “reset portion”) within which the voltage level is held flat (i.e., relatively unchanging) at a reset or initial level at which the voltage ramp begins, which may be the first voltage level. A continuous ramp period, therefore, has a continuous ramp portion, without a non-ramp portion. An initial or preliminary ramp period, on the other hand, either can have only a continuous ramp portion (as a continuous ramp period) or can have both a continuous ramp portion and a non-ramp portion, depending on the embodiment being described. The output ramp periods of the output voltage ramp signal of each ramp generator described herein include only continuous ramp periods. In embodiments ofFIGS.1-6, the initial or preliminary ramp periods of each initial or preliminary voltage ramp signal include a continuous ramp portion and a non-ramp portion. Additionally, in some embodiments ofFIGS.7-10, each preliminary voltage ramp signal (from which the output voltage ramp signal is formed) includes preliminary ramp periods that have only a continuous ramp portion (i.e., only continuous ramp periods), and each initial voltage ramp signal (from which the preliminary voltage ramp signals are formed) includes initial ramp periods that have a continuous ramp portion and a non-ramp portion. An example improved ramp generator100is shown inFIG.1, in accordance with some embodiments. The ramp generator100generally includes first and second current sources101and102, first and second capacitors103and104, first and second reset switches105and106, first and second output switches107and108, a D flip-flop109, and a comparator110, among other components not shown for simplicity. The ramp generator100generates an output voltage ramp signal VrampA, which ramps from a first (or start, initial, lower, minimum, or bottom) voltage level to a second (or end, final, upper, maximum, or top) voltage level. The output voltage ramp signal VrampA is typically provided to any appropriate downstream electronic component, e.g., an amplifier or a downstream comparator111that compares the output voltage ramp signal VrampA with a reference voltage Vref to generate a voltage pulse signal112. 
For an application or circuit design that uses a relatively short duration voltage pulse (e.g., a few nanoseconds long) and/or that requires high precision in the rising and falling edges of the voltage pulse, the precision and linearity of the voltage ramp signal is of great importance in order to ensure that the comparator111is triggered at the precise required timing points. The output voltage ramp signal VrampA is a very precise and linear voltage ramp signal that can be used in such applications. The first current source101is connected between a voltage supply Vdd and a node C1A (e.g., an anode) of the first capacitor103. A cathode of the first capacitor103is connected to ground. The first reset switch105may be a MOSFET (e.g., NMOS) device with source and drain connected between the node C1A and a reset voltage node113, body connected to ground, and gate connected to clock Clkn1. The first output switch107may be a MOSFET (e.g., NMOS) device with source and drain connected between the node C1A and an output node114, body connected to ground, and gate connected to clock Clkp2. The second current source102is connected between the voltage supply Vdd and a node C2A (e.g., an anode) of the second capacitor104. A cathode of the second capacitor104is connected to ground. The second reset switch106may be a MOSFET (e.g., NMOS) device with source and drain connected between the node C2A and the reset voltage node113, body connected to ground, and gate connected to clock Clkp1. The second output switch108may be a MOSFET (e.g., NMOS) device with source and drain connected between the node C2A and the output node114, body connected to ground, and gate connected to clock Clkn2. The reset voltage node113is connected to receive a start voltage Vstart (having a first voltage level). The comparator110is connected to receive an end voltage Vend (having a second voltage level greater than the first voltage level in some embodiments) at a negative input thereof. A voltage level of the start voltage Vstart is approximately the first (or initial, lower, or bottom) voltage level of the output voltage ramp signal VrampA. The start voltage Vstart is a baseline voltage, which can be either a positive voltage, ground, or nonzero voltage. A voltage level of the end voltage Vend is approximately the second (end, final, upper, maximum, or top) voltage level of the output voltage ramp signal VrampA. Delays within some of the components of the ramp generator100may cause the second (or final, upper, or top) voltage level of the output voltage ramp signal VrampA not to be exactly the same as, but slightly greater than, the voltage level of the end voltage Vend. The comparator110is also connected to the output node114to receive the output voltage ramp signal VrampA at a positive input thereof. An output of the comparator110is connected to a clock input CLK of the D flip-flop109. An input D of the D flip-flop109is connected to an inverted outputQof the D flip-flop109. An output Q of the D flip-flop109produces the clock Clkp2, and the inverted outputQproduces the clock Clkn2. Thus, the clocks Clkp2and Clkn2are inversions of each other. In some embodiments, the clocks Clkp1and Clkn1can be the same as, and connected directly to, the clocks Clkp2and Clkn2. However, in the illustrated embodiment, the clocks Clkp1and Clkn1are separate from each other, separate from the clocks Clkp2and Clkn2, synchronized to the clocks Clkp2and Clkn2, respectively, and have a duty cycle of 50% or less, as illustrated inFIG.2. 
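Because each preliminary voltage ramp is produced by charging a capacitor from a constant current source, the slope of the ramp is fixed by the familiar relation I = C·dV/dt. As a purely illustrative, hedged example (the capacitance value and ramp duration below are assumptions introduced only for this sketch, not values taken from the figures), charging through a one-volt swing between the start voltage and the end voltage over a 5 ns continuous ramp with a 1 pF capacitor would require a current of roughly:

\[ I \;=\; C\,\frac{V_{end} - V_{start}}{t_{ramp}} \;\approx\; 1\,\mathrm{pF}\times\frac{1\,\mathrm{V}}{5\,\mathrm{ns}} \;=\; 200\,\mu\mathrm{A} \]

Larger capacitors or shorter ramp periods scale the required current proportionally, which is one reason the reset switches discussed later must be sized to sink the full current of their respective current sources.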
The first initial or preliminary voltage ramp signal for the ramp generator100is produced at the node C1A (i.e., a first capacitor node or a first preliminary ramp node). The second initial or preliminary voltage ramp signal for the ramp generator100is produced at the node C2A (i.e., a second capacitor node or a second preliminary ramp node). The output voltage ramp signal VrampA is produced at the output node114. Generation of the first and second preliminary voltage ramp signals and the output voltage ramp signal VrampA are described with reference toFIGS.1and2. FIG.2shows example timing diagrams for the output voltage ramp signal VrampA, the first preliminary voltage ramp signal (Vc1a), the second preliminary voltage ramp signal (Vc2a), the clock Clkn1, the clock Clkp1, and the clock Clkp2. Additionally, the clock Clkn2is simply the inversion of the clock Clkp2, so its timing diagram is omitted for simplicity. The timing diagrams were generated by a simulation running at about 100 MHz with the start voltage Vstart at about one volt and the end voltage Vend at about two volts. As shown inFIG.2, the first preliminary voltage ramp signal Vc1ahas preliminary ramp periods (e.g.,201) that include a continuous ramp portion (e.g.,202) and a non-ramp portion (e.g.,203). Each first preliminary voltage ramp (e.g.,204) (of the first preliminary voltage ramp signal Vc1a) continuously ramps from a first voltage level205to a second voltage level206within the continuous ramp portion202. The voltage level of the first preliminary voltage ramp signal Vc1ais held flat (i.e., relatively unchanging) at the first voltage level205within the non-ramp portion203. (It is understood that the first preliminary voltage ramp signal Vc1ais shown as an idealized ramp signal having straight lines with no curve when reset or noise at the start or end of the ramps and resets. However, the real-world ramp signal may exhibit such curves and/or noise.) The first voltage level205is generally the same as the start voltage Vstart, which is the reset or initial level at which the first preliminary voltage ramps204begin. The second voltage level206of the first preliminary voltage ramps204is shown as being greater or higher than the end voltage Vend (i.e., the second, end, final, upper, maximum, or top voltage level of the output voltage ramp signal VrampA). Thus, the first preliminary voltage ramps204have an initial (linear) portion (e.g.,207) (i.e., between the first voltage level205and the end voltage Vend) and a final (also potentially linear) portion (e.g.,208) (i.e., between the end voltage Vend and the second voltage level206). In other words, the first preliminary voltage ramps204continuously ramp from the voltage level of the start voltage Vstart to a voltage level greater than the end voltage Vend in some embodiments. Similar to the first preliminary voltage ramp signal Vc1a, the second preliminary voltage ramp signal Vc2a(180 degrees out of phase with the first preliminary voltage ramp signal Vc1a) has preliminary ramp periods (e.g.,211) that include a continuous ramp portion (e.g.,212) and a non-ramp portion (e.g.,213). Each second preliminary voltage ramp (e.g.,214) (of the second preliminary voltage ramp signal Vc2a) continuously ramps from the first voltage level205to the second voltage level206within the continuous ramp portion212. The voltage level of the second preliminary voltage ramp signal Vc2ais held flat (i.e., relatively unchanging) at the first voltage level205within the non-ramp portion213. 
(It is understood that the second preliminary voltage ramp signal Vc2ais shown as an idealized ramp signal having straight lines with no curve when reset or noise at the start or end of the ramps and resets. However, the real-world ramp signal may exhibit such curves and/or noise.) The first voltage level205is, thus, also the reset or initial level at which the second preliminary voltage ramps214begin. The second voltage level206of the second preliminary voltage ramps214is shown as being greater or higher than the end voltage Vend (i.e., the second, end, final, upper, maximum, or top voltage level of the output voltage ramp signal VrampA). Thus, the second preliminary voltage ramps214have an initial (linear) portion (e.g.,217) (i.e., between the first voltage level205and the end voltage Vend) and a final (also potentially linear) portion (e.g.,218) (i.e., between the end voltage Vend and the second voltage level206). In other words, like the first preliminary voltage ramps204, the second preliminary voltage ramps214continuously ramp from the voltage level of the start voltage Vstart to a voltage level greater than the end voltage Vend in some embodiments. (The ends of the first and second preliminary voltage ramps204and214overscan the end voltage Vend.) The second preliminary ramp periods211are about the same as the first preliminary ramp periods201, the second continuous ramp portion212is about the same as the first continuous ramp portion202, the second non-ramp portion213is about the same as the first non-ramp portion203, the second preliminary voltage ramps214are about the same as the first preliminary voltage ramps204, the second initial portion217is about the same as the first initial portion207, and the second final portion218is about the same as the first final portion208. In the illustrated example, the clock Clkn1and the clock Clkp1(which are 180 degrees out of phase with each other) have clock periods that are about the same as the preliminary ramp periods201and211(which are also 180 degrees out of phase with each other), respectively. Additionally, the duty cycle of the clocks Clkn1and Clkp1is shown as being less than 50%. (In other embodiments, the duty cycle of the clocks Clkn1and Clkp1can be about equal to 50%, such that the continuous ramp portions202and212and the non-ramp portions203and213are about equal to each other, the final portions208and218are almost nonexistent, and the second voltage level206is about the same as the end voltage Vend.) The clock Clkp2(and, thus, also the clock Clkn2) also has a clock period that is about the same as the preliminary ramp periods201and211, but it is shown with about a 50% duty cycle. The output voltage ramp signal VrampA has first and second continuous output ramp periods (e.g.,221and222) during first and second time periods (e.g.,223and224), respectively. The first and second continuous output ramp periods221and222have first and second continuous output voltage ramps (e.g.,225and226), respectively, that continuously ramp from the first voltage level of the start voltage Vstart to the second voltage level of the end voltage Vend. The first and second time periods223and224(and, thus, also the first and second continuous output ramp periods221and222and the first and second continuous output voltage ramps225and226) alternate with each other.
The first time periods223correspond to the first half of the clock periods of the clock Clkp2(and the clock Clkn2), and the second time periods224correspond to the second half of the clock periods of the clock Clkp2(and the clock Clkn2). The first continuous output ramp periods221and the first continuous output voltage ramps225correspond to the first continuous ramp portion202(of the first preliminary ramp periods201), the first preliminary voltage ramps204, or the first initial portion207of the first preliminary voltage ramps204. The second continuous output ramp periods222and the second continuous output voltage ramps226correspond to the second continuous ramp portion212(of the second preliminary ramp periods211), the second preliminary voltage ramps214, or the second initial portion217of the second preliminary voltage ramps214. Each of the first and second continuous output voltage ramps225and226continuously ramps from the start voltage Vstart (i.e., the first voltage level205) to the end voltage Vend within the first and second continuous output ramp periods221and222(or the first and second time periods223and224). The first continuous ramp periods221are produced from the first preliminary ramp periods201of the first preliminary voltage ramp signal Vc1aduring the first time periods223, although the first preliminary voltage ramps204continuously ramp not only during but also beyond (e.g., after, as shown) the first time periods223. The second continuous ramp periods222are produced from the second preliminary ramp periods211of the second preliminary voltage ramp signal Vc2aduring the second time periods224, although the second preliminary voltage ramps214continuously ramp not only during but also beyond (e.g., after, as shown) the second time periods224. Thus, each of the first continuous voltage ramps225is produced from the first initial linear portion207of the corresponding first preliminary voltage ramp204, and each of the second continuous voltage ramps226is produced from the second initial linear portion217of the corresponding second preliminary voltage ramp214. In some embodiments, the set of the switches105-108enables the generation of the first preliminary voltage ramps204at least during the first time periods223, electrically connects the first capacitor node C1A to the output node114to produce the first continuous output voltage ramps225during the first time periods223, enables the generation of the second preliminary voltage ramps214at least during the second time periods224, and electrically connects the second capacitor node C2A to the output node114to produce the second continuous output voltage ramps226during the second time periods224. At a time t1, corresponding to the beginning of one of the first time periods223and the end of a previous second time period224, a falling edge of the clock Clkn1occurs along with a rising edge of the clock Clkp2and a falling edge of the clock Clkn2, since these clocks are synchronized to these edges. Additionally, in some embodiments, the clock Clkp1is still low at the time t1(as illustrated); but in other embodiments, the clock Clkp1rises at this time. The rise of the clock Clkp2and the fall of the clock Clkn2at the time t1are triggered by the comparator110. When the output voltage ramp signal VrampA reaches or passes the end voltage Vend, the comparator110outputs a voltage pulse. The voltage pulse triggers the clock input CLK of the D flip-flop109.
Since the input D is connected to the inverted outputQ, the triggering of the clock input CLK causes the output Q and the inverted outputQto reverse their high/low states, thereby resulting in rising and falling edges of the clock Clkp2and the clock Clkn2whenever the output voltage ramp signal VrampA reaches or passes the end voltage Vend, or within an acceptable delay thereafter. Additionally, the reset of the output voltage ramp signal VrampA causes the comparator110to end the voltage pulse. The fall of the clock Clkn1causes the first reset switch105to open, so that the first capacitor node C1A of the first capacitor103is not electrically connected to the reset voltage node113and the start voltage Vstart, thereby causing or allowing the current from the first current source101to be applied to periodically charge the first capacitor103and, thus, to start the continuous ramping of the first preliminary voltage ramp204. The first preliminary voltage ramp204starts ramping from the start voltage Vstart (i.e., the first voltage level205), because immediately prior to the time t1, the clock Clkn1was high, which held the first reset switch105closed, so that the first capacitor node C1A was electrically connected to the reset voltage node113and the start voltage Vstart, thereby sinking the current from the first current source101through the first reset switch105to the source of the start voltage Vstart, and thereby preventing the current from being applied to charge the first capacitor103and preventing the first preliminary voltage ramp204from ramping. Therefore, the first reset switch105is closed during at least a portion of each of the second time periods224and is open during at least the first time periods223. Additionally, at the time t1, the rise of the clock Clkp2causes the first output switch107to close, so that the first capacitor node C1A is periodically electrically connected to the output node114during the first time periods223, thereby causing the first preliminary voltage ramp204to be used to generate the first continuous voltage ramp225of the output voltage ramp signal VrampA. In other words, the closing of the first output switch107triggers the end of the previous second continuous voltage ramp226(at the end voltage Vend) of the output voltage ramp signal VrampA and a very quick reset of the output voltage ramp signal VrampA to the start voltage Vstart for the start of the first continuous voltage ramp225. (The first output switch107is closed during the first time periods223and open during the second time periods224.) Furthermore, in some embodiments, since the clock Clkp1is low, the second reset switch106is still open at the time t1, so that the previous second preliminary voltage ramp214continues to linearly ramp (at the second final portion218) passed the end voltage Vend to the second voltage level206due to the continued application of the current from the second current source102to the second capacitor104. However, the fall of the clock Clkn2causes the second output switch108to open, so that the second capacitor node C2A is not electrically connected to the output node114, thereby ensuring that the continuation of the second preliminary voltage ramp214does not interfere with the generation of the first continuous voltage ramp225. 
At a time point after the time t1(that depends on the duty cycle of the clock Clkp1), a rising edge of the clock Clkp1occurs, so that the second reset switch106is closed, and so that the second capacitor node C2A is electrically connected to the reset voltage node113and the start voltage Vstart, thereby causing the second capacitor104to be periodically discharged, and the second preliminary voltage ramp214at the second capacitor node C2A to be reset to the start voltage Vstart, i.e., a reset voltage. Since the first continuous voltage ramp225is being produced from the first preliminary voltage ramp204at this time, however, any noise or curvature due to the time it takes to discharge the second capacitor104that might occur in the second preliminary voltage ramp214does not affect the first continuous voltage ramp225. Instead, the quick reset for the start of the first continuous voltage ramp225occurred with a minimum of noise at the time t1, since the second preliminary voltage ramp214was still linearly ramping into the second final portion218. In other embodiments, if the rising edge of the clock Clkp1occurs at the time t1or the second reset switch106is triggered by the clock Clkp2instead, then the discharge of the second capacitor104would occur immediately after the quick reset for the start of the first continuous voltage ramp225. Thus, although the end of the previous second continuous voltage ramp226would occur very close to the beginning of the first continuous voltage ramp225, most or all of the noise would be cut off by the switch of the first continuous voltage ramp225from the second preliminary voltage ramp214to the first preliminary voltage ramp204. At a time t2, corresponding to the end of the first time period223and the beginning of the second time period224, a falling edge of the clock Clkp1occurs along with a falling edge of the clock Clkp2and a rising edge of the clock Clkn2, since these clocks are synchronized to these edges. Additionally, in some embodiments, the clock Clkn1is still low at the time t2(as illustrated); but in other embodiments, the clock Clkn1rises at this time. As above, the fall of the clock Clkp2and the rise of the clock Clkn2at the time t2are triggered by the comparator110when the output voltage ramp signal VrampA reaches or passes the end voltage Vend. The fall of the clock Clkp1causes the second reset switch106to open, so that the second capacitor node C2A of the second capacitor104is not electrically connected to the reset voltage node113and the start voltage Vstart, thereby causing or allowing the current from the second current source102to be applied to periodically charge the second capacitor104and, thus, to start the continuous ramping of the second preliminary voltage ramp214. The second preliminary voltage ramp214starts ramping from the start voltage Vstart (i.e., the first voltage level205), because immediately prior to the time t2, the clock Clkp1was high, which held the second reset switch106closed, so that the second capacitor node C2A was electrically connected to the reset voltage node113and the start voltage Vstart, thereby sinking the current from the second current source102through the second reset switch106to the source of the start voltage Vstart, and thereby preventing the current from being applied to charge the second capacitor104and preventing the second preliminary voltage ramp214from ramping. 
Therefore, the second reset switch106is closed during at least a portion of each of the first time periods223and is open during at least the second time periods224. Additionally, at the time t2, the rise of the clock Clkn2causes the second output switch108to close, so that the second capacitor node C2A is electrically connected to the output node114, thereby causing the second preliminary voltage ramp214to be used to generate the second continuous voltage ramp226of the output voltage ramp signal VrampA. In other words, the closing of the second output switch108triggers the end of the previous first continuous voltage ramp225(at the end voltage Vend) of the output voltage ramp signal VrampA and a very quick reset of the output voltage ramp signal VrampA to the start voltage Vstart for the start of the second continuous voltage ramp226. (The second output switch108is open during the first time periods223and closed during the second time periods224.) Furthermore, in some embodiments, since the clock Clkn1is low, the first reset switch105is still open at the time t2, so that the previous first preliminary voltage ramp204continues to linearly ramp (at the first final portion208) past the end voltage Vend to the second voltage level206due to the continued application of the current from the first current source101to the first capacitor103. However, the fall of the clock Clkp2causes the first output switch107to open, so that the first capacitor node C1A is not electrically connected to the output node114, thereby ensuring that the continuation of the first preliminary voltage ramp204does not interfere with the generation of the second continuous voltage ramp226. At a time point after the time t2(that depends on the duty cycle of the clock Clkn1), a rising edge of the clock Clkn1occurs, so that the first reset switch105is closed, and so that the first capacitor node C1A is electrically connected to the reset voltage node113and the start voltage Vstart, thereby causing the first capacitor103to be periodically discharged, and the first preliminary voltage ramp204at the first capacitor node C1A to be reset to the start voltage Vstart, i.e., the reset voltage. Since the second continuous voltage ramp226is being produced from the second preliminary voltage ramp214at this time, however, any noise or curvature due to the time it takes to discharge the first capacitor103that might occur in the first preliminary voltage ramp204does not affect the second continuous voltage ramp226. Instead, the quick reset for the start of the second continuous voltage ramp226occurred with a minimum of noise at the time t2, since the first preliminary voltage ramp204was still linearly ramping into the first final portion208. In other embodiments, if the rising edge of the clock Clkn1occurs at the time t2or the first reset switch105is triggered by the clock Clkn2instead, then the discharge of the first capacitor103would occur immediately after the quick reset for the start of the second continuous voltage ramp226, so although the end of the previous first continuous voltage ramp225would occur very close to the beginning of the second continuous voltage ramp226, most or all of the noise would be cut off by the switch of the second continuous voltage ramp226from the first preliminary voltage ramp204to the second preliminary voltage ramp214. At a time t3, the above process repeats as if at the time t1.
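The alternation just described, each capacitor charging while the other is idle and the output node switching to whichever capacitor currently holds the active ramp, can be summarized in a small behavioral model. The sketch below is illustrative only: the time step, capacitance, and charging current are assumed values rather than parameters from the figures, the idle capacitor is simply held at the start voltage instead of overscanning and then slowly resetting as in the circuit, and the comparator/flip-flop action is reduced to a single threshold test.

```python
# Behavioral sketch of the two-capacitor ("ping-pong") ramp multiplexing.
# All numeric values are illustrative assumptions.
VSTART, VEND = 1.0, 2.0   # volts
C = 1e-12                 # farads (assumed capacitor value)
I = 200e-6                # amperes (assumed charging current)
DT = 0.05e-9              # seconds per simulation step (assumed)

vc = [VSTART, VSTART]     # voltages at capacitor nodes C1A and C2A
active = 0                # index of the capacitor currently driving the output node
vramp = []                # sampled output voltage ramp signal (VrampA analogue)

for _ in range(1000):
    vc[active] += I * DT / C      # the selected capacitor charges linearly
    vc[1 - active] = VSTART       # simplification: the idle capacitor sits at Vstart
    vramp.append(vc[active])
    if vc[active] >= VEND:        # comparator trips: toggle to the other capacitor,
        active = 1 - active       # which is already reset, giving a fast, quiet return

print(f"output ramps between roughly {min(vramp):.2f} V and {max(vramp):.2f} V")
```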
In this manner, the ramp generator100multiplexes at each edge of the clock Clkp2(or Clkn2) between the preliminary voltage ramp signals Vc1aand Vc2ato generate the first and second continuous voltage ramps225and226, respectively, of the output voltage ramp signal VrampA. The return or reset of the output voltage ramp signal VrampA at the end of each first and second continuous voltage ramp225and226occurs very rapidly and results in very little noise. Additionally, when the first reset switch105is closed and the first capacitor node C1A is electrically connected to the reset voltage node113and the start voltage Vstart, the first reset switch105has to sink the current from the first current source101to the source of the start voltage Vstart. Similarly, when the second reset switch106is closed and the second capacitor node C2A is electrically connected to the reset voltage node113and the start voltage Vstart, the second reset switch106has to sink the current from the second current source102to the source of the start voltage Vstart. Therefore, the first and second reset switches105and106have to be large enough to handle the level of this current. In some embodiments, however, it is advantageous that the voltage level at the first and second capacitor nodes C1A and C2A has to be pulled down only to a positive voltage of the start voltage Vstart, instead of having to be pulled all the way down to zero, which would potentially result in additional noise and power consumption. As a result of the use of the positive start voltage Vstart, the first and second reset switches105and106do not have to be as large as they would have to be if they had to sink the current to pull the voltage level all the way to ground, and they do not generate as much noise. Additionally, any noise that might be injected by the voltage pulldown to the start voltage Vstart has the entire non-ramp portion203or213to recover, so the reset of the first and second preliminary voltage ramps204and214can be done relatively slowly. An additional benefit of having the positive voltage level for the start voltage Vstart is due to the downstream electronic component (e.g., an amplifier or the downstream comparator111). The power supply for the downstream electronic component will likely be from ground (zero) to a maximum value. Many comparators, however, cannot reliably handle a lower voltage below a minimum value, such as about 500 millivolts, so the start voltage Vstart prevents the voltage level from dropping too low. For a similar reason, the end voltage Vend should not be above the maximum value of the power supply. The start voltage Vstart (e.g., about one volt) and the end voltage Vend (e.g., about two volts), therefore, place the output voltage ramp signal VrampA within the operating range (e.g., about zero to three volts) of the downstream electronic component. Additionally, the first and second output switches107and108do not experience a very high current flow, since the downstream electronic component (e.g., an amplifier or the downstream comparator111) typically does not pull much current. Therefore, the first and second output switches107and108can be relatively small, so that they inject very little noise into the output voltage ramp signal VrampA. The example embodiment ofFIGS.1and2assumes that all of the voltage ramps are positive and that the voltage ramps start at a lower fixed voltage level. 
In other embodiments, however, the circuit can be inverted, with the current sources at the bottom and negative voltage ramps that start at an upper fixed voltage level. For such embodiments,FIGS.1and2represent an inverted schematic and inverted timing diagrams. An example improved ramp generator300is shown inFIG.3, in accordance with some embodiments. The ramp generator300generally includes first and second current sources301and302, first and second capacitors303and304, first and second reset switches305and306, first and second output switches307and308, a D flip-flop309, a comparator310, and first and second ramp generator switches315and316, among other components not shown for simplicity. The ramp generator300generates an output voltage ramp signal VrampB, which ramps from a first (or start, initial, lower, minimum, or bottom) voltage level to a second (or end, final, upper, maximum, or top) voltage level. The output voltage ramp signal VrampB is typically provided to any appropriate downstream electronic component, e.g., an amplifier or the downstream comparator111(FIG.1) that compares the output voltage ramp signal VrampB with a reference voltage Vref to generate a voltage pulse signal112(FIG.1). For an application or circuit design that uses a relatively short duration voltage pulse (e.g., a few nanoseconds long) and/or that requires high precision in the rising and falling edges of the voltage pulse, the precision and linearity of the voltage ramp signal is of great importance in order to ensure that the comparator111is triggered at the precise required timing points. The output voltage ramp signal VrampB is a very precise and linear voltage ramp signal that can be used in such applications. The first current source301is connected between a voltage supply Vdd and the first ramp generator switch315. The first ramp generator switch315may be a MOSFET (e.g., PMOS) device with source and drain connected between the first current source301and a node C1B (e.g., an anode) of the first capacitor303, body connected to the first current source301, and gate connected to clock Clkn1. A cathode of the first capacitor303is connected to ground. The first reset switch305may be a MOSFET (e.g., NMOS) device with source and drain connected between the node C1B and a reset voltage node313, body connected to ground, and gate connected to clock Clkn1. The first output switch307may be a MOSFET (e.g., NMOS) device with source and drain connected between the node C1B and an output node314, body connected to ground, and gate connected to clock Clkp2. The second current source302is connected between the voltage supply Vdd and the second ramp generator switch316. The second ramp generator switch316may be a MOSFET (e.g., PMOS) device with source and drain connected between the second current source302and a node C2B (e.g., an anode) of the second capacitor304, body connected to the second current source302, and gate connected to clock Clkp1. A cathode of the second capacitor304is connected to ground. The second reset switch306may be a MOSFET (e.g., NMOS) device with source and drain connected between the node C2B and the reset voltage node313, body connected to ground, and gate connected to clock Clkp1. The second output switch308may be a MOSFET (e.g., NMOS) device with source and drain connected between the node C2B and the output node314, body connected to ground, and gate connected to clock Clkn2. The reset voltage node313is connected to receive a start voltage Vstart (having a first voltage level). 
The comparator310is connected to receive an end voltage Vend (having a second voltage level greater than the first voltage level in some embodiments) at a negative input thereof. A voltage level of the start voltage Vstart is approximately the first (or initial, lower, or bottom) voltage level of the output voltage ramp signal VrampB. The start voltage Vstart is a baseline voltage, which can be either a positive voltage, ground, or nonzero voltage. A voltage level of the end voltage Vend is approximately the second (end, final, upper, maximum, or top) voltage level of the output voltage ramp signal VrampB. Delays within some of the components of the ramp generator300may cause the second (or final, upper, or top) voltage level of the output voltage ramp signal VrampB not to be exactly the same as, but slightly greater than, the voltage level of the end voltage Vend. The comparator310is also connected to the output node314to receive the output voltage ramp signal VrampB at a positive input thereof. An output of the comparator310is connected to a clock input CLK of the D flip-flop309. An input D of the D flip-flop309is connected to an inverted outputQof the D flip-flop309. An output Q of the D flip-flop309produces the clock Clkp2, and the inverted outputQproduces the clock Clkn2. Thus, the clocks Clkp2and Clkn2are inversions of each other. In some embodiments, the clocks Clkp1and Clkn1can be the same as, and connected directly to, the clocks Clkp2and Clkn2. However, in the illustrated embodiment, the clocks Clkp1and Clkn1are separate from each other, separate from the clocks Clkp2and Clkn2, synchronized to the clocks Clkp2and Clkn2, respectively, and have a duty cycle of 50% or less, as illustrated inFIG.4. The first initial or preliminary voltage ramp signal for the ramp generator300is produced at the node C1B (i.e., a first capacitor node or a first preliminary ramp node). The second initial or preliminary voltage ramp signal for the ramp generator300is produced at the node C2B (i.e., a second capacitor node or a second preliminary ramp node). The output voltage ramp signal VrampB is produced at the output node314. Generation of the first and second preliminary voltage ramp signals and the output voltage ramp signal VrampB are described with reference toFIGS.3and4. FIG.4shows example timing diagrams for the output voltage ramp signal VrampB, the first preliminary voltage ramp signal (Vc1b), the second preliminary voltage ramp signal (Vc2b), the clock Clkn1, the clock Clkp1, and the clock Clkp2. Additionally, the clock Clkn2is simply the inversion of the clock Clkp2, so its timing diagram is omitted for simplicity. The timing diagrams were generated by a simulation running at about 300 MHz with the start voltage Vstart at about one volt and the end voltage Vend at about two volts. As shown inFIG.4, the first preliminary voltage ramp signal Vc1bhas preliminary ramp periods (e.g.,401) that include a continuous ramp portion (e.g.,402) and a non-ramp portion (e.g.,403). Each first preliminary voltage ramp (e.g.,404) (of the first preliminary voltage ramp signal Vc1b) continuously ramps from a first voltage level405to a second voltage level406within the continuous ramp portion402. The voltage level of the first preliminary voltage ramp signal Vc1bis held flat (i.e., relatively unchanging) at the first voltage level405within the non-ramp portion403. 
(It is understood that the first preliminary voltage ramp signal Vc1bis shown as an idealized ramp signal having straight lines with no curve when reset or noise at the start or end of the ramps and resets. However, the real-world ramp signal may exhibit such curves and/or noise.) The first voltage level405is generally the same as the start voltage Vstart, which is the reset or initial level at which the first preliminary voltage ramps404begin. The second voltage level406of the first preliminary voltage ramps404is shown as being greater or higher than the end voltage Vend (i.e., the second, end, final, upper, maximum, or top voltage level of the output voltage ramp signal VrampB). Thus, the first preliminary voltage ramps404have an initial (linear) portion (e.g.,407) (i.e., between the first voltage level405and the end voltage Vend) and a final (also potentially linear) portion (e.g.,408) (i.e., between the end voltage Vend and the second voltage level406). In other words, the first preliminary voltage ramps404continuously ramp from the voltage level of the start voltage Vstart to a voltage level greater than the end voltage Vend in some embodiments. Similar to the first preliminary voltage ramp signal Vc1b, the second preliminary voltage ramp signal Vc2b(180 degrees out of phase with the first preliminary voltage ramp signal Vc1b) has preliminary ramp periods (e.g.,411) that include a continuous ramp portion (e.g.,412) and a non-ramp portion (e.g.,413). Each second preliminary voltage ramp (e.g.,414) (of the second preliminary voltage ramp signal Vc2b) continuously ramps from the first voltage level405to the second voltage level406within the continuous ramp portion412. The voltage level of the second preliminary voltage ramp signal Vc2bis held flat (i.e., relatively unchanging) at the first voltage level405within the non-ramp portion413. (It is understood that the second preliminary voltage ramp signal Vc2bis shown as an idealized ramp signal having straight lines with no curve when reset or noise at the start or end of the ramps and resets. However, the real-world ramp signal may exhibit such curves and/or noise.) The first voltage level405is, thus, also the reset or initial level at which the second preliminary voltage ramps414begin. The second voltage level406of the second preliminary voltage ramps414is shown as being greater or higher than the end voltage Vend (i.e., the second, end, final, upper, maximum, or top voltage level of the output voltage ramp signal VrampB). Thus, the second preliminary voltage ramps414have an initial (linear) portion (e.g.,417) (i.e., between the first voltage level405and the end voltage Vend) and a final (also potentially linear) portion (e.g.,418) (i.e., between the end voltage Vend and the second voltage level406). In other words, like the first preliminary voltage ramps404, the second preliminary voltage ramps414continuously ramp from the voltage level of the start voltage Vstart to a voltage level greater than the end voltage Vend in some embodiments. (The ends of the first and second preliminary voltage ramps404and414overscan the end voltage Vend.) 
The second preliminary ramp periods411are about the same as the first preliminary ramp periods401, the second continuous ramp portion412is about the same as the first continuous ramp portion402, the second non-ramp portion413is about the same as the first non-ramp portion403, the second preliminary voltage ramps414are about the same as the first preliminary voltage ramps404, the second initial portion417is about the same as the first initial portion407, and the second final portion418is about the same as the first final portion408. In the illustrated example, the clock Clkn1and the clock Clkp1(which are 180 degrees out of phase with each other) have clock periods that are about the same as the preliminary ramp periods401and411(which are also 180 degrees out of phase with each other), respectively. Additionally, the duty cycle of the clocks Clkn1and Clkp1is shown as being less than 50%. (In other embodiments, the duty cycle of the clocks Clkn1and Clkp1can be about equal to 50%, such that the continuous ramp portions402and412and the non-ramp portions403and413are about equal to each other, the final portions408and418are almost nonexistent, and the second voltage level406is about the same as the end voltage Vend.) The clock Clkp2(and, thus, also the clock Clkn2) also has a clock period that is about the same as the preliminary ramp periods401and411, but it is shown with about a 50% duty cycle. The output voltage ramp signal VrampB has first and second continuous output ramp periods (e.g.,421and422) during first and second time periods (e.g.,423and424), respectively. The first and second continuous output ramp periods421and422have first and second continuous output voltage ramps (e.g.,425and426), respectively, that continuously ramp from the first voltage level of the start voltage Vstart to the second voltage level of the end voltage Vend. The first and second time periods423and424(and, thus, also the first and second continuous output ramp periods421and422and the first and second continuous output voltage ramps425and426) alternate with each other. The first time periods423correspond to the first half of the clock periods of the clock Clkp2(and the clock Clkn2), and the second time periods424correspond to the second half of the clock periods of the clock Clkp2(and the clock Clkn2). The first continuous output ramp periods421and the first continuous output voltage ramps425correspond to the first continuous ramp portion402(of the first preliminary ramp periods401), the first preliminary voltage ramps404, or the first initial portion407of the first preliminary voltage ramps404. The second continuous output ramp periods422and the second continuous output voltage ramps426correspond to the second continuous ramp portion412(of the second preliminary ramp periods411), the second preliminary voltage ramps414, or the second initial portion417of the second preliminary voltage ramps414. Each of the first and second continuous output voltage ramps425and426continuously ramps from the start voltage Vstart (i.e., the first voltage level405) to the end voltage Vend within the first and second continuous output ramp periods421and422(or the first and second time periods423and424). The first continuous ramp periods421are produced from the first preliminary ramp periods401of the first preliminary voltage ramp signal Vc1bduring the first time periods423, although the first preliminary voltage ramps404continuously ramp not only during but also beyond (e.g., after, as shown) the first time periods423.
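The timing relationships above follow directly from the charging current and the capacitor value: each preliminary ramp rises at I/C volts per second, so one continuous output ramp lasts (Vend − Vstart)·C/I. The short calculation below is only an illustration; the current and capacitance are assumed values, not values given for the ramp generator300.

    # Illustrative ramp-timing calculation (assumed component values).
    I_SRC = 300e-6                  # assumed current-source current (A)
    C = 1e-12                       # assumed ramp capacitor (F)
    V_START, V_END = 1.0, 2.0       # start and end voltages (V), as in the example text

    slope = I_SRC / C                       # ramp rate of each preliminary ramp (V/s)
    t_ramp = (V_END - V_START) / slope      # duration of one continuous output ramp (s)
    t_clk = 2.0 * t_ramp                    # one Clkp2 period spans a first and a second ramp

    print(f"ramp slope: {slope:.3g} V/s")           # 3e+08 V/s with these assumptions
    print(f"ramp duration: {t_ramp * 1e9:.2f} ns")  # ~3.33 ns, i.e. ~300 million ramps per second
    print(f"Clkp2 period: {t_clk * 1e9:.2f} ns")

With a duty cycle of the clocks Clkn1and Clkp1below 50%, the overscan beyond the end voltage Vend simply extends each preliminary ramp without changing the duration of the continuous output ramps.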
The second continuous ramp periods422are produced from the second preliminary ramp periods411of the second preliminary voltage ramp signal Vc2bduring the second time periods424, although the second preliminary voltage ramps414continuously ramp not only during but also beyond (e.g., after, as shown) the second time periods424. Thus, each of the first continuous voltage ramps425is produced from the first initial linear portion407of the corresponding first preliminary voltage ramp404, and each of the second continuous voltage ramps426is produced from the second initial linear portion417of the corresponding second preliminary voltage ramp414. In some embodiments, the set of the switches305-308,315and316enable the generation of the first preliminary voltage ramps404at least during the first time periods423, electrically connect the first capacitor node C1B to the output node314to produce the first continuous output voltage ramps425during the first time periods423, enable the generation of the second preliminary voltage ramps414at least during the second time periods424, and electrically connect the second capacitor node C2B to the output node314to produce the second continuous output voltage ramps426during the second time periods424. At a time t1, the beginning of one of the first time periods423and the end of a previous second time period424, a falling edge of the clock Clkn1occurs along with a rising edge of the clock Clkp2and a falling edge of the clock Clkn2, since these clocks are synchronized to these edges. Additionally, in some embodiments, the clock Clkp1is still low at the time t1(as illustrated); but in other embodiments, the clock Clkp1rises at this time. The rise of the clock Clkp2and the fall of the clock Clkn2at the time t1are triggered by the comparator310. When the output voltage ramp signal VrampB reaches or passes the end voltage Vend, the comparator310outputs a voltage pulse. The voltage pulse triggers the clock input CLK of the D flip-flop309. Since the input D is connected to the inverted output Q, the triggering of the clock input CLK causes the output Q and the inverted output Q to reverse their high/low states, thereby resulting in rising and falling edges of the clock Clkp2and the clock Clkn2whenever the output voltage ramp signal VrampB reaches or passes the end voltage Vend, or within an acceptable delay thereafter. Additionally, the reset of the output voltage ramp signal VrampB causes the comparator310to end the voltage pulse. The fall of the clock Clkn1causes the first ramp generator switch315to close, so that the first current source301is electrically connected to the first capacitor node C1B and the first capacitor303, thereby causing or allowing the current from the first current source301to be applied to periodically charge the first capacitor303and, thus, to start the continuous ramping of the first preliminary voltage ramp404. Additionally, the fall of the clock Clkn1causes the first reset switch305to open, so that the first capacitor node C1B of the first capacitor303is not electrically connected to the reset voltage node313and the start voltage Vstart, thereby not interfering with the current from the first current source301being applied to periodically charge the first capacitor303. 
The first preliminary voltage ramp404starts ramping from the start voltage Vstart (i.e., the first voltage level405), because immediately prior to the time t1, the clock Clkn1was high, which held the first reset switch305closed, so that the first capacitor node C1B was electrically connected to the reset voltage node313and the start voltage Vstart. Additionally, since the clock Clkn1was high immediately prior to the time t1, the first ramp generator switch315was open, so that the first capacitor node C1B was not electrically connected to the first current source301. Thus, the first reset switch305does not need to sink the current from the first current source301to the source of the start voltage Vstart in order to reset the first preliminary voltage ramp signal Vc1b, as described above for the first current source101. Instead, the open first ramp generator switch315prevents the current from being applied to charge the first capacitor303and prevents the first preliminary voltage ramp404from ramping prior to the time t1or during the second time periods424, so that the first reset switch305can hold the first capacitor node C1B to the start voltage Vstart. Therefore, the first reset switch305is closed during at least a portion of each of the second time periods424and is open during at least the first time periods423, and the first ramp generator switch315is open during at least a portion of each of the second time periods424and is closed during at least the first time periods423. Additionally, at the time t1, the rise of the clock Clkp2causes the first output switch307to close, so that the first capacitor node C1B is periodically electrically connected to the output node314during the first time periods423, thereby causing the first preliminary voltage ramp404to be used to generate the first continuous voltage ramp425of the output voltage ramp signal VrampB. In other words, the closing of the first output switch307triggers the end of the previous second continuous voltage ramp426(at the end voltage Vend) of the output voltage ramp signal VrampB and a very quick reset of the output voltage ramp signal VrampB to the start voltage Vstart for the start of the first continuous voltage ramp425. (The first output switch307is closed during the first time periods423and open during the second time periods424.) Furthermore, in some embodiments, since the clock Clkp1is low, the second reset switch306is still open and the second ramp generator switch316is still closed at the time t1, so that the previous second preliminary voltage ramp414continues to linearly ramp (at the second final portion418) past the end voltage Vend to the second voltage level406due to the continued application of the current from the second current source302to the second capacitor304. However, the fall of the clock Clkn2causes the second output switch308to open, so that the second capacitor node C2B is not electrically connected to the output node314, thereby ensuring that the continuation of the second preliminary voltage ramp414does not interfere with the generation of the first continuous voltage ramp425.
At a time point after the time t1(that depends on the duty cycle of the clock Clkp1), a rising edge of the clock Clkp1occurs, so that the second reset switch306is closed and the second ramp generator switch316is open, and so that the second capacitor node C2B is electrically connected to the reset voltage node313and the start voltage Vstart but not to the second current source302, thereby causing the second capacitor304to be periodically discharged, and the second preliminary voltage ramp414at the second capacitor node C2B to be reset, to the start voltage Vstart, i.e., a reset voltage. Since the first continuous voltage ramp425is being produced from the first preliminary voltage ramp404at this time, however, any noise or curvature due to the time it takes to discharge the second capacitor304that might occur in the second preliminary voltage ramp414does not affect the first continuous voltage ramp425. Instead, the quick reset for the start of the first continuous voltage ramp425occurred with a minimum of noise at the time t1, since the second preliminary voltage ramp414was still linearly ramping into the second final portion418. In other embodiments, if the rising edge of the clock Clkp1occurs at the time t1or the second reset switch306and the second ramp generator switch316are triggered by the clock Clkp2instead, then the discharge of the second capacitor304would occur immediately after the quick reset for the start of the first continuous voltage ramp425, so although the end of the previous second continuous voltage ramp426would occur very close to the beginning of the first continuous voltage ramp425, most or all of the noise would be cut off by the switch of the first continuous voltage ramp425from the second preliminary voltage ramp414to the first preliminary voltage ramp404. At a time t2, the end of the first time period423and the beginning of the second time period424, a falling edge of the clock Clkp1occurs along with a falling edge of the clock Clkp2and a rising edge of the clock Clkn2, since these clocks are synchronized to these edges. Additionally, in some embodiments, the clock Clkn1is still low at the time t2(as illustrated); but in other embodiments, the clock Clkn1rises at this time. As above, the fall of the clock Clkp2and the rise of the clock Clkn2at the time t2are triggered by the comparator310when the output voltage ramp signal VrampB reaches or passes the end voltage Vend. The fall of the clock Clkp1causes the second ramp generator switch316to close, so that the second current source302is electrically connected to the second capacitor node C2B and the second capacitor304, thereby causing or allowing the current from the second current source302to be applied to periodically charge the second capacitor304and, thus, to start the continuous ramping of the second preliminary voltage ramp414. Additionally, the fall of the clock Clkp1causes the second reset switch306to open, so that the second capacitor node C2B of the second capacitor304is not electrically connected to the reset voltage node313and the start voltage Vstart, thereby not interfering with the current from the second current source302being applied to periodically charge the second capacitor304. 
The second preliminary voltage ramp414starts ramping from the start voltage Vstart (i.e., the first voltage level405), because immediately prior to the time t2, the clock Clkp1was high, which held the second reset switch306closed, so that the second capacitor node C2B was electrically connected to the reset voltage node313and the start voltage Vstart. Additionally, since the clock Clkp1was high immediately prior to the time t2, the second ramp generator switch316was open, so that the second capacitor node C2B was not electrically connected to the second current source302. Thus, the second reset switch306does not need to sink the current from the second current source302to the source of the start voltage Vstart in order to reset the second preliminary voltage ramp signal Vc2b, as described above for the first current source101. Instead, the open second ramp generator switch316prevents the current from being applied to charge the second capacitor304and prevents the second preliminary voltage ramp414from ramping prior to the time t2or during the first time periods423, so that the second reset switch306can hold the second capacitor node C2B to the start voltage Vstart. Therefore, the second reset switch306is closed during at least a portion of each of the first time periods423and is open during at least the second time periods424, and the second ramp generator switch316is open during at least a portion of each of the first time periods423and is closed during at least the second time periods424. Additionally, at the time t2, the rise of the clock Clkn2causes the second output switch308to close, so that the second capacitor node C2B is electrically connected to the output node314, thereby causing the second preliminary voltage ramp414to be used to generate the second continuous voltage ramp426of the output voltage ramp signal VrampB. In other words, the closing of the second output switch308triggers the end of the previous first continuous voltage ramp425(at the end voltage Vend) of the output voltage ramp signal VrampB and a very quick reset of the output voltage ramp signal VrampB to the start voltage Vstart for the start of the second continuous voltage ramp426. (The second output switch308is open during the first time periods423and closed during the second time periods424.) Furthermore, in some embodiments, since the clock Clkn1is low, the first reset switch305is still open and the first ramp generator switch315is still closed at the time t2, so that the previous first preliminary voltage ramp404continues to linearly ramp (at the first final portion408) past the end voltage Vend to the second voltage level406due to the continued application of the current from the first current source301to the first capacitor303. However, the fall of the clock Clkp2causes the first output switch307to open, so that the first capacitor node C1B is not electrically connected to the output node314, thereby ensuring that the continuation of the first preliminary voltage ramp404does not interfere with the generation of the second continuous voltage ramp426.
At a time point after the time t2(that depends on the duty cycle of the clock Clkn1), a rising edge of the clock Clkn1occurs, so that the first reset switch305is closed and the first ramp generator switch315is open, and so that the first capacitor node C1B is electrically connected to the reset voltage node313and the start voltage Vstart but not to the first current source301, thereby causing the first capacitor303to be periodically discharged, and the first preliminary voltage ramp404at the first capacitor node C1B to be reset to the start voltage Vstart, i.e., the reset voltage. Since the second continuous voltage ramp426is being produced from the second preliminary voltage ramp414at this time, however, any noise or curvature due to the time it takes to discharge the first capacitor303that might occur in the first preliminary voltage ramp404does not affect the second continuous voltage ramp426. Instead, the quick reset for the start of the second continuous voltage ramp426occurred with a minimum of noise at the time t2, since the first preliminary voltage ramp404was still linearly ramping into the first final portion408. In other embodiments, if the rising edge of the clock Clkn1occurs at the time t2or the first reset switch305and the first ramp generator switch315are triggered by the clock Clkn2instead, then the discharge of the first capacitor303would occur immediately after the quick reset for the start of the second continuous voltage ramp426, so although the end of the previous first continuous voltage ramp425would occur very close to the beginning of the second continuous voltage ramp426, most or all of the noise would be cut off by the switch of the second continuous voltage ramp426from the first preliminary voltage ramp404to the second preliminary voltage ramp414. At a time t3, the above process repeats as if at the time t1. In this manner, the ramp generator300multiplexes at each edge of the clock Clkp2(or Clkn2) between the preliminary voltage ramp signals Vc1band Vc2bto generate the first and second continuous voltage ramps425and426, respectively, of the output voltage ramp signal VrampB. The return or reset of the output voltage ramp signal VrampB at the end of each first and second continuous voltage ramp425and426occurs very rapidly and results in very little noise. Additionally, since the open first and second ramp generator switches315and316eliminate any need for the closed first and second reset switch305and306, respectively, to sink the current from the first and second current sources301and302to the source of the start voltage Vstart, the first and second reset switches305and306do not have to be large enough to handle the level of this current. Instead, the first and second reset switches305and306can be relatively small, as needed for a relatively small current. Additionally, in some embodiments, it is advantageous that the voltage level at the first and second capacitor nodes C1B and C2B has to be pulled down only to a positive voltage of the start voltage Vstart, instead of having to be pulled all the way down to zero, which would potentially result in additional noise and power consumption. Additionally, any noise that might be injected by the voltage pulldown to the start voltage Vstart has the entire non-ramp portion403or413to recover, so the reset of the first and second preliminary voltage ramps404and414can be done relatively slowly.
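The sizing consequence mentioned above can be seen with a rough, back-of-the-envelope comparison. In the arrangement ofFIG.1, a closed reset switch must sink the current-source current in addition to discharging its capacitor, whereas inFIG.3the series ramp generator switch disconnects the current source first, leaving only the capacitor discharge. The values below (current, capacitance, voltage removed at reset, and allowed discharge time) are assumptions for illustration only.

    # Rough comparison of the current each reset switch must carry (assumed values).
    I_SRC = 300e-6          # assumed current-source current (A)
    C = 1e-12               # assumed ramp capacitor (F)
    DV = 1.0                # assumed voltage removed at reset, second level minus Vstart (V)
    T_RESET = 3e-9          # assumed discharge time, within the non-ramp portion (s)

    i_discharge = C * DV / T_RESET          # average capacitor discharge current
    i_fig1_style = I_SRC + i_discharge      # FIG. 1: reset switch also sinks the source current
    i_fig3_style = i_discharge              # FIG. 3: source is isolated by the series switch

    print(f"FIG. 1-style reset switch current: {i_fig1_style * 1e6:.0f} uA")
    print(f"FIG. 3-style reset switch current: {i_fig3_style * 1e6:.0f} uA")

Under these assumptions the FIG.3-style reset switch carries roughly half the current, and because the entire non-ramp portion is available for the discharge, the discharge time can be made long and the switch correspondingly small.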
An additional benefit of having the positive voltage level for the start voltage Vstart is due to the downstream electronic component (e.g., an amplifier or the downstream comparator111). The power supply for the downstream electronic component will likely be from ground (zero) to a maximum value. Many comparators, however, cannot reliably handle a lower voltage below a minimum value, such as about 500 millivolts, so the start voltage Vstart prevents the voltage level from dropping too low. For a similar reason, the end voltage Vend should not be above the maximum value of the power supply. The start voltage Vstart (e.g., about one volt) and the end voltage Vend (e.g., about two volts), therefore, place the output voltage ramp signal VrampB within the operating range (e.g., about zero to three volts) of the downstream electronic component. Additionally, the first and second output switches307and308do not experience a very high current flow, since the downstream electronic component (e.g., an amplifier or the downstream comparator111) typically does not pull much current. Therefore, the first and second output switches307and308can be relatively small, so that they inject very little noise into the output voltage ramp signal VrampB. The example embodiment ofFIGS.3and4assumes that all of the voltage ramps are positive and that the voltage ramps start at a lower fixed voltage level. In other embodiments, however, the circuit can be inverted, with the current sources at the bottom and negative voltage ramps that start at an upper fixed voltage level. For such embodiments,FIGS.3and4represent an inverted schematic and inverted timing diagrams. An example improved ramp generator500is shown inFIG.5, in accordance with some embodiments. The ramp generator500generally includes a current source501, first and second capacitors503and504, first and second reset switches505and506, first and second output switches507and508, a D flip-flop509, a comparator510, and first and second ramp generator switches515and516, among other components not shown for simplicity. The ramp generator500generates an output voltage ramp signal VrampC, which ramps from a first (or start, initial, lower, minimum, or bottom) voltage level to a second (or end, final, upper, maximum, or top) voltage level. The output voltage ramp signal VrampC is typically provided to any appropriate downstream electronic component, e.g., an amplifier or the downstream comparator111(FIG.1) that compares the output voltage ramp signal VrampC with a reference voltage Vref to generate a voltage pulse signal112(FIG.1). For an application or circuit design that uses a relatively short duration voltage pulse (e.g., a few nanoseconds long) and/or that requires high precision in the rising and falling edges of the voltage pulse, the precision and linearity of the voltage ramp signal is of great importance in order to ensure that the comparator111is triggered at the precise required timing points. The output voltage ramp signal VrampC is a very precise and linear voltage ramp signal that can be used in such applications. The current source501is connected between a voltage supply Vdd and the first ramp generator switch515. The first ramp generator switch515may be a MOSFET (e.g., PMOS) device with source and drain connected between the current source501and a node C1C (e.g., an anode) of the first capacitor503, body connected to the current source501, and gate connected to clock Clkn. A cathode of the first capacitor503is connected to ground. 
The first reset switch505may be a MOSFET (e.g., NMOS) device with source and drain connected between the node C1C and a reset voltage node513, body connected to ground, and gate connected to clock Clkn. The first output switch507may be a MOSFET (e.g., NMOS) device with source and drain connected between the node C1C and an output node514, body connected to ground, and gate connected to clock Clkp. The current source501is also connected between the voltage supply Vdd and the second ramp generator switch516. The second ramp generator switch516may be a MOSFET (e.g., PMOS) device with source and drain connected between the current source501and a node C2C (e.g., an anode) of the second capacitor504, body connected to the current source501, and gate connected to clock Clkp. A cathode of the second capacitor504is connected to ground. The second reset switch506may be a MOSFET (e.g., NMOS) device with source and drain connected between the node C2C and the reset voltage node513, body connected to ground, and gate connected to clock Clkp. The second output switch508may be a MOSFET (e.g., NMOS) device with source and drain connected between the node C2C and the output node514, body connected to ground, and gate connected to clock Clkn. The reset voltage node513is connected to receive a start voltage Vstart (having a first voltage level). The comparator510is connected to receive an end voltage Vend (having a second voltage level greater than the first voltage level in some embodiments) at a negative input thereof. A voltage level of the start voltage Vstart is approximately the first (or initial, lower, or bottom) voltage level of the output voltage ramp signal VrampC. The start voltage Vstart is a baseline voltage, which can be either a positive voltage, ground, or nonzero voltage. A voltage level of the end voltage Vend is approximately the second (end, final, upper, maximum, or top) voltage level of the output voltage ramp signal VrampC. Delays within some of the components of the ramp generator500may cause the second (or final, upper, or top) voltage level of the output voltage ramp signal VrampC not to be exactly the same as, but slightly greater than, the voltage level of the end voltage Vend. The comparator510is also connected to the output node514to receive the output voltage ramp signal VrampC at a positive input thereof. An output of the comparator510is connected to a clock input CLK of the D flip-flop509. An input D of the D flip-flop509is connected to an inverted output Q of the D flip-flop509. An output Q of the D flip-flop509produces the clock Clkp, and the inverted output Q produces the clock Clkn. Thus, the clocks Clkp and Clkn are inversions of each other. The first initial or preliminary voltage ramp signal for the ramp generator500is produced at the node C1C (i.e., a first capacitor node or a first preliminary ramp node). The second initial or preliminary voltage ramp signal for the ramp generator500is produced at the node C2C (i.e., a second capacitor node or a second preliminary ramp node). The output voltage ramp signal VrampC is produced at the output node514. Generation of the first and second preliminary voltage ramp signals and the output voltage ramp signal VrampC are described with reference toFIGS.5and6. FIG.6shows example timing diagrams for the output voltage ramp signal VrampC, the first preliminary voltage ramp signal (Vc1c), the second preliminary voltage ramp signal (Vc2c), and the clock Clkp.
Additionally, the clock Clkn is simply the inversion of the clock Clkp, so its timing diagram is omitted for simplicity. The timing diagrams were generated by a simulation running at about 500 MHz with the start voltage Vstart at about one volt and the end voltage Vend at about two volts. As shown inFIG.6, the first preliminary voltage ramp signal Vc1chas preliminary ramp periods (e.g.,601) that include a continuous ramp portion (e.g.,602) and a non-ramp portion (e.g.,603). Each first preliminary voltage ramp (e.g.,604) (of the first preliminary voltage ramp signal Vc1c) continuously ramps from a first voltage level605to a second voltage level606within the continuous ramp portion602. The voltage level of the first preliminary voltage ramp signal Vc1cis held flat (i.e., relatively unchanging) at the first voltage level605within the non-ramp portion603. (It is understood that the first preliminary voltage ramp signal Vc1cis shown as an idealized ramp signal having straight lines with no curve when reset or noise at the start or end of the ramps and resets. However, the real-world ramp signal may exhibit such curves and/or noise.) The first voltage level605is generally the same as the start voltage Vstart, which is the reset or initial level at which the first preliminary voltage ramps604begin. The second voltage level606of the first preliminary voltage ramps604is generally the same as the end voltage Vend (i.e., the second, end, final, upper, maximum, or top voltage level of the output voltage ramp signal VrampC). Thus, the first preliminary voltage ramps604continuously ramp from the voltage level of the start voltage Vstart to the voltage level of the end voltage Vend, without overscanning the end voltage Vend as shown above for embodiments ofFIGS.1-4. Similar to the first preliminary voltage ramp signal Vc1c, the second preliminary voltage ramp signal Vc2c(180 degrees out of phase with the first preliminary voltage ramp signal Vc1c) has preliminary ramp periods (e.g.,611) that include a continuous ramp portion (e.g.,612) and a non-ramp portion (e.g.,613). Each second preliminary voltage ramp (e.g.,614) (of the second preliminary voltage ramp signal Vc2c) continuously ramps from the first voltage level605to the second voltage level606within the continuous ramp portion612. The voltage level of the second preliminary voltage ramp signal Vc2cis held flat (i.e., relatively unchanging) at the first voltage level605within the non-ramp portion613. (It is understood that the second preliminary voltage ramp signal Vc2cis shown as an idealized ramp signal having straight lines with no curve when reset or noise at the start or end of the ramps and resets. However, the real-world ramp signal may exhibit such curves and/or noise.) The first voltage level605is, thus, also the reset or initial level at which the second preliminary voltage ramps614begin. The second voltage level606of the second preliminary voltage ramps614is generally the same as the end voltage Vend (i.e., the second, end, final, upper, maximum, or top voltage level of the output voltage ramp signal VrampC). Thus, like the first preliminary voltage ramps604, the second preliminary voltage ramps614continuously ramp from the voltage level of the start voltage Vstart to the voltage level of the end voltage Vend, without overscanning the end voltage Vend as shown above for embodiments ofFIGS.1-4. 
The second preliminary ramp periods611are about the same as the first preliminary ramp periods601, the second continuous ramp portion612is about the same as the first continuous ramp portion602, the second non-ramp portion613is about the same as the first non-ramp portion603, and the second preliminary voltage ramps614are about the same as the first preliminary voltage ramps604. In the illustrated example, the clock Clkn and the clock Clkp (which are 180 degrees out of phase with each other) have clock periods that are about the same as the preliminary ramp periods601and611(which are also 180 degrees out of phase with each other), respectively. Additionally, the duty cycle of the clocks Clkn and Clkp is shown as being about equal to 50%, such that the continuous ramp portions602and612and the non-ramp portions603and613are about equal to each other. The output voltage ramp signal VrampC has first and second continuous output ramp periods (e.g.,621and622) during first and second time periods (e.g.,623and624), respectively. The first and second continuous output ramp periods621and622have first and second continuous output voltage ramps (e.g.,625and626), respectively, that continuously ramp from the first voltage level of the start voltage Vstart to the second voltage level of the end voltage Vend. The first and second time periods623and624(and, thus, also the first and second continuous output ramp periods621and622and the first and second continuous output voltage ramps625and626) alternate with each other. The first time periods623correspond to the first half of the clock periods of the clock Clkp (and the clock Clkn), and the second time periods624correspond to the second half of the clock periods of the clock Clkp (and the clock Clkn). The first continuous output ramp periods621and the first continuous output voltage ramps625correspond to the first continuous ramp portion602(of the first preliminary ramp periods601), or the first preliminary voltage ramps604. The second continuous output ramp periods622and the second continuous output voltage ramps626correspond to the second continuous ramp portion612(of the second preliminary ramp periods611), or the second preliminary voltage ramps614. Each of the first and second continuous output voltage ramps625and626continuously ramps from the start voltage Vstart (i.e., the first voltage level605) to the end voltage Vend within the first and second continuous output ramp periods621and622(or the first and second time periods623and624). The first continuous ramp periods621are produced from the first preliminary ramp periods601of the first preliminary voltage ramp signal Vc1cduring the first time periods623. The second continuous ramp periods622are produced from the second preliminary ramp periods611of the second preliminary voltage ramp signal Vc2cduring the second time periods624. Thus, each of the first continuous voltage ramps625is produced from the corresponding first preliminary voltage ramp604, and each of the second continuous voltage ramps626is produced from the corresponding second preliminary voltage ramp614.
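Because the ramp generator500shares a single current source between the two capacitors, the ramp generator switches515and516in effect steer that one current to whichever capacitor is ramping. The Python sketch below models this steering with assumed values; the ideal switches and instantaneous reset are simplifications, and the clock Clkn is tracked implicitly as the inverse of the clock Clkp.

    # Behavioral sketch of the single shared current source being steered between
    # the two capacitors (assumed values; ideal switches; instantaneous reset).
    V_START, V_END = 1.0, 2.0
    I_SRC, C, DT = 300e-6, 1e-12, 1e-11     # assumed current, capacitance, time step

    vc1 = vc2 = V_START
    clkp = True                             # True during the first time periods

    for _ in range(1500):
        if clkp:            # Clkn low: switch 515 closed, 505 open; 516 open, 506 closed
            vc1 += I_SRC / C * DT           # the one current source charges capacitor 503
            vc2 = V_START                   # capacitor 504 is held at the reset voltage
        else:               # Clkp low: roles of the two capacitors are mirrored
            vc2 += I_SRC / C * DT
            vc1 = V_START
        vramp_c = vc1 if clkp else vc2      # output switches 507/508 select the active ramp
        if vramp_c >= V_END:                # comparator 510 pulses, D flip-flop 509 toggles
            clkp = not clkp

With the 50% duty cycle shown inFIG.6, each capacitor alternates every half clock period between being charged by the shared source and being held at the start voltage, so neither preliminary ramp overscans the end voltage Vend.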
In some embodiments, the set of the switches505-508,515and516enable the generation of the first preliminary voltage ramps604at least during the first time periods623, electrically connect the first capacitor node C1C to the output node514to produce the first continuous output voltage ramps625during the first time periods623, enable the generation of the second preliminary voltage ramps614at least during the second time periods624, and electrically connect the second capacitor node C2C to the output node514to produce the second continuous output voltage ramps626during the second time periods624. At a time t1, the beginning of one of the first time periods623and the end of a previous second time period624, a rising edge of the clock Clkp occurs along with a falling edge of the clock Clkn. The rise of the clock Clkp and the fall of the clock Clkn at the time t1are triggered by the comparator510. When the output voltage ramp signal VrampC reaches or passes the end voltage Vend, the comparator510outputs a voltage pulse. The voltage pulse triggers the clock input CLK of the D flip-flop509. Since the input D is connected to the inverted outputQ, the triggering of the clock input CLK causes the output Q and the inverted outputQto reverse their high/low states, thereby resulting in rising and falling edges of the clock Clkp and the clock Clkn whenever the output voltage ramp signal VrampC reaches or passes the end voltage Vend, or within an acceptable delay thereafter. Additionally, the reset of the output voltage ramp signal VrampC causes the comparator510to end the voltage pulse. The fall of the clock Clkn causes the first ramp generator switch515to close, so that the current source501is electrically connected to the first capacitor node C1C and the first capacitor503, thereby causing or allowing the current from the current source501to be applied to periodically charge the first capacitor503and, thus, to start the continuous ramping of the first preliminary voltage ramp604. Additionally, the fall of the clock Clkn causes the first reset switch505to open, so that the first capacitor node C1C of the first capacitor503is not electrically connected to the reset voltage node513and the start voltage Vstart, thereby not interfering with the current from the current source501being applied to periodically charge the first capacitor503. The first preliminary voltage ramp604starts ramping from the start voltage Vstart (i.e., the first voltage level605), because immediately prior to the time t1, the clock Clkn was high, which held the first reset switch505closed, so that the first capacitor node C1C was electrically connected to the reset voltage node513and the start voltage Vstart. Additionally, since the clock Clkn was high immediately prior to the time t1, the first ramp generator switch515was open, so that the first capacitor node C1C was not electrically connected to the current source501. Thus, the first reset switch505does not need to sink the current from the current source501to the source of the start voltage Vstart in order to reset the first preliminary voltage ramp signal Vc1c, as described above for the first current source101. Instead, the open first ramp generator switch515prevents the current from being applied to charge the first capacitor503and prevents the first preliminary voltage ramp604from ramping prior to the time t1or during the second time periods624, so that the first reset switch505can hold the first capacitor node C1C to the start voltage Vstart. 
Therefore, the first reset switch505is closed during at least a portion of each of the second time periods624and is open during at least the first time periods623, and the first ramp generator switch515is open during at least a portion of each of the second time periods624and is closed during at least the first time periods623. Additionally, at the time t1, the rise of the clock Clkp causes the first output switch507to close, so that the first capacitor node C1C is periodically electrically connected to the output node514during the first time periods623, thereby causing the first preliminary voltage ramp604to be used to generate the first continuous voltage ramp625of the output voltage ramp signal VrampC. In other words, the closing of the first output switch507triggers the end of the previous second continuous voltage ramp626(at the end voltage Vend) of the output voltage ramp signal VrampC and a very quick reset of the output voltage ramp signal VrampC to the start voltage Vstart for the start of the first continuous voltage ramp625. (The first output switch507is closed during the first time periods623and open during the second time periods624.) Furthermore, at the time t1, the rising edge of the clock Clkp causes the second reset switch506to close and the second ramp generator switch516to open, so that the second capacitor node C2C is electrically connected to the reset voltage node513and the start voltage Vstart but not to the current source501, thereby causing the second capacitor504to be periodically discharged, and the second preliminary voltage ramp614at the second capacitor node C2C to be reset, to the start voltage Vstart, i.e., a reset voltage. Since the rising edge of the clock Clkp occurs at the time t1, the discharge of the second capacitor504occurs immediately after the quick reset for the start of the first continuous voltage ramp625, so although the end of the previous second continuous voltage ramp626occurs very close to the beginning of the first continuous voltage ramp625, most or all of the noise is cut off by the switch of the first continuous voltage ramp625from the second preliminary voltage ramp614to the first preliminary voltage ramp604. At a time t2, the end of the first time period623and the beginning of the second time period624, a falling edge of the clock Clkp occurs along with a rising edge of the clock Clkn. As above, the fall of the clock Clkp and the rise of the clock Clkn at the time t2are triggered by the comparator510when the output voltage ramp signal VrampC reaches or passes the end voltage Vend. The fall of the clock Clkp causes the second ramp generator switch516to close, so that the current source501is electrically connected to the second capacitor node C2C and the second capacitor504, thereby causing or allowing the current from the current source501to be applied to periodically charge the second capacitor504and, thus, to start the continuous ramping of the second preliminary voltage ramp614. Additionally, the fall of the clock Clkp causes the second reset switch506to open, so that the second capacitor node C2C of the second capacitor504is not electrically connected to the reset voltage node513and the start voltage Vstart, thereby not interfering with the current from the current source501being applied to periodically charge the second capacitor504. 
The second preliminary voltage ramp614starts ramping from the start voltage Vstart (i.e., the first voltage level605), because immediately prior to the time t2, the clock Clkp was high, which held the second reset switch506closed, so that the second capacitor node C2C was electrically connected to the reset voltage node513and the start voltage Vstart. Additionally, since the clock Clkp was high immediately prior to the time t2, the second ramp generator switch516was open, so that the second capacitor node C2C was not electrically connected to the current source501. Thus, the second reset switch506does not need to sink the current from the current source501to the source of the start voltage Vstart in order to reset the second preliminary voltage ramp signal Vc2c, as described above for the second current source102. Instead, the open second ramp generator switch516prevents the current from being applied to charge the second capacitor504and prevents the second preliminary voltage ramp614from ramping prior to the time t2or during the first time periods623, so that the second reset switch506can hold the second capacitor node C2C to the start voltage Vstart. Therefore, the second reset switch506is closed during at least a portion of each of the first time periods623and is open during at least the second time periods624, and the second ramp generator switch516is open during at least a portion of each of the first time periods623and is closed during at least the second time periods624. Additionally, at the time t2, the rise of the clock Clkn causes the second output switch508to close, so that the second capacitor node C2C is electrically connected to the output node514, thereby causing the second preliminary voltage ramp614to be used to generate the second continuous voltage ramp626of the output voltage ramp signal VrampC. In other words, the closing of the second output switch508triggers the end of the previous first continuous voltage ramp625(at the end voltage Vend) of the output voltage ramp signal VrampC and a very quick reset of the output voltage ramp signal VrampC to the start voltage Vstart for the start of the second continuous voltage ramp626. (The second output switch508is open during the first time periods623and closed during the second time periods624.) Furthermore, at the time t2, the rising edge of the clock Clkn causes the first reset switch505to close and the first ramp generator switch515to open, so that the first capacitor node C1C is electrically connected to the reset voltage node513and the start voltage Vstart but not to the current source501, thereby causing the first capacitor503to be periodically discharged, and the first preliminary voltage ramp604at the first capacitor node C1C to be reset to the start voltage Vstart, i.e., the reset voltage. Since the rising edge of the clock Clkn occurs at the time t2, the discharge of the first capacitor503occurs immediately after the quick reset for the start of the second continuous voltage ramp626, so although the end of the previous first continuous voltage ramp625occurs very close to the beginning of the second continuous voltage ramp626, most or all of the noise is cut off by the switch of the second continuous voltage ramp626from the first preliminary voltage ramp604to the second preliminary voltage ramp614. At a time t3, the above process repeats as if at the time t1.
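The alternation just described is paced entirely by the D flip-flop509, whose data input is fed from its own inverted output, so each comparator pulse on the clock input simply toggles the clocks Clkp and Clkn. A minimal model of that toggle behavior is sketched below; the class name and helper method are hypothetical and are used only for illustration.

    # Minimal model of D flip-flop 509 wired as a toggle: D is tied to the
    # inverted output, so every comparator pulse on CLK flips Q (Clkp) and
    # its inverse (Clkn).  Illustrative only.
    class ToggleFlipFlop:
        def __init__(self):
            self.q = False                  # Q drives Clkp; its inverse drives Clkn

        def clock_pulse(self):
            # Rising edge from comparator 510: latch D (= inverted Q), i.e. toggle.
            self.q = not self.q
            return self.q, not self.q       # (Clkp, Clkn)

    dff = ToggleFlipFlop()
    for _ in range(4):                      # four comparator pulses = two full clock periods
        clkp, clkn = dff.clock_pulse()
        print(f"Clkp={int(clkp)}  Clkn={int(clkn)}")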
In this manner, the ramp generator500multiplexes at each edge of the clock Clkp (or Clkn) between the preliminary voltage ramp signals Vc1cand Vc2cto generate the first and second continuous voltage ramps625and626, respectively, of the output voltage ramp signal VrampC. The return or reset of the output voltage ramp signal VrampC at the end of each first and second continuous voltage ramp625and626occurs very rapidly and results in very little noise. Additionally, since the open first and second ramp generator switches515and516eliminate any need for the closed first and second reset switch505and506, respectively, to sink the current from the current source501to the source of the start voltage Vstart, the first and second reset switches505and506do not have to be large enough to handle the level of this current. Instead, the first and second reset switches505and506can be relatively small, as needed for a relatively small current. Additionally, in some embodiments, it is advantageous that the voltage level at the first and second capacitor nodes C1C and C2C has to be pulled down only to a positive voltage of the start voltage Vstart, instead of having to be pulled all the way down to zero, which would potentially result in additional noise and power consumption. Additionally, any noise that might be injected by the voltage pulldown to the start voltage Vstart has the entire non-ramp portion603or613to recover, so the reset of the first and second preliminary voltage ramps604and614can be done relatively slowly. An additional benefit of having the positive voltage level for the start voltage Vstart is due to the downstream electronic component (e.g., an amplifier or the downstream comparator111). The power supply for the downstream electronic component will likely be from ground (zero) to a maximum value. Many comparators, however, cannot reliably handle a lower voltage below a minimum value, such as about 500 millivolts, so the start voltage Vstart prevents the voltage level from dropping too low. For a similar reason, the end voltage Vend should not be above the maximum value of the power supply. The start voltage Vstart (e.g., about one volt) and the end voltage Vend (e.g., about two volts), therefore, place the output voltage ramp signal VrampC within the operating range (e.g., about zero to three volts) of the downstream electronic component. Additionally, the first and second output switches507and508do not experience a very high current flow, since the downstream electronic component (e.g., an amplifier or the downstream comparator111) typically does not pull much current. Therefore, the first and second output switches507and508can be relatively small, so that they inject very little noise into the output voltage ramp signal VrampC, as compared to a design that takes an output voltage ramp signal from a point immediately after the current sources and before the first and second ramp generator switches515and516, such that the ramp generator switches would have to be relatively large to handle the level of the current and would potentially be a source of noise for the output voltage ramp signal. The example embodiment ofFIGS.5and6assumes that all of the voltage ramps are positive and that the voltage ramps start at a lower fixed voltage level. In other embodiments, however, the circuit can be inverted, with the current sources at the bottom and negative voltage ramps that start at an upper fixed voltage level. 
For such embodiments,FIGS.5and6represent an inverted schematic and inverted timing diagrams. An example improved ramp generator700is shown inFIG.7, in accordance with some embodiments. The ramp generator700generally includes first and second output switches707and708, a D flip-flop709, first and second comparators710and711, and first and second preliminary ramp generators721and722, among other components not shown for simplicity. The ramp generator700generates an output voltage ramp signal VrampD at an output node714, which ramps from a first (or start, initial, lower, minimum, or bottom) voltage level of a start voltage VstartD to a second (or end, final, upper, maximum, or top) voltage level of an end voltage VendD. Delays within some of the components of the ramp generator700may cause the second (or final, upper, or top) voltage level of the output voltage ramp signal VrampD not to be exactly the same as, but slightly greater than, the voltage level of the end voltage VendD. The output voltage ramp signal VrampD is typically provided to any appropriate downstream electronic component, e.g., an amplifier or the downstream comparator111(FIG.1) that compares the output voltage ramp signal VrampD with a reference voltage Vref to generate a voltage pulse signal112(FIG.1). For an application or circuit design that uses a relatively short duration voltage pulse (e.g., a few nanoseconds long) and/or that requires high precision in the rising and falling edges of the voltage pulse, the precision and linearity of the voltage ramp signal is of great importance in order to ensure that the comparator111is triggered at the precise required timing points. The output voltage ramp signal VrampD is a very precise and linear voltage ramp signal that can be used in such applications. The first output switch707may be a MOSFET (e.g., NMOS) device with source and drain connected between the first preliminary ramp generator721and the output node714, body connected to ground, and gate connected to a clock ClknD. The second output switch708may be a MOSFET (e.g., NMOS) device with source and drain connected between the second preliminary ramp generator722and the output node714, body connected to ground, and gate connected to clock ClkpD. The first and second comparators710and711are connected to receive the end voltage VendD at negative inputs thereof. The first comparator710is also connected to a first preliminary output node723(or first preliminary ramp node) to receive a first preliminary voltage ramp signal Vramp1from the first preliminary ramp generator721at a positive input thereof. The second comparator711is also connected to a second preliminary output node724(or second preliminary ramp node) to receive a second preliminary voltage ramp signal Vramp2from the second preliminary ramp generator722at a positive input thereof. An output of the first comparator710is connected to a preset input PRE of the D flip-flop709. An output of the second comparator711is connected to a clear input CLR of the D flip-flop709. An input D and a clock input CLK of the D flip-flop709are connected to ground. An output Q of the D flip-flop709produces the clock ClkpD, and an inverted output Q produces the clock ClknD. Thus, the clocks ClkpD and ClknD are inversions of each other. In some embodiments, the first and second preliminary ramp generators721and722are any appropriate ramp generator circuit. 
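One point worth making explicit from the wiring just described: because the input D and the clock input CLK are grounded, the D flip-flop709behaves as a set/reset element, so the two comparator output pulses alone steer the clocks ClkpD and ClknD. The following minimal logic sketch assumes active-high pulses purely for illustration; the actual device polarities and timing are not implied.

class ClockPhaseLatch:
    """With D and CLK grounded, PRE/CLR act as set/reset for the clock phase."""
    def __init__(self):
        self.clkp = 0                      # ClkpD; ClknD is its inverse
    def update(self, pre_pulse, clr_pulse):
        if pre_pulse and not clr_pulse:    # Vramp1 crossed VendD (comparator 710)
            self.clkp = 1
        elif clr_pulse and not pre_pulse:  # Vramp2 crossed VendD (comparator 711)
            self.clkp = 0
        return self.clkp, 1 - self.clkp    # (ClkpD, ClknD)

latch = ClockPhaseLatch()
print(latch.update(pre_pulse=0, clr_pulse=1))   # -> (0, 1): second ramp finished
print(latch.update(pre_pulse=1, clr_pulse=0))   # -> (1, 0): first ramp finished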
For example, the first and second preliminary ramp generators721and722can each have a design like any of the ramp generators100,300or500, described above. Additionally, it is preferable that the first and second preliminary ramp generators721and722have the same design to ensure that the first and second preliminary voltage ramp signals Vramp1and Vramp2are almost identical to each other. Thus, the first and second preliminary voltage ramp signals Vramp1and Vramp2are like any of the output voltage ramp signals VrampA, VrampB or VrampC described above. Generation of the first and second preliminary voltage ramp signals Vramp1and Vramp2and the output voltage ramp signal VrampD are described with reference toFIGS.7-10. In accordance with a first embodiment ofFIG.7,FIG.8shows example timing diagrams for the output voltage ramp signal VrampD, the first preliminary voltage ramp signal Vramp1, the second preliminary voltage ramp signal Vramp2, the clock ClkpD, and clocks Clkp1D and Clkp2D. (In some embodiments, the clock Clkp1D and the clock Clkp2D are like the clock Clkp2inFIGS.2and4or the clock Clkp inFIG.6, depending on the implementation of the first and second preliminary ramp generators721and722.) Additionally, the clock ClknD is simply the inversion of the clock ClkpD, so its timing diagram is omitted for simplicity. The timing diagrams were generated by a simulation running at about 500 MHz with the start voltage VstartD at about 1.0 volt and the end voltage VendD at about 2.0 volts. Additionally, the preliminary start voltage Vstart and preliminary end voltage Vend (as described above for the ramp generator100,300or500) for the first and second preliminary ramp generators721and722were about 0.5 volts and 2.5 volts, respectively. As shown inFIG.8, in some embodiments, the first and second preliminary voltage ramp signals Vramp1and Vramp2have first and second preliminary ramp periods (e.g.,801and811, respectively) that are similar to the continuous ramp periods221/222,421/422or621/622, as described above. Each first and second preliminary voltage ramp (e.g.,805and815) (of the first and second preliminary voltage ramp signals Vramp1and Vramp2, respectively) continuously and linearly ramps throughout the first and second preliminary ramp periods801and811from the first voltage level of the preliminary start voltage Vstart to the second voltage level of the preliminary end voltage Vend, as described above. (Since the first and second preliminary voltage ramp signals Vramp1and Vramp2are like any of the output voltage ramp signals VrampA, VrampB or VrampC, as described above, they generally have almost straight lines with almost no curve when reset or almost no noise at the start or end of the first and second preliminary voltage ramps805and815.) The preliminary start voltage Vstart is the initial level at which the first and second preliminary voltage ramps805and815begin to ramp at the beginning of each first and second preliminary ramp period801and811, as described above, and is shown as being less or lower than the start voltage VstartD (i.e., the first, start, initial, lower, minimum, or bottom voltage level of the output voltage ramp signal VrampD). 
The preliminary end voltage Vend is the final level at which the first and second preliminary voltage ramps805and815stop ramping at the end of each first and second preliminary ramp period801and811, as described above, and is shown as being greater or higher than the end voltage VendD (i.e., the second, end, final, upper, maximum, or top voltage level of the output voltage ramp signal VrampD). Thus, the first and second preliminary voltage ramps805and815have an initial linear portion (e.g.,806and816) (i.e., between the preliminary start voltage Vstart and the start voltage VstartD, or within a first/initial portion802or812of the first or second preliminary ramp period801or811), a middle linear portion (e.g.,807and817) (i.e., between the start voltage VstartD and the end voltage VendD, or within a second/middle portion803or813of the first or second preliminary ramp period801or811), and a final linear portion (e.g.,808and818) (i.e., between the end voltage VendD and the preliminary end voltage Vend, or within a third/final portion804or814of the first or second preliminary ramp period801or811). Additionally, except for being offset from each other, the second preliminary ramp periods811are about the same as the first preliminary ramp periods801, the second preliminary voltage ramps815are about the same as the first preliminary voltage ramps805, the initial portion816is about the same as the initial portion806, the middle portion817is about the same as the middle portion807, and the final portion818is about the same as the final portion808. In the illustrated example, the clock Clkp1D and the clock Clkp2D (which are 180 degrees out of phase with each other) have clock periods that are about twice the first and second preliminary ramp periods801and811(which are also 180 degrees out of phase with each other), respectively. Additionally, the duty cycle of the clocks Clkp1D and Clkp2D is shown as being about equal to 50%, such that the first and second preliminary voltage ramps805and815are about equal to each other. On the other hand, the clock ClkpD (and, thus, also the clock ClknD) has a clock period that is about the same as the preliminary ramp periods801and811and is shown with about a 50% duty cycle. The output voltage ramp signal VrampD has first and second continuous output ramp periods (e.g.,821and822) during first and second time periods (e.g.,823and824), respectively. The first and second continuous output ramp periods821and822have first and second continuous output voltage ramps (e.g.,825and826), respectively, that continuously ramp from the first voltage level of the start voltage VstartD to the second voltage level of the end voltage VendD. The first and second time periods823and824(and, thus, also the first and second continuous output ramp periods821and822and the first and second continuous output voltage ramps825and826) alternate with each other. The first time periods823correspond to the first half of the clock periods of the clock ClkpD (and the clock ClknD), and the second time periods824correspond to the second half of the clock periods of the clock ClkpD (and the clock ClknD). The first continuous output ramp periods821and the first continuous output voltage ramps825correspond to the middle linear portion807of the first preliminary voltage ramps805and the second portion803of the first preliminary ramp period801. 
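Because the preliminary voltage ramps are nominally linear, the fraction of each preliminary ramp period occupied by the initial, middle, and final portions follows directly from the voltage levels. The short sketch below uses the simulation values quoted above (preliminary ramp from about 0.5 volts to about 2.5 volts, output ramp from about 1.0 volt to about 2.0 volts) and is illustrative only.

def portion_fractions(v_pre_start, v_pre_end, v_start, v_end):
    """Fractions of a linear preliminary ramp spent in each portion."""
    span = v_pre_end - v_pre_start
    return ((v_start - v_pre_start) / span,   # initial portion (e.g., 806/816)
            (v_end - v_start) / span,         # middle portion  (e.g., 807/817)
            (v_pre_end - v_end) / span)       # final portion   (e.g., 808/818)

initial, middle, final = portion_fractions(0.5, 2.5, 1.0, 2.0)
print(f"initial {initial:.0%}, middle {middle:.0%}, final {final:.0%}")
# prints: initial 25%, middle 50%, final 25%

The middle portion occupying about half of each preliminary ramp period is consistent with the clock ClkpD having about the same period as the preliminary ramp periods and about a 50% duty cycle.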
The second continuous output ramp periods822and the second continuous output voltage ramps826correspond to the second middle portion817of the second preliminary voltage ramps815and the second portion813of the second preliminary ramp period811. Each of the first and second continuous output voltage ramps825and826continuously ramps from the start voltage VstartD to the end voltage VendD within the first and second continuous output ramp periods821and822(or the first and second time periods823and824). The first continuous ramp periods821are produced from the first preliminary ramp periods801of the first preliminary voltage ramp signal Vramp1during the first time periods823, although the first preliminary voltage ramps805continuously ramp not only during but also beyond (e.g., before and after, as shown) the first time periods823. The second continuous ramp periods822are produced from the second preliminary ramp periods811of the second preliminary voltage ramp signal Vramp2during the second time periods824, although the second preliminary voltage ramps815continuously ramp not only during but also beyond (e.g., before and after, as shown) the second time periods824. Thus, each of the first continuous voltage ramps825is produced from the middle linear portion807of the corresponding first preliminary voltage ramp805, and each of the second continuous voltage ramps826is produced from the middle linear portion817of the corresponding second preliminary voltage ramp815. In some embodiments, the set of the switches707and708electrically connect the first preliminary output node723to the output node714to produce the first continuous output voltage ramps825during the first time periods823, and electrically connect the second preliminary output node724to the output node714to produce the second continuous output voltage ramps826during the second time periods824. At a time t1, the beginning of one of the first time periods823and the end of a previous second time period824, a falling edge of the clock ClkpD occurs along with a rising edge of the clock ClknD. The fall of the clock ClkpD and the rise of the clock ClknD at the time t1are triggered by the second comparator711. When the second preliminary voltage ramp signal Vramp2reaches or passes the end voltage VendD, the second comparator711outputs a voltage pulse. The voltage pulse triggers the clear input CLR of the D flip-flop709. Since the input D and the clock input CLK are connected to ground, the triggering of the clear input CLR causes the output Q to go low and the inverted output Q to go high, thereby resulting in a falling edge of the clock ClkpD and a rising edge of the clock ClknD whenever the second preliminary voltage ramp signal Vramp2reaches or passes the end voltage VendD, or within an acceptable delay thereafter. Additionally, the later reset of the second preliminary voltage ramp signal Vramp2causes the second comparator711to end the voltage pulse. At the time t1, the rise of the clock ClknD causes the first output switch707to close, so that the first preliminary output node723is periodically electrically connected to the output node714during the first time periods823, thereby causing the first preliminary voltage ramp805to be used to generate the first continuous voltage ramp825of the output voltage ramp signal VrampD. 
In other words, the closing of the first output switch707triggers the end of the previous second continuous voltage ramp826(at the end voltage VendD) of the output voltage ramp signal VrampD and a very quick reset of the output voltage ramp signal VrampD to the start voltage VstartD for the start of the first continuous voltage ramp825. (The first output switch707is closed during the first time periods823and open during the second time periods824.) The ramping of the first preliminary voltage ramp805had already started prior to the time t1, so the start voltage VstartD is the voltage level of the first preliminary voltage ramp805at the time t1when the switch occurs from the previous second preliminary voltage ramp815to the first preliminary voltage ramp805. Additionally, any noise that might have been generated at the start of the first preliminary voltage ramp805(i.e., in the initial linear portion806well before the first preliminary output node723is electrically connected to the output node714) will have settled out during the initial portion802of the first preliminary ramp period801. Furthermore, the fall of the clock ClkpD at the time t1causes the second output switch708to open, so that the second preliminary output node724is not electrically connected to the output node714, thereby ensuring that the continuation of the second preliminary voltage ramp signal Vramp2into the final portion818thereof does not interfere with the generation of the first continuous voltage ramp825. Also, since the second preliminary voltage ramp signal Vramp2does not reset at the time t1, but continues into the final portion818, the later reset of the second preliminary voltage ramp signal Vramp2(well after the second preliminary output node724has been electrically disconnected from the output node714) does not cause any noise in the first continuous voltage ramp825. At a time t2, the end of the first time period823and the beginning of the second time period824, a falling edge of the clock ClknD occurs along with a rising edge of the clock ClkpD. The fall of the clock ClknD and the rise of the clock ClkpD at the time t2are triggered by the first comparator710. When the first preliminary voltage ramp signal Vramp1reaches or passes the end voltage VendD, the first comparator710outputs a voltage pulse. The voltage pulse triggers the preset input PRE of the D flip-flop709. Since the input D and the clock input CLK are connected to ground, the triggering of the preset input PRE causes the output Q to go high and the inverted output Q to go low, thereby resulting in a falling edge of the clock ClknD and a rising edge of the clock ClkpD whenever the first preliminary voltage ramp signal Vramp1reaches or passes the end voltage VendD, or within an acceptable delay thereafter. Additionally, the reset of the first preliminary voltage ramp signal Vramp1causes the first comparator710to end the voltage pulse. At the time t2, the rise of the clock ClkpD causes the second output switch708to close, so that the second preliminary output node724is electrically connected to the output node714during the second time periods824, thereby causing the second preliminary voltage ramp815to be used to generate the second continuous voltage ramp826of the output voltage ramp signal VrampD. 
In other words, the closing of the second output switch708triggers the end of the previous first continuous voltage ramp825(at the end voltage VendD) of the output voltage ramp signal VrampD and a very quick reset of the output voltage ramp signal VrampD to the start voltage VstartD for the start of the second continuous voltage ramp826. (The second output switch708is open during the first time periods823and closed during the second time periods824.) The ramping of the second preliminary voltage ramp815had already started prior to the time t2, so the start voltage VstartD is the voltage level of the second preliminary voltage ramp815at the time t2when the switch occurs from the previous first preliminary voltage ramp805to the second preliminary voltage ramp815. Additionally, any noise that might have been generated at the start of the second preliminary voltage ramp815(i.e., in the initial linear portion816well before the second preliminary output node724is electrically connected to the output node714) will have settled out during the initial portion812of the second preliminary ramp period811. Furthermore, the fall of the clock ClknD at the time t2causes the first output switch707to open, so that the first preliminary output node723is not electrically connected to the output node714, thereby ensuring that the continuation of the first preliminary voltage ramp signal Vramp1into the final portion808thereof does not interfere with the generation of the second continuous voltage ramp826. Also, since the first preliminary voltage ramp signal Vramp1does not reset at the time t2, but continues into the final portion808, the later reset of the first preliminary voltage ramp signal Vramp1(well after the first preliminary output node723has been electrically disconnected from the output node714) does not cause any noise in the second continuous voltage ramp826. At a time t3, the above process repeats as if at the time t1. In this manner, the ramp generator700multiplexes at each edge of the clock ClkpD (or ClknD) between the first and second preliminary voltage ramp signals Vramp1and Vramp2to generate the first and second continuous voltage ramps825and826, respectively, of the output voltage ramp signal VrampD. The return or reset of the output voltage ramp signal VrampD at the end of each first and second continuous voltage ramp825and826occurs very rapidly and results in very little noise. Additionally, the first and second continuous voltage ramps825and826exhibit a very high degree of linearity, potentially even higher than that of the first and second continuous voltage ramps225,226,425,426,625and626, due to the fact that the first and second continuous voltage ramps825and826are taken from the middle linear portions807and817(a linear “sweet spot”) of the first and second preliminary voltage ramps805and815, well separated from the reset points thereof, and where linearity of the first and second preliminary voltage ramps805and815is ensured. Additionally, the first and second output switches707and708do not experience a very high current flow, since the downstream electronic component (e.g., an amplifier or the downstream comparator111) typically does not pull much current. Therefore, the first and second output switches707and708can be relatively small, so that they inject very little noise into the output voltage ramp signal VrampD. The example embodiment ofFIGS.7and8assumes that all of the voltage ramps are positive and that the voltage ramps start at a lower fixed voltage level. 
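Before turning to the inverted and alternative embodiments, the multiplexing just described can be summarized in a short behavioral model. The sketch below is illustrative only: the preliminary ramps are ideal sawtooth waveforms, the voltage levels are the ones quoted for the simulation, and the toggling sel variable stands in for the comparators710and711and the D flip-flop709.

def ramp700_model(v_pre_start=0.5, v_pre_end=2.5, v_end_d=2.0,
                  period=2e-9, dt=1e-12, n_steps=8000):
    slope = (v_pre_end - v_pre_start) / period

    def prelim(t, offset):               # ideal sawtooth preliminary ramp
        return v_pre_start + slope * ((t - offset) % period)

    sel, vout = 0, []
    for k in range(n_steps):
        t = k * dt
        vr1 = prelim(t, 0.0)             # Vramp1
        vr2 = prelim(t, period / 2)      # Vramp2, half a period later
        vout.append(vr1 if sel == 0 else vr2)   # output switches 707/708
        if vout[-1] >= v_end_d:          # comparator 710/711 fires and the
            sel = 1 - sel                # flip-flop toggles ClkpD/ClknD
    return vout

vout = ramp700_model()
steady = vout[2000:]                     # skip the initial transient
print(f"steady-state output span: {min(steady):.2f} V to {max(steady):.2f} V")

Note that the start voltage of each output ramp (about 1.0 volt here) is simply the level that the newly selected preliminary ramp has already reached when the switch occurs, as described above.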
In other embodiments, however, the circuit can be inverted, with the current sources at the bottom and negative voltage ramps that start at an upper fixed voltage level. For such embodiments,FIGS.7and8represent an inverted schematic and inverted timing diagrams. In accordance with a second embodiment ofFIG.7,FIG.9shows alternative example timing diagrams for an output voltage ramp signal VrampE, the first preliminary voltage ramp signal Vramp1, the second preliminary voltage ramp signal Vramp2, a clock ClkpE, and clocks Clkp1D and Clkp2D. Additionally, a clock ClknE is simply the inversion of the clock ClkpE, so its timing diagram is omitted for simplicity. The output voltage ramp signal VrampE, the clock ClkpE, and the clock ClknE take the place of the output voltage ramp signal VrampD, the clock ClkpD, and the clock ClknD, respectively, in the above description ofFIGS.7and8; but the first preliminary voltage ramp signal Vramp1, the second preliminary voltage ramp signal Vramp2, and clocks Clkp1D and Clkp2D remain the same as described above with respect toFIGS.7and8. The timing diagrams were generated by a simulation running at about 500 MHz with the start voltage VstartE at about 1.2 volt and the end voltage VendE at about 2.2 volts. Additionally, the preliminary start voltage Vstart and preliminary end voltage Vend (as described above for the ramp generator100,300or500) for the first and second preliminary ramp generators721and722were about 0.5 volts and 2.5 volts, respectively. As shown inFIG.9, in some embodiments, the first and second preliminary voltage ramp signals Vramp1and Vramp2have the first and second preliminary ramp periods801and811, respectively, with the first and second preliminary voltage ramps805and815, respectively, as described above with respect toFIG.8. The preliminary start voltage Vstart, described above, is shown as being less or lower than the start voltage VstartE (i.e., the first, start, initial, lower, minimum, or bottom voltage level of the output voltage ramp signal VrampE). The preliminary end voltage Vend, described above, is shown as being greater or higher than the end voltage VendE (i.e., the second, end, final, upper, maximum, or top voltage level of the output voltage ramp signal VrampE). Thus, the first and second preliminary voltage ramps805and815have an initial linear portion (e.g.,906and916) (i.e., between the preliminary start voltage Vstart and the start voltage VstartE, or within a first/initial portion902or912of the first or second preliminary ramp period801or811), a middle linear portion (e.g.,907and917) (i.e., between the start voltage VstartE and the end voltage VendE, or within a second/middle portion903or913of the first or second preliminary ramp period801or811), and a final linear portion (e.g.,908and918) (i.e., between the end voltage VendE and the preliminary end voltage Vend, or within a third/final portion904or914of the first or second preliminary ramp period801or811). Additionally, except for being offset from each other, the initial portion916is about the same as the initial portion906, the middle portion917is about the same as the middle portion907, and the final portion918is about the same as the final portion908. Similar to the clock ClkpD, the clock ClkpE (and, thus, also the clock ClknE) has a clock period that is about the same as the preliminary ramp periods801and811and is shown with about a 50% duty cycle. 
The output voltage ramp signal VrampE has first and second continuous output ramp periods (e.g.,921and922) during first and second time periods (e.g.,923and924), respectively. The first and second continuous output ramp periods921and922have first and second continuous output voltage ramps (e.g.,925and926), respectively, that continuously ramp from the first voltage level of the start voltage VstartE to the second voltage level of the end voltage VendE. The first and second time periods923and924(and, thus, also the first and second continuous output ramp periods921and922and the first and second continuous output voltage ramps925and926) alternate with each other. The first time periods923correspond to the first half of the clock periods of the clock ClkpE (and the clock ClknE), and the second time periods924correspond to the second half of the clock periods of the clock ClkpE (and the clock ClknE). The first continuous output ramp periods921and the first continuous output voltage ramps925correspond to the middle linear portion907of the first preliminary voltage ramps805and the second portion903of the first preliminary ramp period801. The second continuous output ramp periods922and the second continuous output voltage ramps926correspond to the second middle portion917of the second preliminary voltage ramps815and the second portion913of the second preliminary ramp period811. Each of the first and second continuous output voltage ramps925and926continuously ramps from the start voltage VstartE to the end voltage VendE within the first and second continuous output ramp periods921and922(or the first and second time periods923and924). The first continuous ramp periods921are produced from the first preliminary ramp periods801of the first preliminary voltage ramp signal Vramp1during the first time periods923, although the first preliminary voltage ramps805continuously ramp not only during but also beyond (e.g., before and after, as shown) the first time periods923. The second continuous ramp periods922are produced from the second preliminary ramp periods811of the second preliminary voltage ramp signal Vramp2during the second time periods924, although the second preliminary voltage ramps815continuously ramp not only during but also beyond (e.g., before and after, as shown) the second time periods924. Thus, each of the first continuous voltage ramps925is produced from the middle linear portion907of the corresponding first preliminary voltage ramp805, and each of the second continuous voltage ramps926is produced from the middle linear portion917of the corresponding second preliminary voltage ramp815. In some embodiments, the set of the switches707and708electrically connect the first preliminary output node723to the output node714to produce the first continuous output voltage ramps925during the first time periods923, and electrically connect the second preliminary output node724to the output node714to produce the second continuous output voltage ramps926during the second time periods924. The above described actions that occur at the times t1, t2and t3inFIG.8are generally the same or similar to the actions that occur at the times t1, t2and t3inFIG.9. However, the end voltage VendE is greater or higher than the end voltage VendD ofFIG.8. Therefore, the first and second preliminary voltage ramps805and815cross the end voltage VendE later than was the case for crossing the end voltage VendD inFIG.8. 
As a result, the start voltage VstartE is also greater or higher than the start voltage VstartD, and the clock ClkpE and the clock ClknE are delayed relative to the clock ClkpD and the clock ClknD, respectively. As a further result, the continuous ramp periods921/922, the time periods923/924, and the continuous output voltage ramps925/926ofFIG.9are similarly delayed relative to the continuous ramp periods821/822, the time periods823/824, and the continuous output voltage ramps825/826ofFIG.8. Additionally, the middle linear portions907and917(of the corresponding first and second preliminary voltage ramps805and815, and from which the first and second continuous output voltage ramps925and926are produced) are closer to the end voltage Vend than are the middle linear portions807and817. Thus, the final portions908and918are smaller than the final portions808and818, the initial portions906and916are larger than the initial portions806and816, and the initial portions906and916are larger than the final portions908and918. The example embodiment ofFIGS.7and9assumes that all of the voltage ramps are positive and that the voltage ramps start at a lower fixed voltage level. In other embodiments, however, the circuit can be inverted, with the current sources at the bottom and negative voltage ramps that start at an upper fixed voltage level. For such embodiments,FIGS.7and9represent an inverted schematic and inverted timing diagrams. In accordance with a third embodiment ofFIG.7,FIG.10shows alternative example timing diagrams for an output voltage ramp signal VrampF, the first preliminary voltage ramp signal Vramp1, the second preliminary voltage ramp signal Vramp2, a clock ClkpF, and the clocks Clkp1D and Clkp2D. Additionally, a clock ClknF is simply the inversion of the clock ClkpF, so its timing diagram is omitted for simplicity. The output voltage ramp signal VrampF, the clock ClkpF, and the clock ClknF take the place of the output voltage ramp signal VrampD, the clock ClkpD, and the clock ClknD, respectively, in the above description ofFIGS.7and8; but the first preliminary voltage ramp signal Vramp1, the second preliminary voltage ramp signal Vramp2, and clocks Clkp1D and Clkp2D remain the same as described above with respect toFIGS.7and8. The timing diagrams were generated by a simulation running at about 500 MHz with the start voltage VstartF at about 0.8 volts and the end voltage VendF at about 1.8 volts. Additionally, the preliminary start voltage Vstart and preliminary end voltage Vend (as described above for the ramp generator100,300or500) for the first and second preliminary ramp generators721and722were about 0.5 volts and 2.5 volts, respectively. As shown inFIG.10, in some embodiments, the first and second preliminary voltage ramp signals Vramp1and Vramp2have the first and second preliminary ramp periods801and811, respectively, with the first and second preliminary voltage ramps805and815, respectively, as described above. The preliminary start voltage Vstart, described above, is shown as being less or lower than the start voltage VstartF (i.e., the first, start, initial, lower, minimum, or bottom voltage level of the output voltage ramp signal VrampF). The preliminary end voltage Vend, described above, is shown as being greater or higher than the end voltage VendF (i.e., the second, end, final, upper, maximum, or top voltage level of the output voltage ramp signal VrampF). 
Thus, the first and second preliminary voltage ramps805and815have an initial linear portion (e.g.,1006and1016) (i.e., between the preliminary start voltage Vstart and the start voltage VstartF, or within a first/initial portion1002or1012of the first or second preliminary ramp period801or811), a middle linear portion (e.g.,1007and1017) (i.e., between the start voltage VstartF and the end voltage VendF, or within a second/middle portion1003or1013of the first or second preliminary ramp period801or811), and a final linear portion (e.g.,1008and1018) (i.e., between the end voltage VendF and the preliminary end voltage Vend, or within a third/final portion1004or1014of the first or second preliminary ramp period801or811). Additionally, except for being offset from each other, the initial portion1016is about the same as the initial portion1006, the middle portion1017is about the same as the middle portion1007, and the final portion1018is about the same as the final portion1008. Similar to the clock ClkpD, the clock ClkpF (and, thus, also the clock ClknF) has a clock period that is about the same as the preliminary ramp periods801and811and is shown with about a 50% duty cycle. The output voltage ramp signal VrampF has first and second continuous output ramp periods (e.g.,1021and1022) during first and second time periods (e.g.,1023and1024), respectively. The first and second continuous output ramp periods1021and1022have first and second continuous output voltage ramps (e.g.,1025and1026), respectively, that continuously ramp from the first voltage level of the start voltage VstartF to the second voltage level of the end voltage VendF. The first and second time periods1023and1024(and, thus, also the first and second continuous output ramp periods1021and1022and the first and second continuous output voltage ramps1025and1026) alternate with each other. The first time periods1023correspond to the first half of the clock periods of the clock ClkpF (and the clock ClknF), and the second time periods1024correspond to the second half of the clock periods of the clock ClkpF (and the clock ClknF). The first continuous output ramp periods1021and the first continuous output voltage ramps1025correspond to the middle linear portion1007of the first preliminary voltage ramps805and the second portion1003of the first preliminary ramp period801. The second continuous output ramp periods1022and the second continuous output voltage ramps1026correspond to the second middle portion1017of the second preliminary voltage ramps815and the second portion1013of the second preliminary ramp period811. Each of the first and second continuous output voltage ramps1025and1026continuously ramps from the start voltage VstartF to the end voltage VendF within the first and second continuous output ramp periods1021and1022(or the first and second time periods1023and1024). The first continuous ramp periods1021are produced from the first preliminary ramp periods801of the first preliminary voltage ramp signal Vramp1during the first time periods1023, although the first preliminary voltage ramps805continuously ramp not only during but also beyond (e.g., before and after, as shown) the first time periods1023. The second continuous ramp periods1022are produced from the second preliminary ramp periods811of the second preliminary voltage ramp signal Vramp2during the second time periods1024, although the second preliminary voltage ramps815continuously ramp not only during but also beyond (e.g., before and after, as shown) the second time periods1024. 
Thus, each of the first continuous voltage ramps1025is produced from the middle linear portion1007of the corresponding first preliminary voltage ramp805, and each of the second continuous voltage ramps1026is produced from the middle linear portion1017of the corresponding second preliminary voltage ramp815. In some embodiments, the set of the switches707and708electrically connect the first preliminary output node723to the output node714to produce the first continuous output voltage ramps1025during the first time periods1023, and electrically connect the second preliminary output node724to the output node714to produce the second continuous output voltage ramps1026during the second time periods1024. The above described actions that occur at the times t1, t2and t3inFIG.8are generally the same or similar to the actions that occur at the times t1, t2and t3in FIG.10. However, the end voltage VendF is less or lower than the end voltage VendD ofFIG.8. Therefore, the first and second preliminary voltage ramps805and815cross the end voltage VendF earlier than was the case for crossing the end voltage VendD inFIG.8. As a result, the start voltage VstartF is also less or lower than the start voltage VstartD, and the clock ClkpF and the clock ClknF are earlier relative to the clock ClkpD and the clock ClknD, respectively. As a further result, the continuous ramp periods1021/1022, the time periods1023/1024, and the continuous output voltage ramps1025/1026ofFIG.10are similarly earlier relative to the continuous ramp periods821/822, the time periods823/824, and the continuous output voltage ramps825/826ofFIG.8. Additionally, the middle linear portions1007and1017(of the corresponding first and second preliminary voltage ramps805and815, and from which the first and second continuous output voltage ramps1025and1026are produced) are closer to the start voltage Vstart than are the middle linear portions807and817. Thus, the final portions1008and1018are larger than the final portions808and818, the initial portions1006and1016are smaller than the initial portions806and816, and the final portions1008and1018are larger than the initial portions1006and1016. The example embodiment ofFIGS.7and10assumes that all of the voltage ramps are positive and that the voltage ramps start at a lower fixed voltage level. In other embodiments, however, the circuit can be inverted, with the current sources at the bottom and negative voltage ramps that start at an upper fixed voltage level. For such embodiments,FIGS.7and10represent an inverted schematic and inverted timing diagrams. Whereas the embodiment ofFIG.8takes the first and second continuous output voltage ramps825and826from almost a center portion of the corresponding first and second preliminary voltage ramps805and815, embodiments ofFIG.9take the first and second continuous output voltage ramps925and926from a later portion of the corresponding first and second preliminary voltage ramps905and915that is closer to the end voltage Vend, and embodiments ofFIG.10take the first and second continuous output voltage ramps1025and1026from an earlier portion of the corresponding first and second preliminary voltage ramps1005and1015that is closer to the start voltage Vstart. 
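The contrast among the embodiments ofFIGS.8,9and10can be made concrete with the simulated voltage levels quoted above. The following short computation is illustrative only; it shows how moving the output window shifts the used slice of the 0.5-volt to 2.5-volt preliminary ramp toward its end or its beginning.

v_pre_start, v_pre_end = 0.5, 2.5
cases = {"FIG. 8 (VrampD)": (1.0, 2.0),
         "FIG. 9 (VrampE)": (1.2, 2.2),
         "FIG. 10 (VrampF)": (0.8, 1.8)}
for name, (v_start, v_end) in cases.items():
    span = v_pre_end - v_pre_start
    initial = (v_start - v_pre_start) / span
    final = (v_pre_end - v_end) / span
    print(f"{name}: initial {initial:.0%}, middle {(v_end - v_start) / span:.0%}, "
          f"final {final:.0%} of each preliminary ramp period")

The printed results (initial/final of 25%/25%, 35%/15%, and 15%/35%, respectively) match the relative portion sizes described above for the three embodiments.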
Additionally, in some embodiments, the first and second continuous output voltage ramps925and926can be taken from a portion of the corresponding first and second preliminary voltage ramps905and915that is all the way towards the end voltage Vend (such that the final portions908and918are almost nonexistent), or the first and second continuous output voltage ramps1025and1026can be taken from a portion of the corresponding first and second preliminary voltage ramps1005and1015that is all the way towards the start voltage Vstart (such that the final portions1008and1018are almost nonexistent). In other words, the embodiments ofFIGS.8,9and10illustrate that the first and second continuous output voltage ramps825/826,925/926or1025/1026can be taken from any appropriate or desired portion of the corresponding first and second preliminary voltage ramps805/815,905/915or1005/1015by setting the end voltage VendD, VendE or VendF at a corresponding voltage level. Therefore, the most linear portion of the first and second preliminary voltage ramps805/815,905/915or1005/1015(i.e., the portion that is more linear than the rest) can be selected and used to form the first and second continuous output voltage ramps825/826,925/926or1025/1026. Reference has been made in detail to embodiments of the disclosed invention, one or more examples of which have been illustrated in the accompanying figures. Each example has been provided by way of explanation of the present technology, not as a limitation of the present technology. In fact, while the specification has been described in detail with respect to specific embodiments of the invention, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily conceive of alterations to, variations of, and equivalents to these embodiments. For instance, features illustrated or described as part of one embodiment may be used with another embodiment to yield a still further embodiment. Thus, it is intended that the present subject matter covers all such modifications and variations within the scope of the appended claims and their equivalents. These and other modifications and variations to the present invention may be practiced by those of ordinary skill in the art, without departing from the scope of the present invention, which is more particularly set forth in the appended claims. Furthermore, those of ordinary skill in the art will appreciate that the foregoing description is by way of example only, and is not intended to limit the invention. | 131,206 |
11863192 | DETAILED DESCRIPTION One aspect of the disclosure relates to DFSs. DFSs according to various embodiments may be used in a variety of apparatus, subsystems, systems, modules, ICs, and the like. Without limitation, examples include RF receivers, RF transmitters, and RF transceivers. DFSs are beneficial from an area viewpoint, since the loop filter (charge pump, capacitor, and resistor in an analog implementation) is implemented digitally. A fewer number of circuits include strictly analog or mixed-signal components in the DFS, such as the TDC and the DCO. A DFS offers lower area, higher immunity to semiconductor fabrication process variations, easier programmability, and more rapid migration to new technology nodes than the conventional analog approach to frequency synthesizers. DFSs according to exemplary embodiments employ fractional-N phase-locked loops (PLLs) with residue cancellation. The fractional divider control is realized with a sigma-delta modulator (SDM). The residue cancellation is performed with a digital subtracter at the output of the TDC. In contrast, an analog PLL typically uses a DAC to implement residue cancellation. In the analog system, the linearity and gain of the residue DAC, phase detector, and charge pump have relatively high influence on the performance of analog synthesizers. In DFSs according to various embodiments, fewer parameters, such as the gain and linearity of the TDC, have relatively high impact on DFS performance. Additionally, the gain error of the TDC can be compensated completely in the digital domain. As described below in detail, a measurement of the RMS phase error after the residue cancellation is used to digitally adjust the gain of the residue path to increase or maximize the residue cancellation and minimize the RMS phase error of the DFS. As noted above, another aspect of the disclosure relates to TDCs. With a digital loop filter, as used in exemplary embodiments, the phase error between the reference input signal (refclk) and the feedback clock or signal is converted to a digital output, and used to lock the DFS. This conversion of the error signal to a digital signal is performed in exemplary embodiments by the TDC. TDCs according to exemplary embodiments can be arbitrarily long loops because the delay line is implemented as a ring. The signal propagates down the line, but can wrap around multiple times so that much longer total delays can be realized. Each time the signal wraps around, another latch in a string of latches is set to keep count of how many complete cycles are made. Such TDCs can yield relatively fine steps, e.g. 22 ps in a 40-nm semiconductor fabrication node, even though such fine resolution is used near the locked position. On the other hand, larger phase errors can tolerate a coarser TDC. Accordingly, in exemplary embodiments, a CTDC is used together with a fine TDC (FTDC or F-TDC), which can span the entire 2π range. A vernier technique, which is known to persons of ordinary skill in the art, may be used, as desired, as an enhancement to such TDCs. TDCs according to various embodiments provide a number of benefits. First, they use fully digital circuitry, which results in lower size/circuit area, and increased simplicity. Second, the use of a wrapping around architecture, described below in detail, saves area and clock signal power. 
In addition, the coarse TDC (CTDC or C-TDC) in exemplary embodiments saves size/circuit area, reduces power consumption, and reduces or minimizes jitter accumulation (versus a design that uses all fine steps to cover the entire 2π range). As noted above, another aspect of the disclosure relates to DCOs. With a digital loop filter, as is used in DFSs according to exemplary embodiments, the digital output of the loop filter (or a signal derived from the output signal of the digital loop filter) controls the oscillator, typically an LC oscillator. In exemplary embodiments, a digital-to-analog converter (DAC) is included in the LC voltage-controlled oscillator (VCO) circuitry. Controlling the VCO's frequency is achieved by varying the capacitance of the LC tank by using the output signal of the DAC. In other words, the digital output signal of the digital loop filter is used to digitally program the value of the capacitance of the LC tank and, hence, the VCO's output frequency. Conventional techniques to digitally control the frequency of an LC oscillator cannot achieve fine frequency resolution by capacitor selection since capacitors would be relatively small and difficult to implement, as noted above. In a conventional implementation, the capacitors that are switched (to vary the capacitance of the LC tank) would be on the order of aF (i.e., 10−15F) range to obtain relatively fine frequency resolution. As described below in detail, the DCO topology according to exemplary embodiments uses two inductors and two sets of capacitors. Such a topology offers relatively wide tuning range and relatively fine frequency steps which, together make reasonable sizes of DCO frequency control words feasible with realizable capacitor size selection. Alternative conventional approach utilize a sigma-delta modulator to drive a switchable capacitor. The ones density of the modulator is then used to implement a fractional value of the capacitor from 0 to 1 times the actual capacitance. The sigma-delta modulation to achieve the effective value of the fractional capacitor uses digital hardware that consumes power, uses additional circuit area, and can introduce switching spurs in the clock output. DCOs according to exemplary embodiments do not employ sigma-delta modulators and, in the locked condition, infrequently toggle a capacitor, that is effectively relatively small (relatively low capacitance), to maintain phase lock of the DFS. FIG.1Ashows a circuit arrangement for a DFS10according to an exemplary embodiment. DFS10employs a negative feedback loop. More specifically, as noted above, TDC1005converts to a digital value the phase difference between reference clock refclk and feedback clock fbclk, provided by multi-modulus divider (MMD)1045. MMD1045divides the nominal (or desired) frequency of the output signal of DFS10(labeled as “LO”) by a number that can be an integer or integer plus fraction. The negative feedback loop causes the MMD output signal to have the same average frequency as the frequency of the reference signal. The negative feedback loop acts to minimize the frequency and phase errors in the output signal of DFS10. The output signal of TDC1005is provided to subtracter1015. An output signal of scaling circuit1055(described below) is provided to another input of subtracter1015. The difference between the two signals, i.e., the output signal of subtracter1015, is provided to digital loop filter1020, which performs digital filtering on the output signal of subtracter1015. 
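Referring back to the capacitor-sizing point above, the difficulty of achieving fine frequency steps in a single LC tank can be seen from the resonance relationship f = 1/(2*pi*sqrt(L*C)). The component values in the sketch below are illustrative assumptions only and are not values from the embodiments described herein.

import math

L = 1e-9            # assumed tank inductance, 1 nH
C = 2.5e-12         # assumed tank capacitance, 2.5 pF
f0 = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"nominal frequency: {f0 / 1e9:.2f} GHz")

for dC in (1e-15, 10e-18, 1e-18):        # 1 fF, 10 aF, 1 aF switched capacitor
    f1 = 1 / (2 * math.pi * math.sqrt(L * (C + dC)))
    print(f"switched C = {dC * 1e18:6.0f} aF -> frequency step = {(f0 - f1) / 1e3:8.1f} kHz")

Sub-kilohertz steps call for switched capacitors well below a femtofarad, which is why the two-inductor, two-capacitor-set topology and the DAC-based control described above are attractive.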
In exemplary embodiments, digital loop filter1020may have a desired order, such as first-order filter, second-order filter, etc., as persons of ordinary skill in the art understand. The choice of filter order and the resulting circuitry for a given implementation depends on a variety of factors, such as design specifications, performance specifications, cost, IC or device area, available technology (e.g., semiconductor fabrication technology), target markets, target end-users, etc., as persons of ordinary skill in the art will understand. The filtered signal at the output of digital loop filter1020drives DCO1025. DCO1025includes a DAC1030, which converts the output of digital loop filter1020to program an array of capacitors in the VCO circuitry of DCO1025. The output signal of DCO1025is provided to divider1035. Divider1035divides the frequency of its input signal by a desired value, such as 2 (hence the label “Div 2”) in the example shown, although other values may be used, as desired. The output signal of divider1035constitutes the output signal of DFS10, labeled as “LO.” The output signal of DFS10drives MMD1045, as noted above. Note that, depending on the desired frequency of the output signal of DFS10and the available frequency of refclk, divider1035may be omitted in some embodiments, as persons of ordinary skill in the art will understand. Furthermore, note that MMD1045may be optional in some embodiments. Specifically, if the desired output frequency of DFS10is equal (or nearly equal in a practical implementation) to the reference frequency, MMD1045may be omitted, and the output signal of DFS10fed back to the input of TDC1005. The integer and fractional values for DFS10are provided to SDM1060(e.g., if an overall value of 64.3 is desired, then the integer (N) and fractional (n) values provided to SDM1060are N=64 and n=0.3, respectively). In response, SDM1060generates an output signal sdbits, and a residue signal. The output signal sdbits is provided to delay circuit1050, which delays sdbits by a desired delay value. The delayed signal is used to control MMD1045, i.e., select the desired modulus for MMD1045. In exemplary embodiments, the delay provided by delay circuit1050is selected to match the delay of scaling circuit1055. The residue signal from SDM1060is provided to scaling circuit1055. Scaling circuit1055multiplies the residue signal by a value selected from of x1 through x4 (or other values and/or numbers of values, as desired), which represent scaling values. The scaling values scale the residue value to match the gain of TDC1005. The output of scaling circuit1055is provided to subtracter1015, as noted above. The output of scaling circuit1055is also provided to least-mean-square (LMS) adaptation circuit1040. The output of TDC1005is also provided to LMS adaptation circuit1040. The output of LMS adaptation circuit1040is used to select a scaling value in scaling circuit1055, e.g., one of x1 through x4 in the example shown. As a result, a feedback loop is formed around LMS adaptation circuit1040and scaling circuit1055, where in response to the levels of phase error at the output of TDC1005and the scaled residue from scaling circuit1055, LMS adaptation circuit1040causes changes in the gain (scaling factor) of scaling circuit1055to reduce or minimize the impact of residue on DFS10, i.e., perform residue cancellation. 
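For illustration, the FIG.1Asignal flow can be summarized in a compact discrete-time behavioral model before considering the adaptation in more detail. In the sketch below, all frequencies, gains, and resolutions are assumptions, SDM1060is modeled as a simple first-order accumulator, the residue is already expressed in TDC units (so the scaling gain is unity), and the TDC, loop filter, and DCO are ideal blocks.

f_ref = 100e6                    # reference frequency (assumed)
t_ref = 1.0 / f_ref
n_int, frac = 64, 0.3            # target divide ratio N.frac = 64.3
f_free = 6.40e9                  # DCO free-running frequency (assumed)
k_dco = 1e6                      # Hz per LSB of the loop-filter output (assumed)
kp, ki = 50.0, 3.0               # proportional/integral loop-filter gains (assumed)
tdc_lsb = 1.0 / 256              # TDC resolution in DCO cycles (assumed)

acc, full = 0, 1 << 16           # first-order accumulator standing in for the SDM
step = int(round(frac * full))
phase_err = 0.0                  # reference-to-feedback phase error, in DCO cycles
integ = 0.0
ctrl = 0.0

for cycle in range(3000):
    # SDM: this cycle's divide value for the MMD plus the residue to cancel.
    acc += step
    carry = 1 if acc >= full else 0
    acc -= carry * full
    n_div = n_int + carry
    residue = acc / full

    # One reference period of the DCO and the multi-modulus divider.
    f_dco = f_free + k_dco * ctrl
    phase_err += f_dco * t_ref - n_div

    # TDC output (quantized) minus the residue (digital subtracter).
    tdc_out = round(phase_err / tdc_lsb) * tdc_lsb
    err = tdc_out - residue

    # Digital PI loop filter driving the DCO control word.
    integ += err
    ctrl = -(kp * err + ki * integ)

print(f"final DCO frequency {f_dco / 1e9:.3f} GHz, "
      f"target {(n_int + frac) * f_ref / 1e9:.3f} GHz")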
In other words, the level of phase error at the output of TDC1005is used to select a gain (scaling factor) of scaling circuit1055to cause cancellation of the residue or effect of residue. Viewed another way, the gain or scaling factor of scaling circuit1055is selected or set so as to reduce or cancel the phase error attributable to the residue signal. Under ideal locked conditions, the phase error from TDC1005will be exactly equal to the value predicted by the scaled residue, i.e., the output of scaling circuit1055, i.e., the output of TDC1005equals the output of scaling circuit1055, which results in a zero output for subtracter1015. However, in a practical implementation, gain errors in TDC1005cause the output of subtracter1015to be finite, i.e., non-zero. LMS adaptation circuit1040tracks the magnitude of the phase error from TDC1005(the output signal of TDC1005) versus the scaled residue (the output of scaling circuit1055) and in a relatively slow manner (to allow the changes to settle in various circuitry) increments or decrements the gain of scaling circuit1055to drive the difference between the scaled residue and the TDC phase error to zero (or near zero, in a practical implementation). Thus, LMS adaptation circuit uses least-mean-square techniques combined with feedback to drive the output of subtracter1015to zero (or near zero) by changing the gain of scaling circuit1055. In this manner, scaling circuit1055operates as an adapting or adaptive scaling circuit. In some embodiments, the incremental gain change occurs once per phase measurement (i.e., once per cycle of the reference clock, refclk), and is chosen to be relatively small, for instance, less than 1% of the nominal scaling factor or gain of scaling circuit1055. In some embodiments, the adaptation or adaptive functionality of LMS adaptation circuit1040can be enabled or disabled during DFS operation. For example, in some embodiments, to prevent divergence of the LMS adaptation, LMS adaptation circuit1040may be disabled if the TDC phase error (output of TDC1005) is relatively large, indicating that the DFS has not yet achieved phase lock. The output of subtracter1015is provided to residue error circuit1010. When the feedback loop in DFS10is locked, the input to digital loop filter1020should have a zero value. Residue error circuit1010generates an output signal that represents roughly the variance of the jitter (sum of absolute values of the outputs of subtracter1015, obtained, for example, by using an integrate/dump technique), i.e., a measure of the gain match between the residue signal from SDM1060and the gain of TDC1005. The jitter represents quantized jitter of SDM1060. The output signal of residue error circuit1010is provided to jitter monitor circuit1017. By examining the jitter variance at the output of subtracter1015, as measured by residue circuit1010and monitored by jitter monitor circuit1017, a measure of the quality of the reference signal and/or the convergence of the LMS adaptation function, described above, can be obtained. Monitoring by jitter monitor circuit1017can be used by DFS10(or another block or circuit in a system or apparatus that includes DFS10) to determine potential degradation in the LO phase noise without making direct phase noise measurements. Additionally, if DFS10does not implement the LMS adaptation functionality, then the monitored jitter may be used to calibrate residue calibration circuit1005, as described below in connection withFIG.1B. 
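The adaptation described above only requires that the gain of scaling circuit1055be nudged by a small amount once per phase measurement in the direction that shrinks the subtracter output. The sketch below uses a sign-sign LMS update on synthetic data as one plausible realization; the update rule, step size, and lock gating shown are assumptions rather than features taken from the embodiments above.

import random

def sign(x):
    return (x > 0) - (x < 0)

true_tdc_gain = 1.12      # unknown TDC gain error to be absorbed (synthetic)
gain = 1.00               # scaling-circuit gain, adapted once per reference cycle
mu = 0.002                # well under 1% of the nominal gain per step
random.seed(0)

for cycle in range(5000):
    residue = random.random()                          # scaled SDM residue (synthetic)
    tdc_out = true_tdc_gain * residue + random.gauss(0.0, 0.01)
    err = tdc_out - gain * residue                     # subtracter output
    if abs(tdc_out) < 2.0:                             # crude lock gate: freeze the
        gain += mu * sign(err) * sign(residue)         # update if far from lock

print(f"adapted gain about {gain:.2f} (synthetic TDC gain error was {true_tdc_gain:.2f})")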
FIG.1Bshows a circuit arrangement for a DFS10according to another exemplary embodiment. DFS10inFIG.1Bis similar to DFS10inFIG.1A, but uses a different technique for residue cancellation. More specifically, referring again toFIG.1B, the output of residue error circuit1010is provided to calibration circuit1065. Calibration circuit1065uses the output of residue error circuit1010(roughly the variance of the jitter) to select a scale or gain for scaling circuit1055so as to cause residue cancellation (reduce or cancel or eliminate the effect of the residue on DFS10). In some embodiments, calibration circuit1065may use information or data included or contained in firmware, such as information determined during design, manufacture, test, and/or operation of DFS10or a device (e.g., an IC) that includes DFS10. Such information or data is subsequently used during operation of DFS10for residue cancellation. FIG.2shows a circuit arrangement for TDC1005according to an exemplary embodiment. TDC1005includes C-TDC1100and F-TDC1105which, together, cover the entire 2π range of phase error values. C-TDC1100covers a range over entire cycles of the reference clock, refclk. F-TDC1105implements a range centered around the lock position, such that SDM1060's quantized jitter remains within the range. The signal refclk drives an input of C-TDC1100. A signal fbdel from delay circuit1110in F-TDC1105drives another input of C-TDC1100. The delay generated by delay circuit1110is one half of the range of values of the output of F-TDC1105. The output of C-TDC1100includes a signal ctdc (having bits 2 through 7 in the example shown, although other values can be used as desired), and an early/late signal. Both output signals of C-TDC1100are provided to control circuit1115. The signal refclk also drives an input of F-TDC1105. The signal fbclk (seeFIG.1A or1B) drives delay circuit1110. A delayed version of signal fbclk is provided as signal fbdel, as noted above. The output of F-TDC1105includes a signal ftdc (having bits 0 through 5 in the example shown, although other values can be used as desired), which is provided to control circuit1115. Using signals ctdc and ftdc and the early/late signal, control circuit1115generates the output signals of TDC1005, which include a tdc signal and a sign bit signal, i.e., signbit. The tdc signal has bits 0 through 11 in the example shown, although other values can be used, as desired. The operation of control circuit1115may be better understood by reference toFIG.3, which shows a timing diagram for a TDC according to an exemplary embodiment. More specifically, the diagram shows the ranges of the C-TDC and F-TDC output signals as they relate to the refclk, fbdel, and fbclk signals. The ranges of values corresponding to early and late are also indicated. The locked condition (or the ideal condition) is indicated at the boundary between the early and late ranges. Thus,FIG.3illustrates that the C-TDC causes a number of phase steps in the range indicated as "C-TDC range" to bring the frequency of the fbclk signal closer to the frequency of the refclk signal. The F-TDC causes a number of additional phase steps in the range indicated as "F-TDC range" to bring the frequency of the fbclk signal closer to the frequency of the refclk signal and eventually into phase lock.
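A minimal behavioral sketch of how a coarse count, a fine count, and the early/late flag could be combined into a single signed phase-error word is shown below. The step ratio, word widths, and combination rule are illustrative assumptions and are not necessarily how control circuit1115forms the tdc and signbit signals.

```python
# Conceptual sketch only: combining a coarse TDC count and a fine TDC count
# into one signed phase-error word. The fine/coarse step ratio and the
# centering of the fine range around the lock point are assumptions.

def combine_tdc(ctdc_count, ftdc_count, early, fine_range=64):
    """ctdc_count: coarse count (in coarse steps)
    ftdc_count : fine count within the F-TDC range (0..fine_range-1)
    early      : True if fbdel arrived before refclk, False if late
    Returns (magnitude, signbit) of the phase error in fine-step units."""
    fine_offset = ftdc_count - fine_range // 2   # lock point at mid fine range
    total = ctdc_count * fine_range + fine_offset
    signbit = 0 if early else 1                  # early -> one polarity, late -> the other
    return abs(total), signbit
```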
Note that the "C-TDC range" straddles the "F-TDC range." In other words, the "C-TDC range" is divided into two ranges, one range that is below or before or preceding the "F-TDC range" and another range that is above or after or succeeding the "F-TDC range." Furthermore, note that in various embodiments the C-TDC phase step or steps (phase step(s) taken by C-TDC1100) are larger than the F-TDC phase step or steps (phase step(s) taken by F-TDC1105), hence the labels "coarse" TDC (C-TDC) and "fine" TDC (F-TDC), respectively. In some embodiments, the ratio of the C-TDC phase step(s) to the F-TDC phase step(s) is an integer. In some embodiments, the ratio of the C-TDC phase step(s) to the F-TDC phase step(s) is non-integer. FIG.4shows a circuit arrangement for C-TDC1100according to an exemplary embodiment. The refclk and fbdel signals drive the D and clock inputs of D-type flip-flop1210. The output of flip-flop1210constitutes the early/late signal, described above (a binary logic value of 0 indicates that the fbdel signal is early, whereas a binary logic value of 1 indicates that the fbdel signal is late). The refclk and fbdel signals also drive the inputs of control circuit1205. In response, control circuit1205generates a reset signal, which is used to reset synchronous counter1220to an initial count value. Control circuit1205also generates an enable signal for oscillator1215. In response to the enable signal (i.e., when the enable signal is asserted), oscillator1215provides clock signals to synchronous counter1220. More specifically, control circuit1205enables oscillator1215at the occurrence of the rising edge of refclk (or fbdel). Control circuit1205halts (de-asserts the enable signal) oscillator1215at the rising edge of fbdel (or refclk). The output of synchronous counter1220constitutes the ctdc signal, along with the early/late signal. FIG.5Ashows a circuit arrangement for a conventional TDC. The TDC includes a chain of delay circuits fed by an input signal (e.g., fbclk), a chain of flip-flops fed by a clock signal (e.g., refclk), and a thermometer-to-binary encoder. The operation of the circuit is known to persons of ordinary skill in the art. In the conventional approach to implementing the TDC shown inFIG.5A, the phase difference (or time difference) between two clock signals can be measured and quantized to a discrete value by passing one clock signal (CLK1) through a delay line and using the second clock signal (CLK2) to control the sampling action of the flip-flops. In essence, the transition in the second clock signal takes a snapshot of the delay element outputs and locates how far into the delay line the first clock signal has propagated. This position can then be encoded into a binary output that represents the relative time delay between the two clock signals. If a relatively large delay range is desired, the straightforward approach is simply to cascade more delay stages and add more flip-flops. Doing so, however, increases the chip area, entails driving more flip-flops with the second clock signal (with corresponding extra capacitive loading and increased power consumption), and complicates clock skew management as the second clock signal is distributed to more flip-flops. Instead of extending the length of the delay line and associated flip-flops, one can create a re-circulating delay line and associated flip-flops.
Conceptually, when the first clock transition occurs, it is launched into a first delay element in a delay circuit or delay line that includes a number of delay cells or elements. The first clock signal propagates through the delay line and when it reaches the last delay element, an inverted version of the output signal of the last delay element is fed into the first delay element and, simultaneously, a wrap counter records that one round trip has occurred through the delay elements. The first clock signal continues to propagate and wrap around the delay line until the second clock simultaneously samples the wrap count value and all the states of the delay elements. An encoder circuit then combines the flip-flop samples and produces a binary output.FIG.5Bshows one implementation of this concept, as described below in detail. More specifically, a single-ended embodiment of a re-circulating F-TDC is shown inFIG.5B. Initially, a multiplexer (MUX) is set to one position, say, position “0,” and the first clock signal (e.g., fbclk) transition enters the first delay cell. When the first clock signal reaches the last delay cell, the output signal of that delay cell signal is inverted, and the MUX is automatically reconfigured to select the re-circulated signal with another position, say, position “1.” The MUX stays in this position until a second clock signal (e.g., refclk) samples the outputs of the delay cells and the wrap counter. After sampling is completed, a reset signal clears the wrap counter, sets the MUX to position “0,” and sets all delay elements to their reset level (e.g., “0”). Referring toFIG.5B, reference signal refclk drives the clock inputs of D-type flip-flops1275, which are coupled in a cascade fashion or chain. The outputs of flip-flops1275are provided to encoder logic circuit1270. The output of encoder logic circuit1270constitutes the output of F-TDC1105, i.e., the ftdc signal (seeFIG.2). Referring again toFIG.5B, the D inputs of flip-flops1275are driven by the output signal of MUX1255, and delayed versions of that signal. More specifically, the output signal of MUX1255is provided to the D input of the first flip-flop1275. The outputs of a set of delay circuits, coupled in a cascade or chain fashion, drive the respective D inputs of the remaining flip-flops1275. The output of the last delay circuit1110drives an input of inverter1250. The output of inverter1250drives one input of MUX1255, and also a clock input of wrap counter1265. In response, wrap counter1265counts the number of times a signal has propagated through delay circuits1110. The output of wrap counter1265is provided to encoder logic circuit1270. Encoder logic circuit1270combines the wrap count value (output of wrap counter1265) with the states (Q outputs) of flip-flops1275to form a signed binary output word. The states of flip-flops1275are thermometer-to-binary encoded by encoder logic circuit1270if the wrap count is even. If the wrap count is odd, however, then the states of flip-flops1275are inverted in encoder logic circuit1270prior to the thermometer-to-binary conversion in encoder logic circuit1270. The signal fbclk drives a second input of MUX1255. The select signal of MUX1255is provided by MUX control circuit1260. If the select signal of MUX1255has a binary logic 0 value, signal fbclk is provided as the output signal of MUX1255. Conversely, if the select signal has a binary logic 1 value, the output signal of inverter1250is provided as the output signal of MUX1255. 
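The encoding performed by encoder logic circuit1270, as described above, can be sketched compactly; the way the wrap count and edge position are packed into one signed word is an assumption for illustration.

```python
# Sketch of the re-circulating delay-line encoding described above: the
# sampled delay-cell states are thermometer-to-binary encoded, inverted first
# when the wrap count is odd, and combined with the wrap count.

def encode_recirculating_tdc(sampled_states, wrap_count):
    """sampled_states: sampled flip-flop outputs (0/1), first delay cell first
    wrap_count     : number of completed trips through the delay line
    Returns the elapsed time in delay-cell units."""
    n = len(sampled_states)
    if wrap_count % 2 == 1:
        # On odd wraps the opposite logic level is propagating, so invert
        # the states before the thermometer-to-binary conversion.
        sampled_states = [1 - s for s in sampled_states]
    position = sum(sampled_states)   # thermometer code -> edge position
    return wrap_count * n + position

# Example: with 7 delay cells, samples of 0001111 after one full wrap decode
# to 7 + 3 = 10 delay-cell units.
print(encode_recirculating_tdc([0, 0, 0, 1, 1, 1, 1], wrap_count=1))
```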
MUX control circuit1260generates the select signal using the fbclk signal, the refclk signal, and the output signal of inverter1250. In exemplary embodiments, such as the embodiment shown inFIG.5B, F-TDC1105is of a re-circulating type or operates in a re-circulating manner. The re-circulating operation of F-TDC1105, including the operation of MUX control circuit1260, occurs as follows: Initially, F-TDC1105is reset with the falling edge of refclk, and MUX1255provides the fbclk signal as the output signal (i.e., the select signal has a binary logic 0 value). Initially, the fbclk clock signal propagates through the delay blocks in delay circuit1110(all delay line outputs sequentially change from 0 to 1). When the signal reaches the last delay block, the following occurs: (a) inverter1250provides binary logic 0 to MUX1255; (b) wrap counter1265increments to indicate that one trip through delay circuit1110has occurred; and (c) MUX1255switches to position 1 (provides the output signal of inverter1250), and remains in that position until F-TDC1105is reset. The output of MUX1255then propagates a binary logic zero through delay circuit1110. If a second wrap condition occurs, then wrap counter1265increments, and MUX1255propagates a binary logic 1 value through delay circuit1110. Further wrapping causes wrap counter1265to increment, and binary logic values of 1 and 0 alternately propagate through delay circuit1110. On the rising edge of refclk, all of flip-flops1275and the output value of wrap counter1265are sampled. Encoder logic circuit1270encodes the output value (or count) of wrap counter1265and the output signals of flip-flops1275, and produces a binary word that represents the time (or phase difference) between the two clock edges (fbclk and refclk). On the falling edge of refclk, the entire circuitry in F-TDC1105is reset, and the process continues as described above. FIG.6shows a circuit arrangement for digital loop filter1020according to an exemplary embodiment. The input signal to loop filter1020consists of a signal “a” (which has bits 0 through 15, i.e., a 16-bit signal, although other values may be used, as desired), and the signbit signal (i.e., a signal that indicates the sign of the “a” signal), for instance, as provided by TDC1005(seeFIG.2). Referring again toFIG.6, the signal “a” and the “signbit” signal are provided to a one's complement circuit1305. The output of one's complement circuit1305drives a first input of adder1310, while the signbit signal constitutes the carry-in (ci) input of adder1310. An output of register1325, i.e., signal yout (which, in the example shown, as bits 0 through 15, although other sizes or values may be used, as desired) drives a second input of adder1310. The sum of the inputs to adder1310is provided as signal xout which, in the example shown, has bits 0 through 15, although other sizes or values may be used, as desired. Signal xout drives the input of register1325, and signal refclk clocks register1325. The output of one's complement circuit1305is scaled by scaling circuit1315, which scales the signal by 2N. The output signal of scaling circuit1315constitutes the proportional path signal, and is provided to adder1320. The signbit signal is provided as carry-in (ci) to adder1320. The signal xout (output of adder1310) constitutes the integral path signal, and is also provided to adder1320. The sum output of adder1320drives the input of register1330, which is clocked by signal refclk. 
The output of register1330constitutes a digital control signal that is used to control DCO1025(seeFIG.1A or1B). Referring again toFIG.6, a control circuit (not shown) detects overflow and underflow situations, and properly sets the output of register1330, as appropriate. More specifically, if the carry out signal for adder1320has a binary logic 1 value and the carry in signal for adder1320has a binary logic 0 value, an overflow condition exists. Accordingly, the output of register1330is set to all ones (0xFFFF for the example shown). Conversely, if the carry in of adder1320has a binary logic 1 value, the previous most-significant bit (MSB) of the output of adder1320has a binary logic 0 value, and the new MSB of the output of adder1320has a binary logic value of 1, then an underflow condition (negative number) is detected. Accordingly, the output of register1330is set to all zeros (0x0000 for the example shown). FIG.7shows a diagram of transfer functions of various circuit blocks of a DFS according to an exemplary embodiment. The transfer functions may be used to derive an overall transfer function for DFS10. In the exemplary embodiment shown, block1375represents the transfer function of TDC1005, block1378represents the integral path of the loop filter (digital loop filter1020inFIG.1), block1380represents the proportional path of the loop filter, block1382represents a summer or adder, block1385represents the VCO or DCO, and block1388represents the feedback-path circuitry. Using the transfer functions shown, the overall transfer function may be represented as:

$$\frac{\Theta_O}{\Theta_R}=\frac{k_P k_D K_O z^{-1}\left[\left(1+\frac{k_I}{k_P}\right)-z^{-1}\right]}{1+\left[\frac{(k_I+k_P)k_D K_O}{N}-2\right]z^{-1}+\left[1-\frac{k_P k_D K_O}{N}\right]z^{-2}}$$

where

$$K_O=2\pi K_{vco}T_{ref}\qquad\text{and}\qquad k_D=\frac{1}{2\pi}\cdot\frac{T_{ref}}{\Delta_{TDC}}$$

and where Kvco represents the DCO gain, KO represents the DCO phase change, kD represents the TDC gain, kP represents the proportional path gain, kI represents the integral path gain, Tref represents the period of the reference clock signal, refclk (e.g., 26 ns in the BLE example), and ΔTDC is the nominal phase step size of the F-TDC1105(e.g., 22 ps in the BLE example). FIG.8shows a diagram of transfer functions of various circuit blocks of a DFS according to another exemplary embodiment. More specifically, the figure shows the transfer functions of various blocks in a DFS that includes an SDM and residue cancellation (e.g., as shown inFIG.1A or1B). Referring again toFIG.8, some of the blocks are the same as inFIG.7, i.e.,1375,1378,1380,1382, and1385. Block1400represents the MMD, block1405represents the SDM, and blocks1408and1410represent the processing of the SDM error output to produce the residue. The residue is scaled by block1412. Note that the LMS adaptation technique, which adapts the kD gain to compensate for the TDC gain variation with process and temperature, is not shown in this diagram to facilitate presentation. Blocks1405,1408, and1410correspond to SDM1060and delay circuit1050inFIGS.1A and1B. Using the transfer functions shown inFIG.8, the overall transfer function may be represented as:

$$\frac{\Theta_O}{\Theta_R}=\frac{k_P k_D K_O z^{-1}\left[\left(1+\frac{k_I}{k_P}\right)-z^{-1}\right]}{1+\left[\frac{(k_I+k_P)k_D K_O}{N}-2\right]z^{-1}+\left[1-\frac{k_P k_D K_O}{N}\right]z^{-2}}$$

Assuming kI=1; kP=32, 64, and 128; and a refclk frequency of 38.4 MHz (e.g., an implementation of a DFS for a Bluetooth Low-Energy (BLE) application), the VCO or DCO frequency (2·N·refclk) falls in the range of 4200-5700 MHz, which implies N values of 54-74. Using those values, and assuming Kvco is about 5 kHz/LSB, and given the above formula for kD, a TDC step size of 22.2 ps should be used.
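Returning to digital loop filter1020of FIG. 6, its behavior can be summarized in a short behavioral sketch; the 16-bit word width follows the example above, while the proportional shift value and the mid-scale starting point of the integral accumulator are assumptions for illustration.

```python
# Behavioral sketch (not the hardware implementation) of the FIG. 6 loop
# filter: a signed phase error feeds an integral accumulator and a
# proportional path, and the combined output saturates to all ones on
# overflow and to all zeros on underflow.

class DigitalLoopFilter:
    def __init__(self, prop_shift=5, width=16):
        self.max_code = (1 << width) - 1
        self.integ = 1 << (width - 1)   # integral accumulator; mid-scale start is an assumption
        self.prop_shift = prop_shift    # proportional gain kP = 2**prop_shift (e.g., 32)

    def step(self, a, signbit):
        # The one's complement circuit plus carry-in effectively negates the
        # magnitude "a" when signbit is set; model it as a signed error.
        err = -a if signbit else a
        self.integ += err                            # integral path (adder 1310 / register 1325)
        out = self.integ + (err << self.prop_shift)  # adder 1320: integral + proportional
        # Register 1330 with overflow/underflow detection: saturate the code.
        return max(0, min(self.max_code, out))

lf = DigitalLoopFilter()
dco_code = lf.step(a=40, signbit=0)   # one update per refclk cycle; the code programs DCO 1025
```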
FIG.9shows a diagram of transfer functions of various circuit blocks of a DFS according to another exemplary embodiment. The DFS in this example uses a third-order PLL, as indicated by the addition of block1390(compareFIGS.7and9). Block1390is a first-order low-pass filter that is used to reduce the high-frequency ripple from the output of summing block1382to lower the resulting phase noise and spurs at the DCO output, i.e., block1385. The parameter β is varied to change the corner frequency of the low-pass filter, i.e., block1390. InFIG.9, block1378implements the integral path, block1380implements the proportional path, and the two are combined by summing block1382. Blocks1378,1380,1382, and1390as a group are represented as the loop filter, i.e., digital loop filter1020inFIGS.1A and1B. Using the transfer functions shown, the overall transfer function may be represented as:

$$\frac{\Theta_O}{\Theta_R}=\frac{k_P k_D K_O z^{-1}\left[\left(1+\frac{k_I}{k_P}\right)-z^{-1}\right]}{1+\left[-\beta-2+\frac{k_P k_D K_O}{N}\left(1+\frac{k_I}{k_D}\right)\right]z^{-1}+\left[1+2\beta-\frac{k_P k_D K_O}{N}\right]z^{-2}-\beta z^{-3}}$$

In exemplary embodiments, second-order or third-order SDMs may be used, which may have 2, 3, 4, or other values of the number of output levels. As persons of ordinary skill in the art will understand, a number of trade-offs are made in the selection of the design and performance parameters of SDM1060inFIGS.1A and1B. The choice of such parameters and the resulting circuitry for a given implementation depends on a variety of factors, as persons of ordinary skill in the art will understand. Such factors include design specifications, performance specifications, cost, IC or device area, available technology, such as semiconductor fabrication technology, target markets, target end-users, etc. For example, using a third-order SDM results in lower quantization noise below 6.7 MHz (e.g., using the BLE example above), but digital loop filter1020would use an extra pole in its transfer function to reject higher-frequency levels of quantization noise. Using a second-order SDM, on the other hand, would allow for a simpler and wider-bandwidth digital loop filter1020. With respect to output levels, a higher number of output levels, say, 4, would accommodate relatively large dither rejection from SDM1060. Using a lower number, say, 2, on the other hand, would reduce the range of F-TDC1105(seeFIG.2), which results in reduced power consumption, reduced circuit area/size, and reduced accumulated jitter. As an illustration, and merely by way of example, for an embodiment that accommodates the BLE parameters and specifications, a second-order SDM1060with a 1-bit output may be used. Such a choice would accommodate relatively high bandwidth for transmit modulation, would reduce or minimize toggling steps of MMD1045(seeFIG.1A or1B), and would reduce or minimize the range of F-TDC1105(as opposed to multi-bit SDMs). Such an SDM would have three modes, depending on the value of n (the fractional divide parameter of the DFS). The three modes are as follows:
Mode 0: 0.25<n<0.75
Mode 1: n≤0.25
Mode 2: n≥0.75
Using the above modes keeps the fractional part (n) relatively close to the 50% level in order to reduce or minimize spurs and tonal outputs in the output signal (sdbits inFIG.1A or1B) of SDM1060.FIG.10shows operation in Mode 0. In this mode, output signal sdbits of SDM1060toggles between the values N and N+1.FIG.11shows operation in Mode 1. In this mode, output signal sdbits of SDM1060toggles between the values N−1 and N+1.FIG.12shows operation in Mode 2. In this mode, output signal sdbits of SDM1060toggles between the values N and N+2.
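The mode selection above can be expressed compactly; the sketch below simply maps the fractional divide value n to the mode and to the pair of divide values between which sdbits toggles (handling of the exact boundary values is an assumption).

```python
# Sketch of the SDM mode selection described above.

def select_sdm_mode(N, n):
    if 0.25 < n < 0.75:
        return 0, (N, N + 1)      # Mode 0: toggles between N and N+1
    elif n <= 0.25:
        return 1, (N - 1, N + 1)  # Mode 1: toggles between N-1 and N+1
    else:                         # n >= 0.75
        return 2, (N, N + 2)      # Mode 2: toggles between N and N+2

# Example: an overall divide value of 64.3 gives N=64, n=0.3, i.e., Mode 0,
# so MMD 1045 is toggled between 64 and 65.
print(select_sdm_mode(64, 0.3))
```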
In order to implement modes 0, 1, and 2, some changes are made to the circuitry and/or operating parameters of SDM1060.FIG.13shows a circuit arrangement for an SDM1060, operating in mode 0, according to an exemplary embodiment. As noted above, SDM1060receives the values of n and N as input signals. The fractional value (n) is provided to adder1060A, which receives at a second input the constant −0.5. The sum at the output of adder1060A drives an input of adder1060B, while a second input of adder1060B receives the output of 1-bit digital-to-digital converter (DDC)1060K, multiplied by −0.5 by scaling circuit1060M. DDC1060K generates at its output the value of +1 or −1, depending on the value of its input signal. The sum at the output of adder1060B drives the input of integrator1060C. The output of integrator1060C constitutes the residue output of SDM1060, and is also provided to adder1060D. The output of DDC1060K, multiplied by −1.0 by scaling circuit1060L, drives another input of adder1060D. The sum at the output of adder1060D drives the input of integrator1060F, the output of which drives one input of adder1060G. Another input of adder1060G is driven by the output of pseudo-random binary sequence (PRBS) dither circuit1060E (used to break up periodic cycles or limit cycles in SDM1060to eliminate or reduce spurs or to make the input signal of quantizer1060H appear more noise-like, as persons of ordinary skill in the art will understand). The sum at the output of adder1060G drives the input of quantizer1060H (implemented, for example, by using a comparator, as persons of ordinary skill in the art will understand). The output of quantizer1060H is provided to DDC1060K as an input signal. The sum at the output of adder1060G is quantized to a single bit by quantizer1060H and then provided to delay circuit1060I. The delayed output of delay circuit1060I drives one input of adder1060J. The input value N drives a second input of adder1060J. The sum at the output of adder1060J is provided as the output of SDM1060and is used to drive MMD1045. In the case shown, i.e., mode 0, the output toggles between N and N+1, as noted above. FIG.14shows a circuit arrangement for an SDM1060, operating in mode 1, according to an exemplary embodiment. In this mode, a scaling circuit1060N, with a gain of 0.5, is driven by input signal n, and its output drives the input of adder1060A. The second input of adder1060A is driven by the value 0. In addition, a scaling circuit1060P scales the output of integrator1060C by 2.0, and the resulting scaled value is provided as the residue output. A scaling circuit1060Q scales the output of delay circuit1060I by 2.0 and provides the resulting value to adder1060J. A third input of adder1060J is provided the value of −1.0. FIG.15shows a circuit arrangement for an SDM1060, operating in mode 2, according to an exemplary embodiment. In this mode, scaling circuit1060N has a gain of 0.5, as was the case with mode 1. The second input of adder1060A, however, is driven by the value −0.5. Similar to mode 1, scaling circuit1060P scales the output of integrator1060C by 2.0, and the resulting scaled value is provided as the residue output. Also, similar to mode 1, scaling circuit1060Q scales the output of delay circuit1060I by 2.0 and provides the resulting value to adder1060J. The third input of adder1060J is provided the value of 0. As noted above, one aspect of the disclosure relates to DCOs.
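Before turning to the DCOs, the mode 0 arrangement just described can be summarized by a compact behavioral model (a sketch only, not the patent's implementation); the dither amplitude, initial states, and exact update ordering are assumptions.

```python
import random

# Behavioral sketch of the second-order, 1-bit SDM of FIG. 13 (mode 0). Each
# step returns the divide value driven onto MMD 1045 (N or N+1) and the
# residue taken from integrator 1060C.

class SecondOrderSDM:
    def __init__(self, N, n, dither_lsb=1.0 / 64):
        self.N, self.n = N, n
        self.integ1 = 0.0   # integrator 1060C; its output is the residue
        self.integ2 = 0.0   # integrator 1060F
        self.q_prev = 0     # delayed quantizer bit (delay circuit 1060I)
        self.ddc = -1.0     # DDC 1060K output, +1 or -1
        self.dither_lsb = dither_lsb

    def step(self):
        divide_value = self.N + self.q_prev                 # adder 1060J
        self.integ1 += (self.n - 0.5) - 0.5 * self.ddc      # adders 1060A/1060B
        residue = self.integ1
        self.integ2 += self.integ1 - 1.0 * self.ddc         # adder 1060D / integrator 1060F
        dither = random.choice([-1, 1]) * self.dither_lsb   # PRBS dither 1060E
        q = 1 if (self.integ2 + dither) >= 0 else 0         # quantizer 1060H
        self.ddc = 1.0 if q else -1.0                       # DDC 1060K
        self.q_prev = q                                     # delay circuit 1060I
        return divide_value, residue

sdm = SecondOrderSDM(N=64, n=0.3)
# Averaged over many cycles, the divide value approaches N + n = 64.3.
avg = sum(sdm.step()[0] for _ in range(10000)) / 10000
print(avg)
```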
In exemplary embodiments, a DAC is included in the DCO (seeFIG.1A or1B) to program (or set or configure or adjust) the effective capacitance of the LC tank used in the VCO.FIG.16shows a circuit arrangement of a conventional LC oscillator1600, which includes inductor L, capacitor C, and back-to-back inverters1605and1610. Considering this simple LC tank oscillator in the context of the BLE example mentioned above, BLE modulation uses a frequency deviation of ±250 kHz, or about ±102 ppm. Assuming that 6 bits are used to control the value of capacitor C, a change in the value of the least-significant bit (LSB) would cause about a 7.8 kHz frequency change, i.e., about 3.2 ppm. A 3.2 ppm change in frequency means a ±6.4 ppm change in capacitance. Assuming a nominal value of 1 pF for capacitor C, a ±6.4 ppm change in capacitance implies a ±6.4 aF step, which is likely not feasible with current fabrication technologies. DCOs according to exemplary embodiments use a different topology than do conventional VCOs (seeFIG.16).FIG.17shows a circuit arrangement of a single-ended DCO1025according to an exemplary embodiment (DAC1030is not shown). DCO1025includes capacitor C. In lieu of a simple inductor, however, DCO1025uses an inductor L coupled in series with capacitor Cx to realize an effective inductance Leff. In other words, the combination of inductor L and capacitor Cx provides an effective inductance of Leff which, together with capacitor C, forms an LC tank. Inverter1605is back-to-back coupled to inverter1610. Inverter1605and inverter1610are coupled in parallel with capacitor C and with the series-coupled inductor L and capacitor Cx. By changing the values of capacitors C and Cx, the frequency of oscillation of the LC tank can be changed. As noted above, the topology shown offers a relatively wide tuning range and relatively fine frequency steps which, together, make reasonable sizes of DAC control words for capacitor Cx feasible with realizable capacitor sizes. In DCO1025, the value of Leff may be expressed as:

$$L_{eff}\simeq L\left(1-\frac{1}{\omega_o^{2}LC_x}\right)\qquad\text{or}\qquad L_{eff}\simeq L\left(1-\frac{C}{C_x}\right)$$

The step change in capacitor Cx may be expressed as:

$$\Delta C_x=\frac{\Delta L_{eff}}{L_{eff}}\cdot\frac{C_x}{C}\left(C_x-C\right)$$

The step change in the output frequency is given by:

$$\frac{\Delta f}{f_0}=-\frac{1}{2}\cdot\frac{\Delta L_{eff}}{L_{eff}}$$

The step change in capacitor Cx may therefore be expressed as:

$$\Delta C_x=-2\,\frac{\Delta f}{f_0}\cdot\frac{C_x}{C}\left(C_x-C\right)$$

Assuming that capacitor C has a capacitance of 1 pF and capacitor Cx has a capacitance of 20 pF, ΔCx would have a value of about 1.52 fF, which is about 380 times larger than the corresponding step change in the conventional circuit shown inFIG.16. The DCO topology shown inFIG.17would therefore be easier to implement. FIG.18shows a circuit arrangement for controlling the frequency of single-ended DCO1025according to an exemplary embodiment. More specifically, the figure shows DAC1030receiving a set of control signals (from digital loop filter1020, as shown inFIG.1A or1B), and using the set of control signals to vary the capacitances of capacitors C and Cx. DAC1030can drive analog voltages to control or vary the capacitances of capacitors C and Cx, assuming those capacitors are implemented as varactors. Alternatively, rather than using DAC1030, a control circuit that includes logic circuitry and switches may be used to program discrete capacitance values of capacitors C and Cx. In general, capacitors C and Cx can be realized with a combination of programmable (discrete capacitance step changes) and varactor capacitors in a number of ways, as persons of ordinary skill in the art will understand.
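Returning to the step-size expressions above, a quick numerical check (a sketch only; the 2 ppm per-step frequency resolution is an assumption taken from the resolution discussed for this example) confirms the figures quoted for the FIG. 17 topology.

```python
# Worked numerical check of the Delta-Cx expression above for the FIG. 17
# topology, using C = 1 pF, Cx = 20 pF, and an assumed 2 ppm frequency step.

C = 1e-12           # tank capacitor C
Cx = 20e-12         # series capacitor Cx
df_over_f0 = 2e-6   # frequency step (2 ppm)

delta_Cx = 2 * df_over_f0 * (Cx / C) * (Cx - C)    # magnitude of Delta Cx
print(delta_Cx)                                    # ~1.52e-15 F, i.e., about 1.52 fF

# For the conventional tank of FIG. 16, the same 2 ppm step would require
# Delta C = 2 * (df/f0) * C, i.e., about 4 aF -- roughly 380 times smaller.
delta_C_conventional = 2 * df_over_f0 * C
print(delta_Cx / delta_C_conventional)             # ~380
```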
The choice of realization for a given implementation depends on a variety of factors, as persons of ordinary skill in the art will understand. Such factors include design specifications, performance specifications, cost, IC or device area, available technology, such as semiconductor fabrication technology, target markets, target end-users, etc. Using the BLE example discussed above, assuming a frequency tuning range of ±10% (±100,000 ppm) and a DCO output signal frequency resolution (or step) of about 3.2 ppm, DAC1030would have to use about 16 bits of signals in the set of control signals. The total number of bits is partitioned between C and Cx, i.e., some of the bits are used to vary the capacitance of capacitor C, and the remaining bits in the set of control bits are used to vary the capacitance of capacitor Cx. In exemplary embodiments, a discontinuity may exist in the overall capacitance provided by capacitors C and Cx. Given that assumption, the capacitance values of capacitors C and Cxare designed to overlap (e.g., using capacitance values of capacitors C and Cxthat are non-radix 2). In addition, capacitor Cxmay be designed so that no fractional divide (as realized by MMD1045(seeFIG.1A or1B)) value of the fractional value (n) causes a change in capacitor C. Thus, for the BLE example, changes in capacitor Cxshould cover the frequency range of at least 38.4 MHz out of 2.45 GHz, or 15,600 ppm. A 2 ppm resolution in the capacitance of capacitor Cximplies 7,800 steps in capacitance value. Thus, 13 bits would be allocated to varying the capacitance value of capacitor Cx. An additional four bits would be allocated to varying the capacitance value of capacitor C.FIG.18shows this configuration. Note, however, that the choice of the total number of bits in the set of control bits, the allocation of bits to capacitor C and capacitor Cx, and other such parameters and the resulting circuitry for a given implementation depends on a variety of factors, as persons of ordinary skill in the art will understand. Such factors include design specifications, performance specifications, cost, IC or device area, available technology, such as semiconductor fabrication technology, target markets, target end-users, etc. Thus, the example shown inFIG.18is merely illustrative, and other DCO realizations may be used, as desired. Instead of single-ended DCOs, in some applications differential mode DCOs may be used, as desired.FIG.19shows a circuit arrangement of a differential mode DCO1025according to an exemplary embodiment (DAC1030is not shown). In this topology, inductor L is realized by using two inductors Laand Lb, coupled in series, as shown. In addition, capacitor Cxis realized by using three capacitors coupled in a Π-configuration (or “pi-configuration” to denote the capital Greek letter pi), which includes capacitors Cxa, Cxb, and Cxc. In the embodiment shown, capacitor Cxbhas a fixed value, and the capacitances of capacitors Cxaand Cxcare varied by DAC1030(not shown), as described above. Note that the resistors represent the parasitic series resistances of inductors Laand Lband/or the effective series resistance of capacitors that realize capacitor Cxto model passive losses in DCO1025. In some situations, the resistors have relatively small values, and may be omitted from the circuit and/or design calculations, as persons of ordinary skill in the art will understand. For the BLE example discussed above, the components have the values shown inFIG.19. 
Note the dot-convention of the two inductors, which conceptually denotes the direction in which the turns of conductor in the inductors are "wound" (or realized in some manner in an IC, etc.). For the topology inFIG.19, the dot-convention denotes that the turns of conductor in inductor La are "wound" in the opposite direction of the turns of conductor in inductor Lb (e.g., clockwise versus counterclockwise). Using this configuration, the effective inductance of inductor La, La−effective, may be represented as: La−effective=(La−M)=La(1−k), where M represents the mutual inductance between inductors La and Lb, and where k represents the coupling coefficient between inductors La and Lb. Similarly, for inductor Lb, the effective inductance of inductor Lb, Lb−effective, may be represented as Lb−effective=(Lb−M)=Lb(1−k). The dot-convention for inductors La and Lb may be changed to arrive at an alternative exemplary embodiment for a differential mode DCO.FIG.20shows a circuit arrangement for that topology (DAC1030is not shown). The circuit configuration is similar to the embodiment shown inFIG.19, except that the dot-convention for inductors La and Lb signifies that the turns of conductor in inductor La are "wound" in the same direction as the turns of conductor in inductor Lb. Using this configuration, the effective inductances of inductors La and Lb may be represented, respectively, as: La−effective=(La+M)=La(1+k), and Lb−effective=(Lb+M)=Lb(1+k). From the above description, one may note that the inductance L in the DCO topology shown inFIG.19has a lower value than it does in the DCO topology shown inFIG.20. On the other hand, the DCO topology shown inFIG.20is more immune to interfering signals that would appear as a common-mode signal to the circuitry in the VCO. The above factors may be considered in choosing the topology inFIG.19versus the topology inFIG.20. In addition or instead, however, the choice of topology may be predicated on other parameters or factors, as persons of ordinary skill in the art will understand. Such factors include design specifications, performance specifications, cost, IC or device area, available technology, such as semiconductor fabrication technology, target markets, target end-users, etc., for a given implementation or situation. As noted above, without limitation, DFSs (including TDCs and/or DCOs) according to exemplary embodiments may be used in a variety of applications. Examples include RF receivers, RF transmitters, and RF transceivers.FIG.21shows a circuit arrangement for an RF receiver100, including DFS10, according to an exemplary embodiment. Receiver100receives RF signals via antenna105. The RF signals feed an input of low noise amplifier (LNA)120. LNA120provides low-noise amplification of the RF signals, and provides amplified RF signals to mixer130. Mixer130performs frequency translation or shifting of the RF signals, using a reference or local oscillator (LO) frequency provided by LO125. For example, in some embodiments, mixer130translates the RF signal frequencies to baseband frequencies. As another example, in some embodiments, mixer130translates the RF signal frequencies to an intermediate frequency (IF). Mixer130provides the translated output signal as a set of two signals, an in-phase (I) signal, and a quadrature (Q) signal. The I and Q signals are analog time-domain signals. Analog-to-digital converter (ADC)135converts the I and Q signals to digital I and Q signals. In exemplary embodiments, ADC135may use a variety of signal conversion techniques.
For example, in some embodiments, ADC135may use delta-sigma (sometimes called sigma-delta) analog-to-digital conversion. ADC135provides the digital I and Q signals to signal processing circuitry140. Generally speaking, signal processing circuitry140performs processing on the digital I and Q signals, for example, digital signal processing (DSP). Signal processing circuitry140provides information, such as the demodulated data, to data processing circuitry155via link150. Data processing circuitry155may perform a variety of functions (e.g., logic, arithmetic, etc.). For example, data processing circuitry155may use the demodulated data in a program, routine, or algorithm (whether in software, firmware, hardware, or a combination) to perform desired control or data processing tasks. In some embodiments, data processing circuitry155may perform control of other circuitry, sub-systems, or systems (not shown). In some embodiments, data processing circuitry155may provide the data (after processing, as desired, for example, filtering) to another circuit (not shown), such as a transducer, display, etc. In exemplary embodiments, link150may take a variety of forms. For example, in some embodiments, link150may constitute a number of conductors or coupling mechanisms, such as wires, cables, printed circuit board (PCB) traces, etc. Through link150, signal processing circuitry140and data processing circuitry155may exchange information, such as the demodulated data, control information or signals, status signals, etc., as desired. Receiver100includes image reject (IR) calibration circuitry165that may be used to perform image reject calibration, as mentioned above. Receiver100further includes controller160. Controller160uses an output signal160A to control the operation of IR calibration circuitry165. Controller160further uses output signal160B to control the operation of DFS10, e.g., cause DFS10to provide an output signal10A as a test tone to the receiver. The test tone is typically injected into the receive path circuitry at a strategic location. In the exemplary embodiment shown inFIG.21, the test tone output by DFS10is applied at the input of low-noise amplifier (LNA)120. IR calibration circuitry165, residing after analog-to-digital converter (ADC)135, utilizes the LMS technique (or an alternative technique) to calibrate the image rejection of the receive path circuitry. As noted above, DFSs according to various embodiments may be used to clock ADC135.FIG.22shows such an arrangement. In this scenario, DFS10provides output signal10A to ADC135in response to control signal160B from controller160. ADC135uses output signal10A of DFS10as a clock signal in order to perform analog-to-digital conversion. As further noted above, DFSs according to various embodiments may be used to perform mixing operations.FIG.23shows such an arrangement. In this embodiment, LO125(seeFIGS.21 and22) is omitted. Instead, output signal10A of DFS10is used as an LO signal. DFS10provides output signal10A to mixer130in response to control signal160B from controller160. Mixer130mixes an RF signal with output signal10A in order to generate the I and Q (in-phase and quadrature) signals that are provided to ADC135. As noted above, DFSs according to various embodiments may be used in RF transmitters.FIG.24shows a circuit arrangement for an RF transmitter (TX)200, including DFS10, according to an exemplary embodiment. Data processing circuitry155provides a digital signal to digital-to-analog converter (DAC)202.
DAC202converts the digital signal to an analog signal and provides the analog signal to mixer204. In response to control signal160B from controller160, DFS10generates output signal10A with a desired frequency (typically in the RF range). Mixer204mixes the output signal of DAC202with output signal10A of DFS10. The resulting output signal204A of mixer204may be provided to a power amplifier (not shown) or be further processed as part of the operations of transmitter200. Note that RF receiver100and RF transmitter200shown in the figures and described above constitute mere examples. As persons of ordinary skill in the art will understand, DFSs according to various embodiments may be used in a variety of RF receivers (e.g., direct conversion, low-intermediate-frequency (low-IF), etc.) and RF transmitters (direct-conversion, offset-PLL, etc.), as desired. Note further that DFSs according to various embodiments may also be used in RF transceivers. For example, by combining the functionality and/or circuitry of RF receivers that include one or more DFSs with the functionality and/or circuitry of RF transmitters that include one or more DFSs, RF transceivers may be realized, as persons of ordinary skill in the art will understand. In some embodiments, one or more DFSs may be shared between the RF receiver and the RF transmitter, as persons of ordinary skill in the art will understand. Furthermore, RF receivers, RF transmitters, and/or RF transceivers including DFSs according to various embodiments may be used in a variety of communication arrangements, systems, sub-systems, networks, etc., as desired.FIG.25shows a circuit arrangement for an RF communication system300according to an exemplary embodiment. System300includes a transmitter200, coupled to antenna105A. Via antenna105A, transmitter200transmits RF signals. The RF signals may be received by receiver100, described above. In addition, or alternatively, transceiver310A and/or transceiver310B might receive (via receiver100) the transmitted RF signals. In addition to receive capability, transceiver310A and transceiver310B can also transmit RF signals. The transmitted RF signals might be received by receiver100, either in the stand-alone receiver, or via the receiver circuitry of the non-transmitting transceiver. Other systems or sub-systems with varying configuration and/or capabilities are also contemplated. For example, in some exemplary embodiments, two or more transceivers (e.g., transceiver310A and transceiver310B) might form a network, such as an ad-hoc network, a mesh network, etc. As another example, in some exemplary embodiments, transceiver310A and transceiver310B might form part of a network, for example, in conjunction with transmitter200. RF receivers and RF transmitters, such as RF receiver100and RF transmitter200described above, may be used in a variety of circuits, blocks, subsystems, and/or systems. For example, in some embodiments, such RF receivers may be integrated in an IC, such as an MCU.FIG.26shows a circuit arrangement for an IC, including RF receiver100that includes one or more DFSs (e.g., as shown inFIGS.21-23), according to an exemplary embodiment. The circuit arrangement includes an IC550, which constitutes or includes an MCU. IC550includes a number of blocks (e.g., processor(s)565, data converter605, I/O circuitry585, etc.) that communicate with one another using a link560. 
In exemplary embodiments, link560may constitute a coupling mechanism, such as a bus, a set of conductors or semiconductor elements (e.g., traces, devices, etc.) for communicating information, such as data, commands, status information, and the like. IC550may include link560coupled to one or more processors565, clock circuitry575, and power management circuitry or power management unit (PMU)580. In some embodiments, processor(s)565may include circuitry or blocks for providing information processing (or data processing or computing) functions, such as central-processing units (CPUs), arithmetic-logic units (ALUs), and the like. In some embodiments, in addition, or as an alternative, processor(s)565may include one or more DSPs. The DSPs may provide a variety of signal processing functions, such as arithmetic functions, filtering, delay blocks, and the like, as desired. In some embodiments, functionality of parts of receiver100, such as those described above, may be implemented or realized using some of the circuitry in processor(s)565, as desired Referring again toFIG.26, clock circuitry575may generate one or more clock signals that facilitate or control the timing of operations of one or more blocks in IC550. Clock circuitry575may also control the timing of operations that use link560, as desired. In some embodiments, clock circuitry575may provide one or more clock signals via link560to other blocks in IC550. In some embodiments, PMU580may reduce an apparatus's (e.g., IC550) clock speed, turn off the clock, reduce power, turn off power, disable (or power down or place in a lower power consumption or sleep or inactive or idle state), enable (or power up or place in a higher power consumption or normal or active state) or any combination of the foregoing with respect to part of a circuit or all components of a circuit, such as one or more blocks in IC550. Further, PMU580may turn on a clock, increase a clock rate, turn on power, increase power, or any combination of the foregoing in response to a transition from an inactive state to an active state (including, without limitation, when processor(s)565make a transition from a low-power or idle or sleep state to a normal operating state). Link560may couple to one or more circuits600through serial interface595. Through serial interface595, one or more circuits or blocks coupled to link560may communicate with circuits600. Circuits600may communicate using one or more serial protocols, e.g., SMBUS, I2C, SPI, and the like, as person of ordinary skill in the art will understand. Link560may couple to one or more peripherals590through I/O circuitry585. Through I/O circuitry585, one or more peripherals590may couple to link560and may therefore communicate with one or more blocks coupled to link560, e.g., processor(s)565, memory circuit625, etc. In exemplary embodiments, peripherals590may include a variety of circuitry, blocks, and the like. Examples include I/O devices (keypads, keyboards, speakers, display devices, storage devices, timers, sensors, etc.). Note that in some embodiments, some peripherals590may be external to IC550. Examples include keypads, speakers, and the like. In some embodiments, with respect to some peripherals, I/O circuitry585may be bypassed. In such embodiments, some peripherals590may couple to and communicate with link560without using I/O circuitry585. In some embodiments, such peripherals may be external to IC550, as described above. Link560may couple to analog circuitry620via data converter(s)605. 
Data converter(s)605may include one or more ADCs605A and/or one or more DACs605B. ADC(s)605A receive analog signal(s) from analog circuitry620, and convert the analog signal(s) to a digital format, which they communicate to one or more blocks coupled to link560. Conversely, DAC(s)605B receive digital signal(s) from one or more blocks coupled to link560, and convert the digital signal(s) to analog format, which they communicate to analog circuitry620. Analog circuitry620may include a wide variety of circuitry that provides and/or receives analog signals. Examples include sensors, transducers, and the like, as persons of ordinary skill in the art will understand. In some embodiments, analog circuitry620may communicate with circuitry external to IC550to form more complex systems, sub-systems, control blocks or systems, feedback systems, and information processing blocks, as desired. Control circuitry570couples to link560. Thus, control circuitry570may communicate with and/or control the operation of various blocks coupled to link560by providing control information or signals. In some embodiments, control circuitry570also receives status information or signals from various blocks coupled to link560. In addition, in some embodiments, control circuitry570facilitates (or controls or supervises) communication or cooperation between various blocks coupled to link560. In some embodiments, control circuitry570may initiate or respond to a reset operation or signal. The reset operation may cause a reset of one or more blocks coupled to link560, of IC550, etc., as persons of ordinary skill in the art will understand. For example, control circuitry570may cause PMU580, and circuitry such as RF receiver100, to reset to an initial or known state. In exemplary embodiments, control circuitry570may include a variety of types and blocks of circuitry. In some embodiments, control circuitry570may include logic circuitry, finite-state machines (FSMs), or other circuitry to perform operations such as the operations described above. Communication circuitry640couples to link560and also to circuitry or blocks (not shown) external to IC550. Through communication circuitry640, various blocks coupled to link560(or IC550, generally) can communicate with the external circuitry or blocks (not shown) via one or more communication protocols. Examples of such protocols include USB, Ethernet, and the like. In exemplary embodiments, other communication protocols may be used, depending on factors such as design or performance specifications for a given application, as persons of ordinary skill in the art will understand. As noted, memory circuit625couples to link560. Consequently, memory circuit625may communicate with one or more blocks coupled to link560, such as processor(s)565, control circuitry570, I/O circuitry585, etc. Memory circuit625provides storage for various information or data in IC550, such as operands, flags, data, instructions, and the like, as persons of ordinary skill in the art will understand. Memory circuit625may support various protocols, such as double data rate (DDR), DDR2, DDR3, DDR4, and the like, as desired. In some embodiments, memory read and/or write operations by memory circuit625involve the use of one or more blocks in IC550, such as processor(s)565. A direct memory access (DMA) arrangement (not shown) allows increased performance of memory operations in some situations.
More specifically, DMA (not shown) provides a mechanism for performing memory read and write operations directly between the source or destination of the data and memory circuit625, rather than through blocks such as processor(s)565. Memory circuit625may include a variety of memory circuits or blocks. In the embodiment shown, memory circuit625includes non-volatile (NV) memory635. In addition, or instead, memory circuit625may include volatile memory (not shown), such as random access memory (RAM). NV memory635may be used for storing information related to performance, control, or configuration of one or more blocks in IC550. For example, NV memory635may store configuration information related to RF receiver100and/or to initial or ongoing configuration or control of RF receiver100(including DFS(s) included in RF receiver100), as desired. As noted, DFSs according to various embodiments may also be used in RF transmitters. Such RF transmitters may be included in various electronic circuitry, such as ICs.FIG.27shows a circuit arrangement for an IC500, including an RF transmitter200that includes one or more DFSs, according to an exemplary embodiment. RF transmitter200may be coupled to and operate in conjunction with various blocks and circuitry in IC550, as described above. Various circuits and blocks described above and used in exemplary embodiments may be implemented in a variety of ways and using a variety of circuit elements or blocks. For example, DFS10, TDC1005, MMD1045, subtracter1015, scaling circuit1055, digital loop filter1020, DCO1025, DAC1030, divider1035, SDM1060, delay circuit1050, LMS adaptation circuit1040, residue error circuit1010, jitter monitor circuit1017, C-TDC1100, F-TDC1105, delay circuit1110, control circuit1115, flip-flop1210, control circuit1205, synchronous counter1220, oscillator1215, flip-flops1275, encoder logic circuit1270, MUX1255, inverter1250, wrap counter1265, MUX control circuit1260, one's complement circuit1305, adder1310, register1325, scaling circuit1315, adder1320, register1330, adder1060A, adder1060B, DDC1060K, integrator1060C, adder1060D, integrator1060F, adder1060G, PRBS dither circuit1060E, quantizer1060H, delay circuit10601, adder1060J, scaling circuit1060N, scaling circuit1060P, scaling circuit1060Q, inverter1605, inverter1610, and various blocks shown inFIGS.21-27that contain digital or mixed-signal circuitry may generally be implemented using gates, digital multiplexers (MUXs), latches, flip-flops, registers, finite state machines (FSMs), processors, programmable logic (e.g., field programmable gate arrays (FPGAs) or other types of programmable logic), arithmetic-logic units (ALUs), standard cells, custom cells, custom analog cells, etc., as desired, and as persons of ordinary skill in the art will understand. In addition, analog circuitry or mixed-signal circuitry or both may be included, for instance, power converters, discrete devices (transistors, capacitors, resistors, inductors, diodes, etc.), and the like, as desired. The analog circuitry in the blocks and circuits above may be implemented using bias circuits, decoupling circuits, coupling circuits, supply circuits, current mirrors, current and/or voltage sources, filters, amplifiers, converters, signal processing circuits (e.g., multipliers), detectors, transducers, discrete components (transistors, diodes, resistors, capacitors, inductors), analog MUXs and the like, as desired, and as persons of ordinary skill in the art will understand. 
The mixed-signal circuitry may include analog-to-digital converters (ADCs), digital-to-analog converters (DACs), etc.) in addition to analog circuitry and digital circuitry, as described above, and as persons of ordinary skill in the art will understand. The choice of circuitry for a given implementation depends on a variety of factors, as persons of ordinary skill in the art will understand. Such factors include design specifications, performance specifications, cost, IC or device area, available technology, such as semiconductor fabrication technology), target markets, target end-users, etc. Referring to the figures, persons of ordinary skill in the art will note that the various blocks shown might depict mainly the conceptual functions and signal flow. The actual circuit implementation might or might not contain separately identifiable hardware for the various functional blocks and might or might not use the particular circuitry shown. For example, one may combine the functionality of various blocks into one circuit block, as desired. Furthermore, one may realize the functionality of a single block in several circuit blocks, as desired. The choice of circuit implementation depends on various factors, such as particular design and performance specifications for a given implementation. Other modifications and alternative embodiments in addition to the embodiments in the disclosure will be apparent to persons of ordinary skill in the art. Accordingly, the disclosure teaches those skilled in the art the manner of carrying out the disclosed concepts according to exemplary embodiments, and is to be construed as illustrative only. Where applicable, the figures might or might not be drawn to scale, as persons of ordinary skill in the art will understand. The particular forms and embodiments shown and described constitute merely exemplary embodiments. Persons skilled in the art may make various changes in the shape, size and arrangement of parts without departing from the scope of the disclosure. For example, persons skilled in the art may substitute equivalent elements for the elements illustrated and described. Moreover, persons skilled in the art may use certain features of the disclosed concepts independently of the use of other features, without departing from the scope of the disclosure. | 68,294 |
11863193 | DETAILED DESCRIPTION A digital PLL receives a reference signal FREFhaving a relatively stable frequency at an input and produces an output signal with a frequency that is a multiple of the reference signal at an output, referred to as FOUT. The digital PLL also locks the phase of the output signal with the input signal. To lock the phases of the two signals, the phase error between the two signals is measured. To measure the phase error, a TDC may be useful. A classical TDC includes a set of inverters arranged in a chain. When the edge of a digital signal occurs (e.g., a rising edge or a falling edge) at the input of the first inverter, that edge propagates through the chain of inverters. Each inverter has a small finite delay between when it receives the edge at its input and when the output of the inverter changes. The outputs of the inverters in the chain may be sampled at a specific time using flops. The samples collected by the flops indicate how far the edge has passed through the chain of inverters. For example, if a chain of seven inverters is sampled and the result of the sampling is 1110000, the edge has passed through the first three inverters but not through the last four (indicated by the three ones followed by four zeros). These samples create a digital code that indicates the phase error between FREFand FOUT, measured by the number of inverter delays. The phase error is useful for phase locking the input signal and the output signal. A digital PLL may include a ring oscillator with an embedded TDC. An embedded TDC uses flops to capture phase information directly from the inverters that compose the ring oscillator. The embedded TDC is coupled to the output of each ring oscillator stage and samples the data at the output with a flop at a specific point in time, responsive to the receipt of a reference signal. The samples collected by the flops create a digital code as described above. The value of the digital code indicates the fractional phase information of the ring oscillator (for example, 2/7, 3/7, 6/7, etc. for a TDC with seven inverters). The fractional phase information is then used to determine a phase error between the input and output signals. In embedded TDC systems, the reference signal that indicates when sampling of the inverters should occur may arrive close to the time of a data transition for one or more flops. If the data is sampled close to the time of a data transition of the flops, a metastability problem may occur, and an old value may be sampled instead of a new value that should have been sampled, or vice versa. Therefore, the flops may collect an invalid digital code. Invalid codes can result in poor jitter performance, slower lock time, and/or an inability for the PLL to remain locked. In some alternative solutions, additional circuitry may be used to mitigate the effects of invalid codes. However, these solutions increase complexity and power consumption. In other alternative solutions, an invalid code may be discarded. Discarding an invalid code prevents the PLL from adjusting the ring oscillator for that cycle, which can affect jitter. This disclosure describes various examples of a digital PLL with an embedded TDC that is configured to decode an invalid code to a closest valid code. In one example, an N-stage ring oscillator with an embedded TDC has N cyclic TDC states. 
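As a small illustration of the sampling just described, a sketch of decoding the thermometer code produced by the flops is shown below; it simply counts the leading ones to express the phase error in inverter delays.

```python
# Sketch: decode the sampled flop outputs of a classical TDC into a phase
# error expressed in inverter delays.

def decode_classic_tdc(samples):
    """samples: flop outputs along the inverter chain, e.g., [1,1,1,0,0,0,0]."""
    count = 0
    for s in samples:
        if s != 1:
            break
        count += 1
    return count

# The 1110000 example above: the edge has passed through three inverters.
print(decode_classic_tdc([1, 1, 1, 0, 0, 0, 0]))   # -> 3 inverter delays
```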
Because the rising and falling edges of the output signals of the inverters in the embedded TDC do not rise and fall instantaneously, short transitionary states may occur. These transitionary states are states where some, but not all, of the edge transitions have occurred. The transitionary states may be decoded, and are considered a valid code. The transitionary states combined with the N cyclic valid TDC states results in 2N valid codes, which are described below. Codes produced other than the 2N valid codes are decoded to one of the 2N valid codes instead of being discarded. In examples herein, a seven-stage ring oscillator is described. However, ring oscillators of other sizes may be used in other examples. FIG.1is a system100including a digital PLL with a ring oscillator and an embedded TDC in accordance with various examples. In one example of system100, components on the left side ofFIG.1may be digital components, while components on the right side ofFIG.1may be analog components. System100includes a phase frequency detector (PFD)102, a digital filter104, and a digital to analog converter (DAC)106. An oscillator108is a component of an embedded TDC system110. Embedded TDC system110includes oscillator108, level shifter112, and TDC113. PFD102includes a first input114, a second input115, a third input116, and an output118. Output118provides phase error119. System100includes Gray counter117, feedback loop120, link122, output124, output126, decoder128, and summation130. Oscillator108provides an output132, which may include seven bits, shown as <6:0> inFIG.1. Level shifter112receives output132and provides outputs134and136. Level shifter112produces an output of seven bits labeled B<6:0> at output134to TDC113, and produces the output bit B<0> at output136. TDC113uses bits B<6:0> to produce a TDC_OUT <6:0> value. Gray counter117uses bit B<0> to determine integer phase information, as described below. Embedded TDC system110receives an input signal at input138of oscillator108, and produces an output frequency signal FOUTat output124. Embedded TDC system110also produces a TDC_OUT value at output126, which is provided to decoder128in some examples. The details of embedded TDC system110are described below with respect toFIGS.3A and3B. In other examples, various components of embedded TDC system110may be within oscillator108or may be located outside of oscillator108. In system100, a counter (such as Gray counter117) increments to represent integer phase information on feedback loop120(e.g., INTEGER_COUNT <7:0>), while TDC113provides fractional phase information. Integer phase information indicates how many cycles the FOUTsignal completes for every cycle of the FREFsignal. For example, if system100is programmed to produce an FOUTsignal that is four times the frequency of input signal FREF, Gray counter117counts the cycles of FOUT. If system100is operating as programmed, four FOUTcycles will complete for every FREFcycle. Gray counter117provides that information along feedback loop120to summation130. Fractional phase information is the captured phase information of FREFfor a single cycle of the FOUTsignal. These two pieces of information are captured separately and merged by summation130to provide integer plus fractional phase information to PFD102at third input116, shown as feedback phase inFIG.1. 
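As a quick illustration of the 2N-valid-code count, the sketch below (ours, not part of the disclosure) enumerates, for a seven-stage ring oscillator, the seven cyclic valid states with three consecutive ones and the seven transition states with four consecutive ones; together they form the 14 valid codes referred to above.

```python
# Enumerate the 2N valid TDC codes for an N = 7 ring oscillator:
# N cyclic valid states (three consecutive ones) plus
# N transition states (four consecutive ones).

N = 7

def rotations(pattern):
    """All cyclic rotations of a bit-string pattern."""
    return {pattern[i:] + pattern[:i] for i in range(len(pattern))}

cyclic_valid = sorted(rotations("1110000"))      # e.g. 0111000, 1110000, ...
transition_valid = sorted(rotations("1111000"))  # e.g. 0111100, 1111000, ...

valid_codes = cyclic_valid + transition_valid
print(len(valid_codes))   # 14 == 2 * N
for c in valid_codes:
    print(c, "transition" if c.count("1") == 4 else "cyclic")
```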
For example, if FOUTat output124is programmed to be 12.5 times a reference signal FREFat first input114, then for every FREFcycle, the outputs from Gray counter117and TDC113summed together (by summation130) would increment by 12.5 cycles. In this manner, system100may provide an output signal FOUTat a higher frequency than FREF. System100locks the phase of FOUTwith FREF. To lock the phases of FOUTand FREF, the phase error between FOUTand FREFis first measured, and then used to adjust FOUT. In alternative systems, the phase error is measured with a TDC inside PFD102. This TDC, inside PFD102, has a set of buffers or inverters arranged in a chain. After an edge of a reference signal occurs (e.g., a rising edge or falling edge), that edge propagates through the chain of inverters, as described above. Each inverter has a small finite delay. Then, at the time the feedback clock signal arrives at the PFD102, the reference signal propagating through the inverters is sampled using flops within the TDC, with one flop coupled to the output of each inverter. In this alternative system, the samples from the flops may be three ones (111), followed by a number of zeros, because the reference signal had not passed through all of the inverters in the chain, only three of them. The three ones indicate, in the amount of inverter delays, how much phase error exists between the reference signal and the feedback clock signal. The TDC inside PFD102as described above in the alternative systems may be area intensive and consume a large amount of power. The delay of the inverters in the alternative system also needs to be calibrated to know how much time is consumed by each inverter delay. In examples herein, oscillator108is a ring oscillator, which includes a chain of inverters arranged in a ring. A ring oscillator includes an odd number N of inverters in a ring, with an output that oscillates between two values. The output of the last inverter is provided back into the first inverter. Adjusting a voltage or current provided to the inverter may change the delay through the inverter and therefore its frequency. In this example, calibration is not needed as it is in the alternative system described above. The chain of inverters in oscillator108may be used as an embedded TDC in accordance with various examples herein. The TDC is embedded within oscillator108, and therefore is referred to as embedded TDC system110. Here, the TDC113is shown as a separate component inFIG.1for clarity, although the TDC113is actually embedded within oscillator108. The data at the outputs of the inverters within oscillator108may be sampled and used to determine the phase of the signal produced by oscillator108. Therefore, some of the hardware of the oscillator108(e.g., the chain of inverters) may be reused for the time to digital conversion (e.g., TDC113) to measure the phase error of the oscillator signal. Reusing this hardware may reduce area requirements and power consumption in some examples. In examples herein, TDC113produces a TDC_OUT signal at output126that includes a series of bits. This series of bits indicates the phase of the signal produced by oscillator108, as described above. The TDC_OUT signal at output126is provided to decoder128. If decoder128determines that the TDC_OUT signal is a valid code, decoder128provides the valid code to PFD102at third input116, via summation130. Then, PFD102determines the phase from the valid code and compares it to an expected phase of FREFon second input115. 
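The integer-plus-fractional bookkeeping performed by summation130can be sketched numerically as follows; this is a simplified model (the names and the stage/7 fractional convention are ours) rather than the exact arithmetic of system100.

```python
# Sketch of combining the counter's integer phase with the TDC's fractional
# phase (in units of oscillator cycles), as in the 12.5x example above.

TDC_STAGES = 7          # seven-stage ring oscillator / embedded TDC

def feedback_phase(integer_count, tdc_stage):
    """integer_count: whole FOUT cycles counted since the last FREF edge;
    tdc_stage: fractional position within the current cycle, 0..6 stages."""
    return integer_count + tdc_stage / TDC_STAGES

def phase_error(expected_cycles, integer_count, tdc_stage):
    """Positive error -> the oscillator has accumulated more phase than expected."""
    return feedback_phase(integer_count, tdc_stage) - expected_cycles

# If the PLL is programmed for FOUT = 12.5 * FREF, the feedback phase should
# advance by about 12.5 cycles per reference period:
print(round(feedback_phase(12, 3), 3))      # 12.429 (12 + 3/7)
print(round(phase_error(12.5, 12, 3), 3))   # -0.071 -> slightly behind target
```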
PFD102produces a phase error119and provides the phase error119to digital filter104. The phase error is received by DAC106and then provided to oscillator108, where oscillator108is adjusted responsive to the phase error. The adjustment is performed to phase lock the output signal FOUTwith the input signal FREF. If decoder128determines that the TDC_OUT signal is an invalid code, the invalid code is decoded to a valid code as described in examples herein. The valid code is then provided to PFD102by decoder128via summation130, and PFD102computes the phase error119as described above based on the valid code. In some examples, decoder128may be a component of PFD102, or may be within another component of system100. Decoder128is configured to decode the invalid codes using any suitable decoding scheme or algorithm. Decoder128may be implemented in hardware, software, or a combination of the two. In one example, decoder128decodes the codes using a lookup table, as described below with respect toFIG.5. In one example, a non-transitory computer-readable medium is provided. The non-transitory computer-readable medium includes all electronic mediums or media of storage, except signals. The non-transitory computer-readable medium stores executable code. When executed by a processor or controller of an electronic device, the executable code performs the steps described herein to determine validity of a code, and to decode an invalid code. In one example, some or all of the steps described below with respect toFIG.6are performed by a processor or controller executing executable code. In other examples, additional processes described herein are performed by a processor or controller executing executable code. In some examples, the processor or controller may be decoder128. In other examples, the processor or controller may be a component separate from decoder128. FIG.2is a timing diagram200for a TDC in accordance with various examples herein.FIG.2shows a timing diagram for an example system with a 7-stage ring oscillator operating at 240 MHz. Timing diagram200includes waveform202, TDC IN data204, and setup206. Waveform202is a reference frequency FREFthat is provided to TDC113via link122. FREFis a reference signal that goes high (e.g., a rising edge) to indicate that the flops in TDC113should capture the value at their respective inputs (e.g., 0 or 1). For example, a D flip-flop captures the value at the D-input upon receipt of the specified edge (e.g., rising or falling) of the clock. After the rising/falling clock edge, the captured value is available at the Q output of the D flip-flop. However, flops may have metastability problems. Flops have a setup and hold time during operation. In an example, the setup time is a minimum amount of time before the clock's active edge that the data being sampled must be stable for the data to be latched correctly by the flop. A violation of the setup time may cause incorrect data to be captured by the flop, which is known as a setup violation. Hold time is the minimum amount of time after the clock's active edge during which the data must be stable. A hold time violation may cause incorrect data to be latched, which is known as a hold violation. An example setup/hold time is shown inFIG.2as setup206. In this example, setup206is approximately 100 picoseconds. Three states of TDC113are shown in time order, and are labeled N−2, N−1, and N. The 7-stage ring oscillator includes 7 inverters, and therefore TDC113captures 7 bits, which are labeled bit6to bit0, left to right.
In this example, N−2 is a cyclic valid state with three ones. Valid states produced when no setup or hold violations occur may be referred to as cyclic valid states. In the illustrated examples, each cyclic valid state has three ones (e.g., for N−2, bits5,4, and3are ones, while bits6,2,1, and0are zeros), although the number of ones and zeros is merely exemplary. Also, N is a cyclic valid state with three ones. In state N, bits6,5, and4are ones, while bits3,2,1, and0are zeros. When the state N−2 changes to state N, bit6changes from 0 to 1, and bit3changes from 1 to 0. The other bits remain the same. That is, 0111000 changes to 1110000. The flops in TDC113are independent, so the bits captured by the flops change independently of one another as well. In this example, there is a short transition state between states N−2 and N. The transition state is labeled state N−1. In the state N−1, bit6has changed from 0 to 1, but bit3has not yet changed from 1 to 0. Therefore, the code for state N−1 has four ones instead of three ones. The codes with four ones are referred to herein as transition states. The transition states may occur between cyclic valid states that have three ones. In the example of a 7-stage ring oscillator, there are seven cyclic valid states with three ones (1110000, 0111000, etc.) and seven transition states with four ones (1111000, 0111100, etc.). As described below, the transition states are short in duration and may violate setup and hold times of the flops. In the example inFIG.2, the transition state N−1 is approximately 70 picoseconds long. The setup206is approximately 100 picoseconds, so the N−1 state may violate the setup and hold times. The rising edge of FREFarrives at time t1in the example inFIG.2. Time t1is close to the window where bits are updating from N−2 state to N state, so metastability may occur. At time t1, either or both of bits6and3may still be in the process of updating. In one example, at time t1, bit6may have not yet updated from 0 to 1, and bit3may have updated from 1 to 0. Therefore, the code that is captured is 0110000 in this example. This code is an invalid code that has only two ones. In examples herein, invalid codes with only two ones are decoded to the closest transition state with four ones. In a 7-stage ring oscillator example, only a maximum of 2 bits change as the data is latched by the flops. With a code of 0110000, bit6and bit3are the bits that undergo a change. Therefore, a code of 0110000 is decoded to 1111000, with bits6and3changed from zeros to ones. A decoder in accordance with examples herein can decode each code with two ones to a corresponding transition code with four ones. In other examples, other types of decoding may be performed. In other examples, other types of codes may be used. As shown in the example ofFIG.2, if the code captured at time t1is 0110000, the rising edge at time t1likely occurred during the transition state N−1, which caused metastability issues due to the length of the transition state (70 picoseconds) compared to the setup/hold time (100 picoseconds). Therefore, the code with two ones is decoded to the closest transition code with four ones. In alternative systems, the metastability window of the flops could be made smaller by decreasing the setup and/or hold times. However, this alternative solution may increase power consumption, area, or require more expensive components. In another alternative solution described above, an invalid code may be discarded. 
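A minimal sketch of this decoding rule is given below (our own illustration; the helper names are hypothetical). A captured code with only two ones is mapped to the transition state with four consecutive ones by re-asserting the two bits that were caught mid-update, so 0110000 decodes to 1111000.

```python
# Sketch of decoding an invalid TDC code (only two ones) to the closest
# transition state (four consecutive ones), per the 7-stage example above.
# In this scheme the two captured ones are always cyclically adjacent.

def decode(code):
    """Pass valid codes (three or four ones) through; map an invalid capture
    with two ones to the transition state obtained by re-asserting the two
    bits (one on each side of the run of ones) that were caught mid-update."""
    ones = code.count("1")
    if ones in (3, 4):                 # already a cyclic valid or transition state
        return code
    n = len(code)
    bits = [int(b) for b in code]      # bits[0] is bit<6> ... bits[-1] is bit<0>
    out = bits[:]
    for i in range(n):
        if bits[i] == 1:
            if bits[(i - 1) % n] == 0:
                out[(i - 1) % n] = 1   # extend the run of ones on one side
            if bits[(i + 1) % n] == 0:
                out[(i + 1) % n] = 1   # extend the run of ones on the other side
    return "".join(map(str, out))

print(decode("0110000"))   # 1111000  (bits 6 and 3 re-asserted)
print(decode("1000001"))   # 1100011  (matches the capture-2 example discussed later)
print(decode("1110000"))   # 1110000  (already valid, unchanged)
```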
However, this alternative solution may affect jitter. The examples herein take advantage of the sequence of three consecutive ones in the oscillator to decode invalid codes to a valid state. The sequence of three consecutive ones occurs in the sequence of bits in a 7-stage ring oscillator with a 3/7 duty cycle, as described below. FIG.3Ais a circuit diagram300of an embedded TDC in accordance with various examples herein. Circuit diagram300includes a ring oscillator302, a dynamic level shifter304, flip-flops306A to306G (collectively, flops306), and 8-bit Gray counter308. Ring oscillator302is a 7-stage ring oscillator with seven inverters310A to310G (collectively, inverters310). Dynamic level shifter includes a terminal312coupled to a voltage source providing a voltage VDD and a terminal314coupled to ground. A reference frequency FREFis provided to flops306via terminal316. Flops306provide TDC_OUT <6:0> at output318. 8-bit Gray counter308includes an input320, which receives B<0>, and an output322, which produces INTEGER_COUNT <7:0>. Ring oscillator302is oscillator108inFIG.1in one example. Ring oscillator302has a chain of inverters310arranged in a ring, and produces an output that oscillates between two values. The output of the last inverter310G is provided back into the first inverter310A. Also, adjusting a voltage or current provided to an inverter310may change the delay through that inverter310. The output signal from each inverter310is provided to dynamic level shifter304. The signals from the inverter310outputs that are provided to dynamic level shifter304are labeled D<0> to D<6> in this example. The details of dynamic level shifter304are described below with respect toFIG.3B. Seven output bits are provided by dynamic level shifter304to flops306. These output bits are labeled B<0> to B<6>, and represent the bits of the TDC codes sampled from ring oscillator302and stored in flops306. When an edge (e.g., a rising or falling edge) of the signal reference FREFis provided to flops306at a time t1, the flops read the values B<0> to B<6> provided by dynamic level shifter304and store those values in the respective flops306. The values stored in the flops306are provided to output318to create the code TDC_OUT. The TDC_OUT code has seven binary digits in this example. The value of B<0> provided to flop306A is also provided to input320of 8-bit Gray counter308. A Gray counter increments by changing only one bit at a time to change to an adjacent state. 8-bit Gray counter308counts the number of full rotations that an edge has undergone. 8-bit Gray counter308provides an 8-bit INTEGER_COUNT value at output322representing the number of full phase rotations. The INTEGER_COUNT and TDC_OUT values are combined to form the digital phase relationship of the ring oscillator302in an integer plus fractional format (via summation130shown inFIG.1), as described above. INTEGER_COUNT provides the integer phase information, and TDC_OUT provides the fractional phase information. This measured phase is compared to an expected phase, and the phase error is calculated. FIG.3Bis a circuit diagram of a dynamic level shifter304in accordance with various examples herein. Dynamic level shifter304receives the values D<6> to D<0> from the inverter310outputs. Dynamic level shifter304converts those inverter310output samples to the output bits B<6> to B<0>, and then provides those output bits to flops306. Dynamic level shifter304receives the values D<0> to D<6> at the gates of transistors350A to350G, respectively. 
The values of D<0> to D<6> are either 0 or 1, based on the inverter310outputs. Transistors350A to350G turn on and off based on the values of D<0> to D<6>. The voltage values at nodes352A to352G (represented by S<0> to S<6>, respectively) also change as D<0> to D<6> change. The nodes352A to352G are coupled to the gates of transistors354A to354G, and to transistors356A to356G, where nodes and gates labeled S<0> are coupled together, nodes and gates labeled S<1> are coupled together, etc. For example, node352D (S<0>) is coupled to the gate of transistor354A (S<0>), and also coupled to the gate of transistor356B (S<0>). As the values D<0> to D<6> change based on the inverter310outputs changing, the values of S<0> to S<6> change throughout dynamic level shifter304as well. Dynamic level shifter304performs two operations. First, dynamic level shifter304shifts from the oscillator108's local supply voltage to the predominant digital supply voltage used in system100, and it performs this shift in a power efficient manner. Dynamic level shifter304is dynamic, so it only switches and consumes current when the output changes. Second, dynamic level shifter304converts a string of ones and zeros (D<0> to D<6>) from inverters310to a more readable phase signal. For example, if D<6:0> is sampled at any given time, D<6:0> may be 0101010, then 1101010 (as the first bit changes), then 1001010 (as the second bit changes), etc., because the changes to the inverter310outputs work their way through the inverter delays in the loop. These inverter310outputs are difficult to sample due to the timing of the changes, so the logic of dynamic level shifter304makes each S bit go high (1) during its transition (e.g., from 0 to 1) and stay high (1) for 3 cycles before going low (0). By staying high for three cycles, the bits S<0> to S<6> are easier to capture than reading D<0> to D<6> directly from inverters310. Bits S<6:0> are read from dynamic level shifter304and inverted by inverter358. Inverter358is not shown as being coupled to the other circuitry inFIG.3B, but is coupled to the appropriate nodes to receive the seven bits of S<6:0> at an input of inverter358. In one example, inverter358may represent seven inverters, one for each bit. The output of inverter358produces seven bits B<6:0>, which are then provided to flops306A to306G inFIG.3A. As described above, these bits B<6:0> (which are the inverse of bits S<6:0>) are held high for three cycles so they can be read more easily from flops306. The values stored in the flops306are provided to output318to create the code TDC_OUT as described above with respect toFIG.3A. Therefore, dynamic level shifter304receives the output of inverters310(D<0> to D<6>) and creates a more easily read waveform shown as B<0> to B<6> inFIG.4, which is provided to flops306. FIG.4is a timing diagram400of waveforms B<0> to B<6> provided by dynamic level shifter304to flops306in accordance with various examples herein. In timing diagram400, the y-axis represents voltage in volts, while the x-axis represents time in nanoseconds. Waveforms402,404,406,408,410,412, and414correspond to voltage values over time for bits B<0> to B<6>, respectively. Timing diagram400also includes capture1416and capture2418, which are example times that the values of the bits B<0> to B<6> are read, as described below. Waveforms402,404,406,408,410,412, and414each have a duty cycle of 3/7, or approximately 43%. A duty cycle of 3/7 results from a valid digital code that has three ones and four zeros (e.g., a cyclic valid state). 
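The 3/7 duty-cycle behavior can be illustrated with a small model (ours, with an assumed unit inverter delay): seven square waves, each high for three of every seven inverter delays and each delayed one stage from the previous one, always sample to a code with exactly three ones whenever no bit is mid-transition.

```python
# Sketch of the B<6:0> waveforms of FIG. 4: seven square waves with a 3/7
# duty cycle, each delayed by one inverter delay from the previous one.
# All names and numeric values here are illustrative.

INV_DELAY = 1.0                 # one inverter delay (arbitrary units)
PERIOD = 7 * INV_DELAY          # oscillator period for a 7-stage ring

def bit_value(k, t):
    """B<k> is high for 3 inverter delays out of every 7, delayed by k stages."""
    phase = (t - k * INV_DELAY) % PERIOD
    return 1 if phase < 3 * INV_DELAY else 0

def capture(t):
    """Code ordered B<6> down to B<0>, as sampled by the flops at time t."""
    return "".join(str(bit_value(k, t)) for k in range(6, -1, -1))

for t in (0.5, 2.5, 4.5, 6.5):
    print(t, capture(t))        # each capture contains exactly three ones
```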
In other words, the bit is high for 3/7 of the time and low for 4/7 of the time, with the 3 high bits being consecutive. Waveforms402,404,406,408,410,412, and414show that there is a delay from one waveform to the next, starting with waveform402and ending with waveform414. The delay corresponds to the delay of a signal moving through ring oscillator302from one inverter310to the next. As an example, the first rising edge of waveform402(corresponding to B<0>) occurs at approximately 34.7 nanoseconds. The first rising edge of waveform404(corresponding to B<1>) occurs at approximately 35.4 nanoseconds. The first rising edge of waveform406(corresponding to B<2>) occurs at approximately 35.9 nanoseconds. The rising edges of the subsequent waveforms408,410,412, and414occur at similar intervals. Timing diagram400has two capture times, capture1416and capture2418. The capture times indicate a time of a rising edge of reference signal FREFreaching the flops306. When a rising edge of reference signal FREFis received, the values B<0> to B<6> are read from flops306. Capture1416occurs at approximately 35.7 nanoseconds. At this time, B<0>, B<1>, and B<6> are high values (1). The other bits (B<2>, B<3>, B<4>, and B<5>) are low (0). Therefore, the code, from B<6> to B<0>, is 1000011 for capture time1416. Therefore, TDC_OUT is 1000011, which is a valid code. Capture2418is another example of a capture time. At approximately 39.4 nanoseconds, capture2418occurs. At this time, bits B<0> and B<6> are high (1). However, this capture time violates the setup/hold time of the flops306corresponding to bits B<1> and B<5>. Bits B<1> and B<5> are transitioning at the time of capture2418. TDC_OUT could be either 1100011, 1100001, 1000001, or 1000011. If TDC_OUT is either 1100001 or 1000011, those are cyclic valid codes with three ones, and those cyclic valid codes are acceptable valid codes. If TDC_OUT is 1100011, that is a transition state with four ones. Transition states are also valid codes. If TDC_OUT is 1000001, that is an invalid code with only two ones. In examples herein, invalid codes with two ones are decoded to the nearest transition state with four ones. Therefore, 1000001 is decoded to the transition state 1100011. A state machine may decode the data and convert it to the proper value in one example. Then, the proper value is compared to the FREFto determine the error in the phase. If the phase error is zero, no change is made to the oscillator. If the phase error is high, the oscillator is running too fast, and the PLL will bring the frequency of the oscillator down to accumulate less phase. If the phase error is low, the oscillator is running too slow, and the PLL will bring the frequency of the oscillator up to accumulate more phase. The TDC_OUT code, its correction value and the INTEGER_COUNT are the representation of the phase of the oscillator. FIG.5is a table500of valid and invalid codes for a 7-stage ring oscillator with an embedded TDC in accordance with various examples. Column A includes the valid B<6:0> codes, which include the cyclic valid codes with three ones, and the transition states with four ones. Column B indicates whether the codes are transition states or not, indicated with a Y for yes and an N for no. Column C is a list of possible TDC_OUT <6:0> codes for each valid state. Column D is a decoder fractional value that represents the fractional phase value associated with each possible TDC_OUT <6:0> code. Rows1,3,5,7,9,11, and13show the cyclic valid codes with three ones.
Rows2,4,6,8,10,12, and14are the valid transition codes with four ones. In these transition code rows, column C indicates the possible codes for each transition state. The TDC_OUT code with two ones that corresponds to each transition state is shown in Column C. If a code with two ones is received, that code is decoded to the transition state in the corresponding row in table500. A lookup table, such as table500, may be used in some examples by decoder128to convert the TDC_OUT code to a fractional phase value as shown in Column D. FIG.6is a flow diagram of a method600for metastability correction for a ring oscillator with an embedded TDC in accordance with various examples herein. The steps of method600may be performed in any suitable order. The hardware components described above with respect toFIGS.1,3A, and3Bmay perform method600in some examples. In one example, a processor or controller, such as decoder128, may execute executable code to perform at least some of the steps of method600. Method600begins at610, where a set of flops receive a reference signal, such as FREF. As described above, flops306may receive a reference signal via terminal316. Each flop receives the reference signal, which may be a rising edge or a falling edge. In an example, N flops of a TDC are coupled to N inverters of a ring oscillator, with one flop coupled to the output of each inverter. In some examples, a level shifter resides between the inverters and the flops. The level shifter receives first data samples from the N inverters and provides second data samples to the N flops. Method600continues at620, where responsive to receiving the reference signal, each of the flops captures an output of a different stage of a multi-stage ring oscillator. The stages of the ring oscillators may be inverters in some examples, such as inverters310. Method600continues at630, where the flops provide a code to a decoder, and the code is based at least in part on the outputs of the stages. In examples described above, the code includes N binary bits, which are outputs of the N inverters310. The N binary bits constitute a code that encodes phase information of the oscillator signal. Method600continues at640, where responsive to the code being invalid, decoding the invalid code to a valid code. As described above, some codes are invalid due to metastability issues with the TDC. The invalid codes may include two binary ones (X binary ones) instead of three (X+1) or four (X+2) binary ones in some examples. Other types of valid or invalid codes may be present in other examples. Method600continues at650, where the valid code is provided to a phase frequency detector, such as PFD102. The valid code may be provided to PFD102by decoder128. In some examples, the code may be provided to PFD102from flops306. PFD102uses the code to determine if a phase error exists between the oscillator phase and a phase of the reference frequency. The PFD102provides a phase error and the phase locked loop uses the phase error to adjust the phase of the oscillator if the oscillator is out of phase. In examples herein, a ring oscillator's phase predictability is leveraged to correct phase data corrupted by metastability of a TDC. In addition, low power single-ended ring oscillator topologies may be used. Standard digital library flops with low power and large metastability windows may be included to capture TDC data. Examples herein use the inverters of a ring oscillator as a TDC, saving circuit area and power consumption.
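An end-to-end flow in the spirit of method600and table500might look like the sketch below (our illustration). The fractional values assigned here, k/7 by rotation index, are an assumption made for the example; the actual column-D values of table500are not reproduced in this text.

```python
# Illustrative flow in the spirit of method 600: capture a code, decode it to
# a valid code if necessary, then convert it to a fractional phase through a
# lookup table in the spirit of table 500.  The k/7 values assigned below are
# an assumed convention for this example, not the disclosure's column-D data.

def rotations(pattern):
    return [pattern[i:] + pattern[:i] for i in range(len(pattern))]

FRACTION = {}
for k, (cyclic, transition) in enumerate(zip(rotations("1110000"),
                                             rotations("1111000"))):
    FRACTION[cyclic] = k / 7.0        # cyclic valid state -> assumed k/7 phase
    FRACTION[transition] = k / 7.0    # its neighbouring transition state

def to_fraction(captured_code, decoder=lambda c: c):
    """Look up the fractional phase of a captured code; pass a decoder such as
    the decode() sketch above to handle invalid (two-ones) captures first."""
    return FRACTION[decoder(captured_code)]

print(to_fraction("1110000"))                 # 0.0   (rotation index k = 0)
print(round(to_fraction("1100001"), 3))       # 0.143 (rotation index k = 1)
```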
Also, invalid codes are decoded instead of discarded, which enables the phase to be corrected or verified at each update cycle. Decoding is performed with a simple decoding scheme. The decode logic may be adjusted as the number of bits changes. The term “couple” is used throughout the specification. The term may cover connections, communications, or signal paths that enable a functional relationship consistent with this description. For example, if device A generates a signal to control device B to perform an action, in a first example device A is coupled to device B, or in a second example device A is coupled to device B through intervening component C if intervening component C does not substantially alter the functional relationship between device A and device B such that device B is controlled by device A via the control signal generated by device A. A device that is “configured to” perform a task or function may be configured (e.g., programmed and/or hardwired) at a time of manufacturing by a manufacturer to perform the function and/or may be configurable (or re-configurable) by a user after manufacturing to perform the function and/or other additional or alternative functions. The configuring may be through firmware and/or software programming of the device, through a construction and/or layout of hardware components and interconnections of the device, or a combination thereof. A circuit or device that is described herein as including certain components may instead be adapted to be coupled to those components to form the described circuitry or device. For example, a structure described as including one or more semiconductor elements (such as transistors), one or more passive elements (such as resistors, capacitors, and/or inductors), and/or one or more sources (such as voltage and/or current sources) may instead include only the semiconductor elements within a single physical device (e.g., a semiconductor die and/or integrated circuit (IC) package) and may be adapted to be coupled to at least some of the passive elements and/or the sources to form the described structure either at a time of manufacture or after a time of manufacture, for example, by an end-user and/or a third-party. Circuits described herein are reconfigurable to include the replaced components to provide functionality at least partially similar to functionality available prior to the component replacement. Uses of the phrase “ground” in the foregoing description include a chassis ground, an Earth ground, a floating ground, a virtual ground, a digital ground, a common ground, and/or any other form of ground connection applicable to, or suitable for, the teachings of this description. Unless otherwise stated, “about,” “approximately,” or “substantially” preceding a value means+/−10 percent of the stated value. Modifications are possible in the described examples, and other examples are possible within the scope of the claims. | 34,423 |
11863194 | DETAILED DESCRIPTION The following description is presented to enable one of ordinary skill in the art to make and use the invention and to incorporate it in the context of particular applications. Various modifications, as well as a variety of uses in different applications will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to a wide range of embodiments. Thus, the present invention is not intended to be limited to the embodiments presented, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. In the following detailed description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without necessarily being limited to these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention. The reader's attention is directed to (i) all papers and documents which are filed concurrently with this specification and which are open to public inspection with this specification (the contents of all such papers and documents are incorporated herein by reference) and (ii) all papers and documents which are otherwise incorporated by reference herein (but not physically filed with this specification). All the features disclosed in this specification, (including any accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features. Furthermore, any element in a claim that does not explicitly state “means for” performing a specified function, or “step for” performing a specific function, is not to be interpreted as a “means” or “step” clause as specified in 35 U.S.C. Section 112, Paragraph6. In particular, the use of “step of” or “act of” in the claims herein is not intended to invoke the provisions of 35 U.S.C. 112, Paragraph6. The technology disclosed herein is related to that disclosed in the patent applications referenced above and extends and adapts the phononic combs described therein for application to atomic clocks102. In this case, the reference RF oscillator for a prior art atomic package, which is usually embodied as an Oven Controlled Crystal (Xtal) Oscillator (OCXO), is replaced with the comb-enhanced OCXO100, shown inFIG.1for example. The comb-enhanced OCXO100is coupled with a conventional atomic clock102for improving the clock signal generated thereby. For the comb enhanced OCXO100, SC-cut based oscillator circuits OCXO1and OCXO2are preferably used for (i) producing the driving signal S1(by OCXO1; also see the SC-cut quartz resonator30ofFIG.1bin the OCXO1sustaining circuit) and (ii) locking to a particular comb tooth (by OCXO2; also see the SC-cut quartz resonator60onFIG.1bin the OCXO2sustaining circuit) of the frequency comb S2(see alsoFIG.1a) produced by the driving signal S1in a nonlinear resonator element (see resonator40onFIG.1b, for example) in a NLXO circuit. Using SC-cut quartz resonators for elements30,40and60in the OCXO1, OCXO2and NLXO circuits is preferred due to their lower frequency sensitivity to temperature especially when ovenized.
The entire comb-enhanced OCXO100is preferably ovenized to reduce temperature related drifts as opposed to ovenizing the individual resonators30,40and60so that all three resonators are exposed to a common temperature environment. The loop bandwidth (BW) of the comb-enhanced OCXO100is set with a high pass frequency filter (see filter70onFIG.1b) with a corner frequency of 1-10 Hz. Thus, only phase noise above about 10 Hz offset frequencies is disciplined (stabilized) by the stability of the selected tooth of the comb. Based on initial measurements of the phase noise reductions provided by phononic comb teeth in quartz nonlinear resonators, the comb-enhanced atomic clock will demonstrate a reduction in phase noise from 10 Hz to 1 MHz offset frequencies by 20-40 dB. The atomic clock102is locked to the comb-enhanced OCXO100using a second servo loop output from the atomic clock with a low pass filter with a corner frequency<1 Hz. This disciplines the comb-enhanced OCXO100for long-term drifts (>1 sec) using the high stability of the atomic transitions. InFIGS.1,3and4the atomic clock is depicted using an atomic transition such as the electronic rubidium hyperfine transition for control. However, any other electronic atomic transition could be used such as the hyperfine transition of Cs atoms, transitions in Hg atoms, or optical transitions in Yb atoms, or clocks based on laser cooling or optical lattices to reduce Doppler line broadening and collisions. The corner frequencies shown onFIG.1mentioned above (and onFIGS.3and4) correspond to the −3 dB points where a single pole filter cuts off half of the power at the stated frequency. The filters may provide sharper filtering than a single pole filter would provide. Ideally the roll off or "sharpness" of the filter would be infinite, but a simple two or three pole filter is a better choice than a single pole filter and should be suitable for this application. An embodiment of the OCXO100ofFIG.1is now described in greater detail with reference toFIG.1bwith supporting data of the frequency comb shown inFIG.1a. FIG.1bshows the basic components of one embodiment of the OCXO100ofFIG.1: a first OCXO (OCXO1) comprising a resonator30, a second OCXO (OCXO2) acting as the sensor or oscillator and electronics comprising a mixer, a phase detector, and a PLL circuit50and resonator60. The first OCXO (OCXO1) includes the first resonator30that generates drive signal S1that is preferably amplified and stabilized with an automatic gain control circuit38and which drives the non-linear resonator40in the NLXO circuit. The resonator30has two metallic electrodes32disposed on opposing sides of a bar of quartz material forming the resonator30and connecting it with the OCXO1sustaining circuit. The output S1of resonator30at a frequency fDis applied to resonator40which has a non-linear resonant mode at a frequency fθ. With appropriate modal coupling within resonator40and at drive levels below the nonlinear Duffing bifurcation condition (for which fθis strongly dependent on the amplitude of S1), a frequency comb, S2, is generated by resonator40as shown inFIG.1a. The nonlinear response in the preferably quartz material of the resonator is evidenced by the generation of a frequency comb at fθ, fθ±Δ, fθ±2Δ, fθ±3Δ . . . fθ±nΔ, where Δ is the offset frequency which is equal to fD−fθ. When so driven, resonator40may be characterized as a non-linear resonator element in a non-linear resonator oscillator (NLXO) circuit.
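A small numerical sketch of the comb described above (ours; the frequency values are illustrative, not measured data) computes the tooth frequencies fθ±nΔ from the drive frequency fD and the nonlinear mode frequency fθ.

```python
# Sketch of the comb frequencies generated by the nonlinear resonator:
# teeth at f_theta ± n*Delta, where Delta = f_D - f_theta.
# The numeric values below are illustrative only.

def comb_teeth(f_drive, f_theta, n_teeth):
    """Return tooth frequencies f_theta, f_theta ± Delta, ..., ± n_teeth*Delta."""
    delta = f_drive - f_theta
    teeth = [f_theta]
    for n in range(1, n_teeth + 1):
        teeth.extend([f_theta - n * delta, f_theta + n * delta])
    return sorted(teeth)

# Example: drive near 100.166 MHz (as in the measured comb of FIG. 2) and a
# nonlinear mode offset by an assumed Delta of a few hundred Hz.
f_D = 100.1662e6          # Hz, drive frequency (illustrative)
f_theta = 100.1658e6      # Hz, nonlinear-mode frequency (illustrative)
for f in comb_teeth(f_D, f_theta, n_teeth=3):
    print(f"{f/1e6:.4f} MHz")
```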
The resonator40in the NLXO circuit has two metallic electrodes42disposed on a bar of quartz material forming resonator40, one of which is connected with an output of the automatic gain control circuit38and the other of which provides the output S2(the comb of frequencies) to circuit50. Circuit50is used to select a desired one of the teeth (frequencies) from the comb mentioned above which is then applied to high pass filter70. It has been observed, as shown inFIG.2, that for particular teeth of the comb (see the left hand side of the comb ofFIG.2) and for a range of drive frequencies, the output frequencies of a tooth can be independent (or substantially independent) of the drive frequency as evidenced by infinite slopes (or substantially infinite slopes) of the plots of the drive frequencies versus the comb output frequencies. An infinite slope is characterized by a vertical representation of a tooth frequency onFIG.3. While an infinite slope might be ideal in order for the output frequencies of the teeth to be independent (or substantially independent) of the drive frequency, a slope greater than 1 is an improvement in terms of making the output frequency of a selected tooth less dependent on the drive frequency fDthan just using the drive frequency fDof resonator30as a clock. So the selected tooth preferably has a slope whose absolute value is greater than 1. And more preferably, the selected tooth has the highest slope in the waterfall plot of frequency of the tooth versus changes in the frequency of the drive signal fDdepicted byFIG.2. In most oscillators, the far-out phase noise is determined by the electronic noise in the sustaining circuit. This noise will not be present on the modes of the resonator not used within a sustaining circuit; thus, by locking a second OCXO (OCXO2) to a selected one (an nthtooth) of these teeth within a PLL and using the error signal of the PLL to correct for relative changes in frequency between OCXO (OCXO2) and S2, the output frequency of the second OCXO (OCXO2) can be stabilized to a level provided by the selected nthtooth of the comb. Using a feedback tuning signal to the varactor in OCXO (1), one can tune the frequency of the drive signal S1to the highest slope and lowest noise condition. For ease of illustration, only the higher frequency teeth are specifically shown inFIG.1awhile teeth on either side of the drive frequency are identified onFIG.2, it being understood that the comb typically appears on both sides of the drive frequency fD, for example, at frequencies shown onFIG.2. For the drive frequency vs comb frequency response of resonator40shown byFIG.2, the output frequency of the second VCXO (VCXO2) might well be stabilized to a level provided by the first tooth to the left of the frequency of the drive oscillator (fD) at a frequency in the range of 100.1658-100.1666 MHz because the slope (its first derivative) of the drive frequency vs. the comb frequency of this tooth in that range is 2.96. The slope is even steeper (closer to infinite) when the drive oscillator (fD) is in the frequency range of 100.1660-100.1662 MHz. Ideally, the slope (its first derivative) of the drive frequency vs. the comb frequency should be as large as possible (and the slope is infinite when the comb frequency response depicted byFIG.2is exactly vertical). The “Undetermined” region is due to the fact that the slope was too large to make a slope calculation with the equipment used for these measurements. The slope of the drive frequency fDis not surprisingly equal to +1.
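Since the quoted slope is the first derivative of the drive frequency with respect to the comb (tooth) frequency, the tooth's sensitivity to drive-frequency fluctuations is roughly the reciprocal of that slope. The back-of-the-envelope sketch below is our own illustration of that point using the 2.96 value fromFIG.2.

```python
# Back-of-the-envelope sketch (ours): if the measured slope of the
# drive-frequency-vs-tooth-frequency curve is S, a small drive-frequency
# change df_D moves the tooth by roughly df_D / S, so |S| > 1 means the
# tooth is less sensitive to drive fluctuations than the drive itself.

def tooth_shift(drive_shift_hz, slope):
    """Approximate tooth-frequency shift for a given drive-frequency shift."""
    if slope == float("inf"):
        return 0.0                      # ideal case: tooth independent of drive
    return drive_shift_hz / slope

print(tooth_shift(1.0, 1.0))            # 1.0 Hz  -> the drive signal itself
print(tooth_shift(1.0, 2.96))           # ~0.34 Hz -> tooth measured in FIG. 2
print(tooth_shift(1.0, float("inf")))   # 0.0 Hz  -> ideal infinite slope
```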
It is angled slightly to the left inFIG.2. As the slopes of the teeth get closer to absolutely vertical (moving in a clock-wise direction onFIG.2from fD) they approach an infinite slope (where the tooth frequency is desirably independent of the drive frequency). As the slopes of the teeth move past vertical (again rotating in a clock-wise direction), the slope values become negative. So long as the slope of a selected tooth has an absolute value greater than one, it has a desirable lower dependency on the drive frequency fD. If the slope of a selected tooth increases still further, that results in a further improvement in terms of being insensitive to noise associated with the drive frequency fD. The output102-1of the servo lock built into the atomic clock102is applied to low pass filter72, and the outputs of the two filters70,72are applied in the implementation ofFIG.1bvia a summing junction or amplifier74to a tuning capacitor64in the OCXO2sustaining circuit to adjust the frequency at which resonator60resonates. That signal from resonator60forms the Stabilized Output frequency of the improved atomic clock120modified by the comb enhanced OCXO circuit100. The resonator60has two metallic electrodes62disposed on a bar of quartz material forming resonator60, one of which is connected with the tuning capacitor64and the other of which provides the Stabilized Output. The reader will note that the scale of the Drive Frequency is very narrow compared to the scale of the Measured Comb Frequency ofFIG.2, so a slope of1(of fD) is close to, but not quite, vertical in this graph. On either side of an infinite slope inFIG.2, the slopes of the teeth can have either positive or negative values. Since ideally the absolute value of the first derivative of the drive frequency versus the frequency of a tooth in the comb should have a value greater than 1 (and more preferably much greater than 1 and, even more preferably, infinite), operating the clock ofFIG.1utilizing a NLXO generating a tooth having a slope of either 2.96 or a slope marked “Undetermined” (for the depicted drive frequencies fD) would be the best option for a NLXO producing a comb as shown inFIG.2. A reduction in the sensitivity of particular comb teeth to the drive frequency fDvariations is attained if the first derivative of the drive frequency versus the frequency of a selected tooth of the comb has an absolute value greater than 1. The selected tooth should also have sufficient amplitude (see alsoFIG.2a) to be easily selected by the circuitry provided for tooth selection. InFIG.2the slopes listed in the upper row for the various teeth were measured over a range corresponding to the longer double-headed arrow, while the slopes listed in the lower row for the various teeth were measured over a range corresponding to the shorter double-headed arrow. This reduction in the sensitivity of particular comb teeth to the drive frequency variations is also shown inFIG.2awhere a 400 Hz FM modulation is added to the drive frequency. The increase in the signal-to-noise (S/N) ratio for the first tooth to the left of the drive frequency shows that this comb tooth can reduce noise on the drive signal. The inventors of the presently disclosed technology have observed that the high frequency jitter of the first VCXO (VCXO1) can be substantially reduced using this technique. Since a large portion of the phase noise of an oscillator is due to noise within the sustaining circuit (VCXO1in the embodiment ofFIG.1), this noise will not be present in resonator40.
Thus, the infinite slope region (left side of the comb inFIG.3) of a comb will tend to filter out the electronic noise. The above design can be implemented in a quartz MEMS process in which VCXO (1), resonator (1), resonator (2), and VCXO (2) are all integrated with the PLL on a common semiconductor (Si, for example) substrate using quartz piezoelectric resonators. This will provide a chip-scale oscillator with dimensions of roughly≤20 mm3(a single quartz MEMS TCXO has been demonstrated with dimensions of 2×3 mm2, see R. L. Kubena, et al., “A Fully Integrated Quartz MEMS VHF TCXO,” 2017 IEEE Frequency Control Symposium, Besancon, Fr., pp. 68-71, July 2017, which is hereby incorporated by reference). In addition, the components can be ovenized for additional stability over temperature leading to a comb-enhanced OCXO. Finally, although quartz resonators have demonstrated high-Q combs with these unique features, other MEMS resonators formed of materials such as Si or AlN could be utilized instead so long as they demonstrate the desired nonlinear and modal coupling effects. Additional embodiments are shown inFIGS.3and4. In the embodiment shown inFIG.3, the atomic clock is used to discipline the non-linear element of the NLXO to afford an overall long-term stability of the output. By applying a voltage to a varactor diode in series or parallel with the non-linear resonator (similar to that used for temperature compensating a TCXO), the resonator's resonances can be shifted relative to the drive signal to stabilize (lock) the comb teeth frequencies to the atomic transitions, thereby improving the short term stability of the output. InFIG.4the atomic package feedback is also applied to the nonlinear resonator as inFIG.3, but the second OCXO2and the PLL lock are replaced with a high-Q filter100-1(also identified as element51ofFIGS.4aand4b) to filter out all the teeth frequencies from the comb except for the selected tooth which preferably has the lowest phase noise (high S/N ratio and low sensitivity to input frequency fluctuations as described above with reference toFIGS.2and2a). In this embodiment, the output is not produced by OCXO2, which is disciplined by the comb, but rather by the comb itself. This simplifies the circuit and eliminates noise generated by the PLL, but does not produce as clean a signal at the output as the second embodiment (seeFIG.3), based on the attenuation of the filter, and does not allow tailoring the bandwidth of the feedback loop around the comb for more optimal performance. FIGS.3aand3bdepict possible implementations of the embodiment ofFIG.3whileFIGS.4aand4bdepict possible implementations of the embodiment ofFIG.4. The reader will note that in the implementation ofFIG.1b, the usual output102-1of the atomic clock102is applied to low pass filter72while the output of circuit50(used to select a desired one of the teeth) is applied to high pass filter70. The outputs of the two filters70,72are applied, in the implementation ofFIG.1b, via a summing junction or amplifier74to a tuning capacitor (a varactor diode, for example)64in the OCXO2sustaining circuit to adjust the frequency at which resonator60resonates. In the implementations ofFIGS.3aand3b, the outputs of the filters70and72are handled somewhat differently . . .
the output of filter70is applied to the tuning capacitor64of the OCXO2sustaining circuit (without an intervening summing junction) while the output of filter72is applied to a different tuning capacitor, namely either a tuning capacitor39arranged between the OCXO1and NLXO circuits (seeFIG.3a) or a tuning capacitor37in the OCXO1circuit (seeFIG.3b). In the embodiment ofFIG.4the second OCXO2and the PLL lock are replaced with the high-Q filter100-1so the filter70is no longer utilized. As in the implementations ofFIGS.3aand3b, the output of filter72is applied to a tuning capacitor, either a tuning capacitor39arranged between the OCXO1and NLXO circuits (seeFIG.4a) or a tuning capacitor37in the OCXO1circuit (seeFIG.4b). Other permutations to these ideas will now be apparent to those skilled in the art, such as using the atomic transitions to servo back to OCXO1to shift the drive frequency of the comb relative to the nonlinear resonator's modal frequencies for reducing the long-term drift of the output frequency. Having now described the invention in accordance with the requirements of the patent statutes, those skilled in this art will understand how to make changes and modifications to the present invention to meet their specific requirements or conditions. Such changes and modifications may be made without departing from the scope and spirit of the invention as disclosed herein. For example, it should now be apparent that the disclosed technology may be used to stabilize other reference oscillators than those associated with atomic clocks. The foregoing Detailed Description of exemplary and preferred embodiments is presented for purposes of illustration and disclosure in accordance with the requirements of the law. It is not intended to be exhaustive nor to limit the invention to the precise form(s) described, but only to enable others skilled in the art to understand how the invention may be suited for a particular use or implementation. The possibility of modifications and variations will be apparent to practitioners skilled in the art. No limitation is intended by the description of exemplary embodiments which may have included tolerances, feature dimensions, specific operating conditions, engineering specifications, or the like, and which may vary between implementations or with changes to the state of the art, and no limitation should be implied therefrom. Applicant has made this disclosure with respect to the current state of the art, but also contemplates advancements and that adaptations in the future may take into consideration those advancements, namely in accordance with the then current state of the art. It is intended that the scope of the invention be defined by the Claims as written and equivalents as applicable. Reference to a claim element in the singular is not intended to mean “one and only one” unless explicitly so stated. Moreover, no element, component, nor method or process step in this disclosure is intended to be dedicated to the public regardless of whether the element, component, or step is explicitly recited in the Claims. No claim element herein is to be construed under the provisions of 35 U.S.C. Section 112, as it exists on the date of filing hereof, unless the element is expressly recited using the phrase “means for . . . ” and no method or process step herein is to be construed under those provisions unless the step, or steps, are expressly recited using the phrase “comprising the step(s) of. . . . ”.
Modifications, additions, or omissions may be made to the systems, apparatuses, and methods described herein without departing from the scope of the invention. The components of the systems and apparatuses may be integrated or separated. Moreover, the operations of the systems and apparatuses may be performed by more, fewer, or other components. The methods may include more, fewer, or other steps. Additionally, steps may be performed in any suitable order. As used in this document, “each” refers to each member of a set or each member of a subset of a set. | 21,525 |
11863195 | DETAILED DESCRIPTION FIG.1is a diagram of an analog-to-digital converter (ADC) device100according to an embodiment of the invention. As shown inFIG.1, the ADC device100comprises a digital-to-analog converter (DAC) circuit105, a comparator circuit110, a successive approximation register (SAR) decision circuit115which operates based on the SAR algorithm, an oscillator circuit120having at least one delay unit, and a processing circuit125. The ADC device100for example (but not limited) is an asynchronous SAR ADC circuit device, and it is used to perform an ADC conversion operation to convert an input voltage signal Vin in analog domain into a digital output signal (or a digital code) DOUT having multiple output bits such as n bits in digital domain so as to generate and output the digital output signal DOUT at its output terminal; the input voltage signal Vin is fed into the input terminal of the ADC device100. In practice, the DAC circuit105for example (but not limited) comprises a subtractor1051and a DAC unit1052. The DAC circuit105has an input and an output, its input is coupled to the input voltage signal Vin to receive such input voltage signal Vin, and its output is coupled to the comparator circuit110. The comparator circuit110has an input coupled to the DAC circuit105, an output coupled to the SAR decision circuit115, and a clock input coupled to a clock signal CLK generated by the oscillator circuit120. The oscillator circuit120is coupled to the comparator circuit110, and it is used for generating the clock signal CLK according to the reset signal S_RST and a delay of the delay unit which is comprised by the oscillator circuit120. For example (but not limited), it may be a ring oscillator which may comprise a NOR gate logic1201and at least one delay unit such as delay units D1and D2. The delay unit D1is arranged to delay the clock signal with a delay time/amount TCOMP, and the delay unit D2is arranged to delay the output of delay unit D1with a delay time/amount TDAC. The delay time/amount TCOMP is associated with a time period consumed by the voltage comparison operation of comparator circuit110, and the delay time/amount TDAC is associated with an adjustable time period of DAC settling which can be controlled by the processing circuit125. The clock signal CLK, generated by the oscillator circuit120, is provided for the comparator circuit110and the processing circuit125. The processing circuit125is coupled to the oscillator circuit120and the SAR decision circuit115, and it for example (but not limited) comprises a delay stage circuit1251and a delay-locked-loop (DLL) control circuit1252. The delay stage circuit1251for example comprises a plurality of flip-flops FF1, FF2, . . . , and FFn which are connected in series. The DLL control circuit1252for example comprises a first flip-flop FFA, a second flip-flop FFB, an inverter INV1, and an up/down counter1253. A time period of an ADC conversion operation provided by the ADC device100may comprise multiple bit conversion/decision cycles such as n cycles respectively associated with n bits, and during each bit conversion cycle the SAR decision circuit115is arranged to determine the content of a corresponding bit according to the comparison signal CMPO currently outputted by the comparator circuit110and then to update the decision signal S_D based on the determined corresponding bit. The delay stage circuit1251is arranged for sequentially generating the multiple bit conversion signals S<1>, S<2>, . . . , and S<n> (i.e.
S<1:n>) to the SAR decision circuit115according to the reset signal S_RST and the clock signal CLK, so that the SAR decision circuit115can start a corresponding bit decision in response to a signal transition occurring in a specific bit conversion signal. For example, the SAR decision circuit115may start a bit decision of the most significant bit (MSB) in response to a signal transition (e.g. a rising edge) occurring in the bit conversion signal S<1>. Similarly, the SAR decision circuit115may start a bit decision of the least significant bit (LSB) in response to a signal transition (e.g. a rising edge) occurring in the bit conversion signal S<n>. That is, the multiple bit conversion signals S<1>, S<2>, . . . , and S<n> are respectively associated with the multiple different bits of the decision signal S_D. In addition, the bit conversion signal S<n> is further transmitted to the DLL control circuit1252. The DLL control circuit1252is coupled to the delay stage circuit1251and the oscillator circuit120, and it is used for generating at least one guard signal by delaying a last bit conversion signal, i.e. the bit conversion signal S<n>, for at least one time, comparing the at least one guard signal with the reset signal S_RST to generate a control signal S_C, and for controlling the adjustable delay time/amount TDAC generated by the delay unit D2of the oscillator circuit120according to the control signal S_C. The at least one guard signal follows the multiple bit conversion signals S<1>, S<2>, . . . , and S<n>. By using the delay stage circuit1251and DLL control circuit1252, the processing circuit125can control the delay unit D2to generate/adjust an adequate and appropriate delay time/amount TDAC so that each bit conversion cycle is long enough even though the ring oscillator circuit120may actually run too fast or too slow due to the process variation, the voltage variation, and/or the temperature variation made to any component(s) comprised within the ADC device100. The corresponding operations are detailed later. For the ADC conversion operation to convert the input voltage signal Vin into the digital signal DOUT having n bits, the DAC circuit105is used for generating the DAC voltage signal V_DAC (i.e. an analog signal) according to the input voltage signal Vin and the decision signal S_D (a digital decision signal) transmitted from the SAR decision circuit115. The digital decision signal S_D for example comprises multiple bits such as n bits. The DAC unit1052is arranged to perform the DAC operation to convert the digital decision signal S_D into an analog decision voltage signal V_D and output the analog decision voltage signal V_D into a second input of the subtractor1051. The subtractor1051has a first input coupled to the input voltage signal Vin, the second input coupled to the analog decision voltage signal V_D, and an output, and it is used to subtract the analog decision voltage signal V_D from the input voltage signal Vin to generate the DAC voltage signal V_DAC at its output as the DAC circuit's105analog output into the comparator circuit110. The comparator circuit110is used to compare the DAC voltage signal V_DAC with a reference level such as a ground level (but not limited) to generate the comparison signal CMPO into the SAR decision circuit according to the DAC voltage signal V_DAC. That is, the comparison signal CMPO can indicate whether the level of the DAC voltage signal V_DAC is greater than the reference level or not.
Then, based on the SAR algorithm, the SAR decision circuit115updates the bit(s) of the decision signal S_D (i.e. digital output signal DOUT) based on the information indicated by the comparison signal CMPO. The SAR decision circuit115may sequentially decide or determine each bit of the decision signal S_D (i.e. digital output signal DOUT) from the MSB bit to the LSB bit. For instance (but not limited), the decision signal S_D initially may have n bits ‘0’, and the SAR decision circuit115at the first bit conversion cycle is arranged to determine the MSB bit of the decision signal S_D to generate/update the decision signal S_D. If the comparison signal CMPO indicates that the level of the DAC voltage signal V_DAC is greater than the reference level, then the SAR decision circuit115determines that the MSB bit is equal to ‘1’, and the decision signal S_D is updated as ‘1’ followed by (n−1) bits ‘0’. Each time when determining a different bit, the SAR decision circuit115updates the outputted decision signal S_D. Based on the feedback circuit structure inFIG.1, after the n bit conversion cycles are sequentially finished, the ADC conversion operation can output the decision signal S_D as the digital output signal DOUT converted from the input voltage signal Vin. In this embodiment, the comparator circuit110performs the comparison operation each time when receiving a signal transition of the clock signal CLK generated from oscillator circuit120; that is, the clock signal CLK triggers the comparator circuit110. For the clock signal CLK, when the oscillator circuit120receives an incoming signal transition (e.g. a falling edge) of the reset signal S_RST, the oscillator circuit120starts to run or oscillate to generate the clock signal CLK with multiple signal edges at different timings. In this embodiment, the delay units D1and D2may be regarded as a specific delay unit since the delay units D1and D2are connected in series. The NOR gate logic1201has a first input terminal, a second input terminal, and an output terminal. The first input terminal is coupled to the reset signal S_RST to receive the reset signal S_RST, the second input terminal is coupled to an output of the delay unit(s) (D1and D2) to receive a delayed signal, and the output terminal is used to generate an output signal as the clock signal CLK to the input of the delay unit(s) (D1and D2). The delay unit(s) (D1and D2) is/are used to apply specific delay amount(s) upon the output signal CLK of the NOR gate logic1201to generate the delayed signal back to the second input terminal of the NOR gate logic1201to form the ring oscillator circuit's120structure. Further, for example, the plurality of flip-flops FF1, FF2, . . . , and FFn connected in series are respectively used for generating the multiple bit conversion signals S<1>, S<2>, . . . , and S<n> (i.e. S<1:n>). Each flip-flop for example is a D-type flip-flop which has a clock input to receive the clock signal CLK, a data input (‘D’), a reset/clear input (‘RST’) to receive the reset signal S_RST, an output (‘Q’), and an inverted output (Q, not shown inFIG.1). Each D-type flip-flop's clock input is coupled to the clock signal CLK generated by the oscillator circuit120, and its reset/clear input is coupled to the reset signal S_RST. The data input of the first one D-type flip-flop FF1is coupled to a high logic level such as a supply voltage level Vdd, and the output of the first one flip-flop FF1is coupled to the data input of a next-stage (or next one) D-type flip-flop FF2.
The data input of an intermediate one D-type flip-flop is coupled to the output of a previous-stage D-type flip-flop, and the output of the intermediate one flip-flop is coupled to the data input of a next-stage (or next one) D-type flip-flop. The data input of the last one D-type flip-flop FFn is coupled to the output of a previous-stage D-type flip-flop, and the output of the last one flip-flop FFn is coupled to the DLL control circuit1252. In addition, the outputs (‘Q’) of all the D-type flip-flops FF1-FFn are respectively used as the multiple bit conversion signals which are outputted to form the signal S<1:n> having N bits and transmitted to the SAR decision circuit115. The number of the D-type flip-flops is equal to n. Further, the first flip-flop FFA and second flip-flop FFB are for example (but not limited) D-type flip-flops. The data input of the first flip-flop FFA is coupled to the output of the last one D-type flip-flop FFn in the delay stage circuit1251, its clock input is coupled to the clock signal CLK, its reset input is coupled to the reset signal S_RST, and its data output is used for generating a first guard signal S<n+1> and is coupled to the data input of the second flip-flop FFB. The data input of the second flip-flop FFB is coupled to the first guard signal S<n+1>, its clock input is coupled to the clock signal CLK, its reset input is coupled to the reset signal S_RST, and its data output is used for generating a second guard signal S<n+2> and is coupled to the up/down counter1253through the inverter INV1. The input of the inverter INV1is coupled to the first guard signal S<n+1>, and its output is coupled to the up/down counter1253. The inverter INV1is used to invert the first guard signal S<n+1> to generate an inverted first guard signalS<n+1>to the up/down counter1253. The up/down counter1253is coupled to the reset signal S_RST, the inverted first guard signalS<n+1>, and the second guard signal S<n+2>, and it is used for comparing the reset signal S_RST with the inverted first guard signalS<n+1>to generate a first decision signal (not shown inFIG.1), comparing the reset signal S_RST with the second guard signal S<n+2> to generate a second decision signal (not shown inFIG.1), generating the control signal S_C to control the delay TDAC generated by the delay unit D2of the oscillator circuit120according to the first decision signal and the second decision signal. FIG.2is a diagram showing a scenario example of an ADC conversion time according to an embodiment of the invention. InFIG.2, the ADC conversion time can be allowed when the timing that the DAC circuit105has been settled is prior to the timing that the clock signal CLK goes to a high level at the next time and the multiple bit conversion cycles end and their level go to the high level before the reset signal S_RST switched from the low level to the high level. For example, for determining the MSB bit of the decision signal S_D, at time tA, the reset signal S_RST switched from the high level to the low level, and this event triggers the ring oscillator circuit120to run. In this situation, a rising edge occurs in the clock signal CLK, and also a rising edge occurs in the first bit conversion signal S<1> and it indicates that the bit conversion cycles for the MSB bit of decision signal S_D starts. The rising edge occurring in the clock signal CLK at time tA triggers the comparator circuit110inFIG.1to execute/perform the voltage comparison operation for one time for the current bit conversion cycle, i.e. 
MSB's bit conversion cycle. Then, at time tB, when the comparator circuit110finishes the voltage comparison operation, the comparator circuit110outputs the comparison signal CMPO to the SAR decision circuit115and issues or transmits the read signal RDY with the high level to the delay unit D1. In this situation, a rising edge occurs in the read signal RDY. For one or each bit conversion cycle, the delay unit D1is configured to provide the delay amount TCOMP, which may be configured as a particular delay amount initially, and the delay unit D2is configured to provide the delay amount TDAC which is adjustable and controlled by the processing circuit125. The delay unit D1is arranged to delay the rising edge of clock signal CLK with the delay amount TCOMP, and the read signal RDY causes that a transition, i.e. a falling edge, occurs in the clock signal CLK at time tC after the read signal RDY is sent to the delay unit D1and the delay of delay unit D1has elapsed. In this situation, at time tC, the voltage level provided by the DAC circuit105starts a transition. For example, it may be gradually lowered during the time period from time tC to time tE. The time tE means the timing that the bit conversion cycle of a different bit of the decision signal S_D starts. In addition, the falling edge occurring in the clock signal CLK at time tC will cause that a falling edge occurs in the clock signal CLK at time tD. It is needed to finish the settling of DAC circuit105before a rising edge occurs in the clock signal CLK at time tE, and the delay unit D2controlled by the processing circuit125is arranged to provide the delay amount TDAC to make and generate an appropriate and sufficient DAC settling time for the DAC circuit105. The bit conversion signals S<1>, S<2>, S<3>, . . . , and S<n> respectively indicate the bit conversion cycles of the different bits of the decision signal S_D, and their rising edges respectively indicate that the bit conversion cycles sequentially start one by one. It is required to make the LSB's bit conversion cycle, last performed and indicated by the signal S<n>, start prior to the time tF. That is, a specific margin is generated between the time tF and the end of an allowed whole time period of the ADC conversion time (i.e. a rising edge of the reset signal S_RST). Also, the start of LSB's bit conversion cycle may indicate that the ADC conversion operation completes. In this embodiment, to satisfy the requirement of the allowed ADC conversion time as well as precisely generate the converted digital code signal, the processing circuit125is arranged for providing and using the guard signal(s) such as S<n+1> (orS<n+1>) and/or S<n+2>, controlling the delay (or delay amount) TDAC generated by the delay unit D2of the oscillator circuit120to make a start of the first bit conversion signal S<1> associated with the first bit (e.g. the MSB) be close to a start (i.e. falling edge) of the reset signal S_RST and make a start of the at least one guard signal such as S<n+1> approach an end (i.e. rising edge) of the reset signal S_RST but not later than the end of the reset signal S_RST. The processing circuit125equivalently is arranged to control the delay TDAC so as to keep and make the end of the reset signal S_RST be later than a transition of the guard signal S<n+1> but not later than a transition of the guard signal S<n+2>.
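This timing rule can be restated compactly. The following sketch is an illustrative restatement only; in the actual circuit the comparison is carried out by the flip-flops FFA and FFB, the inverter INV1, and the up/down counter 1253, and representing the signal transitions as numeric edge times is an assumption made for the example.

```python
# Illustrative restatement of the guard-signal timing rule (not the circuit).
# t_guard1 / t_guard2: transition times of guard signals S<n+1> / S<n+2>,
# or None if no transition occurs; t_rst_end: rising edge of reset signal S_RST.

def classify_oscillator(t_guard1, t_guard2, t_rst_end):
    if t_guard1 is None or t_guard1 >= t_rst_end:
        # S<n+1> has not toggled before the reset ends: the oscillator runs
        # too slow, so the control signal S_C tunes TDAC to raise the speed.
        return "too slow"
    if t_guard2 is not None and t_guard2 < t_rst_end:
        # Even S<n+2> toggled before the reset ends: the oscillator runs
        # too fast, so TDAC is tuned to lower the speed.
        return "too fast"
    # The reset ends between the two guard transitions: TDAC is left as is.
    return "adequate"

# Example: guard transitions at t=9 and t=11 with the reset ending at t=10
# are classified as "adequate".
```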
In addition, the processing circuit125is arranged for adjusting the delay TDAC generated by the delay unit D2of the oscillator circuit120in response to a process variation, a voltage variation, or a temperature variation. For example (but not limited), the ring oscillator circuit120may operate and run too fast or too slow due to the process variation, the voltage variation, and/or the temperature variation made to any component (s) comprised within the ADC device100. This may cause that the start timings of the bit conversion cycles become too fast or too slow, and this affects the performance of ADC device100. To solve or mitigate the problems caused by the process variation, the voltage variation, and/or the temperature variation, the processing circuit125employs the two guard signals S<n+1> and S<n+2> to simulate or emulate two additional bit conversion cycles sequentially following the bit conversion cycles of the above-mentioned n bits, and controls or tunes the delay TDAC of delay unit D2to adjust the timings of the events of the two signal transition occurrences of the two additional bit conversion cycles, so as to make the rising edge of reset signal S_RST be at a timing between the event of a signal transition occurrence of a first additional bit conversion cycle and the event of a signal transition occurrence of a second additional bit conversion cycle. By doing this, the processing circuit125can make sure that time periods of the bit conversion cycles of the true bit conversion signals S<1>-S<n> can be enough and appropriate. This can effectively provide enough and appropriate bit conversion cycles for the n bits respectively. FIG.3is a diagram of a different scenario example of the ADC conversion time period according to another embodiment of the invention. In this scenario example, it is assumed that the ring oscillator circuit120may operate and run too fast. InFIG.3, when a transition such as a rising edge occurs at time t1in the last bit conversion signal S<n> corresponding to the LSB bit, this indicates that the ADC conversion operation is finished. In this example, a transition (such as a falling edge) of the inverted first guard signalS<n+1>(i.e. a rising edge of the first guard signal S<n+1>) occurs at time t2, and a transition (such as a rising edge) of the second guard signal S<n+2>) occurs at time t3. The timing of the end (i.e. the rising edge) of reset signal S_RST, occurring at time t4, is later than the time t3. Thus, in this situation, when the inverted first guard signalS<n+1>is switched into the low level at time t2and the reset signal S_RST is at the low level, the up/down counter1253compares the two signals, and the first decision signal generated by the up/down counter1253may be at the high level to indicate that the two signals are at the same level. For example, the first decision signal may be at the high level to indicate the same level and may be at the low level to indicate that the two signals are at different levels. Similarly, when the second guard signal S<n+2> is switched into the high level at time t3and the reset signal S_RST is at the low level, the up/down counter1253compares the two signals, and the second decision signal generated by the up/down counter1253may be at the low level to indicate that the two signals are at different levels. For example, the second decision signal may be at the high level to indicate the same level and may be at the low level to indicate that the two signals are at different levels. 
In this example, the first decision signal (indicating the fast speed, ‘UP’) at the high level and the second decision signal (indicating the slow speed, ‘DN’) at the low level can indicate that the oscillator circuit120runs too fast, and thus the processing circuit125can adjust the control signal S_C to tune the delay amount TDAC to lower the operation speed of the oscillator circuit120. FIG.4is a diagram of a different scenario example of the ADC conversion time period according to another embodiment of the invention. In this scenario example, it is assumed that the ring oscillator circuit120may operate and run too slow. InFIG.4, when a transition such as a rising edge occurs at time t5in the last bit conversion signal S<n> corresponding to the LSB bit, this indicates that the ADC conversion operation is finished. In this example, the timing of the end (i.e. the rising edge) of reset signal S_RST occurs at time t6. A transition (such as a falling edge) of the inverted first guard signalS<n+1>(i.e. a rising edge of the first guard signal S<n+1>) occurs at time t7which is later than time t6. No transitions occur in the inverted first guard signalS<n+1>or in the second guard signal S<n+2> before the time t6. In this example, since the inverted first guard signalS<n+1>is at the high level before time t6and the reset signal S_RST is at the low level, the up/down counter1253compares the two signals, and the first decision signal generated by the up/down counter1253may be at the low level to indicate that the two signals are at different levels. In addition, since the second guard signal S<n+2> is at the low level and the reset signal S_RST is at the low level, the up/down counter1253compares the two signals, and the second decision signal generated by the up/down counter1253may be at the high level to indicate that the two signals are at the same level. In this example, the first decision signal (indicating the fast speed, ‘UP’) at the low level and the second decision signal (indicating the slow speed, ‘DN’) at the high level can indicate that the oscillator circuit120runs too slow, and thus the processing circuit125can adjust the control signal S_C to tune the delay amount TDAC to increase or raise the operation speed of the oscillator circuit120. It should be noted that in the example ofFIG.4the processing circuit125may determine that the oscillator circuit120is slow even though the rising edge of the last bit conversion cycle S<n> occurs before the rising edge of the reset signal S_RST since a sufficient margin time period should be guaranteed and in this example the margin time may not be enough. However, this is not intended to be a limitation of the invention. FIG.5is a diagram of a different scenario example of the ADC conversion time period according to another embodiment of the invention. In this scenario example, it is assumed that the ring oscillator circuit120may operate and run adequately. InFIG.5, when a transition such as a rising edge occurs at time t8in the last bit conversion signal S<n> corresponding to the LSB bit, this indicates that the ADC conversion operation is finished. In this example, a transition (such as a falling edge) of the inverted first guard signalS<n+1>(i.e. a rising edge of the first guard signal S<n+1>) occurs at time t9, and the timing of the end (i.e. the rising edge) of reset signal S_RST occurs at time t10. No transitions occur in the second guard signal S<n+2> before the time t10.
In this example, when the inverted first guard signalS<n+1>is switched into the low level at time t9and the reset signal S_RST is at the low level, the up/down counter1253compares the two signals, and the first decision signal generated by the up/down counter1253may be at the high level to indicate that the two signals are at the same level. In addition, since the second guard signal S<n+2> is at the low level and the reset signal S_RST is at the low level before time t10, the up/down counter1253compares the two signals, and the second decision signal generated by the up/down counter1253may be at the high level to indicate that the two signals are at the same level. In this example, the first decision signal (indicating the fast speed, ‘UP’) at the high level and the second decision signal (indicating the slow speed, ‘DN’) at the high level can indicate that the oscillator circuit120runs adequately and appropriately, and thus the processing circuit125is arranged to not adjust or tune the delay amount TDAC in this situation. To make readers more clearly understand the spirit of the invention,FIG.6is provided.FIG.6is a schematic flowchart diagram of the adjusting operation of ADC device100inFIG.1according to an embodiment of the invention. Provided that substantially the same result is achieved, the steps of the flowchart shown inFIG.6need not be in the exact order shown and need not be contiguous, that is, other steps can be intermediate. Steps are detailed in the following: Step S605: Start; Step S610: Provide the SAR ADC device100; Step S615: Use the oscillator circuit120of SAR ADC device100to generate the clock signal according to the reset signal S_RST and the delay of the delay unit; Step S620: Sequentially generate multiple bit conversion signals associated with multiple different bits of the decision signal of SAR ADC device100; Step S625: Generate at least one guard signal which follows the multiple bit conversion signals; Step S630: Compare the at least one guard signal with the reset signal S_RST to adjust the delay generated by the delay unit of the oscillator circuit120; and Step S635: End. Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims. | 26,997
11863196 | DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS The making and using of the presently preferred embodiments are discussed in detail below. It should be appreciated, however, that the present invention provides many applicable inventive concepts that can be embodied in a wide variety of specific contexts. The specific embodiments discussed are merely illustrative of specific ways to make and use the invention, and do not limit the scope of the invention. In the following detailed description, reference is made to the accompanying drawings, which form a part hereof and in which are shown by way of illustrations specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present invention. For example, features illustrated or described for one embodiment can be used on or in conjunction with other embodiments to yield yet a further embodiment. It is intended that the present invention includes such modifications and variations. The examples are described using specific language, which should not be construed as limiting the scope of the appending claims. The drawings are not scaled and are for illustrative purposes only. For clarity, the same or similar elements have been designated by corresponding references in the different drawings if not stated otherwise. According to embodiments, a sigma-delta ADC for use in a microphone includes an on-demand DAC, wherein at least one of the DAC elements is always on, and a plurality of the remaining DAC elements are “on-demand,” wherein the DAC element is progressively enabled according to a magnitude of the analog input signal presented to the ADC. When the magnitude of the analog input signal is low, one or more of the on-demand DAC elements will be disabled. Disabling the DAC element can include switching off a buffer of the DAC element, or placing the buffer into a low power mode. Disabling the DAC element can also include coupling an output of the DAC element to a common node, so that the thermal noise of the DAC element is not transferred to the output of the DAC, and thus to the output of the ADC, and to the microphone. Enabling the DAC element can include switching on the buffer of the DAC element, or placing the buffer into a normal power mode. Enabling the DAC element can also include decoupling the output the DAC element from the common node, so that the DAC element is a functional element of the DAC. The number of DAC elements that are progressively enabled can correspond to the magnitude of the analog input signal such that a lower magnitude will cause a lower number of DAC elements to be enabled, and a higher magnitude will cause a greater number of DAC elements to be enabled. According to embodiments, for small signals the ADC will operate in a single bit operational mode, wherein the inner DAC elements (which can also be described as “cells” or “DAC cells”) will be toggled. When the input signal is increased, more on-demand DAC elements can be connected to fulfill full-scale specification requirements and to keep the ADC loop stable, as will be described in further detail below. The DAC elements can be driven in a “split-buffer” configuration, allowing not only the DAC element to be disabled, but also the power consumed by buffers of the DAC element to be saved (buffer switched off or placed into a low power mode). 
Preload conditions can also be applied to the DAC element to diminish dynamic artifacts. The inner DAC elements and the DAC element buffers can also be combined to reduce mismatch effects, which may be important for minimizing ultrasound artifacts in some embodiments. In some embodiments, DAC elements that are not enabled can also be kept switching to maintain dynamic loading conditions without injecting noise into an integrator virtual ground node, which is described below in further detail. FIG.1is a block diagram of an exemplary sigma-delta ADC100. ADC100receives an analog input signal Vin_ana and provides a digital output signal Out_digi. The analog input signal Vin_ana is received by a positive input of summer102. The output of summer102is coupled to a loop filter104, which may include an integrator and other filtering and noise-shaping circuitry. The output of loop filter104is coupled to a quantizer106, which maps the analog signal at the output of loop filter104into a one-bit digital signal. The one-bit digital signal is also received by DAC108A, which can comprise a one-bit DAC. The output of DAC108is coupled to a negative input of summer102. FIG.2Ais a block diagram of a digital microphone150including a micro-electro-mechanical system (MEMS) device152for converting sound waves into an analog signal, and an Application-Specific Integrated Circuit (ASIC)154for receiving the analog signal and for biasing MEMS device152through a matching set of MEMS and ASIC pads162, according to an embodiment. MEMS device152can comprise a silicon variable capacitance device, comprising stationary BACKPLATE1and BACKPLATE2structures, and a movable MEMBRANE structure, in an embodiment. ASIC154comprises a plurality of analog, digital, and mixed signal components, including a programmable gain amplifier (PGA)160for amplifying the signal provided by the MEMS device152; the PGA160provides the Vin_ana analog input signal in the digital microphone implementation. ASIC154also includes a sigma-delta ADC200, according to an embodiment, which is described in further detail below. The output of ADC200is coupled to additional digital signal processing components in block164, which is in turn coupled to a 1-bit Pulse-Density Modulation (PDM) interface component170, which is in turn coupled to the DATA, SELECT, and CLOCK pins of ASIC154. Digital signal processing block164is also coupled to a digital core block166, which can include additional memory and computation circuitry. Storage block168provides calibration coefficients to the digital core block166. ASIC154can also include a power mode detector172coupled to the digital core block166and the 1-bit PDM interface component170. ASIC154also includes a MEMS bias charge pump156for biasing MEMS device152, and a plurality of voltage regulators158coupled to the GROUND and VDD ASIC pins, and coupled to various ASIC blocks and components such as ADC200, digital signal processing block164, and the 1-bit PDM interface component. FIG.2Bincludes a block diagram of the sigma-delta ADC200in the left portion of the figure.FIG.2Balso includes a diagram of a 4-bit DAC108B in the right portion of the figure. DAC108B comprises a plurality of DAC cells in location206B and location206C that are progressively enabled in response to a digital input code (1, 2, 3, . . . 14, 15), including improvements directed to noise reduction and/or power savings with respect to prior art sigma-delta ADCs, according to an embodiment. An individual DAC cell is represented as a circle inFIG.2B.
For some digital input codes all of the DAC cells are enabled. For example, in the top row all available DAC cells are enabled (corresponding to code {15, 0}) and thus the entire row of DAC cells is represented by circles. For other digital input codes a subset of the DAC cells is enabled. For example, in the bottom row only one of the available DAC cells is enabled (corresponding to code {7,8}) and thus only one DAC cell is represented by a circle. ADC200receives the analog input signal Vin_ana (from PGA160shown inFIG.2A) and provides the digital output signal Out_digi (at the DATA pin shown inFIG.2A). The analog input signal Vin_ana is received by a positive input of summer102. The output of summer102is coupled to a loop filter104, which may include an integrator and other filtering and noise-shaping circuitry. The output of loop filter104is coupled to a quantizer106, which maps the analog signal at the output of loop filter104into a multi-bit digital signal. The multi-bit digital signal is also received by a first decoder202and a second decoder204. The first decoder202can comprise an optional binary-to-thermometer code decoder, and the second decoder204can comprise DAC logic for controlling switches in a plurality of on-demand DAC elements, which will be described in further detail below, particularly with respect to the descriptions ofFIG.5andFIG.6. The input of DAC108A, which can comprise a four-bit DAC in some embodiments, is modified according to embodiments and also described in further detail below with respect to the descriptions ofFIG.5andFIG.6. Alternatively, DACs having other resolutions can also be used. The output of DAC108is coupled to a negative input of summer102, according to an embodiment. In other embodiments, summer102may be merged with loop filter104. FIG.2Bshows a diagram representing how the DAC elements are activated with respect to DAC input codes in DAC108B, as previously described. In embodiments, each DAC element includes a buffer for providing a high impedance input to the DAC element, a capacitor, and a switch matrix coupled to the capacitor. The input to each DAC element receives a single bit of the input digital code, and the output of each DAC element is selectively coupled together to sum the capacitor charge to an output summing node, which, in turn, provides the converted analog output voltage. As described herein, “enabling” a DAC element means placing the buffer of the DAC element into a normal power mode or otherwise turning on the buffer, and/or coupling the DAC element to the output summing node of the DAC. Correspondingly, “disabling” a DAC element means placing the buffer of the DAC element into a low power mode or otherwise turning off the buffer, and/or decoupling the DAC element from the output summing node of the DAC. The buffer, capacitor, and switch matrix of each DAC element in DAC108B is shown and described in further detail below, for example with respect toFIG.4. FIG.2Bshows at least one always-on DAC element in location206A, and a plurality of on-demand DAC elements in locations206B and206C. In the example ofFIG.2B, five always-on DAC elements are shown, and the remaining DAC elements are on-demand DAC elements. InFIG.2B, the number of DAC elements that are enabled is shown as an inverted pyramid of DAC elements, from a minimum of one to a maximum of sixteen (for a four-bit DAC). The DAC elements are arranged in three locations: an always-on location206A, and two on-demand locations206B and206C.
In an embodiment, locations206B and206C are symmetrically arranged on either side of location206A in a plan view of the DAC layout. In an embodiment, all of the buffers of the DAC elements in the always-on location206A are turned on, or in a normal power mode. However, not all of the DAC elements are necessarily enabled, depending upon the magnitude of the analog input signal. Thermometer codes210are shown for each level of the analog input signal, from code {7,8} through code {0,15}. For example, in response to thermometer codes {7,8}, which corresponds to a minimum level input signal (about 75 μA in an embodiment), only one DAC element is enabled. In another example, in response to thermometer codes {0,15}, which corresponds to a maximum level input signal (about 105 μA in an embodiment), all of the DAC elements are enabled. For intermediate thermometer codes, corresponding to intermediate input signal levels, additional DAC elements are progressively enabled corresponding to the level of the input signal. FIG.3is a graph300of the number of DAC cells that are enabled (in the sense of being coupled to the DAC output), which increases with the magnitude of an analog input signal. The number of DAC cells to be used is represented by trace302, from one DAC cell for a minimum input analog signal to a maximum number of DAC cells for the maximum input analog signal. While trace302is shown as representing a linear staircase relationship between the number of DAC cells to be used and the magnitude of the analog input signal, other progressive relationships can also be used. FIG.4is a schematic of DAC108B, which is coupled to integrator412. DAC108B was described above with respect toFIG.2Bas including a plurality of DAC elements, wherein each DAC element includes a buffer and a capacitor, wherein the capacitor is coupled to a switch matrix. Integrator412(which can be a part of loop filter104shown inFIG.2B) is described in further detail below. DAC108B includes a plurality of progressively enabled buffers A1, A2, A3, and A4, and symmetrical progressively enabled buffers A12, A13, A14, and A15, and a plurality of always-on buffers A6, A7, A8, A9, A10, and A11. In an embodiment, the outputs of the progressively enabled buffers A1, A2, A3, A4, A12, A13, A14, and A15are coupled to the outputs of the symmetrical progressively enabled buffers. For example, the output of buffer A3is coupled to the output of buffer A13. In another example, the output of buffer A2is coupled to the output of buffer A14. In an embodiment, the outputs of the always-on buffers are coupled together. Five always-on buffers are shown inFIG.4, but the number of always-on buffers can be changed in other embodiments. At a minimum DAC108B comprises at least one always-on buffer. DAC108B also comprises a block including a plurality of progressively enabled 4-bit DAC elements and a switch matrix in block414. Block414comprises portions of the individual DAC elements, which can comprise a capacitor in some embodiments. Block414also comprises a plurality of switches coupled to both ends of the capacitor for selectively coupling the capacitor either to a common node, wherein the thermal noise component is prevented from adding to the output signal of the ADC integrator412, or to the input of ADC integrator412. The details of block414are shown and described in further detail below, particularly with respect to drawingFIGS.5and6. Integrator412is part of loop filter104, which will be explained in further detail below.
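One way to visualize the linear staircase of trace 302 is the short sketch below. It is illustrative only: the 75 μA and 105 μA end points are the example input levels mentioned above, the sixteen-cell total follows the four-bit example of FIG. 2B, and the rounding used to build the staircase is an assumption, since the text allows other progressive relationships as well.

```python
# Illustrative sketch of the progressive enablement of FIG. 3 (trace 302):
# one DAC cell enabled at the minimum input level, all cells at the maximum.
# End points (75 uA / 105 uA) are the example values from the text; the
# linear-staircase rounding and the function name are assumptions.

MIN_LEVEL_UA = 75.0
MAX_LEVEL_UA = 105.0
TOTAL_CELLS = 16  # four-bit DAC example of FIG. 2B

def enabled_cell_count(input_level_ua):
    """Number of DAC cells to enable for a given analog input magnitude."""
    if input_level_ua <= MIN_LEVEL_UA:
        return 1
    if input_level_ua >= MAX_LEVEL_UA:
        return TOTAL_CELLS
    fraction = (input_level_ua - MIN_LEVEL_UA) / (MAX_LEVEL_UA - MIN_LEVEL_UA)
    return 1 + round(fraction * (TOTAL_CELLS - 1))
```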
FIG.5is a top level block diagram of a differential ADC500, according to an embodiment. ADC500includes a reference voltage generator502, a low pass filter504coupled to the output of reference voltage generator502, and a plurality of DAC channels600(described in further detail below with respect toFIG.6) coupled to circuit nodes616and618. The reference voltage generator502can comprise a bandgap reference voltage generator in an embodiment. Low pass filter504can comprise either dedicated logic circuitry or software instructions for configuring a microprocessor. In an embodiment, circuit nodes616and618are the positive and negative inputs, respectively, of integrator412. A MEMS device (such as MEMS device152shown inFIG.2A) provides an analog differential signal Vinp and Vinm coupled to a first set of switches508controlled by two non-overlapping clock phases ϕ1and ϕ2to selectively transfer the analog differential signal to the capacitors CDAC3and CDAC4, respectively, or to capacitors CDAC4and CDAC3, respectively. Capacitors CDAC3and CDAC4are also coupled to circuit nodes616and618. A second set of switches510is controlled by the two non-overlapping clock phases ϕ1and ϕ2to selectively couple the negative output of integrator412to the positive input through a first integration capacitor Cint1or a direct connection. A third set of switches512is controlled by the two non-overlapping clock phases ϕ1and ϕ2to selectively couple the positive output of integrator412to the negative input through a second integration capacitor Cint2or a direct connection. The outputs of integrator412are coupled to quantizer106, previously described with respect toFIG.2A, through additional integrator and/or filter stages, in some embodiments. The output of quantizer106provides the digital output DAC_VAL[1:m] of ADC500, which is the digital converted signal corresponding to the analog differential signal Vinp and Vinm. The DAC_VAL[1:m] digital signal, which can comprise a multi-bit digital signal in an embodiment, is fed back to the DAC channels600through an optional binary-to-thermometer code converter, which corresponds to the first decoder202shown inFIG.2A. The DAC_VAL[1:m] digital signal is also fed back to the DAC channels600through a DAC logic block, which corresponds to the second decoder204shown inFIG.2A. Second decoder204generates the Dis_cap[1:m] and Dis_buff[1:m] control signals. The first decoder202and second decoder204can also comprise either dedicated logic circuitry or software instructions for configuring a microprocessor. In operation, ADC500converts the analog differential signal Vinp and Vinm into a digital output code. However, as previously described, at least some of the DAC elements in DAC channels600can be selectively enabled to provide power savings and to provide a reduction in thermal noise appearing in the digital output code. In some embodiments the selective enablement can be performed progressively corresponding to the magnitude of the input analog signal. The power savings is provided by selectively disabling or lowering the power of buffers in the on-demand DAC elements within DAC channels600. The noise reduction is provided by selectively coupling a capacitor in the DAC element to a common node, so that the thermal noise contribution of the capacitor is not included in the digital output code. In some embodiments the power savings and noise reduction are combined in a single DAC element. In some embodiments, only the noise reduction is used in a single DAC element.
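To illustrate how the Dis_buff[1:m] and Dis_cap[1:m] control signals could follow from the number of DAC elements requested by the fed-back code, the sketch below marks unused on-demand elements as disabled. It is an assumption-laden illustration, not the patented DAC logic of the second decoder 204: the element ordering, the active-high polarity of the disable signals, and the five always-on elements (taken from the FIG. 2B example) are all assumptions.

```python
# Illustrative decode of per-element control signals (not decoder 204 itself).
# Assumptions: always-on elements are indexed first, disable signals are
# active-high, five elements are always on as in the FIG. 2B example.

ALWAYS_ON = 5
TOTAL = 16

def decode_disable_signals(requested_elements):
    """Return (dis_buff, dis_cap) lists for all DAC elements."""
    used = max(requested_elements, 1)  # at least one element stays in use
    dis_buff, dis_cap = [], []
    for i in range(TOTAL):
        in_use = i < used
        always_on = i < ALWAYS_ON
        # Buffer goes to low power only for unused on-demand elements;
        # always-on buffers stay in the normal power mode.
        dis_buff.append((not always_on) and (not in_use))
        # Any unused element has its capacitor routed to the common node vcm
        # so its thermal noise does not reach the ADC output.
        dis_cap.append(not in_use)
    return dis_buff, dis_cap
```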
Power reduction and noise reduction are thus two different aspects of the DAC described herein, according to embodiments. The noise is reduced because the capacitor (or DAC element) is disconnected from the output of the DAC, such that the thermal noise of the capacitor does not appear in the signal output. The power savings is realized from shutting down the buffer and stopping the switching and the charging and discharging of the capacitor. FIG.6is a schematic diagram of a single DAC channel600, according to an embodiment. Single DAC channel600includes a buffer602, which corresponds to any of the buffers A1, A2, A3, . . . A13, A14, and A15shown inFIG.4, and DAC channel604, which corresponds to an individual channel portion of block414shown inFIG.4. Buffer602is coupled between a power source such as VDD, and ground, in an embodiment, and receives a Vref input voltage and provides a buffered VR output voltage. Buffer602comprises a PMOS current mirror606including a 1x sized input transistor, an mx sized output transistor, and a 1x sized output transistor. The input of current mirror606receives an Iref input bias current, and provides two output currents having values of m*Iref, and Iref. PMOS transistor P1, which operates as a source follower that buffers the Vref voltage, provides a current sink and is selectively coupled to receive an (m+1)*Iref output current or Iref output current from current mirror606depending upon the state of switch607. Switch607is controlled by the Dis_buff[i] control signal. If switch607is closed, buffer602is in a normal power mode and the currents from both output transistors of current mirror606are sunk through PMOS transistor P1, and if switch607is open, buffer602is in a low power mode and only the smaller of the two currents provided by current mirror606is sunk through PMOS transistor P1. In some embodiments a single output current mirror may be used such that the buffer is either completely disabled or enabled based on the state of the Dis_buff[i] control signal. In the low power mode, buffer602presents a “weak” reference voltage (VR) to the input of DAC channel604. DAC channel604comprises a first set of switches608coupled to DAC capacitors CDAC1and CDAC2. In an embodiment, DAC capacitors CDAC1and CDAC2have the same capacitance value. DAC capacitors CDAC1and CDAC2are in turn coupled to a second set of switches610. The second set of switches610is coupled to a third set of switches612, which are in turn coupled to circuit nodes616and618. Circuit nodes616and618are coupled to an integrator as will be explained in further detail below. The first set of switches608is controlled by two non-overlapping clock phases ϕ1and ϕ2to selectively couple the VR buffered reference voltage to either DAC capacitors CDAC1or CDAC2, or to ground. In some embodiments, the first set of switches608can be completely turned off, for example with one or more on-demand DAC elements. However, for the always-on DAC elements, the first set of switches608is kept switching to provide a constant dynamic load at the VR node (for mismatch reasons). The second set of switches610is controlled by the Dis_cap[i] control signal to selectively couple the output of DAC capacitors CDAC1and CDAC2to a common mode voltage source vcm, or to the third set of switches612. When the second set of switches610is closed, the thermal noise generated by the DAC capacitors is routed to the common mode voltage source vcm, such that the noise will not appear in the output of the ADC.
When the second set of switches610is open, DAC channel604is operating in a normal mode of operation (enabled). The third set of switches is controlled by the d1and d2control signals to selectively couple, based on the output of the DAC, the outputs of DAC capacitors CDAC1and CDAC2to circuit nodes616and618, respectively, or to circuit nodes618and616, respectively. If the DAC capacitors are routed to the common mode voltage source vcm, then all of the switches in the third set of switches612are opened. Control signals d1and d2are generated by a DAC control block614, which receives two input signals DAC_VAL[i] and Dis_cap[i]. The source of these input signals is explained above with respect to the description ofFIG.5. In an embodiment, DAC control block614comprises either dedicated logic circuitry or software instructions for configuring a microprocessor. FIG.7is a flow chart of a method700of operating a DAC, according to an embodiment, comprising coupling at least one always-on DAC element to an output node of the DAC at step702; and progressively coupling a plurality of on-demand DAC elements to the output node of the DAC corresponding to progressively increasing input digital codes received by the DAC at step704. Method700further comprises progressively decoupling the plurality of on-demand DAC elements from a common node of the DAC in embodiments. The common node vcm is shown inFIG.5. Decoupling DAC capacitors CDAC1and CDAC2, previously described, requires the action of the second set of switches610to electrically decouple (electrically insulate) the DAC capacitors from the common node. The common node vcm is a separate node that is different from the DAC ground node, in an embodiment. Method700further comprises progressively enabling a buffer of each of the plurality of on-demand DAC elements in embodiments, wherein enabling the buffer of each of the plurality of on-demand DAC elements comprises switching the buffer from an OFF mode of operation to an ON mode of operation, or wherein enabling the buffer of each of the plurality of on-demand DAC elements comprises switching the buffer from a low power mode of operation to a high (normal) power mode of operation. Switch607, previously described and shown inFIG.6, selectively places the buffer in the low power mode of operation or the normal mode of operation. Example embodiments of the present invention are summarized here. Other embodiments can also be understood from the entirety of the specification and the claims filed herein.Example 1. An analog-to-digital converter (ADC) includes a loop filter having an input for receiving an analog input signal; a quantizer having an input coupled to an output of the loop filter, and an output for providing a digital output signal; and a digital-to-analog converter (DAC) having an input coupled to an output of the quantizer, and an output coupled to the loop filter, wherein the DAC includes at least one always-on DAC element, and a plurality of on-demand DAC elements.Example 2. The ADC of Example 1, wherein the plurality of on-demand DAC elements is configured for progressive enablement corresponding to an increase in the analog input signal.Example 3. The ADC of any of the above examples, wherein the plurality of on-demand DAC elements includes a plurality of internally-switched buffers.Example 4. The ADC of any of the above examples, wherein the plurality of on-demand DAC elements are selectively coupled to a common node of the DAC.Example 5.
The ADC of any of the above examples, wherein the at least one always-on DAC element includes a first buffer that is configured to remain in a first mode, and wherein each of the plurality of on-demand DAC elements includes a second buffer that is configured to be selectively switched between the first mode and a second mode.Example 6. The ADC of any of the above examples, wherein the at least one always-on DAC element is arranged in a first portion of the DAC, and wherein the plurality of on-demand DAC elements includes a plurality of symmetrically-paired on-demand DAC elements arranged in second and third portions of the DAC adjacent to the first portion of the DAC.Example 7. The ADC of any of the above examples, wherein the at least one always-on DAC element and each of the plurality of on-demand DAC elements include a buffer configured for receiving a reference voltage; a capacitor; a switch matrix coupled between the capacitor and the buffer.Example 8. The ADC of any of the above examples, wherein the at least one always-on DAC element includes a plurality of always-on DAC elements, and wherein each buffer of the plurality of always-on DAC elements are coupled together.Example 9. The ADC of any of the above examples, wherein an output of a buffer in the plurality of on-demand DAC elements is coupled to an output of a symmetrically-paired buffer in the plurality of on-demand DAC elements.Example 10. The ADC of any of the above examples, wherein each buffer in the plurality of on-demand DAC elements includes a switch configured for selecting between two biasing currents.Example 11. A digital-to-analog converter (DAC) include a plurality of DAC elements, wherein each DAC element has an input for receiving a digital input signal; and a switch matrix coupled to the plurality of DAC elements having an output for providing an analog output signal, wherein the plurality of DAC elements includes at least one always-on DAC element, and a plurality of on-demand DAC elements.Example 12. The DAC of Example 11, wherein the plurality of on-demand DAC elements is configured to be progressively enabled.Example 13. The DAC of any of the above examples, wherein each DAC element includes a buffer; and a capacitor coupled to the buffer.Example 14. The DAC of any of the above examples, wherein the at least one always-on DAC elements includes a plurality of always-on elements, and wherein each always-on element is coupled together.Example 15. The DAC of any of the above examples, wherein one of the plurality of on-demand DAC elements is coupled to a symmetrically-arranged on-demand DAC element.Example 16. A method of operating a digital-to-analog converter (DAC) includes coupling at least one always-on DAC element to an output node of the DAC; and progressively coupling a plurality of on-demand DAC elements to the output node of the DAC corresponding to progressively increasing input digital codes received by the DAC.Example 17. The method of Example 16, further comprising progressively decoupling the plurality of on-demand DAC elements from a common node of the DAC.Example 18. The method of any of the above examples, further comprising progressively enabling a buffer of each of the plurality of on-demand DAC elements.Example 19. The method of any of the above examples, wherein enabling the buffer of each of the plurality of on-demand DAC elements includes switching the buffer from an OFF mode of operation to an ON mode of operation.Example 20. 
The method of any of the above examples, wherein enabling the buffer of each of the plurality of on-demand DAC elements includes switching the buffer from a low power mode of operation to a high power mode of operation. While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is therefore intended that the appended claims encompass any such modifications or embodiments. | 28,775 |
11863197 | DETAILED DESCRIPTION OF THE INVENTION FIG.1is a schematic block diagram of an embodiment of a communication system100that includes a plurality of computing devices12, one or more servers22, one or more databases24, one or more networks26, a plurality of analog to digital converters (ADCs)28, a plurality of sensors30, and a plurality of loads32. Generally speaking, an ADC28is configured to convert an analog signal31into a digital signal. In some examples, such an analog signal may be provided from and/or correspond to a signal associated with a sensor30, or generally speaking, a load32(e.g., such as which is consumptive of current, voltage, and/or power, and/or such as which produces a current, voltage, and/or power signal). Also, in some examples, note that any one of the computing devices12includes a touch screen with sensors30, a touch & tactile screen that includes sensors30, loads32, and/or other components. A sensor30functions to convert a physical input into an output signal (e.g., an electrical output, an optical output, etc.). The physical input of a sensor may be one of a variety of physical input conditions. For example, the physical condition includes one or more of, but is not limited to, acoustic waves (e.g., amplitude, phase, polarization, spectrum, and/or wave velocity); a biological and/or chemical condition (e.g., fluid concentration, level, composition, etc.); an electric condition (e.g., charge, voltage, current, conductivity, permittivity, electric field, which includes amplitude, phase, and/or polarization); a magnetic condition (e.g., flux, permeability, magnetic field, which includes amplitude, phase, and/or polarization); an optical condition (e.g., refractive index, reflectivity, absorption, etc.); a thermal condition (e.g., temperature, flux, specific heat, thermal conductivity, etc.); and a mechanical condition (e.g., position, velocity, acceleration, force, strain, stress, pressure, torque, etc.). For example, a piezoelectric sensor converts force or pressure into an electric signal. As another example, a microphone converts audible acoustic waves into electrical signals. There are a variety of types of sensors to sense the various types of physical conditions. Sensor types include, but are not limited to, capacitor sensors, inductive sensors, accelerometers, piezoelectric sensors, light sensors, magnetic field sensors, ultrasonic sensors, temperature sensors, infrared (IR) sensors, touch sensors, proximity sensors, pressure sensors, level sensors, smoke sensors, and gas sensors. In many ways, sensors function as the interface between the physical world and the digital world by converting real world conditions into digital signals that are then processed by computing devices for a vast number of applications including, but not limited to, medical applications, production automation applications, home environment control, public safety, and so on. The various types of sensors have a variety of sensor characteristics that are factors in providing power to the sensors, receiving signals from the sensors, and/or interpreting the signals from the sensors. The sensor characteristics include resistance, reactance, power requirements, sensitivity, range, stability, repeatability, linearity, error, response time, and/or frequency response. For example, the resistance, reactance, and/or power requirements are factors in determining drive circuit requirements.
As another example, sensitivity, stability, and/or linearity are factors for interpreting the measure of the physical condition based on the received electrical and/or optical signal (e.g., measure of temperature, pressure, etc.). Any of the computing devices12may be a portable computing device and/or a fixed computing device. A portable computing device may be a social networking device, a gaming device, a cell phone, a smart phone, a digital assistant, a digital music player, a digital video player, a laptop computer, a handheld computer, a tablet, a video game controller, and/or any other portable device that includes a computing core. A fixed computing device may be a computer (PC), a computer server, a cable set-top box, a satellite receiver, a television set, a printer, a fax machine, home entertainment equipment, a video game console, and/or any type of home or office computing equipment. An example of the computing devices12is discussed in greater detail with reference to one or more ofFIG.2. A server22is a special type of computing device that is optimized for processing large amounts of data requests in parallel. A server22includes similar components to that of the computing devices12with more robust processing modules, more main memory, and/or more hard drive memory (e.g., solid state, hard drives, etc.). Further, a server22is typically accessed remotely; as such it does not generally include user input devices and/or user output devices. In addition, a server may be a standalone separate computing device and/or may be a cloud computing device. A database24is a special type of computing device that is optimized for large scale data storage and retrieval. A database24includes similar components to that of the computing devices12with more hard drive memory (e.g., solid state, hard drives, etc.) and potentially with more processing modules and/or main memory. Further, a database24is typically accessed remotely; as such it does not generally include user input devices and/or user output devices. In addition, a database24may be a standalone separate computing device and/or may be a cloud computing device. The network26includes one or more local area networks (LAN) and/or one or more wide area networks (WAN), which may be a public network and/or a private network. A LAN may be a wireless-LAN (e.g., Wi-Fi access point, Bluetooth, ZigBee, etc.) and/or a wired network (e.g., Firewire, Ethernet, etc.). A WAN may be a wired and/or wireless WAN. For example, a LAN may be a personal home or business's wireless network and a WAN is the Internet, cellular telephone infrastructure, and/or satellite communication infrastructure. In an example of operation, computing device12communicates with ADCs28that are in communication with a plurality of sensors30. In some examples, the sensors30and/or ADCs28are within the computing device12and/or external to it. For example, the sensors30may be external to the computing device12and the ADCs28are within the computing device12. As another example, both the sensors30and the ADCs28are external to the computing device12. In some examples, when the ADCs28are external to the computing device, they are coupled to the computing device12via wired and/or wireless communication links. The computing device12communicates with the ADCs28to: (a) turn them on, (b) obtain data from the sensors30, loads32, one or more analog signals31, etc.
individually and/or collectively), (c) instruct the ADC28on how to process the analog signals associated with the sensors30, loads32, one or more analog signals31, etc. and to provide digital signals and/or information to the computing device12, and/or (d) provide other commands and/or instructions. In an example of operation and implementation, a computing device12is coupled to ADC28that is coupled to a sensor30. The sensor30and/or the ADC28may be internal and/or external to the computing device12. In this example, the sensor30is sensing a condition that is particular to the computing device12. For example, the sensor30may be a temperature sensor, an ambient light sensor, an ambient noise sensor, etc. As described above, when instructed by the computing device12(which may be a default setting for continuous sensing or at regular intervals), the ADC28is configured to generate a digital signal and/or information associated with the sensor30and to provide that digital signal and/or information to the computing device12. FIG.2is a schematic block diagram of an embodiment of a computing device12(e.g., any of the computing devices12inFIG.1). The computing device12includes a core control module40, one or more processing modules42, one or more main memories44, cache memory46, an Input-Output (I/O) peripheral control module52, one or more I/O interfaces54, one or more ADCs28coupled to the one or more I/O interfaces54and one or more loads32, optionally one or more digital to analog converters (DACs)29coupled to the one or more I/O interfaces54, one or more input interface modules56, one or more output interface modules58, one or more network interface modules60, and one or more memory interface modules62. In some examples, the computing device12also includes a component processing module48. In an example of operation and implementation, such a component processing module48is implemented to facilitate operations associated with any one or more of video graphics, a display, a touch screen, a camera, audio output, audio input, and/or any other one or more computing device components, etc.
The memory64-66includes one or more hard drives, one or more solid state memory chips, and/or one or more other large capacity storage devices that, in comparison to cache memory and main memory devices, is/are relatively inexpensive with respect to cost per amount of data stored. The memory64-66is coupled to the core control module40via the I/O and/or peripheral control module52and via one or more memory interface modules62. In an embodiment, the I/O and/or peripheral control module52includes one or more Peripheral Component Interface (PCI) buses to which peripheral components connect to the core control module40. A memory interface module62includes a software driver and a hardware connector for coupling a memory device to the I/O and/or peripheral control module52. For example, a memory interface62is in accordance with a Serial Advanced Technology Attachment (SATA) port. The core control module40coordinates data communications between the processing module(s)42and the network(s)26via the I/O and/or peripheral control module52, the network interface module(s)60, and a network card68or70. A network card68or70includes a wireless communication unit or a wired communication unit. A wireless communication unit includes a wireless local area network (WLAN) communication device, a cellular communication device, a Bluetooth device, and/or a ZigBee communication device. A wired communication unit includes a Gigabit LAN connection, a Firewire connection, and/or a proprietary computer wired connection. A network interface module60includes a software driver and a hardware connector for coupling the network card to the I/O and/or peripheral control module52. For example, the network interface module60is in accordance with one or more versions of IEEE 802.11, cellular telephone protocols, 10/100/1000 Gigabit LAN protocols, etc. The core control module40coordinates data communications between the processing module(s)42and input device(s)72via the input interface module(s)56and the I/O and/or peripheral control module52. An input device72includes a keypad, a keyboard, control switches, a touchpad, a microphone, a camera, etc. An input interface module56includes a software driver and a hardware connector for coupling an input device to the I/O and/or peripheral control module52. In an embodiment, an input interface module56is in accordance with one or more Universal Serial Bus (USB) protocols. The core control module40coordinates data communications between the processing module(s)42and output device(s)74via the output interface module(s)58and the I/O and/or peripheral control module52. An output device74includes a speaker, etc. An output interface module58includes a software driver and a hardware connector for coupling an output device to the I/O and/or peripheral control module52. In an embodiment, an output interface module56is in accordance with one or more audio codec protocols. This disclosure presents novel analog to digital converter (ADC) designs, architectures, circuits, etc. that provide much improved performance in comparison to prior art ADCs. Various aspects, embodiments, and/or examples of the invention (and/or their equivalents) that may be used to perform analog to digital conversion of signals provide very high resolution digital format data. Certain examples of such analog-to-digital conversion is performed based on sensing an analog current signal associated with a sensor, a load, etc. or any source of an analog signal. 
In many examples provided herein, a load32is employed as the element having an associated analog signal that is sensed and converted to a digital signal. Generally speaking, such a load32may be any of a variety of types of sources, devices, systems, etc. that has an associated analog signal that may be sensed and converted to a digital signal including a sensor, a computing device, a circuit, etc. within any type of application context including industrial, medical, communication system, computing device, etc. In addition, various aspects, embodiments, and/or examples of the invention (and/or their equivalents) that may be used to perform analog to digital conversion of signals may be implemented in accordance with providing both drive and sense capabilities such that a signal is driven from the ADC28to the load32to facilitate sensing of the analog signal associated with the load32. In some examples, the signal is driven from the ADC28to energize the load32and to facilitate its effective operation. Consider an example in which the load32is a sensor30. In such an example, the signal provided from the ADC28is operative to provide power to the sensor30and also to sense the analog signal associated with the sensor30simultaneously via a single line. Alternatively, note that certain examples may operate such that the load32is provided power or energy from an alternative source. In such instances, the ADC28need not specifically be implemented to provide power or energy to the load32but merely to sense the analog signal associated with the sensor30. In some examples, a sensing signal is provided from the ADC28to the load32such that detection of any change of the sensing signal is used and interpreted to determine one or more characteristics of the analog signal associated with the load32. In certain examples, the providing of the sensing signal from the ADC28to the load32and the sensing of the analog signal associated with the load32are performed simultaneously via a single line that couples or connects the ADC28to the load32. FIG.3is a schematic block diagram showing various embodiments301,302,303, and304of analog to digital conversion as may be performed in accordance with the present invention. In the upper left portion of the diagram, with respect to reference numeral301, an analog AC signal is shown. Note that the analog AC signal may or may not have a DC offset. Consider an example in which the DC offset is X volts, and consider a sinusoidal analog AC signal that oscillates and varies between a maximum of +Y volts and a minimum of −Y volts as a function of time based on a particular frequency of the analog AC signal. Note that this example of an analog AC signal is not exhaustive, and generally speaking, such an analog AC signal may have any variety of shapes, frequencies, characteristics, etc. Examples of such analog signals may include any one or more of a sinusoidal signal, a square wave signal, a triangular wave signal, a multiple level signal (e.g., has varying magnitude over time with respect to the DC component), and/or a polygonal signal (e.g., has a symmetrical or asymmetrical polygonal shape with respect to the DC component). Note also that such an analog signal may alternatively have only a DC component with no AC component. Note that any of the respective implementations of an ADC as described herein, or their equivalents, is also operative to detect an analog signal having only a DC component.
Note that a totally non-varying analog signal having only a DC component, after undergoing analog-to-digital conversion, would produce a digital signal having a constant digital value as a function of time. That is to say, such a discrete-time signal, when generated based on a DC signal, would be constant. In the upper right hand portion of the diagram, with respect to reference numeral302, the analog AC signal shown with respect to reference numeral301is shown as undergoing analog-to-digital conversion in accordance with generating a digital signal. Generally speaking, the resolution and granularity of such a digital signal may be of any desired format including performing analog-to-digital conversion based on a range spanning any number of desired levels and generating a digital signal having any number of desired bits, N, where N is a positive integer. This particular example shows generation of a digital signal in accordance with a range having 8 levels such that the digital signal includes 3 bits. For example, consider an analog AC signal having no DC offset and varying between a range spanning +Y/−Y volts, then that range is divided into 8 respective sub-ranges, and when the value of the analog AC signal crosses from one sub-range into another sub-range as a function of time, then the value of the digital signal correspondingly changes as a function of time. With respect to reference numeral302, a digital representation of the analog AC signal shown with respect to reference numeral301is shown as a function of time. In the lower left-hand portion of the diagram, with respect to reference numeral303, a transfer function of a three-bit ADC is shown with respect to a Z volt reference. As the magnitude of the analog AC signal varies as a function of time, a corresponding digital value is generated based on where the magnitude of the analog AC signal is within the range from zero to a Z volt reference. Note that this particular example shown with respect to reference numeral303is shown as varying between zero and a Z volt reference. In another example, such a transfer function may be implemented based on using −Y volts as a baseline such that, along the horizontal axis, 0 corresponds to −Y volts, and Z is twice the magnitude of Y (e.g., Z=2×MAG[Y]). For example, consider the analog AC signal shown with respect to reference numeral301as being an analog AC signal having no DC offset and varying between a range spanning +Y/−Y volts, then the Z volt reference could correspond to Y (or alternatively some value greater than Y to facilitate detection of the analog AC signal varying outside of a particular or expected range), then such an 8 level, 3 bit digital signal may be generated such as shown with respect to reference numeral302. In the lower right hand portion of the diagram, with respect to reference numeral304, an ADC28is shown as being coupled or connected to a load32. The ADC28is configured to sense an analog signal associated with the load32and to generate a digital signal based thereon. Note that the ADC28may be implemented to facilitate both drive and sense capabilities such that the ADC28is configured to drive an analog current and/or voltage signal to the load32while concurrently or simultaneously sensing the analog signal associated with the load32. In alternative examples, the ADC28is also operative to perform simultaneous driving and sensing of the analog signal associated with the load32when the load32is energized from another source such as from a battery, an external power source, etc.
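The level-crossing behavior described for the 8 level, 3 bit example can be illustrated with a short sketch. The following Python fragment is a minimal illustration only; the reference value, level count, and sample waveform are assumed for the example and are not taken from the figures.

```python
import math

# Minimal sketch of an N-bit quantizer transfer function (here N = 3, 8 levels),
# assuming a 0..Z volt input range as described for reference numeral 303.
def quantize(v, z_ref=1.0, bits=3):
    levels = 2 ** bits                       # 8 sub-ranges for a 3-bit code
    v = min(max(v, 0.0), z_ref)              # clamp to the 0..Z range
    return min(int(v / z_ref * levels), levels - 1)

# Example: a sinusoid with an assumed DC offset X = Z/2 swinging +/- Y, sampled over one period.
Z, Y = 1.0, 0.4
samples = [Z / 2 + Y * math.sin(2 * math.pi * n / 16) for n in range(16)]
codes = [quantize(v, Z, 3) for v in samples]
print(codes)   # the code changes only when a sub-range boundary is crossed
```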
Note that the ADC28includes capability and functionality to perform sensing only or alternatively, to perform both drive and sense. In some examples, the ADC28is configured to perform sensing only of an analog signal (e.g., having AC and/or DC components) associated with the load32. In other examples, the ADC28is configured to drive an analog current and/or voltage signal to the load32while concurrently and/or simultaneously sensing an analog signal (e.g., having AC and/or DC components) associated with the load32. For example, the ADC28is configured to provide power to or energize the load32while also concurrently and/or simultaneously sensing an analog signal (e.g., having AC and/or DC components) associated with the load32. Also, in certain alternative examples, the ADC28is operative to perform simultaneous driving and sensing of the analog signal associated with the load32when the load32is energized from another source such as from a battery, an external power source, etc. Various aspects, embodiments, and/or examples of the invention (and/or their equivalents) include an ADC that is operative to sense an analog current signal. The ADC is implemented to convert the sensed analog current signal into a very high resolution digital format of a desired resolution (e.g., of a certain sampling rate, resolution or number of bits, etc.). FIG.4is a schematic block diagram of an embodiment400of an analog to digital converter (ADC) in accordance with the present invention. In this diagram, an ADC is connected to or coupled to a load32via a single line such that the ADC is configured to provide a load signal412via that single line and simultaneously to detect any effect414on that load signal via that single line. In certain examples, the ADC is configured to perform single line drive and sense of that load signal412, including any effect414thereon, via that single line. Note that certain of the following diagrams show one or more processing modules24. In certain instances, the one or more processing modules24is configured to communicate with and interact with one or more other devices including one or more ADCs and one or more components implemented within an ADC (e.g., filters of various types including low pass filters, bandpass filters, decimation filters, etc., gain or amplification elements, digital circuits, digital to analog converters (DACs) of varying types including N-bit DACs, analog to digital converters (ADCs) of varying types including M-bit ADCs, etc.). Note that any such implementation of one or more processing modules24may include integrated memory and/or be coupled to other memory. At least some of the memory stores operational instructions to be executed by the one or more processing modules24. In addition, note that the one or more processing modules24may interface with one or more other devices, components, elements, etc. via one or more communication links, networks, communication pathways, channels, etc. (e.g., such as via one or more communication interfaces of the device, such as may be integrated into the one or more processing modules24or be implemented as a separate component, circuitry, etc.). Also, within certain of the following diagrams, there is a demarcation shown between the analog domain and the digital domain (e.g., showing the portion of the diagram that operates in the analog domain based on continuous-time signaling, and the portion of the diagram that operates in the digital domain based on discrete-time signaling).
Moreover, within certain of the following diagrams, there is a demarcation shown between the load domain and the ADC domain (e.g., showing the connection or coupling between a load and/or an analog signal that is being sensed and the ADC that is sensing the analog signal, which may be associated with the load). In certain examples, an ADC is connected to or coupled to a load via a single line. Also, such an ADC may be implemented to perform simultaneous driving and sensing of a signal via that single line that connects or couples to the load. For example, such an ADC is operative to drive an analog signal (e.g., current and/or voltage) to the load32. With respect to implementations that operate in accordance with sensing analog current signals, such an ADC is operative to sense current signals within an extremely broad range including very low currents (e.g., currents below the 1 pico-amp range, within the 10 s of pico-amps range, below the 1 nano-amp range, within the 10 s of nano-amps range, below the 1 micro-amp range, within the 10 s of micro-amps range, etc.) and also up to relatively much larger currents (e.g., currents in the 10 s of milli-amps range, 100 s of milli-amps range, or even higher values of amps range, etc.). In some examples, such as with respect to detecting currents that are provided from a photodetection or photodiode component, such an ADC is operative to sense current signals below the 1 pico-amp range, currents within the 100 s of micro-amps range, etc. Also, in some examples, when using appropriately provisioned components (e.g., higher current, higher power, etc.), much higher currents can also be sensed using architectures and topologies in accordance with an ADC as described herein. For example, such an ADC implemented based on architectures and topologies, as described herein, using appropriately provisioned components may be operative to sense even higher currents (e.g., 1 s of amps, 10 s of amps, or even higher values of amps range, etc.). In addition, such an ADC may be implemented to provide for extremely low power consumption (e.g., less than 2 μW). Such an ADC may be particularly well-suited for low-power applications such as remote sensors, battery operated applications, etc. The architecture and design of such an ADC requires very few analog components. This provides a number of advantages and improved performance over prior art ADCs, including very little continuous static current being consumed. In certain examples, such an ADC as described herein provides for a 10× lower power consumption in comparison to prior art ADC technologies. Such extremely low power consumption implementations may be particularly well-suited for certain applications such as bio-medical applications including sensing of vital signs of a patient, low current sensors, remote sensors, etc. In addition, note that while such an ADC as described herein provides for significant improvement in a reduction in power consumption in comparison to prior art ADCs (e.g., including prior art ADCs such as successive approximation register (SAR) ADCs, delta-sigma modulator ADCs, pipeline ADCs, etc.), such an ADC as described herein may be implemented as a general-purpose ADC in any of a variety of applications. Moreover, the bandwidth of analog signals that may be sensed using such an ADC as described herein is extremely broad, ranging from DC up to and over 10 MHz.
In certain particular examples, such an ADC as described herein is implemented for very low frequency measurements, such as from DC up to 1 kHz. Note also that an ADC as described herein may be designed and tailored particularly for a desired digital signal resolution to be generated based on a particular bandwidth to be sampled. In general, there may be a trade-off between bandwidth and power consumption within a particularly designed ADC. Consider an example in which a very high resolution digital signal is desired for a relatively low sampling bandwidth versus another example in which a relatively low resolution digital signal is desired for a relatively high sampling bandwidth. For example, consider a particularly designed ADC to provide a digital signal having 16-bit resolution for a sampling bandwidth below 100 kHz, then such an ADC may be implemented to consume less than 1 μW of power. Such an ADC may be appropriately designed to meet criteria for a particular application. Consider an example in which a 24-bit digital signal is desired for a relatively low sampling bandwidth from DC up to 100 kHz. Consider another example in which a 12-bit digital signal is desired for a relatively higher sampling bandwidth from DC up to 1 MHz. In comparing these two examples, as the sampling bandwidth is extended higher and higher, the ADC will consume more current and thereby be more power consumptive. Depending on the particular application at hand, a relatively low sampling bandwidth may be acceptable, and very significant power consumption savings may be achieved. Generally speaking, a trade-off in design implementation may be viewed as higher resolution/lower sampling bandwidth/lower power consumption versus lower resolution/higher sampling bandwidth/higher power consumption. In addition, note that many of the examples of an ADC included herein operate based on sensing a current signal as opposed to a voltage signal. In addition, when the ADC is implemented in an application to sense a voltage signal, an appropriately implemented voltage to current transforming element, such as a trans-impedance amplifier that is operative to transform voltage to current, or vice versa, may be implemented to generate a current signal from a voltage signal in any particular desired application. In any of the various diagrams, note that such a load32may be of any of a variety of types including an electrode, a sensor, a transducer, etc. Generally speaking, such a load32may be any of a variety of types of components. Examples of such components may include any one or more of sources, devices, systems, etc. that has an associated analog signal that may be sensed and converted to a digital signal including a sensor, a computing device, a circuit, etc. within any type of application context including industrial, medical, communication system, computing device, etc. Also, note that such a load32as depicted within any diagram herein may be energized or powered based on the signal provided from the ADC or alternatively powered by another source such as a battery, external power source, etc. For example, consider the lower left-hand portion of the diagram and the demarcation between the load domain and the ADC domain, such that the load32is connected to the ADC via a single line.
In certain examples, the ADC is implemented to facilitate single-line sense functionality such that a load signal412-1is provided to the load32for sensing only, and any effect414-1on that load signal is sensed and detected by the ADC. In such an example as this, power is provided to the load32from an external source. Referring again to the top portion of the diagram, the ADC is connected to or coupled to a load32via a single line such that the ADC is configured to provide a load signal412via that single line and simultaneously to detect any effect414on that load signal via a single line. For example, the load signal412is an analog current signal. An analog capacitor, C, is implemented to be charged in accordance with the load signal412. Note that such an analog capacitor may alternatively be a load capacitance from the load32itself, such that a separate analog capacitor, C, is not needed when the load32itself provides a sufficient load capacitance. In an example of operation and implementation, a load voltage, Vload, is generated based on any effect414on that load signal charging the capacitor. This load voltage, Vload, serves as an input voltage, Vin, to one of the inputs of a comparator that also receives a reference signal, Vref (e.g., a voltage reference signal). Note that the reference signal, Vref, may be internally generated, provided from an external source, provided from a processing module24, etc. The comparator compares the input voltage, Vin, to the reference signal, Vref, and outputs a signal that is based on any difference between the input voltage, Vin, and the reference signal, Vref. That output signal is processed by a digital circuit410to generate a digital output (Do) 1 signal that may be viewed as being a digital stream of 0 s and/or 1 s at a clock rate (CLK) at which the digital circuit410is clocked. For example, consider that the input voltage, Vin, is greater than the reference signal, Vref, then the comparator output signal would be positive (e.g., such as a positive rail or power supply voltage of the ADC). Alternatively, consider that the input voltage, Vin, is less than or equal to the reference signal, Vref, then the comparator output signal would be negative (e.g., such as a negative rail or power supply voltage of the ADC). In another example, consider that the input voltage, Vin, is greater than the reference signal, Vref, then the comparator output signal would be positive or negative (e.g., such as a positive or negative rail or power supply voltage of the ADC). Alternatively, consider that the input voltage, Vin, is less than or equal to the reference signal, Vref, then the comparator output signal would be zero (e.g., such as a ground voltage potential). Generally speaking, the combined operation of the comparator and the digital circuit410may be viewed as performing the analog to digital conversion of a signal that is the difference (e.g., an error voltage, Ve) between the input voltage, Vin, and the reference signal, Vref (e.g., Ve=Vref−Vin) to generate a digital signal of a particularly desired resolution, which may be viewed as M bits, where M is a positive integer greater than or equal to 1. A processing module24is operative to process the Do 1 to generate a digital output (Do) 2. Note that the processing module24may be implemented in any of a variety of examples to perform any desired digital signal processing on the Do 1 to generate the Do 2.
Examples of such digital signal processing may be increasing the output resolution (e.g., consider Do 1 having a resolution of M bits and Do 2 having a resolution of N bits, where N and M are both positive integers, where M is a positive integer greater than or equal to 1, and N is greater than M), performing filtering on the Do 1 to generate the Do 2 (e.g., such as low pass filtering or bandpass filtering based on certain parameters such as a particular frequency cutoff for low pass filtering or a particular frequency range for bandpass filtering). The processing module24provides the Do 2 to an N-bit digital to analog converter (DAC)420. In some examples, the N-bit DAC420has a resolution of N<8 bits. This N-bit DAC420, based on the Do 2 provided from the processing module24, forces an output current to the load32that follows or tracks the load signal412due to the operation of the comparator that compares the input voltage, Vin, to the reference signal, Vref, and, in conjunction with the digital circuit410, generates Do 1. From certain perspectives, considering the Do 1 and the Do 2, the Do 1 may be viewed as a digital signal corresponding to the unfiltered load current signal including quantization noise, and the Do 2 may be viewed as another digital signal corresponding to a filtered load current signal. In this diagram, the positive input of the comparator is driven by the reference signal, Vref. The load voltage, Vload, will follow the reference signal, Vref, based on the comparator output signal that corresponds to the difference or error between the input voltage, Vin, and the reference signal, Vref. In many examples, the difference between the input voltage, Vin, and the reference signal, Vref, is very small (e.g., approaching 0, very close to 0, or actually 0) based on the Delta-sigma modulation operation of the comparator and the digital circuit410. For example, when there is any difference between the input voltage, Vin, and the reference signal, Vref, the ADC adapts/modifies the output current from the N-bit DAC420to match the current of the load so that the difference or error between the input voltage, Vin, and the reference signal, Vref, will be forced to 0. Note that the comparator and the digital circuit410may be implemented using one or more other components in other examples while still providing the same overall functionality of the ADC. The following diagram shows some alternative possible examples of how the comparator and the digital circuit410may be implemented. Note that this implementation of an ADC includes a very small number of analog components. For example, there may be instances in which no capacitors are required whatsoever given that the load32inherently includes sufficient load capacitance to generate the load voltage, Vload. In certain implementations, the comparator is implemented by a component that performs analog to digital conversion of the load voltage, Vload, directly, thereby further reducing the number of analog components within the ADC. Given the small number of analog components, such an ADC consumes little or no continuous static power thereby facilitating very low power consumption. The only static current being consumed is by the N-bit DAC420. This N-bit DAC420drives an output current that is the same as the sensed load current thereby tracking or following the load current. Therefore, within implementations in which the load current is small, so will the corresponding output current from the N-bit DAC420be small.
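The tracking behavior described above can be sketched with a short behavioral model. The following Python fragment is a deliberately simplified, one-bit-feedback version of the loop, offered only as an illustration: the capacitance, reference voltage, clock rate, full-scale feedback current, and input waveform are all assumed values, and the actual FIG.4loop drives the N-bit DAC420from the processed word Do 2 rather than directly from individual comparator decisions.

```python
import math

# Deliberately simplified one-bit behavioral model (illustrative values assumed):
# the load current charges capacitor C, a clocked comparator compares Vload to Vref,
# and the comparator decision switches the feedback (DAC) current that discharges C.
C = 1e-9          # assumed capacitance, farads
VREF = 0.5        # assumed reference voltage, volts
FCLK = 1e6        # assumed clock rate, Hz
I_FS = 200e-9     # assumed full-scale feedback current, amps

def simulate(n_samples=20000):
    vload, do1, dt = VREF, [], 1.0 / FCLK
    for n in range(n_samples):
        i_load = 100e-9 * (1 + 0.5 * math.sin(2 * math.pi * 50 * n * dt))  # assumed load current
        bit = 0 if vload > VREF else 1        # clocked comparator decision (Do 1)
        i_dac = I_FS if bit == 0 else 0.0     # one-bit feedback current
        vload += (i_load - i_dac) * dt / C    # the capacitor integrates the net current
        do1.append(bit)
    return do1

do1 = simulate()
# Vload is held near Vref, and averaging (filtering/decimating) Do 1 recovers the load
# current: the fraction of 0-decisions times I_FS approximates the 100 nA average here.
print((do1.count(0) / len(do1)) * I_FS)
```

In the full design, the quantization noise visible in Do 1 is what the downstream filtering and decimation stages remove to produce the higher resolution Do 2.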
The smaller the current provided from the N-bit DAC420, which is based on the sensed load current, the lower the power consumption of the ADC. Note that there are certainly alternative implementations of an ADC that will consume some static current, such as when an M-bit analog to digital converter (ADC) is used or some other component that is implemented to perform the analog-to-digital conversion of the signal Vin to Do 1. Also, note that the amount of power consumed by the DAC, particularly the digital power consumed by the DAC, scales with the clock rate, CLK. Note also that in applications that are implemented to perform sensing of a DC signal, such as sensing a DC current signal, the clock frequency can be extremely low (e.g., within the range of 1 kHz to 100 kHz), thereby providing for a very small digital power consumption. FIG.5is a schematic block diagram showing alternative embodiments501,502,503, and504of various components that may be implemented within an ADC in accordance with the present invention. Considering reference numeral501, a comparator operates in cooperation with the digital circuit410as described above such that the combined operation of the comparator and the digital circuit410may be viewed as performing the analog to digital conversion of a signal that is the difference (e.g., Ve) between the input voltage, Vin, and the reference signal, Vref (e.g., Ve=Vref−Vin) to generate a digital signal of a particularly desired resolution, which may be viewed as M bits, where M is a positive integer greater than or equal to 1. However, note that the comparator and the digital circuit410may be implemented using any of a variety of other means while still facilitating proper operation of an ADC. With respect to reference numeral502, a digital comparator, which may alternatively be described as a clocked (or dynamic) comparator structure (latched comparator), is shown. This singular device performs the operation of both a comparator and the digital circuit410within a single device. For example, the digital comparator is clocked at a particular clocking frequency (CLK) and outputs a stream of 1 s and/or 0 s based on the comparison of Vref and Vin. In comparison to a comparator that operates continuously and that will output one of two values, such as either a high signal or low signal, continually as a function of time, a digital comparator outputs a 1 or 0 at each clock cycle based on the comparison of Vref and Vin in accordance with generating the Do 1 (e.g., 1 when Vref>Vin and 0 when Vref<=Vin, or vice versa). Note also that by only clocking such a digital comparator at certain intervals, a higher accuracy and lower power consumption can be achieved in comparison to a comparator that operates continuously. With respect to reference numeral503, the output of the comparator is provided to a sample and hold circuit (S&H)510. Generally speaking, a S&H510holds, locks, or freezes its value at a constant level for a specified minimum period of time. This signal may be viewed or interpreted as a digital stream of 1 s and/or 0 s at the clocking frequency (CLK) in accordance with generating the Do 1.
Note that such a S&H510may be implemented in a variety of ways, including a circuit that stores electric charge in a capacitor and that also employs one or more switching elements, such that the stored electric charge is built up over each of certain intervals, and the switching element connects the output of the charge-storing circuit to the output at certain intervals, such as at the clocking frequency (CLK), in accordance with generating the Do 1. With respect to reference numeral504, the comparator and the digital circuit410are replaced with a sigma-delta comparator, such as a one-bit ADC, followed by a flip-flop circuit (FF)520. The sigma-delta comparator provides a high or low signal to the FF520based on the comparison of Vref and Vin, and the FF520outputs a 1 or 0 at each clock cycle, such as at the clocking frequency (CLK), based on the comparison of Vref and Vin in accordance with generating the Do 1. Generally speaking, note that the implementation of a comparator and the digital circuit410as shown within any of the diagrams herein may be alternatively implemented in a variety of different ways including those shown within this diagram and/or their equivalents. FIG.5Bis a schematic block diagram showing alternative embodiments505aand505bof servicing differential signaling using ADCs in accordance with the present invention. In addition to servicing and sensing single-ended lines and generating digital signals based thereon using ADCs as described herein, note that servicing and sensing of differential signals may also be performed. For example, with respect to reference numeral505a, a first instantiation of an ADC28and a second instantiation of an ADC28are each respectively coupled via a respective single line to a different respective load32. Two respective load voltages, Vload1 and Vload2, are respectively received by the first and second instantiations of an ADC28. Note that the first and second instantiations of an ADC28may be the same or may be different. Each respective instantiation of an ADC28in this example is operative to service and sense a respective single-ended line. Together, the first and second instantiations of an ADC28are operative to sense a differential signal that is based on the two load voltages, Vload1 and Vload2, and to generate a corresponding digital signal based thereon. In certain examples, a processing module24is implemented to combine a first digital signal that is based on Vload1 and that is generated by the first instantiation of an ADC28and a second digital signal that is based on Vload2 and that is generated by the second instantiation of an ADC28to generate a resultant digital signal that corresponds to the differential voltage between the two load voltages, Vload1 and Vload2 (e.g., Vdiff=Vload1−Vload2, or Vdiff=Vload2−Vload1). As another example, with respect to reference numeral505b, a differential load32-1is serviced such that the two signal lines corresponding to the differential signaling provided by the differential load32-1are respectively provided to a first instantiation of an ADC28and a second instantiation of an ADC28. Similarly, a processing module24may be implemented to generate a resulting digital signal that corresponds to the differential voltage associated with the differential load32-1. The first instantiation of an ADC28and the second instantiation of an ADC28operate cooperatively to provide a load signal1112and to detect any effect1114on the load signal that is based on the differential load32-1.
A capacitor, C, is also implemented across the differential signal lines of the differential load32-1. Note that any example, embodiment, etc. of any ADC described herein that is operative to sense an analog signal via a single line may be implemented within the first instantiation and the second instantiation of an ADC28in either of these examples corresponding to reference numerals505aand505band/or their equivalents. In an example of operation and implementation, an ADC (e.g., consider the ADC ofFIG.4) includes a capacitor that is operably coupled to a load and configured to produce a load voltage based on charging by a load current and a digital to analog converter (DAC) output current. In some examples, the ADC is coupled to the load via a single line. The ADC also includes a comparator. When enabled, the comparator operably coupled and configured to receive the load voltage via a first input of the comparator, receive a reference voltage via a second input of the comparator, and compare the load voltage to the reference voltage to generate a comparator output signal. The ADC also includes a digital circuit that is operably coupled to the comparator. When enabled, the digital circuit operably coupled and configured to process the comparator output signal to generate a first digital output signal that is representative of a difference between the load voltage and the reference voltage. The ADC also includes one or more processing module operably coupled to the digital circuit and to memory, which may be included within the ADC or external to the ADC. When enabled, the one or more processing modules is configured to execute the operational instructions to process the first digital output signal to generate a second digital output signal that is representative of the difference between the load voltage and the reference voltage, wherein the second digital output signal includes a higher resolution than the first digital output signal. The ADC also includes an N-bit digital to analog converter (DAC) that is operably coupled to the one or more processing modules. When enabled, the N-bit DAC operably coupled and configured to generate the DAC output current based on the second digital output signal. Note that N is a positive integer. The DAC output current tracks the load current, and the load voltage tracks the reference voltage. Also, in some examples, the one or more processing modules, when enabled, is further configured to process the first digital output signal in accordance with performing band pass filtering or low pass filtering to generate the second digital output signal that is representative of the difference between the load voltage and the reference voltage. In alternative examples, the comparator includes a sigma-delta comparator, and the digital circuit includes a clocked flip flop. In even other examples, a digital comparator includes both the comparator and the digital circuit (e.g., the digital comparator is operative to perform the functionality of both the comparator and the digital circuit). When enabled, the digital comparator operably coupled and configured to receive the load voltage via a first input of the comparator, receive a reference voltage via a second input of the comparator, and compare the load voltage to the reference voltage to generate the first digital output signal that is representative of the difference between the load voltage and the reference voltage. 
In addition, in certain examples, the ADC includes a decimation filter coupled to the one or more processing modules. When enabled, the decimation filter is operably coupled and configured to process the second digital output signal to generate another digital output signal having a lower sampling rate and a higher resolution than the second digital output signal. Alternatively or in addition, another decimation filter is coupled to the digital circuit. When enabled, the other decimation filter is operably coupled and configured to process the first digital output signal to generate another digital output signal having a lower sampling rate and a higher resolution than the first digital output signal. FIG.6is a schematic block diagram of another embodiment600of an ADC that includes one or more decimation filters in accordance with the present invention. This diagram has similarities with respect toFIG.4with at least one difference being that a decimation filter1and/or a decimation filter2are implemented to process the Do 1 and the Do 2. For example, a decimation filter may be implemented to process a digital signal thereby lowering the sample rate thereof and increasing the output resolution. Consider a digital signal having a 12-bit resolution and a 100 kHz sampling rate. In one example, a decimation filter may operate to increase the resolution of that digital signal to be 24-bit resolution with a lower sampling rate of 50 kHz. In another example, a decimation filter may operate to increase the resolution of that digital signal to be 18-bit resolution with a lower sampling rate of 75 kHz. Generally speaking, any desired transformation of sampling rate and output resolution may be performed using one or more decimation filters in accordance with any of the various examples of ADCs as described herein. In certain examples, only a decimation filter1is included, thereby processing the Do 1 to generate the Do 2. In other examples, both a decimation filter1is included, thereby processing the Do 1 to generate the Do 2, and a decimation filter2is included, thereby processing the Do 2 to generate a Do 3 (e.g., Do 3 having a lower sampling rate and increased output resolution in comparison to the Do 2). FIG.7is a schematic block diagram showing alternative embodiments701,702, and703of one or more decimation filters and/or processing modules that may be implemented to perform digital domain processing within an ADC in accordance with the present invention. With respect to reference numeral701, a processing module24may be implemented to perform any of a variety of different digital signal processing operations on the Do 1 to generate the Do 2 such as decimation filtering, low pass filtering, bandpass filtering, etc. However, note that such an implementation of the output signals, such as the Do 1 and the Do 2, may be implemented in different configurations as desired in particular applications. For example, with respect to reference numeral702, a decimation filter1and a decimation filter2may be implemented. As described above, only a decimation filter1may be included, thereby processing the Do 1 to generate the Do 2. In other examples, both a decimation filter1is included, thereby processing the Do 1 to generate the Do 2, and a decimation filter2is included, thereby processing the Do 2 to generate a Do 3 (e.g., Do 3 having a lower sampling rate and increased output resolution in comparison to the Do 2).
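The rate-for-resolution exchange described forFIG.6can be sketched in a few lines. The following Python fragment is a minimal averaging decimator, shown only to illustrate the principle; the decimation factor and the sample values are assumed, and a practical decimation filter would typically use a larger factor and a sharper low pass response.

```python
# Minimal sketch of a decimation step: average groups of R consecutive samples,
# lowering the sample rate by R while extending the effective resolution.
def decimate(samples, r=2):
    out = []
    for i in range(0, len(samples) - r + 1, r):
        out.append(sum(samples[i:i + r]) / r)   # averaging suppresses quantization noise
    return out

# Hypothetical 12-bit codes at an assumed 100 kHz rate, decimated by 2 to a 50 kHz
# stream whose averaged values carry finer-than-one-LSB information.
do2 = [2048, 2049, 2047, 2050, 2048, 2049]
print(decimate(do2, r=2))   # [2048.5, 2048.5, 2048.5]
```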
With respect to reference numeral703, the processing module24is configured to control the operation of the decimation filter1and decimation filter2. For example, the processing module24is configured to control the manner in which decimation filtering may be performed by the decimation filter1and/or decimation filter2(e.g., including the manner of conversion of digital signal resolution, the modification of sampling rate, etc.). Note that any of the respective implementations shown within this diagram may be implemented within any other of the appropriate diagrams of an ADC as described herein. FIG.8is a schematic block diagram of another embodiment800of an ADC in accordance with the present invention. This diagram is similar to that ofFIG.4with at least one difference being that the capacitor, C, is replaced by an integrator. The integrator is implemented as an operational amplifier with a feedback capacitor, C. The use of the operational amplifier in place of only the capacitor, C, may be used for applications that are tailored to handle greater power than that ofFIG.4. Generally speaking, the feedback capacitor, C, implemented in cooperation with the operational amplifier serves a similar purpose to that of the capacitor, C, inFIG.4of being charged based on the load current and the output current from the N-bit DAC420thereby generating the Vin to be provided to the comparator and compared with Vref. In an example of operation and implementation, an ADC (e.g., consider the ADC of the embodiment800ofFIG.8) includes an operational amplifier (op amp) that is operably coupled to a load via a first op amp input. Also, a capacitor is operably coupled to the first op amp input and an op amp output. When enabled, the op amp is operably coupled and configured to generate an output voltage at the op amp output that corresponds to a load voltage that is based on charging of the capacitor by a load current and a digital to analog converter (DAC) output current. In some examples, the ADC is coupled to the load via a single line. The ADC also includes a comparator that is operably coupled to the op amp. When enabled, the comparator is operably coupled and configured to receive the output voltage via a first input of the comparator, receive a reference voltage via a second input of the comparator, and compare the load voltage to the reference voltage to generate a comparator output signal. The ADC also includes a digital circuit that is operably coupled to the comparator. When enabled, the digital circuit is operably coupled and configured to process the comparator output signal to generate a first digital output signal that is representative of a difference between the load voltage and the reference voltage. The ADC also includes one or more processing modules operably coupled to the digital circuit and to memory, which may be included within the ADC or external to the ADC. When enabled, the one or more processing modules is configured to execute the operational instructions to process the first digital output signal to generate a second digital output signal that is representative of the difference between the load voltage and the reference voltage. Note that the second digital output signal includes a higher resolution than the first digital output signal. The ADC also includes an N-bit digital to analog converter (DAC) that is operably coupled to the one or more processing modules.
When enabled, the N-bit DAC is operably coupled and configured to generate the DAC output current based on the second digital output signal. Note that N is a positive integer. Also, the DAC output current tracks the load current, and the load voltage tracks the reference voltage. In some examples, the one or more processing modules, when enabled, is further configured to process the first digital output signal in accordance with performing band pass filtering or low pass filtering to generate the second digital output signal that is representative of the difference between the load voltage and the reference voltage. In some examples, the comparator includes a sigma-delta comparator, and the digital circuit includes a clocked flip flop. Also, in some other examples, a digital comparator includes both the comparator and the digital circuit (e.g., the digital comparator is operative to perform the functionality of both the comparator and the digital circuit). When enabled, the digital comparator is operably coupled and configured to receive the load voltage via a first input of the comparator, receive a reference voltage via a second input of the comparator, and compare the load voltage to the reference voltage to generate the first digital output signal that is representative of the difference between the load voltage and the reference voltage. In addition, in certain examples, the ADC includes a decimation filter coupled to the one or more processing modules. When enabled, the decimation filter is operably coupled and configured to process the second digital output signal to generate another digital output signal having a lower sampling rate and a higher resolution than the second digital output signal. Alternatively or in addition, another decimation filter is coupled to the digital circuit. When enabled, the other decimation filter is operably coupled and configured to process the first digital output signal to generate another digital output signal having a lower sampling rate and a higher resolution than the first digital output signal. FIG.9is a schematic block diagram of another embodiment900of an ADC in accordance with the present invention. This diagram has certain similarities with one or more of the previous diagrams with at least one difference being that a comparator and the digital circuit410, or a functionally equivalent component to the comparator and the digital circuit410, is replaced by a low resolution analog to digital converter (ADC), specifically, an M-bit ADC910, where M is a positive integer greater than or equal to 1. In certain particular examples, M is a positive integer within the range of 1-4 (e.g., 1, 2, 3, or 4). Also, in certain particular examples, M of the M-bit ADC910is less than or equal to N of the N-bit DAC420. In certain specific examples, N is less than 8 bits of resolution. For example, if N=4, then M=3, 2, or 1. The Do 2 may be viewed as a high-resolution digital signal (N bit resolution) compared to the Do 1 (M bit resolution), such that M<N. In addition, in some examples, the Do 2 is a modified version of the Do 1 after having undergone any desired digital signal processing within the processing module24. Note that the M-bit ADC910is operative to generate the Do 1 as being an error signal that corresponds to a difference between Vin and Vref, that has a resolution of M bits, and that is output based on the clocking rate, CLK. For example, the Do 1 is a digital signal that corresponds to an error signal, Ve, such that Ve=Vref−Vin or Vin−Vref.
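A small sketch can make the M-bit error quantization concrete: each clock cycle reports not only the sign of the error Ve but also a coarse estimate of its size. The following Python fragment quantizes Ve=Vref−Vin into a signed M-bit code; the full-scale error, reference voltage, and value of M are assumed for illustration and are not taken fromFIG.9.

```python
# Minimal sketch: quantize the error Ve = Vref - Vin into a signed M-bit code
# rather than a single 1/0 decision. Full-scale error and M are assumed values.
def mbit_error_code(vin, vref=0.5, m=3, ve_full_scale=0.1):
    ve = vref - vin                               # error signal Ve
    levels = 2 ** (m - 1)                         # signed range: -levels .. +levels - 1
    code = round(ve / ve_full_scale * levels)
    return max(-levels, min(levels - 1, code))    # clamp to the M-bit signed range

# A small error yields a small code and a large error a large (clamped) code, so the
# feedback loop can correct more of the residual error per clock than a 1-bit decision.
print(mbit_error_code(0.47))   # small positive error -> code 1
print(mbit_error_code(0.30))   # larger error -> code clamped to +3
```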
The use of such an M-bit ADC910provides many performance improvements for certain applications including a reduction of quantization noise and an increase of the output resolution of the ADC, particularly with respect to the Do 1. For example, instead of Do 1 being a single bit resolution digital signal (e.g., a digital stream of 1 s and/or 0 s), the Do 1 in this diagram is a digital signal having a higher resolution (e.g., of 2, 3, or 4 bits). In some examples, the Do 1 is then provided to the processing module24, and the processing module24is configured to perform any desired digital signal processing operation on the Do 1 to generate the Do 2 (e.g., increase the output resolution and lower the sampling rate, perform low pass filtering, perform bandpass filtering, etc.). In this diagram, note that the Do 1 may be passed directly to the N-bit DAC420such that the Do 1 is used to drive the N-bit DAC420. However, in certain examples, the Do 2 is used to drive the N-bit DAC420such as when it is a filtered and/or digital signal processed version of the Do 1. In an example of operation and implementation, an ADC (e.g., the ADC ofFIG.900) includes a capacitor that is operably coupled to a load and configured to produce a load voltage based on charging by a load current and a digital to analog converter (DAC) output current. In some examples, the ADC is coupled to the load via a single line. The ADC also includes an M-bit analog to digital converter (ADC). When enabled, the M-bit ADC operably coupled and configured to receive the load voltage, receive a reference voltage, and compare the load voltage to the reference voltage and generate a first digital output signal that is representative of a difference between the load voltage and the reference voltage. The ADC also includes a processing module operably coupled to the digital circuit and to memory, which may be included within the ADC or external to the ADC. When enabled, the processing module is configured to execute the operational instructions to process the first digital output signal to generate a second digital output signal that is representative of the difference between the load voltage and the reference voltage. Note that the second digital output signal includes a higher resolution than the first digital output signal. The ADC also includes an N-bit digital to analog converter (DAC) that is operably coupled to the processing module. When enabled, the N-bit DAC is operably coupled and configured to generate the DAC output current based on the second digital output signal. Note that the DAC output current tracks the load current, and the load voltage tracks the reference voltage. N is a first positive integer, and M is a second positive integer greater than or equal to 1. In some examples, N is greater than M. In other examples, N is the first positive integer that is less than or equal to 8, and M is the second positive integer that is greater than or equal to 1 and less than or equal to 4. In even other examples, the one or more processing modules, when enabled, is further configured to process the first digital output signal in accordance with performing band pass filtering or low pass filtering to generate the second digital output signal that is representative of the load voltage. In addition, in certain examples, the ADC includes a decimation filter coupled to the one or more processing modules. 
When enabled, the decimation filter is operably coupled and configured to process the second digital output signal to generate another digital output signal having a lower sampling rate and a higher resolution than the second digital output signal. Alternatively or in addition, another decimation filter is coupled to the digital circuit. When enabled, the other decimation filter is operably coupled and configured to process the first digital output signal to generate another digital output signal having a lower sampling rate and a higher resolution than the first digital output signal. FIG.10is a schematic block diagram of another embodiment1000of an ADC in accordance with the present invention. This diagram is similar to the previous diagram with at least one difference being that the capacitor, C, is replaced by an integrator. The integrator is implemented as an operational amplifier with a feedback capacitor, C. The use of the operational amplifier in place of only the capacitor, C, may be used for applications that are tailored to handle greater power than that of the previous diagram. Generally speaking, the feedback capacitor, C, implemented in cooperation with the operational amplifier serves a similar purpose to that of the capacitor, C, in the previous diagram of being charged based on the load current and the output current from the N-bit DAC420thereby generating the Vin to be provided to the comparator and compared with Vref. In addition, with respect to all of these examples of an ADC, the ADC operates by providing an output current to the load32to cancel out the load current. This may be viewed as providing an output current that is equal to and of opposite polarity to the load current. Again, note that such an ADC may be implemented not only to sense an analog signal associated with the load32but also to provide power and/or energy to the load32within implementations where the load32is not energized via another source. In some examples, this providing of power and/or energy from the ADC to the load32is performed simultaneously via a single line via which the ADC senses an analog signal associated with the load32. Also, such an ADC may be implemented to perform sensing only of an analog signal associated with the load32without providing power and/or energy to the load32. FIG.11is a schematic block diagram of an embodiment1100of an ADC that is operative to process an analog differential signal in accordance with the present invention. This diagram shows an implementation of an ADC operating on a differential load32-1such that the ADC provides a load signal1112to the differential load32-1and also detects any effect1114on that load signal. In this diagram, a capacitor, C, is connected to the differential lead lines of the differential load32-1. Also, the N-bit DAC420is replaced with a differential N-bit DAC1120, wherein N is a positive integer. The differential N-bit DAC1120is operative to generate a differential output current signal that is provided to the differential load32-1based on the Do 2. A differential signal may be viewed as being composed of two respective voltages corresponding to the two differential signal lines, Vp and Vn (e.g., sometimes referred to as a positive voltage, Vp, and a negative voltage, Vn). In this diagram, a common mode (CM) analog circuit1105is implemented to convert the differential signal to a single-ended signal. For example, the CM analog circuit1105is operative to generate an input voltage, Vin, such that Vin=(Vn+Vp)/2.
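The common mode extraction just described, together with the differential combination performed for the two-ADC arrangement ofFIG.5B, reduces to simple arithmetic. The following Python fragment is a minimal numeric illustration only; the example voltages are assumed.

```python
# Minimal sketch of folding a differential pair (Vp, Vn) into the single-ended input
# Vin = (Vn + Vp) / 2, plus the differential value that could be reconstructed
# digitally from two single-ended measurements.
def common_mode(vp, vn):
    return (vn + vp) / 2.0          # Vin presented to the comparator / M-bit ADC

def differential(vp, vn):
    return vp - vn                  # Vdiff = Vload1 - Vload2 (or vice versa)

# Assumed example: a 10 mV differential signal riding on a 0.5 V common mode level.
vp, vn = 0.505, 0.495
print(common_mode(vp, vn))          # 0.5
print(differential(vp, vn))         # approximately 0.010
```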
In some examples, note that the CM analog circuit1105, the comparator, and the digital circuit410may all be implemented within a singular component or device that is operative to process a differential signal and to generate the Do 1 based thereon. FIG.12is a schematic block diagram of another embodiment1200of an ADC that is operative to process an analog differential signal in accordance with the present invention. This diagram has certain similarities with the previous diagram with at least one difference being that the CM analog circuit1105, the comparator, and the digital circuit410, or a functionally equivalent component to the CM analog circuit1105, the comparator, and the digital circuit410, is replaced by a low resolution analog to digital converter (ADC), specifically, a differential M-bit ADC1210, where M is a positive integer greater than or equal to 1. In certain particular examples, M is a positive integer within the range of 1-4 (e.g., 1, 2, 3, or 4). Also, in certain particular examples, M of the differential M-bit ADC1210is less than or equal to N of the differential N-bit DAC1120. In certain specific examples, N is less than 8 bits of resolution. For example, if N=4, then M=3, 2, or 1. The Do 2 may be viewed as a high-resolution digital signal (N bit resolution) compared to the Do 1 (M bit resolution), such that M<N. In addition, in some examples, the Do 2 is a modified version of the Do 1 after having undergone any desired digital signal processing within the processing module24. In certain examples, note that the differential M-bit ADC1210is operative to generate the Do 1 as being an error signal that corresponds to a difference between Vin (such that Vin=(Vn+Vp)/2) and Vref, that has a resolution of M bits, and that is output based on the clocking rate, CLK. For example, the Do 1 is a digital signal that corresponds to an error signal, Ve, such that Ve=Vref−Vin or Vin−Vref. In other examples, note that the differential M-bit ADC1210is operative to generate the Do 1 as being an error signal that corresponds to a difference between the differential input voltage signal, Vin_diff, that is composed of Vn and Vp, and a differential reference signal, Vref_diff (e.g., Vref_diff being a differential signal that is composed of two different reference voltages, such as Vref1 and Vref2), with the Do 1 having a resolution of M bits and being output based on the clocking rate, CLK. For example, the Do 1 is a digital signal that corresponds to an error signal, Ve_diff, that corresponds to the difference between the two differential signals, Ve_diff=Vref_diff−Vin_diff or Vin_diff−Vref_diff. The use of such a differential M-bit ADC1210provides many performance improvements for certain applications including a reduction of quantization noise and an increase of the output resolution of the ADC, particularly with respect to the Do 1. For example, instead of Do 1 being a single bit resolution digital signal (e.g., a digital stream of 1 s and/or 0 s), the Do 1 in this diagram is a digital signal having a higher resolution (e.g., of 2, 3, or 4 bits). In some examples, the Do 1 is then provided to the processing module24, and the processing module24is configured to perform any desired digital signal processing operation on the Do 1 to generate the Do 2 (e.g., increase the output resolution and lower the sampling rate, perform low pass filtering, perform bandpass filtering, etc.).
In this diagram, note that the Do 1 may be passed directly to the differential N-bit DAC1120such that the Do 1 is used to drive the differential N-bit DAC1120. However, in certain examples, the Do 2 is used to drive the differential N-bit DAC1120such as when it is a filtered and/or digital signal processed version of the Do 1. FIG.13is a schematic block diagram of another embodiment1300of an ADC that is operative to process an analog differential signal in accordance with the present invention. This diagram has certain similarities to certain of the previous diagrams that operate based on differential signaling with at least one difference being that the capacitor, C, that was connected between the differential signal lines of the load32-1is replaced by a differential integrator with two respective feedback capacitors, C. The differential integrator is implemented as an operational amplifier with two respective feedback capacitors, C, and is operative to generate a differential input signal based on Vn and Vp. The use of the operational amplifier with two respective feedback capacitors, C, in place of only the capacitor, C, may be used for applications that are tailored to serve greater power than that of the previous diagram. Generally speaking, the two respective feedback capacitors, C, implemented in cooperation with the differential operational amplifier serve a similar purpose as the capacitor, C, that was connected between the differential signal lines of the load32-1in the previous diagram of being charged based on the differential load current and the differential output current from the differential N-bit DAC1120thereby generating the Vin to be provided to the comparator and compared with Vref. Note that the CM analog circuit1105, the comparator, and the digital circuit410may alternatively be replaced with a differential M-bit ADC1210such as in accordance with the previous diagram. FIG.14Ais a schematic block diagram of an embodiment1401of an ADC that is operative to perform voltage measurement in accordance with the present invention. This diagram has some similarities with the previous diagrams with at least one difference being that the load32is replaced by the load voltage32-1, which may be a voltage of any of a number of devices including the load32. Examples of such a load voltage32-1include any of the voltage of an electrode, sensor, transducer, etc. Another difference within this diagram is that a resistor, R, is placed in line with the single line that connects or couples the ADC that is operative to perform voltage measurement and the load voltage32-1. For example, the load voltage32-1, dropping across the resistor, R, to generate the input voltage, Vin, will provide a current signal that will charge the capacitor, C, that is provided to one of the inputs of the comparator. Generally speaking, a load voltage32-1can be measured by inserting a resistor, R, between the load voltage32-1and the ADC so as to facilitate conversion of the load voltage32-1to a current, Iin, that is equal to the difference between the load voltage32-1, Vload, and Vin, divided by R, such that Iin=(Vload−Vin)/R. Note also that a trans-impedance circuitry may alternatively be implemented that is operative to convert a voltage to a current signal such that the current signal may be sensed by an ADC as described herein.
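As a quick worked example of the voltage-to-current conversion above (with purely illustrative values):

```python
def sense_current_from_voltage(v_load, v_in, r_series):
    """Series resistor converts a load voltage into a sense current:
    Iin = (Vload - Vin) / R, which the current-mode ADC loop can then track."""
    return (v_load - v_in) / r_series

# example: a 1.2 V load voltage against a 0.5 V input node through 10 kOhm
print(sense_current_from_voltage(1.2, 0.5, 10e3))   # -> 7e-05 A, i.e. 70 uA
```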
FIG.14Bis a schematic block diagram of an embodiment1402of a transimpedance amplifier that may be implemented within an ADC that is operative to perform voltage measurement in accordance with the present invention. The trans-impedance circuitry includes a buffer, operational amplifier, etc. having a first input coupled to the ground potential, and a second input coupled to a node that is sourcing or sinking current, such as the node connected to the N-bit DAC420. An impedance (shown as an R or generically a Z, which may have inductive and/or capacitive reactance components) is also coupled from the second input to the output of the buffer, operational amplifier, etc. A current, I, that flows through the impedance generates an output voltage, V, that is based on the impedance times the current, I (e.g., V=R×I or Z×I). Such a trans-impedance amplifier, or any appropriate circuit or component that is operative to perform voltage to current signal conversion, or vice versa, may be used in place of the resistor shown within the previous diagram. FIG.15is a schematic block diagram showing an embodiment1500of digital domain filtering within an ADC in accordance with the present invention. This diagram shows an alternative implementation to having a processing module24implemented to receive and perform any desired digital signal processing on the Do 1 and to generate the Do 2. Specifically, a filter1510is implemented to process the Do 1 to generate the Do 2. Note that the filter1510may be of any desired type of digital filter. In certain examples, bandpass filtering or low pass filtering is performed by the filter1510to filter out high-frequency quantization noise within the Do 1 in accordance with generating the Do 2. Possible examples of a low pass filter or low pass filter operation may be implemented based on an accumulator or an integrator. For example, for an application tailored for detecting a DC analog signal, or for detecting an analog signal having a frequency within the voice frequency bands such as 20 kHz to 100 kHz, appropriate low pass filtering or bandpass filtering is performed by the filter1510to filter out high-frequency quantization noise within the Do 1 in accordance with generating the Do 2. In certain examples, note that a processing module24may be in communication with the filter1510such that the particular filtering to be performed by the filter1510is configurable based on control signaling from the processing module24. For example, consider the filter1510to be a configurable or selectable filter that includes one or more options of bandpass filtering or low pass filtering. The processing module24is configured to select a first type of filtering to be performed at or during a first time and a second type of filtering to be performed at or during a second time, and so on. FIG.16is a schematic block diagram showing an embodiment1600of digital domain filtering using cascaded filters within an ADC in accordance with the present invention. This diagram shows digital signal processing based on a cascade of N bandpass filters or N low pass filters. In a particular example, N=10. The gain elements, K1 through KN, are amplification constants that are used to stabilize the feedback loop from any digital output signal that is generated by the respective cascade of N filters (e.g., filter1through filter N) that provide the digital input control signal to the N-bit DAC420.
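A minimal behavioral sketch of such a cascade is shown below, modeling each of the N filters as a simple accumulator and forming the DAC drive as the gain-weighted sum of the stage outputs; the structure and values are illustrative assumptions rather than the specific filters of the embodiment.

```python
def cascaded_loop_filter(error_samples, gains):
    """Cascade of N accumulators (filter 1 .. filter N); the DAC drive is the
    gain-weighted sum of the stage outputs, and each stage output is also
    available as a separate digital output (Do 1 .. Do N)."""
    state = [0.0] * len(gains)
    stage_outputs = [[] for _ in gains]
    dac_drive = []
    for e in error_samples:
        x = e
        for k in range(len(gains)):
            state[k] += x                    # each stage modeled as an accumulator
            stage_outputs[k].append(state[k])
            x = state[k]                     # stage k feeds stage k + 1
        dac_drive.append(sum(g * s for g, s in zip(gains, state)))
    return stage_outputs, dac_drive

# example: three cascaded stages with decreasing feedback gains
outs, drive = cascaded_loop_filter([0.1, -0.2, 0.05, 0.0], gains=[1.0, 0.5, 0.25])
```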
The different respective gain factors operate to stabilize the feedback that is provided to the N-bit DAC420. Note that this implementation is operative to provide a number of different respective digital output signals, shown as Do 1, Do 2 through Do N as corresponding to the respective outputs from the respective cascade of N filters (e.g., filter1through filter N). Note that any one or more decimation filters may also be implemented to perform decimation filtering of the digital output signals, shown as Do 1, Do 2 through Do N as corresponding to the respective outputs from the respective cascade of N filters (e.g., filter1through filter N). FIG.17is a schematic block diagram showing another embodiment1700of digital domain filtering using configurable/adjustable cascaded filters within an ADC in accordance with the present invention. This diagram is similar to the previous diagram with at least one difference being that one or more processing modules24is coupled or connected to each of the respective gain elements (K1 through KN) and the respective cascade of N filters (e.g., filter1through filter N). The one or more processing modules24is configured to adjust the gains of the respective gain elements (K1 through KN) and the particular characteristics by which filtering is performed by the respective cascade of N filters (e.g., filter1through filter N). For example, the one or more processing modules24is configured to select a first set of gains for the respective gain elements (K1 through KN) and a first type of filtering to be performed by the respective cascade of N filters (e.g., filter1through filter N) at or during a first time and a second set of gains for the respective gain elements (K1 through KN) and a second type of filtering to be performed by the respective cascade of N filters (e.g., filter1through filter N) at or during a second time. FIG.18is a schematic block diagram showing an embodiment1800of one or more processing modules implemented to perform digital domain filtering within an ADC in accordance with the present invention. This diagram includes one or more processing modules24that is operative to perform the filtering pictorially illustrated within the previous diagram. For example, one or more processing modules24may be implemented to perform any desired digital signal processing of any of the respective digital output signals, shown as Do 1, Do 2 through Do N including the digital signal processing pictorially described with respect to the previous diagram. In this diagram, the one or more processing modules24itself or themselves performs the digital signal processing. In the previous diagram, separate and distinct digital signal processing components are implemented, and the one or more processing modules24of that diagram are operative to control and configure the manner in which those digital signal processing components operate. In addition, alternative examples of an ADC may be implemented using a non-linear N-bit DAC that operates based on a non-linear function. For example, a non-linear N-bit DAC is operative to provide an output current based on the non-linear function of the digital input signal provided to it. Such a non-linear function may be described also as a non-linear companding function such that companding corresponds to a non-linear response of the ADC based on the signal it receives and/or senses. In such a non-linear N-bit DAC, the output current is a non-linear function of the input.
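Stated compactly, and assuming for illustration a monotonic non-linear DAC transfer $f$, the feedback loop forces the DAC output current toward the load current, so the digital output settles near the inverse of $f$ applied to the load current:

```latex
I_{\text{out}} = f(Do) \;\approx\; I_{\text{load}}
\quad\Longrightarrow\quad
Do \;\approx\; f^{-1}\!\left(I_{\text{load}}\right),
\qquad\text{e.g., } f(x) = I_S\, e^{x/V_0}
\;\Rightarrow\;
Do \approx V_0 \ln\!\left(\frac{I_{\text{load}}}{I_S}\right).
```

For a logarithmic choice of $f$ this reduces to the logarithmic companding behavior discussed below.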
Considering one possible example of an ADC that includes a non-linear N-bit DAC, the digital output signal (e.g., the Do 1 and/or the Do 2 signal) that is generated by such an ADC is a non-linear function of the analog signal that it is sensing. For an ADC that includes a non-linear N-bit DAC and operates based on a logarithmic function when sensing a current signal, the digital output signal (e.g., the Do 1 and/or the Do 2 signal) is a logarithmic function of the input current. Such an ADC that includes a non-linear N-bit DAC may be referred to as a companding ADC. Generally speaking, such an ADC that provides for a non-linear response when generating a digital output signal based on the analog signal that it is sensing may be referred to as a companding ADC. Note that such a companding ADC may also be implemented to perform simultaneous driving and sensing of a signal via that single line that connects or couples to the load. For example, such an ADC is operative to drive an analog signal (e.g., current and/or voltage) of a load32. With respect to implementations that operate in accordance with sensing analog current signals, such a companding ADC is also operative to sense current signals within an extremely broad range including very low currents (e.g., currents below the 1 pico-amp range, within the 10 s of pico-amps range, below the 1 nano-amp range, within the 10 s of nano-amps range, below the 1 micro-amp range, within the 10 s of micro-amps range, etc.) and also up to relatively much larger currents (e.g., currents in the 10 s milli-amps range, 100 s milli-amps range, or even higher values of amps range, etc.). In some examples, such as with respect to detecting currents that are provided from a photodetection or photodiode component, such a companding ADC is also operative to sense current signals below the 1 pico-amp range, currents within the 100 s of micro-amps range, etc. Also, in some examples, when using appropriately provisioned components (e.g., higher current, higher power, etc.), much higher currents can also be sensed using architectures and topologies in accordance with a companding ADC as described herein. For example, such a companding ADC implemented based on architectures and topologies, as described herein, using appropriately provisioned components is operative to sense even higher currents (e.g., ones of amps, 10 s of amps, or even higher values of amps range, etc.). In addition, note that various implementations of such a companding ADC may be implemented to cover a number of decades or orders of magnitude. For example, consider a companding ADC that is implemented to detect current signals ranging from the 10 s of pico-amps to ones of milli-amps. Such a companding ADC would cover a dynamic range of 7-8 decades or 7-8 orders of magnitude. Within such an example, such a very broad dynamic range may be divided using a log scale into the 7-8 decades, such that there are a few data points within each particular decade. Note also that there is a trade-off regarding the resolution of the digital output signal (e.g., the Do 1 and/or the Do 2 signal) that is generated by such a companding ADC and the range of current signals that may be sensed. For example, when the dynamic range of signals to be sensed by such a companding ADC is relatively large, then there can be limitations on sensing very low currents with a high degree of accuracy.
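For the example above, and taking 10 pA and 1 mA as illustrative endpoints of the stated ranges, the dynamic range in decades follows directly:

```latex
\text{decades} \;=\; \log_{10}\!\left(\frac{I_{\max}}{I_{\min}}\right)
\;=\; \log_{10}\!\left(\frac{1\ \text{mA}}{10\ \text{pA}}\right)
\;=\; \log_{10}\!\left(10^{8}\right) \;=\; 8 ,
```

which is consistent with the 7-8 decades noted above (choosing endpoints elsewhere within the stated ranges yields roughly 7).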
Generally speaking, the broader the dynamic range of signals to be sensed, the higher the resolution of the digital output signal (e.g., the Do 1 and/or the Do 2 signal) needed to provide a high degree of accuracy, particularly when sensing very low currents. Consider an example in which currents within a dynamic range of 10 s of pico-amps to 100 s of micro-amps are to be sensed (e.g., within a photodetection or photodiode component); generating a digital output signal using a certain number of bits (e.g., a resolution of 12 bits) may be insufficient to cover the entire range. Within such a particular example, increasing the resolution of the digital output signal (e.g., to a resolution of 16 bits) can help facilitate sensing of signals with higher resolution and also assist sensing very low currents with a high degree of accuracy. Several of the following diagrams have similarities to the prior diagrams with at least one difference being that a non-linear N-bit DAC1920is implemented to generate the current that is output to a load that matches or tracks the current of the load. Similarly, as described with respect to other examples of an ADC, the companding ADCs of these subsequent diagrams also operate by providing an output current to the load32to cancel out the load current. This may be viewed as providing an output current that is equal to and of opposite polarity to the load current. Note also that such a companding ADC may be implemented not only to sense an analog signal associated with the load32but also to provide power and/or energy to the load32within implementations where the load32is not energized via another source. In some examples, this providing of power and/or energy from the companding ADC to the load32is performed simultaneously via a single line via which the companding ADC senses an analog signal associated with the load32. Also, such a companding ADC may be implemented to perform sensing only of an analog signal associated with the load32without providing power and/or energy to the load32. Generally speaking, with respect to such non-linear N-bit DACs, such as the non-linear N-bit DAC1920, the output current provided therefrom is a non-linear function of the Do 2. Therefore, the Do 2 itself is also an inverse function of the load current, given that the output current from the non-linear N-bit DAC1920is operative to match or track the current of the load (e.g., being equal and opposite of the current of the load thereby minimizing the error signal that is based on the difference between Vref and Vin). FIG.19is a schematic block diagram of an embodiment1900of an ADC that includes a non-linear N-bit digital to analog converter (DAC) in accordance with the present invention. This diagram is similar to certain of the previous diagrams (e.g.,FIG.4) that include a comparator and a digital circuit410that generates the Do 1 that is provided to the processing module24. The processing module24processes the Do 1 to generate the Do 2. Also, a capacitor, C, is connected to a node that couples the load32to the companding ADC (e.g., an ADC that includes a non-linear N-bit digital to analog converter (DAC), an ADC that provides for a non-linear response when generating a digital output signal based on the analog signal that it is sensing). However, in this diagram, a non-linear N-bit DAC1920is implemented to generate the current signal that is provided to the node that connects or couples the companding ADC to the load32to match and track the current signal of the load.
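As an illustrative calculation of why a modest word length may not cover such a range with a purely linear converter (the endpoints are assumed for illustration only):

```python
import math

def linear_bits_for_range(i_min, i_max):
    """Bits needed by a purely linear converter whose LSB resolves i_min
    while still spanning i_max (no companding)."""
    return math.ceil(math.log2(i_max / i_min))

# 10 s of pico-amps up to 100 s of micro-amps, e.g. a photodiode front end
print(linear_bits_for_range(10e-12, 100e-6))   # -> 24
```

A linear converter would need on the order of 24 bits for this range, which is one motivation for either raising the output resolution or adopting the companding approach described herein.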
Many of the subsequent diagrams include similar components and operate similarly with at least one difference being that they operate as companding ADCs such that they provide for a non-linear response when generating a digital output signal based on the analog signal that they are sensing. Many of the diagrams include a non-linear N-bit DAC1920implemented in place of the N-bit DAC420. FIG.20is a schematic block diagram of another embodiment2000of an ADC that includes a non-linear N-bit DAC in accordance with the present invention. This diagram is similar toFIG.8with a difference being that a non-linear N-bit DAC1920is implemented in place of the N-bit DAC420. FIG.21is a schematic block diagram of another embodiment2100of an ADC that includes a non-linear N-bit DAC in accordance with the present invention. This diagram is similar toFIG.9with a difference being that a non-linear N-bit DAC1920is implemented in place of the N-bit DAC420. FIG.22is a schematic block diagram of another embodiment2200of an ADC that includes a non-linear N-bit DAC in accordance with the present invention. This diagram is similar toFIG.10with a difference being that a non-linear N-bit DAC1920is implemented in place of the N-bit DAC420. FIG.23is a schematic block diagram of an embodiment2300of an ADC that includes a non-linear N-bit DAC that is operative to process an analog differential signal in accordance with the present invention. This diagram is similar toFIG.11with a difference being that a differential non-linear N-bit DAC2320is implemented in place of the differential N-bit DAC1120. FIG.24is a schematic block diagram of another embodiment2400of an ADC that includes a non-linear N-bit DAC that is operative to process an analog differential signal in accordance with the present invention. This diagram is similar toFIG.12with a difference being that a differential non-linear N-bit DAC2320is implemented in place of the differential N-bit DAC1120. FIG.25is a schematic block diagram of an embodiment2500of an ADC that includes a non-linear N-bit DAC and that is operative to perform voltage measurement in accordance with the present invention. This diagram is similar toFIG.14Awith a difference being that a non-linear N-bit DAC1920is implemented in place of the N-bit DAC420. For example, implementing an appropriate element in-line between the companding ADC and a load voltage32-1(e.g., a resistor, R, a trans-impedance circuitry, and/or any appropriate component to convert voltage to current, etc.) facilitates the conversion of the load voltage32-1to a load current that may be detected using such a companding ADC. In such an example, the non-linear N-bit DAC1920within the companding ADC operates based on a function of Do 2. In an example that includes a resistor, R, and the non-linear N-bit DAC1920, the Do 2 itself is an inverse function of the load voltage32-1divided by R (e.g., a function of Vload/R). Certain of the following diagrams show the use of one or both of a PNP transistor (alternatively, Positive-Negative-Positive Bipolar Junction Transistor (BJT)) or an NPN transistor (alternatively, Negative-Positive-Negative BJT) to implement the non-linear conversion function. For example, one or both of a PNP transistor or an NPN transistor may be used to implement a logarithmic conversion function. In addition, certain of the following diagrams operate using an N-bit DAC420-1that provides an output voltage signal to be received by the base of an NPN transistor or a PNP transistor.
In such examples, one or more of an NPN transistor or a PNP transistor is implemented to provide the current that matches or tracks the load current. Certain examples operate by sourcing current, and others operate by sinking current. Still other examples operate by providing both the functionality of sourcing current and sinking current as may be required to match or track the load current. FIG.26Ais a schematic block diagram of an embodiment2601of an ADC that includes a PNP transistor (alternatively, Positive-Negative-Positive Bipolar Junction Transistor (BJT)) implemented to source current in accordance with the present invention. Generally speaking, a BJT is a type of transistor including three terminals, a base (B), a collector (C), and an emitter (E). Such a BJT includes two semiconductor junctions that share a thin doped region in between them. Considering an NPN transistor, a thin p-doped region is implemented in between two n-type semiconductor regions thereby forming the two semiconductor junctions. Considering a PNP transistor, a thin n-doped region is implemented in between two p-type semiconductor regions thereby forming the two semiconductor junctions. With respect to such a transistor, the collector current, $I_C$, as a function of the voltage between the base (B) and emitter (E) is as follows: $I_C = I_S\left(e^{qV_{BE}/kT} - 1\right)$, where, based on the Shockley diode equation or the diode law, $I_S$ is the reverse bias saturation current (alternatively referred to as scale current); $V_{BE}$ is the voltage across the semiconductor junction; and $V_T$ is the thermal voltage, kT/q, which is the Boltzmann constant, k, times temperature, T, divided by electron charge, q. As such, the value of VBEis the output voltage of the N-bit DAC420-1, which operates based on a full-scale voltage shown as Vfull_scale, such that the N-bit DAC420-1is operative to provide an output voltage up to and including the full-scale voltage shown as Vfull_scale. Given that VBEis the output voltage of the N-bit DAC420-1, then it is also the conversion of the Do 2 to an analog signal. Therefore, the Do 2 is an inverse function of the above equation showing the collector current, $I_C$, as follows: $Do2 = V_{BE} \approx \frac{kT}{q}\ln\!\left(\frac{I_C}{I_S}\right)$. The full-scale voltage shown as Vfull_scale is a reference voltage for the N-bit DAC420-1, which also operates to control the full-scale output current.FIG.28BandFIG.28Cshow examples by which a temperature independent full-scale reference circuit may be implemented. Referring again toFIG.26A, this diagram shows a PNP transistor implemented to source current to a node that connects to the load32to match and track the load current. FIG.26Bis a schematic block diagram of an embodiment2602of an ADC that includes an NPN transistor (alternatively, Negative-Positive-Negative BJT) implemented to sink current in accordance with the present invention. This diagram shows an NPN transistor implemented to sink current from a node that connects to the load32to match and track the load current. FIG.27is a schematic block diagram of an embodiment2700of an ADC that includes both a PNP transistor implemented to source current and an NPN transistor implemented to sink current in accordance with the present invention. This diagram shows both a PNP transistor implemented to source current to a node that connects to the load32to match and track the load current and also an NPN transistor implemented to sink current from a node that connects to the load32to match and track the load current.
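Returning to the collector-current relationship above, a small numeric sketch of the logarithmic companding transfer follows; the saturation current and temperature are assumed, illustrative values rather than values from the embodiments.

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # electron charge, C

def vbe_for_current(i_c, i_s=1e-15, temp_k=300.0):
    """VBE that makes the collector current equal i_c under the Shockley model,
    i.e. the logarithmic companding transfer Do2 ~ (kT/q) ln(IC/IS)."""
    v_t = K_B * temp_k / Q_E                 # thermal voltage, about 25.9 mV at 300 K
    return v_t * math.log(i_c / i_s)

# seven decades of current map onto only a few hundred millivolts of VBE
for i_c in (10e-12, 1e-9, 1e-6, 100e-6):
    print(i_c, round(vbe_for_current(i_c), 3))   # 0.238, 0.357, 0.536, 0.655 V
```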
In cooperation with one another, both the PNP transistor and the NPN transistor can operate either to sink or source current as may be needed to match and track the load current. FIG.28Ais a schematic block diagram of an embodiment2801of an ADC that includes diodes implemented to source and/or sink current in accordance with the present invention. This diagram shows the two diodes implemented and controlled using switches, such as being controlled by the processing module24, to provide for sinking or sourcing current to or from the node that connects to the load32to match and track the load current. FIG.28Bis a schematic block diagram of an embodiment2802of a PNP transistor diode configuration operative to generate a full scale voltage signal in accordance with the present invention. In addition, note that one way to have a temperature independent full-scale reference current is to use a PNP or NPN diode configuration to generate the full-scale voltage (Vfull_scale) based on an applied reference current Iref. This is to form a current mirror. The output bipolar transistor current to the load is a mirror copy of the reference current, Iref, which is scaled by the voltage value provided by the N-bit DAC420-1. The reference current is applied to the collector of the PNP (or NPN) and the base is connected to the collector to form a diode configuration. The base voltage of the PNP is the full-scale voltage (Vfull_scale) that is applied to the N-bit DAC. Such a configuration for a PNP transistor is shown with respect toFIG.28B. Such a configuration for an NPN transistor is shown with respect toFIG.28C. FIG.28Cis a schematic block diagram of an embodiment2803of an NPN transistor diode configuration operative to generate a full scale voltage signal in accordance with the present invention. Such implementations of a companding ADC using one or more NPN transistors, PNP transistors, and/or diodes provide a number of advantages over prior art ADCs. For example, they may be operated using extremely low power. Also, they operate to provide direct conversion of a digital output (e.g., Do 2) that is logarithmically proportional to the input current. Moreover, using an appropriate implementation, such as that described to provide a temperature independent full-scale reference current, such a companding ADC is temperature independent as opposed to the prior art ADCs, which are temperature dependent. Also, the accuracy and operation of such a companding ADC is independent of the $I_S$ current of the bipolar transistor [reverse bias saturation current (alternatively referred to as scale current)], which can have very wide tolerance across components. Certain of the following diagrams show the use of one or both of a P-channel or P-type metal-oxide-semiconductor field-effect transistor (MOSFET) (alternatively, PMOS transistor) or an N-channel or N-type metal-oxide-semiconductor field-effect transistor (MOSFET) (alternatively, NMOS transistor) to implement the non-linear conversion function. For example, the use of one or both of a PMOS transistor or an NMOS transistor may be used to implement a logarithmic conversion function. In addition, certain of the following diagrams operate using an N-bit DAC420-1that provides an output voltage signal to be received by the gate of an NMOS transistor or a PMOS transistor. In such examples, one or more of an NMOS transistor or a PMOS transistor is implemented to provide the current that matches or tracks the load current.
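A short derivation, under the idealized Shockley model with the −1 term neglected, sketches why the diode-connected reference of FIG.28B and FIG.28C makes the output current a mirror copy of Iref with no dependence on $I_S$:

```latex
V_{\text{full\_scale}} = \frac{kT}{q}\,\ln\!\left(\frac{I_{\text{ref}}}{I_S}\right),
\qquad
I_{\text{out}} = I_S\, e^{\,qV_{BE}/kT}
\;\;\Longrightarrow\;\;
\frac{I_{\text{out}}}{I_{\text{ref}}} = e^{\,q\left(V_{BE}-V_{\text{full\_scale}}\right)/kT},
```

so at full code ($V_{BE}=V_{\text{full\_scale}}$) the output current equals $I_{\text{ref}}$ regardless of $I_S$, and more generally the ratio depends only on how far the DAC output voltage sits relative to the full-scale reference.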
Certain examples operate by sourcing current, and others operate by sinking current. Still other examples operate by providing both the functionality of sourcing current and sinking current as may be required to match or track the load current. FIG.29Ais a schematic block diagram of an embodiment2901of an ADC that includes a P-channel or P-type metal-oxide-semiconductor field-effect transistor (MOSFET) (alternatively, PMOS transistor) implemented to source current in accordance with the present invention. For example, the use of one or both of an NMOS transistor or a PMOS transistor operates based on a square root conversion function. For example, the drain current, $I_D$, of a MOSFET is as follows: $I_D = \frac{\mu C_{OX}}{2}\,\frac{W}{L}\left(V_{GS} - V_T\right)^2$, where $V_{GS}$ is the voltage across the gate (G) to source (S) junction of the MOSFET; $V_T$ is the threshold voltage of the MOSFET, which varies as a function of temperature; W is gate width; L is gate length; $\mu C_{OX}$ is a process transconductance parameter; and $\mu C_{OX}(W/L)$ is a MOSFET transconductance parameter. As such, the voltage across the gate (G) to source (S) junction of the MOSFET, VGS, is the output voltage of the N-bit DAC420-1. As such, the value of VGSis the output voltage of the N-bit DAC420-1. Given that VGSis the output voltage of the N-bit DAC420-1, then it is also the conversion of the Do 2 to an analog signal. Therefore, the Do 2 (shown as Do in the equation below) is an inverse function of the above equation showing the drain current, $I_D$, as follows: $Do = V_{GS} = \sqrt{\dfrac{2L}{\mu C_{OX} W}\,I_D} + V_T$. As can be seen, this shows the Do 2 (shown as Do in the equation above) as being a square root function of the input current, which is the drain current, $I_D$. Also, note that parallel measurement similar to the log ratio-metric measurement may be used to remove the dependence on $V_T$, the threshold voltage, which varies as a function of temperature. For example, a similar diode configuration and Iref current mirror as in the bipolar transistor variant can be applied here with respect to MOSFET devices. For example, consider generating a first digital output signal, shown as Do1 below, and also a second digital output signal, shown as Do2 below: $Do1 = V_{GS} = \sqrt{\dfrac{2L}{\mu C_{OX} W}\,I_{D1}} + V_T$, and $Do2 = V_{GS} = \sqrt{\dfrac{2L}{\mu C_{OX} W}\,I_{D2}} + V_T$; then the difference between them is as follows: $Do1 - Do2 = \sqrt{\dfrac{2L}{\mu C_{OX} W}\,I_{D1}} - \sqrt{\dfrac{2L}{\mu C_{OX} W}\,I_{D2}}$, which is temperature independent and has no dependence on $V_T$. Referring again toFIG.29A, this diagram shows a PMOS transistor implemented to source current to a node that connects to the load32to match and track the load current. FIG.29Bis a schematic block diagram of an embodiment2902of an ADC that includes an N-channel or N-type metal-oxide-semiconductor field-effect transistor (MOSFET) (alternatively, NMOS transistor) implemented to sink current in accordance with the present invention. This diagram shows an NMOS transistor implemented to sink current from a node that connects to the load32to match and track the load current. FIG.30is a schematic block diagram of an embodiment3000of an ADC that includes both a PMOS transistor implemented to source current and an NMOS transistor implemented to sink current in accordance with the present invention. This diagram shows both a PMOS transistor implemented to source current to a node that connects to the load32to match and track the load current and also an NMOS transistor implemented to sink current from a node that connects to the load32to match and track the load current.
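Returning to the square-law relationship above, a small numeric sketch of the square-root companding transfer follows; the process parameter, aspect ratio, and threshold voltage are assumed values for illustration only.

```python
import math

MU_COX = 200e-6   # process transconductance parameter, A/V^2 (assumed value)
W_OVER_L = 10.0   # device aspect ratio W/L (assumed value)
V_T = 0.4         # MOSFET threshold voltage, V (assumed value)

def vgs_for_current(i_d):
    """Gate voltage for a square-law MOSFET carrying drain current i_d,
    i.e. the square-root companding transfer Do = sqrt(2*L*ID/(u*Cox*W)) + VT."""
    return math.sqrt(2.0 * i_d / (MU_COX * W_OVER_L)) + V_T

do1 = vgs_for_current(100e-6)
do2 = vgs_for_current(25e-6)
print(round(do1 - do2, 4))   # -> 0.1581 V: the V_T term cancels in the difference
```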
In cooperation with one another, both the PMOS transistor and the NMOS transistor can operate either to sink or source current as may be needed to match and track the load current. FIG.31is a schematic block diagram showing an embodiment3100of digital domain filtering within an ADC that includes a non-linear N-bit DAC in accordance with the present invention. This diagram is similar toFIG.15with a difference being that a non-linear N-bit DAC1920is implemented in place of the N-bit DAC420. FIG.32is a schematic block diagram showing an embodiment3200of digital domain filtering using cascaded filters within an ADC that includes a non-linear N-bit DAC in accordance with the present invention. This diagram is similar toFIG.16with a difference being that a non-linear N-bit DAC1920is implemented in place of the N-bit DAC420. FIG.33is a schematic block diagram showing another embodiment3300of digital domain filtering using configurable/adjustable cascaded filters within an ADC that includes a non-linear N-bit DAC in accordance with the present invention. This diagram is similar toFIG.17with a difference being that a non-linear N-bit DAC1920is implemented in place of the N-bit DAC420. FIG.34is a schematic block diagram showing an embodiment3400of one or more processing modules implemented to perform digital domain filtering within an ADC that includes a non-linear N-bit DAC in accordance with the present invention. This diagram is similar toFIG.18with a difference being that a non-linear N-bit DAC1920is implemented in place of the N-bit DAC420. It is noted that terminologies as may be used herein such as bit stream, stream, signal sequence, etc. (or their equivalents) have been used interchangeably to describe digital information whose content corresponds to any of a number of desired types (e.g., data, video, speech, text, graphics, audio, etc. any of which may generally be referred to as ‘data’). As may be used herein, the terms “substantially” and “approximately” provide an industry-accepted tolerance for its corresponding term and/or relativity between items. For some industries, an industry-accepted tolerance is less than one percent and, for other industries, the industry-accepted tolerance is 10 percent or more. Other examples of industry-accepted tolerance range from less than one percent to fifty percent. Industry-accepted tolerances correspond to, but are not limited to, component values, integrated circuit process variations, temperature variations, rise and fall times, thermal noise, dimensions, signaling errors, dropped packets, temperatures, pressures, material compositions, and/or performance metrics. Within an industry, tolerance variances of accepted tolerances may be more or less than a percentage level (e.g., dimension tolerance of less than +/−1%). Some relativity between items may range from a difference of less than a percentage level to a few percent. Other relativity between items may range from a difference of a few percent to magnitude of differences. As may also be used herein, the term(s) “configured to”, “operably coupled to”, “coupled to”, and/or “coupling” includes direct coupling between items and/or indirect coupling between items via an intervening item (e.g., an item includes, but is not limited to, a component, an element, a circuit, and/or a module) where, for an example of indirect coupling, the intervening item does not modify the information of a signal but may adjust its current level, voltage level, and/or power level. 
As may further be used herein, inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two items in the same manner as “coupled to”. As may even further be used herein, the term “configured to”, “operable to”, “coupled to”, or “operably coupled to” indicates that an item includes one or more of power connections, input(s), output(s), etc., to perform, when activated, one or more its corresponding functions and may further include inferred coupling to one or more other items. As may still further be used herein, the term “associated with”, includes direct and/or indirect coupling of separate items and/or one item being embedded within another item. As may be used herein, the term “compares favorably”, indicates that a comparison between two or more items, signals, etc., provides a desired relationship. For example, when the desired relationship is that signal1has a greater magnitude than signal2, a favorable comparison may be achieved when the magnitude of signal1is greater than that of signal2or when the magnitude of signal2is less than that of signal1. As may be used herein, the term “compares unfavorably”, indicates that a comparison between two or more items, signals, etc., fails to provide the desired relationship. As may be used herein, one or more claims may include, in a specific form of this generic form, the phrase “at least one of a, b, and c” or of this generic form “at least one of a, b, or c”, with more or less elements than “a”, “b”, and “c”. In either phrasing, the phrases are to be interpreted identically. In particular, “at least one of a, b, and c” is equivalent to “at least one of a, b, or c” and shall mean a, b, and/or c. As an example, it means: “a” only, “b” only, “c” only, “a” and “b”, “a” and “c”, “b” and “c”, and/or “a”, “b”, and “c”. As may also be used herein, the terms “processing module”, “processing circuit”, “processor”, “processing circuitry”, and/or “processing unit” may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions. The processing module, module, processing circuit, processing circuitry, and/or processing unit may be, or further include, memory and/or an integrated memory element, which may be a single memory device, a plurality of memory devices, and/or embedded circuitry of another processing module, module, processing circuit, processing circuitry, and/or processing unit. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. Note that if the processing module, module, processing circuit, processing circuitry, and/or processing unit includes more than one processing device, the processing devices may be centrally located (e.g., directly coupled together via a wired and/or wireless bus structure) or may be distributedly located (e.g., cloud computing via indirect coupling via a local area network and/or a wide area network). 
Further note that if the processing module, module, processing circuit, processing circuitry and/or processing unit implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. Still further note that, the memory element may store, and the processing module, module, processing circuit, processing circuitry and/or processing unit executes, hard coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in one or more of the Figures. Such a memory device or memory element can be included in an article of manufacture. One or more embodiments have been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claims. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claims. One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof. In addition, a flow diagram may include a “start” and/or “continue” indication. The “start” and “continue” indications reflect that the steps presented can optionally be incorporated in or otherwise used in conjunction with one or more other routines. In addition, a flow diagram may include an “end” and/or “continue” indication. The “end” and/or “continue” indications reflect that the steps presented can end as described and shown or optionally be incorporated in or otherwise used in conjunction with one or more other routines. In this context, “start” indicates the beginning of the first step presented and may be preceded by other activities not specifically shown. Further, the “continue” indication reflects that the steps presented may be performed multiple times and/or may be succeeded by other activities not specifically shown. Further, while a flow diagram indicates a particular ordering of steps, other orderings are likewise possible provided that the principles of causality are maintained. 
The one or more embodiments are used herein to illustrate one or more aspects, one or more features, one or more concepts, and/or one or more examples. A physical embodiment of an apparatus, an article of manufacture, a machine, and/or of a process may include one or more of the aspects, features, concepts, examples, etc. described with reference to one or more of the embodiments discussed herein. Further, from figure to figure, the embodiments may incorporate the same or similarly named functions, steps, modules, etc. that may use the same or different reference numbers and, as such, the functions, steps, modules, etc. may be the same or similar functions, steps, modules, etc. or different ones. Unless specifically stated to the contra, signals to, from, and/or between elements in a figure of any of the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential. For instance, if a signal path is shown as a single-ended path, it also represents a differential signal path. Similarly, if a signal path is shown as a differential path, it also represents a single-ended signal path. While one or more particular architectures are described herein, other architectures can likewise be implemented that use one or more data buses not expressly shown, direct connectivity between elements, and/or indirect coupling between other elements as recognized by one of average skill in the art. The term “module” is used in the description of one or more of the embodiments. A module implements one or more functions via a device such as a processor or other processing device or other hardware that may include or operate in association with a memory that stores operational instructions. A module may operate independently and/or in conjunction with software and/or firmware. As also used herein, a module may contain one or more sub-modules, each of which may be one or more modules. As may further be used herein, a computer readable memory includes one or more memory elements. A memory element may be a separate memory device, multiple memory devices, or a set of memory locations within a memory device. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. The memory device may be in a form a solid-state memory, a hard drive memory, cloud memory, thumb drive, server memory, computing device memory, and/or other physical medium for storing digital information. While particular combinations of various functions and features of the one or more embodiments have been expressly described herein, other combinations of these features and functions are likewise possible. The present disclosure is not limited by the particular examples disclosed herein and expressly incorporates these other combinations. | 113,692 |
11863198 | The details of various embodiments of the methods and systems are set forth in the accompanying drawings and the description below. DETAILED DESCRIPTION For purposes of reading the description of the various embodiments below, the following descriptions of the sections of the specification and their respective contents can be helpful: Section A describes a network environment and computing environment which can be useful for practicing embodiments described herein; and Section B describes embodiments of systems and methods for power efficient SAR ADC, according to one or more embodiments. Various embodiments disclosed herein are related to a device for communication of data. In some embodiments, the device includes or is a SAR ADC employed in a physical layer product. In some embodiments, the SAR ADC is a high speed ADC (e.g., with resolutions beyond 7 bits and operating frequencies of several hundred mega-samples per second). In some embodiments, the SAR ADC is adaptively tuned for PVT operations to increase speed or reduce noise as appropriate. The adaptive feedback technique can adjust the comparator driver threshold to increase the speed of operation in the slow corner without any common mode voltage VCMrelated error in the fast (FF) corner in some embodiments. Advantageously, systems and methods described herein can provide a SAR ADC that has reduced latency, reduced area, reduced mismatches, and increased bandwidth for large SAR ADC arrays. In some embodiments, an optimized (e.g., for power consumption and performance) SAR ADC having adaptive current or voltage parameter adjustments is provided. The SAR ADC is configured to adjust current or voltage parameters associated with the comparator to adjust the speed of the comparator operation. Systems and methods adjust the comparator bias current or threshold voltages to enable a very high speed comparator with low noise and reduced power consumption (by 10 percent or more) in some embodiments. In some embodiments, current or voltage parameters are adjusted or adaptively tuned in accordance with a conversion margin. In some embodiments, the conversion margin is indicative of an amount of unused time during the compare operation available within a sampling cycle. In some embodiments, the conversion margin is determined using a time delay and probability relationship. In some embodiments, the current or voltage parameters include a current bias for a comparator current source, a threshold voltage for a comparator driver, a current bias for a comparator driver, and/or a supply voltage (e.g., VDDfor the comparator or other voltage provided by an on-board supply regulator). In some embodiments, the SAR ADC employs an operational flow to sense conditions to determine the probability relationship and adjust the voltage and current parameters. In some embodiments, the SAR ADC includes a data path configured for increased speed. The data path uses an enable signal (enable zero) to directly reset a capacitor digital to analog converter (CAPDAC) circuit, which enables faster reset and improved CAPDAC circuit settling time. In some embodiments, the CAPDAC circuit receives feedback signals directly from ratioed latches instead of separate drivers, thereby increasing speed (e.g., by reducing the clock path delay significantly) and reducing layout complications.
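One way to picture the adaptive tuning described above is as a simple control-loop step that trades conversion margin against comparator bias; the following sketch is purely illustrative, and its names, thresholds, and step sizes are assumptions rather than details of the embodiments.

```python
def adjust_comparator_bias(margin, sample_period, bias_code,
                           target_fraction=0.1, step=1, code_max=63):
    """One step of a hypothetical control loop: if the conversion margin (unused
    time left in the sampling cycle after all bit comparisons) is comfortably
    large, back the comparator bias off to save power; if it is too small,
    raise the bias to speed the comparator up."""
    target = target_fraction * sample_period
    if margin > 2 * target and bias_code > 0:
        bias_code -= step            # plenty of headroom: cut the bias current
    elif margin < target and bias_code < code_max:
        bias_code += step            # running out of time: increase the bias current
    return bias_code

# example: 800 MS/s sampling (1.25 ns period) with 0.4 ns of measured margin
print(adjust_comparator_bias(margin=0.4e-9, sample_period=1.25e-9, bias_code=32))
```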
In some embodiments, the most significant bit (MSB) ratioed latch is driven by a single comparator driver while the remaining latches are driven by a second driver coupled to the single comparator driver. In some embodiments, the second driver is directly coupled to the single driver. In some embodiments, the use of this configuration of drivers reduces the load on the single comparator driver, thereby reducing settling time for the MSB and reducing layout complications. Various embodiments disclosed herein are related to a device. The device includes a digital to analog conversion (DAC) circuit configured to sample an input voltage, a comparator circuit coupled to the DAC circuit and having an output, a first set of storage circuits, and a comparator driver. The comparator driver is disposed between the comparator circuit and the first set of storage circuits. The first set of storage circuits is coupled to the comparator circuit and the DAC circuit. The first set of storage circuits is configured to store first bits corresponding to the input voltage. The comparator driver includes a first driver and a second driver. The first driver is coupled to a first input of a first storage circuit of the first set of storage circuits, and the second driver is coupled to first inputs of a second set of storage circuits within the first set of storage circuits. In some embodiments, the second set of storage circuits does not include the first storage circuit. In some embodiments, the first storage circuit stores a most significant bit of the first bits provided by the comparator circuit. In some embodiments, the second set of storage circuits includes all remaining storage circuits in the first set of storage circuits. In some embodiments, the device also includes a third set of storage circuits coupled to the comparator circuit and the DAC circuit. The third set of storage circuits is configured to store second bits corresponding to the input voltage. In some embodiments, the comparator driver includes a third driver and a fourth driver. The third driver is coupled to a first input of a second storage circuit of the third set of storage circuits. The fourth driver is coupled to first inputs of a fourth set of storage circuits within the third set of storage circuits. In some embodiments, the third driver is coupled to an input of the fourth driver. In some embodiments, the first driver is coupled to an input of the second driver. In some embodiments, the digital to analog conversion (DAC) circuit is reset by an enable signal received by the first set of storage circuits. Various embodiments disclosed herein are related to a device. The device includes a digital to analog conversion (DAC) circuit configured to sample an input voltage. The DAC circuit includes reset transistors. The device also includes a comparator circuit coupled to the DAC circuit, a first set of storage circuits coupled to the comparator circuit and the DAC circuit, and an enable circuit configured to provide enable signals to the first set of storage circuits. The first set of storage circuits is configured to store first bits corresponding to the input voltage provided by the comparator circuit. One of the enable signals is provided to the reset transistors to reset the sample and digital to analog conversion (DAC) circuit. In some embodiments, the one of the enable signals is an enable zero (EN0) signal. In some embodiments, the enable zero signal is a clock signal indicative of the least significant bit being converted.
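For reference, the bit decisions that the storage circuits hold follow the textbook successive-approximation cycling, sketched below with an ideal comparator and an ideal DAC (an illustrative model, not the specific CAPDAC implementation described herein):

```python
def sar_convert(v_in, v_ref, n_bits=8):
    """Textbook successive-approximation cycling: the MSB is decided first, and
    each stored bit updates the (ideal) DAC level for the next comparison,
    mirroring the MSB-first latch ordering described above."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)                 # tentatively set the bit under test
        v_dac = v_ref * trial / (1 << n_bits)     # ideal DAC output for that code
        if v_in >= v_dac:
            code = trial                          # comparator says keep the bit
    return code

print(sar_convert(0.30, 1.0))   # -> 76, i.e. the 0.296875 V level of a 1.0 V reference
```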
In some embodiments, a delay is provided in a reset path associated with the reset transistors to ensure that a last conversion is not affected. In some embodiments, a comparator clock signal for the comparator circuit is stopped using the enable zero signal. In some embodiments, the enable circuit includes a number of flip flops, the flip flops being clocked by a comparator clock signal. In some embodiments, the device further includes a conversion status circuit which uses the one of the enable signals (e.g., an enable zero EN0 signal) to determine a conversion margin. Various embodiments disclosed herein are related to a device. The device includes a digital to analog conversion (DAC) circuit configured to sample an input voltage, a comparator circuit coupled to the DAC circuit, and a first set of storage circuits coupled to the comparator circuit and the sample and DAC circuit. The first set of storage circuits is configured to store first bits corresponding to the input voltage. The first set of storage circuits are ratioed latches having outputs coupled to feedback inputs of the digital to analog conversion (DAC) circuit. In some embodiments, the latches are set and reset latches. In some embodiments, the device further includes a comparator driver between an output of the comparator circuit and the first set of storage circuits. The comparator driver includes a first driver and second driver. The first driver is coupled to a first input of a first storage circuit of the first set of storage circuits, and the second driver is coupled to first inputs of a second set of storage circuits within the first set of storage circuits. In some embodiments, the first storage circuit stores a most significant bit of the first bits provided by the comparator circuit, and the second set of storage circuits includes all remaining storage circuits in the first set of storage circuits. In some embodiments, the device further includes an enable circuit configured to provide enable signals to the first set of storage circuits. One of the enable signals (e.g., an enable zero EN0 signal) is provided to reset transistors in the sample and digital to analog conversion (DAC) circuit to reset the sample and digital to analog conversion (DAC) circuit. In some embodiments, the conversion margin is indicative of unused or additional time during a compare operation available within a sampling cycle. In some embodiments, the control circuit is configured to determine the conversion margin and adjust the bias current, the threshold voltage, and/or the supply voltage on a periodic basis. In some embodiments, the control circuit is configured to determine the conversion margin and adjust the bias current, the threshold voltage, and/or the supply voltage at device power-up. In some embodiments, the control circuit is configured to adjust the bias current and the threshold voltage. In some embodiments, the control circuit is configured to adjust the bias current in response to the conversion margin, and the bias current is for a current mirror or a driver in the comparator circuit. In some embodiments, the control circuit is configured to adjust the threshold voltage in response to the conversion margin, and the threshold voltage is used by a driver in the comparator circuit. In some embodiments, the control circuit is configured to adjust the supply voltage in response to the conversion margin. 
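A purely illustrative sketch of an enable chain of the kind described above, in which one enable flag per bit position advances on the comparator clock and the final flag (EN0) is reused for the CAPDAC reset; the structure is an assumption for illustration, not the specific enable circuit of the embodiments.

```python
def enable_sequence(n_bits=8):
    """Hypothetical enable chain: one enable flag per bit position, advanced by
    the comparator clock from the MSB down to the LSB.  The final flag, EN0,
    marks the last (LSB) conversion and can be reused both to reset the CAPDAC
    and to stop the comparator clock, as described above."""
    for k in range(n_bits - 1, -1, -1):
        yield [1 if i == k else 0 for i in range(n_bits)]   # en[i] enables bit i

for en in enable_sequence(4):
    print(en)          # last line printed is [1, 0, 0, 0], i.e. EN0 asserted
```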
In some embodiments, the device further includes a conversion status circuit configured to provide a conversion status signal, and the conversion status circuit includes a variable delay circuit. The control circuit adjusts the variable delay circuit to determine the conversion margin in some embodiments. Various embodiments disclosed herein are related to an apparatus including a receiver. The apparatus can be used in communication applications. The receiver includes an analog-to-digital conversion (ADC) circuit including a comparator and a processor. The processor is configured to determine a conversion margin and adjust a current or voltage used in the comparator using the conversion margin. A sample clock signal is used to sample a voltage received by the ADC circuit in some embodiments. A. Computing and Network Environment Prior to discussing specific embodiments of the present solution, it can be helpful to describe aspects of the operating environment as well as associated system components (e.g., hardware elements) in connection with the methods and systems described herein. Referring toFIG.1A, an embodiment of a network environment is depicted. In brief overview, the network environment includes a wireless communication system that includes one or more access points (APs) or network devices106, one or more wireless communication devices102and a node192. The wireless communication devices102can for example include laptop computers, tablets, personal computers and/or cellular telephone devices. The details of an embodiment of each wireless communication device102and/or network devices106or AP are described in greater detail with reference toFIGS.1B and1C. The network environment can be an ad hoc network environment, an infrastructure wireless network environment, a subnet environment, etc. in one embodiment. The network devices106or APs can be operably coupled to the network hardware or node192via local area network connections. The node192, which can include a router, gateway, switch, bridge, modem, system controller, appliance, etc., can provide a local area network connection for the communication system. Each of the network devices106or APs can have an associated antenna or an antenna array to communicate with the wireless communication devices in its area. The wireless communication devices102can register with a particular network devices106or AP to receive services from the communication system (e.g., via a SU-MIMO or MU-MIMO configuration). For direct connections (e.g., point-to-point communications), some wireless communication devices can communicate directly via an allocated channel and communications protocol. Some of the wireless communication devices102can be mobile or relatively static with respect to network devices106or AP. In some embodiments a network device106or AP includes a device or module (including a combination of hardware and software) that allows wireless communication devices102to connect to a wired network using wireless-fidelity (WiFi), or other standards. A network devices106or AP can sometimes be referred to as a wireless access point (WAP). A network devices106or AP can be implemented (e.g., configured, designed and/or built) for operating in a wireless local area network (WLAN). A network devices106or AP can connect to a router (e.g., via a wired network) as a standalone device in some embodiments. In other embodiments, a network devices106or AP can be a component of a router. 
A network device106or AP can provide multiple devices access to a network. Network devices106or APs can, for example, connect to a wired Ethernet connection and provide wireless connections using radio frequency links for other devices102to utilize that wired connection. A network device106or AP can be implemented to support a standard for sending and receiving data using one or more radio frequencies. Those standards, and the frequencies they use, can be defined by the IEEE (e.g., IEEE 802.11 standards). Network devices106or APs can be configured and/or used to support public Internet hotspots, and/or used on a network to extend the network's Wi-Fi signal range. In some embodiments, the access points106can be used for (e.g., in-home or in-building) wireless networks (e.g., IEEE 802.11, Bluetooth, ZigBee, any other type of radio frequency based network protocol and/or variations thereof). Each of the wireless communication devices102can include a built-in radio and/or be coupled to a radio. Such wireless communication devices102and/or access points106can operate in accordance with the various aspects of the disclosure as presented herein to enhance performance, reduce costs and/or size, and/or enhance broadband applications. Each wireless communication device102can have the capacity to function as a client node seeking access to resources (e.g., data, and connection to networked nodes such as servers) via one or more access points106. The network connections can include any type and/or form of network and can include any of the following: a point-to-point network, a broadcast network, a telecommunications network, a data communication network, a computer network. The topology of the network can be a bus, star, or ring network topology. The network can be of any such network topology as known to those ordinarily skilled in the art capable of supporting the operations described herein. In some embodiments, different types of data can be transmitted via different protocols. In other embodiments, the same types of data can be transmitted via different protocols. The communications device(s)102and access point(s)106can be deployed as and/or executed on any type and form of computing device, such as a computer, network device or appliance capable of communicating on any type and form of network and performing the operations described herein.FIGS.1B and1Cdepict block diagrams of a computing device100useful for practicing an embodiment of the wireless communication devices102or network devices106or AP. As shown inFIGS.1B and1C, each computing device100includes a processor121(e.g., central processing unit), and a main memory unit122. As shown inFIG.1B, a computing device100can include a storage device128, an installation device116, a network interface118, an I/O controller123, display devices124a-124n, a keyboard126and a pointing device127, such as a mouse. The storage device128can include an operating system and/or software. As shown inFIG.1C, each computing device100can also include additional optional elements, such as a memory port103, a bridge170, one or more input/output devices130a-130n, and a cache memory140in communication with the central processing unit121. The central processing unit121is any logic circuitry that responds to and processes instructions fetched from the main memory unit122.
In many embodiments, the central processing unit121is provided by a microprocessor unit, such as: those manufactured by Intel Corporation of Santa Clara, California; those manufactured by International Business Machines of White Plains, New York; or those manufactured by Advanced Micro Devices of Sunnyvale, California. The computing device100can be based on any of these processors, or any other processor capable of operating as described herein. Main memory unit122can be one or more memory chips capable of storing data and allowing any storage location to be directly accessed by the microprocessor121, such as any type or variant of Static random access memory (SRAM), Dynamic random access memory (DRAM), Ferroelectric RAM (FRAM), NAND Flash, NOR Flash and Solid State Drives (SSD). The main memory122can be based on any of the above described memory chips, or any other available memory chips capable of operating as described herein. In the embodiment shown inFIG.1B, the processor121communicates with main memory122via a system bus150(described in more detail below).FIG.1Cdepicts an embodiment of a computing device100in which the processor communicates directly with main memory122via a memory port103. For example, inFIG.1Cthe main memory122can be DRDRAM. FIG.1Cdepicts an embodiment in which the main processor121communicates directly with cache memory140via a secondary bus, sometimes referred to as a backside bus. In other embodiments, the main processor121communicates with cache memory140using the system bus150. Cache memory140typically has a faster response time than main memory122and is provided by, for example, SRAM, BSRAM, or EDRAM. In the embodiment shown inFIG.1C, the processor121communicates with various I/O devices130via a local system bus150. Various buses can be used to connect the central processing unit121to any of the I/O devices130, for example, a VESA VL bus, an ISA bus, an EISA bus, a MicroChannel Architecture (MCA) bus, a PCI bus, a PCI-X bus, a PCI-Express bus, or a NuBus. For embodiments in which the I/O device is a video display124, the processor121can use an Advanced Graphics Port (AGP) to communicate with the display124.FIG.1Cdepicts an embodiment of a computer100in which the main processor121can communicate directly with I/O device130b, for example via HYPERTRANSPORT, RAPIDIO, or INFINIBAND communications technology.FIG.1Calso depicts an embodiment in which local busses and direct communication are mixed: the processor121communicates with I/O device130ausing a local interconnect bus while communicating with I/O device130bdirectly. A wide variety of I/O devices130a-130ncan be present in the computing device100. Input devices include keyboards, mice, trackpads, trackballs, microphones, dials, touch pads, touch screen, and drawing tablets. Output devices include video displays, speakers, inkjet printers, laser printers, projectors and dye-sublimation printers. The I/O devices can be controlled by an I/O controller123as shown inFIG.1B. The I/O controller can control one or more I/O devices such as a keyboard126and a pointing device127, e.g., a mouse or optical pen. Furthermore, an I/O device can also provide storage and/or an installation medium116for the computing device100. In still other embodiments, the computing device100can provide USB connections (not shown) to receive handheld USB storage devices such as the USB Flash Drive line of devices manufactured by Twintech Industry, Inc. of Los Alamitos, California. 
Referring again toFIG.1B, the computing device100can support any suitable installation device116, such as a disk drive, a CD-ROM drive, a CD-R/RW drive, a DVD-ROM drive, a flash memory drive, tape drives of various formats, USB device, hard-drive, a network interface, or any other device suitable for installing software and programs. The computing device100can further include a storage device, such as one or more hard disk drives or redundant arrays of independent disks, for storing an operating system and other related software, and for storing application software programs such as any program or software120for implementing (e.g., configured and/or designed for) the systems and methods described herein. Optionally, any of the installation devices116could also be used as the storage device. Additionally, the operating system and the software can be run from a bootable medium. Furthermore, the computing device100can include a network interface118to interface to a network through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (e.g., 802.11, T1, T3, 56 kb, X.25, SNA, DECNET), broadband connections (e.g., ISDN, Frame Relay, ATM, Gigabit Ethernet, Ethernet-over-SONET), wireless connections, or some combination of any or all of the above. Connections can be established using a variety of communication protocols (e.g., TCP/IP, IPX, SPX, NetBIOS, Ethernet, ARCNET, SONET, SDH, Fiber Distributed Data Interface (FDDI), RS232, IEEE 802.11, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, IEEE 802.11ac, IEEE 802.11ad, CDMA, GSM, WiMax and direct asynchronous connections). In one embodiment, the computing device100communicates with other computing devices100′ via any type and/or form of gateway or tunneling protocol such as Secure Socket Layer (SSL) or Transport Layer Security (TLS). The network interface118can include a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem or any other device suitable for interfacing the computing device100to any type of network capable of communication and performing the operations described herein. In some embodiments, the computing device100can include or be connected to one or more display devices124a-124n. As such, any of the I/O devices130a-130nand/or the I/O controller123can include any type and/or form of suitable hardware, software, or combination of hardware and software to support, enable or provide for the connection and use of the display device(s)124a-124nby the computing device100. For example, the computing device100can include any type and/or form of video adapter, video card, driver, and/or library to interface, communicate, connect or otherwise use the display device(s)124a-124n. In one embodiment, a video adapter can include multiple connectors to interface to the display device(s)124a-124n. In other embodiments, the computing device100can include multiple video adapters, with each video adapter connected to the display device(s)124a-124n. In some embodiments, any portion of the operating system of the computing device100can be configured for using multiple displays124a-124n. 
In further embodiments, an I/O device130can be a bridge between the system bus150and an external communication bus, such as a USB bus, an Apple Desktop Bus, an RS-232 serial connection, a SCSI bus, a FireWire bus, a FireWire 800 bus, an Ethernet bus, an AppleTalk bus, a Gigabit Ethernet bus, an Asynchronous Transfer Mode bus, a FibreChannel bus, a fiber optic bus, a Serial Attached small computer system interface bus, a USB connection, or an HDMI bus. A computing device100of the sort depicted inFIGS.1B and1Ccan operate under the control of an operating system, which controls scheduling of tasks and access to system resources. The computing device100can be running any operating system such as any of the versions of the MICROSOFT WINDOWS operating systems, the different releases of the Unix and Linux operating systems, any version of the MAC OS for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, any operating systems for mobile computing devices, or any other operating system capable of running on the computing device and performing the operations described herein. Typical operating systems include, but are not limited to: Android, produced by Google Inc.; WINDOWS 7, 8 and 10, produced by Microsoft Corporation of Redmond, Washington; MAC OS, produced by Apple Computer of Cupertino, California; WebOS, produced by Research In Motion (RIM); OS/2, produced by International Business Machines of Armonk, New York; and Linux, a freely-available operating system distributed by Caldera Corp. of Salt Lake City, Utah, or any type and/or form of a Unix operating system, among others. The computer system100can be any workstation, telephone, desktop computer, laptop or notebook computer, server, handheld computer, mobile telephone or other portable telecommunications device, media playing device, a gaming system, mobile computing device, or any other type and/or form of computing, telecommunications or media device that is capable of communication. In some embodiments, the computing device100can have different processors, operating systems, and input devices consistent with the device. For example, in one embodiment, the computing device100is a smart phone, mobile device, tablet or personal digital assistant. Moreover, the computing device100can be any workstation, desktop computer, laptop or notebook computer, server, handheld computer, mobile telephone, any other computer, or other form of computing or telecommunications device that is capable of communication and that has sufficient processor power and memory capacity to perform the operations described herein. Aspects of the operating environments and components described above will become apparent in the context of the systems and methods disclosed herein.
B. SAR ADC
Various embodiments disclosed herein are related to a SAR ADC, such as a high speed SAR ADC or very high speed SAR ADC. In some embodiments, the SAR ADC is relatively immune to PVT variations and is configured for use in 200G/100G networking applications. In some embodiments, the systems and methods described herein are for a SAR ADC used in network integrated circuits (ICs) such as a 225 Gbps PAM4 Optical Transceiver or other transceiver. In some embodiments, the systems and methods described herein provide a speed advantage without a significant power/area penalty. The SAR ADC can be utilized in or communicate with the various components discussed above with reference toFIGS.1A-C.
The SAR ADC can operate according to the principles described herein and use the conversion structure and operations described in U.S. Pat. No. 10,903,846 assigned to the assignee of the present application and incorporated herein by reference in its entirety. The SAR ADC can operate according to the principles described herein and use the conversion structure and operations described in U.S. Patent Application Serial No. 17/700,166 (106861 6120) invented by Singh et al. and filed on Mar. 21, 2022, and U.S. patent application Ser. No. 17/694,225 invented by Liu et al. and filed on Mar. 14, 2022, both assigned to the assignee of the present application and incorporated herein by reference in their entireties. FIG.2is a block diagram depicting a communication device200, according to one or more embodiments. In some embodiments, the communication device200is a system, a device, or an apparatus for network communications. For example, the communication device200is implemented as part of the network device106, the node192or network component, the device102, or network equipment serving a network in communication with device102. In some embodiments, the device200includes a transmitter210, a receiver220, and a processor280. These components may operate together to communicate with another communication device through a network cable (e.g., Ethernet, USB, Firewire, etc.) and/or through a wireless medium (e.g., Wi-Fi, Bluetooth, 60 GHz link, cellular network, etc.). In some embodiments, the communication device200includes more, fewer, or different components than shown inFIG.2. The transmitter210is a circuit or a component that receives transmit data TX Data from the processor280, and generates output signals Out+, Out−. The transmitter210may receive N bits of digital data TX Data from the processor280, and generate the output signals Out+, Out− having voltages or currents corresponding to the digital data TX Data. The output signals Out+, Out− may be differential signals. In some embodiments, the transmitter210may generate a single ended signal or a signal in a different representation for the output signals Out+, Out−. In some embodiments, the transmitter210transmits the output signals Out+, Out− through a network cable. In some embodiments, the transmitter210provides the output signals Out+, Out− to a wireless transmitter (not shown) that can upconvert the output signals Out+, Out− to generate a wireless transmit signal at a radio frequency and transmit the wireless transmit signal through a wireless medium. The receiver220is a circuit or a component that receives input signals In+, In−, and generates receive data RX Data. In some embodiments, the receiver220receives the input signals In+, In− through a network cable. The input signals In+, In− may be differential signals. In some embodiments, the receiver220may receive a single ended signal or a signal in a different representation for the input signals In+, In−. In some embodiments, the receiver220receives the input signals In+, In− from a wireless receiver (not shown) that can receive a wireless receive signal through a wireless medium and downconvert the wireless receive signal to generate the input signals In+, In− at a baseband frequency. In some embodiments, the receiver220receives the input signals In+, In− and generates N bits of digital data RX Data corresponding to voltages or currents of the input signals In+, In−. The receiver220may provide the digital data RX Data to the processor280.
In some embodiments, the receiver220includes a SAR ADC225that can convert the input signals In+, In− into N-bit digital data RX Data. The processor280is a circuit or a component that can perform logic computations. In some embodiments, the processor280is implemented as a field-programmable gate array, an application-specific integrated circuit, or a state machine. The processor280may be electrically coupled to the transmitter210and the receiver220through conductive traces or bus connections. In this configuration, the processor280may receive the data RX Data from the receiver220and perform logic computations or execute various applications according to states of the received data RX Data. The processor280may also generate the data TX Data, and provide the data TX Data to the transmitter210. With reference toFIG.3A, a SAR ADC300may be implemented as SAR ADC225illustrated inFIG.2. In some embodiments, the SAR ADC300includes a sample capacitor and digital to analog (CAPDAC) circuit310, a comparator circuit330, a conversion status circuit350, a first set of storage circuits360A, a second set of storage circuits360B, an enable circuit370, a clock path circuit380, a controller382, and a re-timer circuit400. These components may operate together to receive input signals In+, In− and perform successive approximation analog to digital conversion to generate L-bit data RX Data corresponding to voltages of the input signals In+, In−, where L is any integer. In some embodiments, the SAR ADC300includes more, fewer or different components than shown inFIG.3. Although inFIG.3A, the CAPDAC circuit310, the comparator circuit330, and the storage circuits360A,360B are shown as generating and processing differential signals, some or all of these components may generate and process single ended signals. Additional comparator circuits and storage circuits can be provided in a cascaded, pipelined or serial fashion (e.g., a two comparator design for SAR ADC300). SAR ADC300is an IC device integrated on a single substrate, provided in a multi-chip package, or is part of another IC device in some embodiments. In some embodiments, the sample and DAC circuit310is a circuit or a component that samples the input signals In+, In−, and generates DAC output signals DAC Out+, DAC Out−. In one implementation, the sample and DAC circuit310is embodied as a capacitive DAC circuit. In some embodiments, the sample and DAC circuit310includes inputs311and312configured to receive the input signals In+, In−, feedback ports313,314,315, and316configured to receive L-bit data RX Data, and output ports331and332configured to output DAC output signals DAC Out+, DAC Out−. In some embodiments, N number of feedback ports315and316are coupled to N number of output ports of the first set of storage circuits360A, and M number of feedback ports313and314are coupled to M number of output ports of the second set of storage circuits360B, where M and N are any integers. In some examples, N may be 4, 6, 8, 9, 10, 16, 32, and M may be 4, 6, 8, 9, 10, 16 or 31. N and M can be equal and can be equal to L in some embodiments. The first output port331of the sample and DAC circuit310is coupled to a first input port of the comparator circuit330in some embodiments. The second output port332of the sample and DAC circuit310is coupled to a second input port of the comparator circuit330.
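The way these blocks cooperate, with the CAPDAC circuit310holding the sampled input, the comparator circuit330deciding one bit per cycle, and the storage circuits360A,360B feeding each decision back, can be illustrated with a short behavioral model. The sketch below is a simplified, single-ended Python model written for this description; the function name and the mid-scale comparison are illustrative assumptions, not the circuit itself.

```python
def sar_convert(vin, vref, n_bits):
    """Behavioral model of an N-bit successive approximation conversion.

    One bit is decided per comparison cycle, MSB first. The running code
    plays the role of the storage circuits (360A/360B) feeding the CAPDAC
    feedback ports, and the comparison plays the role of comparator 330.
    """
    code = 0
    for k in range(n_bits - 1, -1, -1):
        trial = code | (1 << k)               # tentatively set bit k
        v_dac = vref * trial / (1 << n_bits)  # DAC level for the trial code
        if vin >= v_dac:                      # comparator decision
            code = trial                      # keep the bit
    return code

# Example: an 8-bit conversion of 0.30 V against a 1.0 V reference
print(sar_convert(0.30, 1.0, 8))  # 76, i.e. 0b01001100
```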
In some embodiments, the sample and DAC circuit310receives the input signals In+, In− at the inputs311and312and L-bit data RX Data at the feedback ports313,314,315, and316, and samples the input signals In+, In−. The sample and DAC circuit310may perform DAC according to the L-bit data RX Data to generate DAC output signals DAC Out+, DAC Out− at the output ports331and332. The sample and DAC circuit310may provide the DAC output signals DAC Out+, DAC Out− to the comparator circuit330. In one approach, for the X-th bit of the L-bit data RX Data, the DAC output signals DAC Out+, DAC Out− indicate voltages (e.g., Vin+, Vin−) of the input signals In+, In− with voltages corresponding to L-X number of MSB(s) of RX Data. In one approach, the sample and DAC circuit310generates the DAC output signals DAC Out+, DAC Out−, according to the following equation:

$$V_{\mathrm{DAC\,Out+}} - V_{\mathrm{DAC\,Out-}} = -\left(V_{\mathrm{in+}} - V_{\mathrm{in-}}\right) + \sum_{k=X}^{L} 2^{\,k-L-1} \times V_{\mathrm{ref}} \times \left(2 \times \mathrm{RX\,Data}(k) - 1\right)$$

where VDAC Out+ is the voltage of the DAC Out+ signal, VDAC Out− is the voltage of the DAC Out− signal, and Vrefis the reference voltage.
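The relationship above can be checked numerically. The following sketch is an interpretation written for this description: it assumes RX Data(k) takes the values 0 or 1 and that only bits k = X through L contribute, which follows the summation limits; the function and argument names are not from the patent.

```python
def dac_residue(vin_diff, vref, rx_data, X):
    """Evaluate the DAC output equation above for a given feedback code.

    rx_data is indexed by bit position k = 1..L (each value 0 or 1), X is
    the bit currently being resolved, and only bits k >= X contribute,
    matching the summation limits. Returns VDAC Out+ minus VDAC Out-.
    """
    L = len(rx_data)
    total = 0.0
    for k in range(X, L + 1):
        bit = rx_data[k - 1]
        total += (2 ** (k - L - 1)) * vref * (2 * bit - 1)
    return -vin_diff + total

# Example with L = 4: bits (k = 1..4) = 0, 1, 0, 1, resolving from bit X = 2
print(dac_residue(vin_diff=0.20, vref=1.0, rx_data=[0, 1, 0, 1], X=2))
```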
In some embodiments, the comparator circuit330is a circuit or a component that receives the DAC output signals DAC Out+, DAC Out−, and determines a state of a corresponding bit of the data RX Data according to the DAC output signals DAC Out+, DAC Out−. In some embodiments, the comparator circuit330includes a first output port361coupled to first input ports of the first set of storage circuits360A via inverter336and inverter1132, a second output port362coupled to input ports of the second set of storage circuits360B via inverter338and inverter1134, and a clock input339coupled to a conductor373. Conductor373is coupled to an output of flip flop344(e.g., D-flip flop) of clock path circuit380. A clock path340includes conductor373, inverters336,338, and clock path circuit380which includes a logic device342, and flip flop344. A data path302includes first set of storage circuits360A and second set of storage circuits360B and extends between feedback ports313,314,315, and316of CAPDAC circuit310and output ports361and362of comparator circuit330. The first output port361of the comparator circuit330may be directly coupled to the first input ports of the storage circuits360A, and the second output port362of the comparator circuit330may be directly coupled to the input ports of the second storage circuits360B in some embodiments. Comparator circuit330may be enabled or disabled according to the clock signal CLK_SAR at clock input339from the clock path circuit380on conductor373. The clock signal CLK_SAR is a comparator clock signal for the comparator circuit330which is disabled or stopped using the enable zero signal. For example, comparator circuit330is enabled in response to a rising edge or logic state '1' of the clock signal CLK_SAR and is disabled in response to a falling edge or logic state '0' of the clock signal CLK_SAR. When the comparator circuit330is enabled, the comparator circuit330may determine a state of a bit according to the DAC output signals DAC Out+, DAC Out−, and generate comparator outputs Comp Out1+, Comp Out1− at the output ports361and362indicating the determined state of the bit. For example, when the comparator circuit330is enabled, in response to a difference in voltages of the DAC output signals DAC Out+, DAC Out− being higher than 0V or a reference voltage, the comparator circuit330may generate the comparator output Comp Out1+ having a logic state '1' and the comparator output Comp Out1− having a logic state '0'. For example, when the comparator circuit330is enabled, in response to a difference in voltages of the DAC output signals DAC Out+, DAC Out− being lower than 0V or the reference voltage, the comparator circuit330may generate the comparator output Comp Out1+ having a logic state '0' and the comparator output Comp Out1− having a logic state '1'. When the comparator circuit330is disabled, the comparator circuit330may reset the comparator outputs Comp Out1+, Comp Out1− to logic state '0'. The comparator circuit330may provide the comparator outputs Comp Out1+, Comp Out1− to the first set of storage circuits360A and the second set of storage circuits360B. The comparator outputs Comp Out1+, Comp Out1− may be differential signals. With reference toFIG.3B, in some embodiments, the first set of storage circuits360A is a set of components that stores N bits (e.g., 2, 4, 6, 8, 9, 16, 32 bits, etc.) of data. In one implementation, the first set of storage circuits360A is embodied as N number of flip flops or latches. In some embodiments, the set of storage circuits360A are ratioed set reset (SR) latches. In some embodiments, the ratioed latches each include an output stage configured to drive feedback ports313,314,315, and316according to its bit position. In some embodiments, a first input port of each storage circuit360A other than storage circuit1102is coupled to the first output port361(FIG.3A) of the comparator circuit330through inverters336and1132(FIG.3B). In some embodiments, an enable port of each storage circuit360A is coupled to a corresponding enable output port of the enable circuit370, and an output port of each storage circuit360A is coupled to corresponding feedback ports315and316of the sample and DAC circuit310. In some embodiments, the first input port of storage circuit1102in set of storage circuits360A (e.g., corresponding to a most significant bit (MSB)) is coupled to the first output port361of the comparator circuit through inverter336. In this configuration, each storage circuit360A may be enabled or disabled according to a corresponding bit of enable signal EN on enable bus371. For example, a first storage circuit of storage circuits360A is enabled, in response to the enable signal EN being 00001 (the enable zero signal EN0); a second storage circuit of storage circuits360A is enabled in response to the enable signal EN being 00010; a third storage circuit of storage circuits360A is enabled, in response to the enable signal EN being 00100; a fourth storage circuit of storage circuits360A is enabled in response to the enable signal EN being 01000; and a fifth storage circuit of storage circuits360A is enabled in response to the enable signal EN being 10000. The enable scheme follows the scheme as set forth above for the remaining storage circuits360A and360B in some embodiments. For example, all of the first set of storage circuits360A are disabled, in response to the enable signal EN being 00000. When a storage circuit360A is enabled, the storage circuit360A may update a corresponding bit of data RX Data, according to the comparator outputs Comp Out1+, Comp Out1−. For example, if a storage circuit360A is enabled, in response to the comparator output Comp Out1+ having a logic state '1' and the comparator output Comp Out1− having a logic state '0', the storage circuit360A may update a corresponding bit of the data RX Data to '1'.
For example, if a storage circuit360A is enabled, in response to the comparator output Comp Out1− having a logic state '1' and the comparator output Comp Out1+ having a logic state '0', the storage circuit360A may update a corresponding bit of the data RX Data to '0'. If a storage circuit of storage circuits360A is disabled, the storage circuit may hold or maintain a corresponding bit of the data RX Data, irrespective of the comparator outputs Comp Out1+, Comp Out1− at the input ports. In some embodiments, the second set of storage circuits360B is a set of components that stores M bits of data. In one implementation, the second set of storage circuits360B is embodied as M number of flip flops or latches. In some embodiments, the set of storage circuits360B are ratioed set reset (SR) latches. In some embodiments, a first input port of each storage circuit360B except for storage circuit1104is coupled to the second output port362(FIG.3A) of the comparator circuit330through inverters338and1134. In some embodiments, an enable port of each storage circuit360B is coupled to a corresponding enable output port of the enable circuit370via bus371. In some embodiments, the first input port of storage circuit1104(FIG.3B) in set of storage circuits360B (e.g., corresponding to a most significant bit (MSB)) is coupled to the second output port362of the comparator circuit330through inverter338. An output port of each storage circuit360B is coupled to corresponding feedback ports313and314of the sample and DAC circuit310. In some embodiments, operation of the storage circuits360B is similar to the operation of the first set of storage circuits360A and the enabling of the storage circuits360B using the enable signal EN on bus371occurs as discussed above. In some embodiments, storage circuits360A are for the positive portion of the differential signal, and storage circuits360B are for the negative portion of the differential signal. By directly driving only the storage circuit1102with a driver (e.g., inverter336) and directly driving only the storage circuit1104with a driver (e.g., inverter338), the load of the comparator driver (e.g., inverters336and338) is reduced. This drive scheme can also simplify circuit layout. The remaining storage circuits of storage circuits360A-B are driven by different drivers (e.g., inverters1132and1134, respectively) in some embodiments. Inverter336has an output directly coupled to an input of inverter1132, and inverter338has an output directly coupled to an input of inverter1134in some embodiments. Inverters336and338directly drive the inputs of storage circuits1102and1104which represent the MSB in some embodiments. The delay in the data path302(FIG.3A) can be reduced by using an SR latch directly driving CAPDAC circuit310as opposed to using ratioed inverters between storage circuits360A-B and CAPDAC circuit310. With reference toFIG.3B, the enable 0 (EN0) signal on conductor345resets CAPDAC circuit310by driving the gates of switches or transistors1130in CAPDAC circuit310. The storage circuits360A and360B are enabled by the enable signal EN provided on bus371as described above. Re-timer circuit400is a flip flop or latch based circuit that receives outputs from storage circuits360A and360B. Re-timer circuit400aligns timing for downstream devices or components in some embodiments.
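The one-hot enable scheme and latch behavior described above, with EN = 00001 enabling the first storage circuit, 00010 the second, and so on, and with a disabled latch holding its bit, can be mimicked in a few lines. The sketch below is illustrative only; the helper names are not from the patent.

```python
def enable_words(n):
    """One-hot enable words as described above: 00001 (EN0), 00010, 00100, ..."""
    return [1 << i for i in range(n)]

def update_latches(bits, enable_word, comp_out_p):
    """Update the stored bits the way an enabled SR latch would.

    Only the latch whose enable bit is set changes state; it stores '1' when
    Comp Out1+ is high and '0' when Comp Out1- is high. Disabled latches hold
    their value, and enable_word == 0 (all disabled) leaves everything as is.
    """
    for i in range(len(bits)):
        if enable_word & (1 << i):
            bits[i] = 1 if comp_out_p else 0
    return bits

# Example: five latches, enable the third one and latch a '1'
print(update_latches([0, 0, 0, 0, 0], 0b00100, comp_out_p=True))
```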
In some embodiments, the conversion status circuit350, enable circuit370, and clock path circuit380cause the comparator circuit330and the storage circuits360A,360B to perform successive approximation analog to digital conversion. In some embodiments, the conversion status circuit350, enable circuit370, and clock path circuit380are implemented as a state machine and/or digital logic circuits. In some embodiments, a conductor372receives a T clock signal (T CLK), for example, from a clock generator (not shown). The T clock signal is a sampling clock and has a period corresponding to the sampling cycle. With reference toFIG.3A, the clock path circuit380provides the CLK_SAR signal on clock path340at conductor373for comparator circuit330and enable circuit370which provides enable signals on bus371. Flip flop344is configured as a D-type flip flop and includes an input coupled to conductor345which receives the enable 0 (EN0) signal from enable circuit370. Flip flop344includes a clock input coupled to logic device342which can be configured as a NAND gate. Logic device342receives signals from inverters336and338. Inverters336,338,1132and1134are configured as a comparator driver. The CLK_SAR signal is an internally generated clock signal provided by flip flop344, logic device342, inverters336and338and comparator circuit330in some embodiments. The CLK_SAR signal is generated from comparator transitions and the enable 0 (EN0) signal in some embodiments. Enable circuit370is a latch or flip flop (e.g., D flip flop) based timing circuit that provides the enable EN signal on bus371for storage elements360A and360B in response to the clock signal on conductor373. Each flip flop or latch in enable circuit370is driven at a clock input by the CLK_SAR signal on conductor373and provides a bit of the enable EN signal on bus371. The enable 0 (EN0) signal from enable circuit370is provided on conductor345for use by conversion status circuit350and flip flop344of clock path circuit380. The enable 0 (EN0) signal, which is an indicator of the start of the last conversion cycle, is used to force an early reset (e.g., using transistors1130(FIG.3B)) via conductor345. The transistor303can be used to reset CAPDAC circuit310using the enable 0 (EN0) signal or other reset signal. A delay (e.g., a delay path or element) can be provided in the reset path associated with the enable 0 (EN0) signal to ensure that the last conversion is not affected by the reset operation using the enable 0 (EN0) signal. In some embodiments, the conductor345is directly coupled to transistors1130. Conversion status circuit350includes an inverter352, a conversion margin indicator circuit354and a conversion status output356. Inverter352receives the T clock signal at a conductor372and provides a CLK_RT signal to conversion margin indicator circuit354. Conversion margin indicator circuit354receives the enable 0 (EN0) signal at conductor345and receives the CLK_SAR signal at conductor372. Conversion status circuit350provides a conversion status signal at output356indicative of the status of the conversion operation. Conversion margin indicator circuit354provides signals for determining a conversion margin using the CLK_RT signal, enable 0 (EN0) signal, and the CLK_SAR signal at conductor372. In some embodiments, controller382is an on-chip controller configured to determine a conversion margin and adjust current and voltage parameters to achieve faster operation or less noise.
Advantageously, controller382implements systems and methods described herein for determining conversion margin and adjusting voltage and current parameters in some embodiments. The controller382can be a hardware implementation or software (e.g., firmware implementation) integrated with SAR ADC300(e.g., provided as part of conversion status circuit350or other part or parts of SAR ADC300). In some embodiments, the processor or controller382determines a probability of a successful conversion being greater than a first threshold, and the successful conversion is defined when the conversion margin is greater than a second threshold. In some embodiments, controller382is a processor, a microcontroller, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or logic device, or any other type and form of dedicated semiconductor logic or processing circuitry capable of processing or supporting the operations described herein. In some embodiments, operations associated with controller382are performed wholly or in part by processor280(FIG.2). With reference toFIG.4, conversion margin indicator circuit354includes a D flip flop402, a variable time delay circuit404and a D flip flop406. Variable time delay circuit404is a programmable or configurable circuit that provides a selectable delay in some embodiments. D flip flop402includes a reset input coupled to conductor353for receiving the CLK_RT signal from inverter352(FIG.3), and D flip flop406includes a clock input coupled to conductor353for receiving the CLK_RT signal from inverter352. A D input of flip flop402is coupled to conductor345to receive the enable 0 (EN0) signal. An output of D flip flop402is coupled to an input of delay circuit404. The clock input of D flip flop402is coupled to conductor372to receive the CLK_SAR signal. The D input of flip flop406is coupled to receive a conversion done signal from variable time delay circuit404. The conversion done signal indicates when a conversion of the analog signal at inputs311and312to a digital representation in storage circuits360A-B has been completed in some embodiments. The conversion status signal is indicative that the conversion has occurred before the end of the sampling cycle in some embodiments. The conversion margin can be represented by or be proportional to a difference of the leading edge of the conversion done signal and the conversion status signal in some embodiments. In some embodiments, the conversion status circuit350provides the conversion status signal at output356. SAR ADC300(e.g., microcontroller382) can apply an adaptive feedback technique using the conversion margin determined from the conversion status signal and the conversion done signal or other parameter related thereto. The adaptive feedback can advantageously compensate for noise from comparator circuit330and process dependency associated with the driver (e.g., inverters336and338) for the comparator circuit330. In some embodiments, the conversion margin indicator circuit354is configured so that the approximate value of the conversion margin can be deduced using the conversion status signal. With reference toFIG.5, wave form502is the CLK_RT signal at conductor353. Wave form504is the conversion done signal indicating that the conversion has been completed and is provided by time delay circuit404to the D input of flip flop406. Wave form506is the conversion status signal at output356. Wave form508is a conversion done signal with a time delay of TD1and is provided by time delay circuit404.
Wave form510is the conversion status signal at output356when the time period TMis greater than the time delay TD1. The time period TMis proportional to the conversion margin. The wave form512is a conversion done signal with a time delay of TD2and is provided by time delay circuit404. Wave form514is the conversion status signal at output356when the time period TMis less than the time delay TD2. The time period TMis determined using a destructive measurement of the conversion status signal at output356. For example, the value TDis varied until the conversion status signal at output356flips from 1 to 0 such that the value of the time delay TDcorresponds to time period TM(see wave form514). The time delay TDis provided by the variable time delay circuit404. In general, the time period TMand hence the conversion margin varies with the input to SAR ADC300, system noise and metastable events. In some embodiments, SAR ADC300employs a monitoring scheme that collects the values of the time period TMor other representations of the conversion margin. The values are collected over time. Values are collected as the time delay is varied in some embodiments. A probability that the time period TMis greater than a certain value can be calculated for various time delays. When the time period TMis greater than the time delay TD, a curve of the probability versus time period TMflattens out, thereby indicating a minimum conversion margin value at that delay. The delay can be used as a minimum conversion margin value which can be used to provide adaptive feedback for SAR ADC300as described below. With reference toFIG.6, comparator circuit330includes an adjustable current source circuit700in some embodiments. Current source circuit700includes transistor702, a current source704, a multiplexer706, a transistor710, and a transistor712configured as an adjustable current source for transistors714,716,718, and720of the comparator circuit330. Transistor712is driven by the CLK_SAR signal in some embodiments. Transistors714,716,718, and720are the first stage of comparator circuit330, receive the signals at output ports331and332(FIG.3), and can be part of a high speed comparator in some embodiments. Transistor710is configured to provide a current Ibiaswithin a range of Imaxand Imin. The value of current Ibiasis controlled by multiplexer706which selects a ground signal or a signal from between transistor702and current source704based on a selection signal provided to the select input of multiplexer706. The selection signal is provided by controller382in some embodiments. The multiplexer706drives transistor710in accordance with the selection. For a low speed corner (SS), the current source value is adjusted to Imaxfor high speed in some embodiments. For a high speed corner (FF), the current source value is adjusted to Iminfor low speed in some embodiments. The selection signal is related to the time period TMwhere larger conversion margin indicates lower Ibiasshould be provided by transistor710and vice versa.
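The monitoring idea described above, in which the time delay TD is swept, values of the time period TM are collected, and the resulting margin is used to pick a bias setting, can be summarized in a short sketch. The probe function conversion_status(td) and the two-level bias choice are assumptions made for illustration; in hardware the probe is the conversion status signal at output356and the choice is made by the selection signal driving multiplexer706of current source circuit700.

```python
def estimate_margin(conversion_status, delays, trials=1000):
    """Estimate the conversion margin by a delay sweep.

    conversion_status(td) -> bool is a hypothetical probe of the conversion
    status signal for one sampling cycle with the variable delay set to td.
    The margin is approximated by the largest delay at which the status
    signal is still almost always 1.
    """
    margin = None
    for td in sorted(delays):
        p = sum(conversion_status(td) for _ in range(trials)) / trials
        if p > 0.95:          # status curve still flat at this delay
            margin = td
    return margin

def select_bias(margin, threshold, i_max, i_min):
    """Larger margin -> lower bias; smaller margin -> higher bias."""
    return i_min if (margin is not None and margin > threshold) else i_max
```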
With reference toFIG.7, comparator circuit330includes an adjustable driver circuit800. Adjustable driver circuit800includes a transistor804, a multiplexer806, a current source820, a transistor814, a transistor824, a transistor826, a transistor828, and a transistor830. Adjustable driver circuit800is an adjustable comparator driver for a high speed comparator and can be provided at outputs361and362(FIG.3A) in some embodiments. The value of the current Ibiasiis controlled by multiplexer806which selects a ground signal or a signal from a node between transistor804and current source820to drive current Ibiasifor transistors824,828,826, and830. The select signal at the select input of multiplexer806is used to tune adaptively the current Ibiasi. The select signal is based upon the conversion margin and can be provided by controller382(FIG.3). The current Ibiasithrough transistor814is adjusted to change the current Ibiasiwithout affecting speed significantly in some embodiments. The selection signal is related to the time period TMwhere larger conversion margin indicates lower Ibiasishould be provided by transistor814and vice versa. With reference toFIG.8, comparator circuit330can include an adjustable driver circuit900. Adjustable driver circuit900includes an input902, a transistor904, a transistor906, a transistor908, a transistor912, a transistor914, a multiplexer916, a transistor920, a multiplexer922, a transistor924, and an output930. Adjustable driver circuit900is configured as an adjustable comparator driver for a high speed comparator and can be provided at outputs361and362in some embodiments. The value of the driver threshold voltage Vthis controlled by multiplexers916and922which select a voltage signal VDD or a clock signal (e.g., CLK_SAR) to drive transistors914and924, respectively. The select signals at select inputs of multiplexers916and922are used to tune adaptively the driver threshold voltage Vth. The select signals are based upon the conversion margin and can be provided by controller382(FIG.3). The selection signals are related to the time period TMwhere a larger conversion margin indicates lower threshold voltage Vthshould be provided by transistors914and924. In some embodiments, the time period TMis used to set VDDwhere a larger conversion margin indicates lower voltage VDDshould be provided and vice versa. With reference toFIG.9, SAR ADC300can perform a flow1000to measure statistics or values associated with the conversion status signal (e.g., wave form506(FIG.5)). Flow1000is performed to estimate a probability of the conversion status signal being 1 (e.g., wave form510as opposed to wave form514inFIG.5). The probability P(CS) can be determined by comparing the conversion margin to a value in some embodiments. The probability is a probability of a successful conversion, and a successful conversion is defined when the conversion margin is greater than a threshold in some embodiments. The value can be a fixed value, a percentage of the sample cycle, etc. The probability can be represented by curves1106,1108, and1110provided on an X-axis representing conversion margin in time and a Y-axis representing probability. Curves1106,1108, and1110are each for a particular time delay (100 picoseconds (ps), 200 ps, 300 ps). The flow1000is performed so that SAR ADC300operates at the intersection of the linear and flat portion of curves1106,1108, and1110for a given delay TD, thereby providing a more optimum power, noise, and bit error rate (BER) tradeoff according to some embodiments. Generally, the flat portion of curves1106,1108, and1110begins when the conversion margin is greater than the time delay TD. Flow1000includes an operation1002where the current bias is adjusted (FIG.7). If current Ibiasis greater than Imin, the probability is estimated in an operation1004. If the probability is above a threshold (e.g., a percentage above 90 percent (e.g., 0.95)), flow1000returns to operation1002.
If the probability is below a threshold (e.g., a percentage above 90 percent (e.g., 0.95)), flow1000ends adjustments in operation1006. If current Ibias(FIG.7) is less than Imin(minimum value of adjustable bias current of the comparator circuit330), the voltage threshold Vth(FIG.9) or the current Ibiasi(FIG.8) is adjusted in an operation1010. The voltage threshold Vthor the current Ibiasiis adjusted to increase speed in operation1010in some embodiments. If current Ibiasiis greater than Iminior the voltage threshold Vthis greater than Vmin(depending on implementation, seeFIGS.8and9), the probability is estimated in an operation1012. If the probability is above a threshold (e.g., a percentage above 90 percent (e.g., 0.95)), flow1000returns to operation1008. If the probability is below a threshold (e.g., a percentage above 90 percent (e.g., 0.95)), flow1000ends adjustments in operation1016. If current Ibiasiis less than Imini(minimum value of adjustable bias current of the comparator driver) or the voltage threshold Vthis less than Vmin(e.g., maximum NMOS adjustable code of the comparator driver), the voltage signal VDDis adjusted (e.g., lower) in an operation1020. If the voltage signal VDDis greater than VDDmin(minimum supply voltage for SAR ADC300), the probability is estimated in an operation1022. If the probability is above a threshold (e.g., a percentage above 90 percent (e.g., 0.95)), flow1000returns to operation1020. If the probability is below a threshold (e.g., a percentage above 90 percent (e.g., 0.95)), flow1000ends adjustments in operation1026. Flow1000can perform operations1010,1012, and1016for both threshold voltage Vthand current Ibiasitogether or sequentially. The order of the branch of operations1002,1004, and1006, the branch of operations1010,1012, and1016, and the branch of operations1020,1022, and1026can be switched in some embodiments. Flow1000can be performed in controller382. Flow1000can include fewer operations, such as operations1002,1004and1006or operations1010,1012, and1016, or operations1020,1022, and1026. Flow1000can be combined with other operations. The flow1000can be implemented in a hardware implementation or software (e.g., firmware implementation) (e.g., provided as part of conversion status circuit350or other part or parts of SAR ADC300). In some embodiments, controller382is any type and form of dedicated semiconductor logic or processing circuitry capable of processing or supporting flow1000. Flow1000can include software instructions provided on a non-transitory medium and can be implemented by executing the instructions on controller382. Flow1000can be performed at chip initialization, at power on, and periodically during operation. The conversion margin can be periodically calculated to determine if adjustments need to be made as the SAR ADC300operates (e.g., heats up). The flow1000can be performed periodically in millisecond time periods (e.g., every 4 milliseconds at start up and every 100-400 milliseconds during operation). In some embodiments, historical values of conversion margin are determined and stored. Large deltas between values can be used to initiate flow1000. In some embodiments, the conversion margin is indicative of an amount of unused time before a bit error occurs within a sampling cycle.
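A compact restatement of flow1000may help tie the branches together. The sketch below follows the branch order given above (comparator bias current, then driver threshold and current, then supply voltage) and the stated relationship that a larger margin allows a lower setting; the step sizes, the helper estimate_probability, and the dictionary of adjustable parameters are assumptions made for illustration, not the patent's implementation.

```python
def flow_1000(knobs, estimate_probability, p_thresh=0.95):
    """Sketch of the adaptive adjustment loop (operations 1002-1026).

    knobs maps each adjustable parameter to a (value, minimum, step) tuple:
    'Ibias' for the comparator current source, 'Ibiasi' and 'Vth' for the
    comparator driver, and 'VDD' for the supply. Each branch lowers its
    parameter(s) while the estimated probability of a successful conversion
    stays above p_thresh; when a parameter bottoms out, the next branch runs.
    """
    branches = [("Ibias",), ("Ibiasi", "Vth"), ("VDD",)]
    for names in branches:
        while any(knobs[n][0] > knobs[n][1] for n in names):
            for n in names:                              # adjust (1002/1010/1020)
                value, minimum, step = knobs[n]
                knobs[n] = (max(value - step, minimum), minimum, step)
            if estimate_probability(knobs) < p_thresh:   # estimate (1004/1012/1022)
                return knobs                             # end (1006/1016/1026)
    return knobs
```

A caller would pair this with a margin or probability estimate such as the delay-sweep sketch given earlier, with estimate_probability comparing the measured margin against a chosen delay.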
It should be noted that certain passages of this disclosure can reference terms such as "first" and "second" in connection with devices or operations for purposes of identifying or differentiating one from another or from others. These terms are not intended to merely relate entities (e.g., a first device and a second device) temporally or according to a sequence, although in some cases, these entities can include such a relationship. Nor do these terms limit the number of possible entities that can operate within a system or environment. It should be understood that the systems described above can provide multiple ones of any or each of those components and these components can be provided on either a standalone machine or, in some embodiments, on multiple machines in a distributed system. In addition, the systems and methods described above can be provided as one or more computer-readable programs or executable instructions embodied on or in one or more articles of manufacture, e.g., a floppy disk, a hard disk, a CD-ROM, a flash memory card, a PROM, a RAM, a ROM, or a magnetic tape. The programs can be implemented in any programming language, such as LISP, PERL, C, C++, C#, or in any byte code language such as JAVA. The software programs or executable instructions can be stored on or in one or more articles of manufacture as object code. Further, certain components may be coupled together with intervening components provided therebetween. While the foregoing written description of the methods and systems enables one of ordinary skill to make and use embodiments thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The present methods and systems should therefore not be limited by the above described embodiments, methods, and examples, but by all embodiments and methods within the scope and spirit of the disclosure. | 63,899 |
11863199 | FIG.1is a schematic diagram of DAC circuitry100as a comparative example. DAC circuitry100comprises a succession of first and further load nodes121,123,125,127, and129, each successive further load node123,125,127, and129connected to its preceding load node via a divider impedance122,124,126, and128(which may be or comprise a resistor). That is, the further load node123is connected to its preceding load node (the first load node)121via the divider impedance122, and so on and so forth. DAC circuitry100further comprises a plurality of load impedances151,153,155,157, and159(which may be or comprise resistors) connected between a (high) voltage reference AVD and the plurality of load nodes121,123,125,127, and129, respectively. That is, the load impedance151is connected between the voltage reference AVD and the load node121, the load impedance153is connected between the voltage reference AVD and the load node123, and so on and so forth. DAC circuitry further comprises switching circuitry170and a plurality of current sources193,195,197, and199. The switching circuitry170is connected between the further load nodes123,125,127, and129and the current sources193,195,197, and199. The current sources193,195,197, and199are connected to a voltage reference lower than the voltage reference AVD (e.g. shown as GND). The first load node121is connected to an output node (not shown). The first load node121may be considered to be the output node. DAC circuitry100is configured to output an analogue output signal at the output node indicative of a digital input signal. This is achieved by the switching circuitry170which is configured to change, based on control signals (not shown), a magnitude of controllable current signals passing through the load nodes121,123,125,127, and129. The control signals are indicative of the digital input signal. In particular, although not shown, the switching circuitry170comprises a plurality of switches, each switch connected between a corresponding one of the further load nodes123,125,127, and129and a corresponding one of the current sources193,195,197, and199(e.g. between load node123and current source193). The switches correspond to different data bits of the digital input signal and the setting of the switch is by the corresponding data bit. For this reason the further load nodes123,125,127, and129may be associated with different bits of the digital input signal and may be labelled as LSB (least significant bit) 3, LSB2, LSB1, and LSB0, respectively. DAC circuitry100can be considered to be (or to comprise) an R2R network/ladder or an R2R circuit or R2R circuitry. That is, in order to effect binary weighting the impedances122,124,126,128and159may have a relative resistance R and the impedances153,155and157may have a relative resistance 2R, and this will be carried forwards as a running example. The voltage levels at the further load nodes123,125,127, and129(which may be referred to as ladder taps) can vary as the switches of the switching circuitry170turn on and off because the current through the R2R ladder varies due to contributions from different current sources193,195,197, and199(which may be referred to as slice currents). For example, the voltage level at the further load nodes123,125,127, and129can be higher when lower current is flowing in the R2R ladder. Furthermore, the voltage levels for the lower LSBs, e.g. LSB0, are higher than the higher LSBs, e.g. LSB3(that is, the voltage level at e.g. the load node129may be higher than that at the load node123). 
This voltage level (e.g. the load node129) can be higher than the rating of the components (i.e. the switches) used in the design which can lead to overvoltage. This can cause errors in the operation of the DAC circuitry100or cause the DAC circuitry100to stop working properly or have a reduced life. Furthermore the difference in DC level (common mode voltage) across or at the further load nodes123,125,127, and129can degrade performance (as explained more fully below with respect toFIG.9). FIG.2is a schematic diagram of DAC circuitry200as a comparative example. DAC circuitry200is the same as DAC circuitry100except for the additional impedances and connections along the top ofFIG.2. Like reference numerals have been given to elements corresponding to elements in DAC circuitry100(with a “2” at the start instead of a “1”). The elements corresponding to elements in DAC circuitry100will not be described and only the differences of DAC circuitry200compared to DAC circuitry100will be described. DAC circuitry200comprises a succession of first and further common nodes231,233,235,237and239, each successive further common node233,235,237and239connected to its preceding common node via a common impedance232,234,236, and238. That is, the further common node233is connected to its preceding common node (the first common node)231via the common impedance232, and so on and so forth. The common nodes231,233,235,237and239are connected to the load nodes221,223,225,227, and229via the load impedances251,253,255,257, and259, respectively. The first common node231is connected to the high voltage reference AVD in line withFIG.1. The common impedances232,234,236, and238(which may be or comprise resistors) cause a voltage drop to prevent overvoltage. However the common impedances232,234,236, and238impact the binary ratio of the R2R ladder in the running example and as such the DAC circuitry200does not properly function as an R2R ladder. FIG.3is a schematic diagram of differential circuitry300. Differential circuitry300may be referred to as DAC circuitry. Differential circuitry300comprises a first current path comprising a succession of first and further load nodes11,13,15,17, and19, each successive further load node13,15,17, and19connected to its preceding load node via a divider impedance12,14,16, and18. That is, the further load node13is connected to its preceding load node (the first load node)11via the divider impedance12, and so on and so forth. Differential circuitry300comprises a second current path similarly comprising a succession of first and further load nodes21,23,25,27, and29, each successive further load node23,25,27, and29connected to its preceding load node via a divider impedance22,24,26, and28. That is, the further load node23is connected to its preceding load node (the first load node)21via the divider impedance22, and so on and so forth. The first load nodes11and21of the first and second current paths comprise (and may be referred to as) a first pair of load nodes. Each successive further load node13,15,17, and19of the first current path and its corresponding successive further load node23,25,27, and29of the second current path comprise (and may be referred to as) a successive further pair of load nodes. For example, further load nodes13and23constitute a further pair of load nodes. Similarly, further load nodes15and25constitute a further pair of load nodes. The nodes of the first pair of load nodes11and21are each connected to a first common node31via respective first load impedances41and51. 
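The binary weighting claimed for the running example can be checked numerically. The sketch below solves the ladder of DAC circuitry100by nodal analysis with ideal current sinks; the value assumed for load impedance151(2R) is not stated in the text and is chosen here purely so that the illustration is self-contained.

```python
import numpy as np

# Node order: [121, 123, 125, 127, 129]; resistances normalized to R = 1.
R = 1.0
R_LOAD = [2 * R, 2 * R, 2 * R, 2 * R, R]   # 151 (assumed), 153, 155, 157, 159
R_DIV = [R, R, R, R]                       # 122, 124, 126, 128
AVD = 1.0

def solve_ladder(sink_currents):
    """Return the five node voltages for currents sunk from nodes 123..129."""
    n = len(R_LOAD)
    G = np.zeros((n, n))
    I = np.zeros(n)
    for i, r in enumerate(R_LOAD):         # shunt resistors up to AVD
        G[i, i] += 1 / r
        I[i] += AVD / r
    for i, r in enumerate(R_DIV):          # series resistors between nodes
        G[i, i] += 1 / r
        G[i + 1, i + 1] += 1 / r
        G[i, i + 1] -= 1 / r
        G[i + 1, i] -= 1 / r
    for i, sink in enumerate(sink_currents):
        I[i + 1] -= sink                   # current pulled out of nodes 123..129
    return np.linalg.solve(G, I)

# One slice current at a time: each successive tap moves the output half as much.
for tap in range(4):
    sinks = [0.0] * 4
    sinks[tap] = 1e-3
    v = solve_ladder(sinks)
    print(f"LSB{3 - tap}: output drop at node 121 = {AVD - v[0]:.6f} V")
# With the assumed values this prints drops of 0.5 mV, 0.25 mV, 0.125 mV and
# 0.0625 mV for LSB3..LSB0, i.e. the expected binary weighting.
```

The same nodal solve, extended with the common impedances of DAC circuitry200, would show how those impedances disturb the binary ratio, which is the problem the differential circuitry described next is intended to address.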
Each successive further pair of load nodes13,23,15,25,17,27,19, and29is connected to a successive further common node33,35,37, and39via respective further load impedances43,53,45,55,47,57,49, and59. That is, the load node13of the first current path is connected to the common node33via the load impedance43and the load node23of the second current path is connected to the common node33via the load impedance53, and so on and so forth. Each further common node33,35,37, and39is connected to its preceding common node via a common impedance32,34,36, and38. That is, the common node33is connected to its preceding common node (the first common node)31via the common impedance32, and so on and so forth. The divider impedances12,14,16,18,22,24,26, and28may be or comprise resistors. The load impedances41,51,43,53,45,55,47,57,49, and59may be or comprise resistors. The common impedances32,34,36, and38may be or comprise resistors. The common impedances32,34,36, and38may have a capacitance component and/or may be referred to as impedances, and/or may be referred to as discrete impedances, but in any case they have a resistance component and may be referred to as resistors. Differential circuitry300further comprises first switching circuitry60and a first plurality of current sources83,85,87, and89, and second switching circuitry70and a second plurality of current sources93,95,97, and99. The first switching circuitry60is connected between the further load nodes13,15,17, and19of the first current path and the current sources83,85,87, and89. The second switching circuitry70is connected between the further load nodes23,25,27, and29of the second current path and the current sources93,95,97, and99. The first switching circuitry60may be taken to comprise the current sources83,85,87, and89, or may be considered separate from the current sources. The second switching circuitry70may be taken to comprise the current sources93,95,97, and99, or may be considered separate from the current sources. The first common node31is connected to a voltage reference AVD (or VDD) in line withFIGS.1and2. The first load nodes11and21are connected respectively to first and second output nodes (not shown). The first load nodes11and21can be considered to be first and second output nodes, respectively. The current sources83,85,87,89,93,95,97, and99are connected to a voltage reference lower than the voltage reference AVD (e.g. shown as GND) in line withFIGS.1and2, i.e. at the uppermost and lowermost rails inFIG.3. Of course, the AVD and GND voltage levels will depend on the application, and the circuitry could be implemented "the other way up" i.e. with the values of AVD and GND reversed. Differential circuitry300is configured to output a differential analogue output signal at (between) the output nodes indicative of a differential digital input signal. This is achieved by the first and second switching circuitry60and70which are configured to change, based on control signals, a magnitude of controllable current signals passing through the load nodes11,13,15,17, and19of the first current path and through the load nodes21,23,25,27, and29of the second current path. The control signals are indicative of the differential digital input signal. The control signals may be controlled/supplied by a controller (not shown).
In particular, although not shown, the first switching circuitry60comprises a plurality of switches, each switch connected between a corresponding one of the further load nodes13,15,17, and19and a corresponding one of the current sources83,85,87, and89(e.g. between load node13and current source83). Furthermore, the second switching circuitry70comprises a plurality of switches, each switch connected between a corresponding one of the further load nodes23,25,27, and29and a corresponding one of the current sources93,95,97, and99. The switches correspond to different data bits of the differential digital input signal and the setting of each switch is determined by the corresponding data bit. In particular, the switches corresponding to a pair of the load nodes may be considered a pair of switches and each pair of switches corresponds to a different data bit of the differential digital input signal. The further pairs of load nodes13,23,15,25,17,27,19, and29may be associated with the corresponding different bits of the differential digital input signal and may be labelled as LSB (least significant bit) 3, LSB2, LSB1, and LSB0, respectively (in line withFIGS.1and2). Differential circuitry300may comprise further connections shown by broken lines and further current sources81and91. The first switching circuitry60and the second switching circuitry70may comprise a further pair of switches corresponding and connected to the first pair of load nodes11and21, and to the current sources81and91. This further pair of switches (and also the first pair of load nodes11and21) may correspond to a bit of the differential digital input signal, for example the most significant bit (MSB). Of course, the number of bits implemented at each pair of load nodes, and indeed the number of pairs of load nodes provided, will differ from implementation to implementation, and the arrangement inFIG.3is simply an example. FIG.4is a schematic diagram of switching circuitry301. Switching circuitry301illustrates an example of a part of the first switching circuitry60and the second switching circuitry70. Switching circuitry301comprises one of the pairs of switches, in particular a first switch63connected between the load node13(at its drain terminal) and the current source93(at its source terminal), and a second switch73connected between the load node23(at its drain terminal) and the current source93(at its source terminal). The first switch63belongs to the first switching circuitry60and the second switch73belongs to the second switching circuitry70. Switching circuitry301as illustrated further comprises the load impedances43and53. The further connections of the load impedances43and53and of the load nodes13and23are not illustrated for simplicity of illustration and instead broken lines are used. However, by referring toFIG.3, it will be appreciated how the switching circuitry301forms part of the differential circuitry300. The first and second switches63and73are connected to the same current source93inFIG.4.FIG.3shows the first switching circuitry60connected to a different current source from the second switching circuitry70. For each pair of switches in the first and second switching circuitry60and70, the switches may share the same current source (as shown inFIG.4) or may use different current sources (as shown inFIG.3). As mentioned above with reference toFIG.3, each pair of switches corresponds to a data bit of the differential digital input signal.
As illustrated inFIG.4, the first switch63is controlled by a control signal (at its gate terminal) labelled DATA and the second switch73is controlled by a complementary control signal (at its gate terminal) labelled/DATA. The control signals DATA and/DATA may be considered to represent a bit of the differential digital input signal. When DATA is high (or logic 1)/DATA is low (or logic 0) and vice versa. The other pairs of switches in differential circuitry300illustrated inFIG.3may have the same or similar structures and connections as those shown inFIG.4. Returning toFIG.3, for each pair of switches, as one switch is turned on and current is thus contributed to the corresponding current path the other switch is turned off. Therefore the current flowing from a given pair of current sources is fixed assuming the pair of current sources concerned have the same current magnitude as one another or assuming that the pair of current sources concerned is implemented as a single shared current source in line withFIG.4. Each pair of switches may be considered to belong to a “slice” of the differential circuitry300. Each such slice may be taken to comprise the pair of switches, and a pair of current sources (or a single shared current source in line withFIG.4) connected to those switches. It may therefore be considered that the current flowing from the supply (controlled by the pair of current sources considered together, or by the single shared current source in line withFIG.4) to the slice is fixed for each slice. In a running example (in line with that mentioned earlier), the common impedances32,34,36, and38each have a resistance value of r, the divider impedances12,22,14,24,16,26,18, and28each have a resistance value of R, the load impedances41and51each have a resistance value of R0, the load impedances43,53,45,55,47and57each have a resistance value of 2R, and the load impedances49and59each have a resistance value of R. Of course, in other implementations the R2R binary weighting scheme need not be employed and the impedances may adopt other resistance values. Considering an example implementation of the differential circuitry300with the elements within the dashed box310not provided (i.e. in particular configured as a 6-bit DAC), the current caused to flow through output node21by the corresponding current source is 7I, where I is the current caused to flow through each of the load nodes23,25, and27by the respective corresponding current sources. The same considerations apply to the “other side” of the differential circuitry300, i.e. for the output node11and the load nodes13,15, and17. In such an implementation (with the elements within the dashed box310not provided), the load impedances47and57each have a resistance value of R (and not 2R). In such an implementation the following values for R, R0 and r lead to improved performance: R=150 Ohms, R0=60 Ohms, and r=(50/3) Ohms≈17 Ohms. The differential circuitry300achieves a number of advantages. The common impedances32,34,36, and38provide fixed DC voltage drops across the successive further common nodes33,35,37, and39(i.e. from one to the next) which do not change with the differential digital input signal. This in turn will lower the maximum voltages at the successive further load nodes13,15,17, and19and the successive further load nodes23,25,27, and29, with those maximum voltages being lower from one successive further load node to the next. 
This therefore solves or at least partly addresses the overvoltage problem described above with reference toFIGS.1and2(i.e. the components (switches) are better protected). For example, the maximum voltages at the load nodes19and29will be lower given the presence of the common impedances32,34,36, and38than they would be without the presence of the common impedances32,34,36, and38, thereby offering some overvoltage protection to the switches (corresponding to switches63/73inFIG.4) connected to those nodes. This fixed DC voltage drop compensates for the variation in the DC levels (common mode voltage) across the network of impedances (including the common, load, and divider impedances) which otherwise affects the current contribution from each slice and degrades performance (as explained below in relation toFIG.9). Incidentally, the voltage drop produced by the common impedances32,34,36, and38will change the current contribution from each slice, but since the voltage drops mentioned above are DC voltage drops the change in current contribution from each slice will be a DC change and will not affect the differential analogue output signal (current). In summary, the differential circuitry300addresses the overvoltage problem described with reference toFIGS.1and2, improves performance compared to the DAC circuitry100and200illustrated inFIGS.1and2, and functions with appropriate weightings between each slice (i.e. analogously to an R2R ladder/network) so as to be suitable for incorporation into or operation as a DAC. The graphs inFIGS.5to9will now be described. These graphs were obtained by simulating a simpler form of the differential circuitry300, in particular configured as a 6-bit DAC and with the elements within the dashed box310not provided. In such an implementation and in line with the running example, the load impedances47and57each have a resistance value of R (and not 2R). Thus, in the implementation as simulated, the load nodes17,27correspond to LSB0, the load nodes15,25correspond to LSB1, the load nodes13,23correspond to LSB2and the load nodes11,21correspond to a combination of the top 3 MSB bits (e.g. with the current sources81/91and associated switches of the switching circuitry60,70representative of 7 MSB slices controlled by thermometer decoded signals). Furthermore, in the simulation (unless stated otherwise) a data signal D0 for the switches of LSB0(at load nodes17,27) was toggled at 1 GHz, a data signal D1 for the switches of LSB1(at load nodes15,25) was toggled at 0.5 GHz and a data signal D2 for the switches of LSB2(at load nodes13,23) was toggled at 0.25 GHz, and data signals for the upper 3 MSB bits were kept static. It is also assumed that the common impedances32,34,36, and38have a value r. FIG.5is a graph useful for understanding the differential circuitry300(in its simpler form), and focusses on LSB0. The broken lines in each plot show, over time, the voltage at the load node17and the solid lines show, over time, the voltage at the load node27. The top plot shows the voltages when r=0 Ohms and the bottom plot shows the voltages when r=50 Ohms. For simplicity, these graphs were obtained by keeping the values of the D1 and D2 (and the MSB data signals) fixed and only toggling the values of the D0 signals (i.e. D0 and/D0, looking atFIG.4). It will be appreciated fromFIG.5that the voltage is at a lower level at both of the load nodes17and27when the common impedances are included in the circuitry (i.e. when r=50 Ohms) compared to when they are not (i.e. when r=0 Ohms).
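To make the effect of the common impedances concrete, the following is a minimal DC nodal-analysis sketch of the simpler (6-bit) form of the differential circuitry300, not the patent's own simulation. The resistances follow the running example (R=150 Ohms, R0=60 Ohms, common impedances r, with the load impedances to the LSB0 nodes equal to R in the 6-bit case); the supply voltage AVD, the unit slice current I, the example input code and the convention that a data bit of 1 steers the slice current to the first-path node are illustrative assumptions only.

```python
# Minimal DC nodal-analysis sketch of the simpler (6-bit) form of differential
# circuitry 300, assuming ideal current sources and ideal switches.
# R and R0 follow the running example; AVD and I are illustrative assumptions.
import numpy as np

AVD = 1.8      # assumed supply voltage (V)
I   = 1e-3     # assumed unit slice current (A)
R, R0 = 150.0, 60.0

# Node indices: common nodes 33, 35, 37; first-path load nodes 11, 13, 15, 17;
# second-path load nodes 21, 23, 25, 27.  Common node 31 is tied to AVD.
N33, N35, N37, N11, N13, N15, N17, N21, N23, N25, N27 = range(11)

def solve(code6, r):
    """Return DC node voltages for a 6-bit input code and common impedance r."""
    G = np.zeros((11, 11))
    b = np.zeros(11)

    def res(n1, n2, value):            # resistor between two unknown nodes
        g = 1.0 / value
        G[n1, n1] += g; G[n2, n2] += g
        G[n1, n2] -= g; G[n2, n1] -= g

    def res_to_avd(n, value):          # resistor from an unknown node to AVD
        g = 1.0 / value
        G[n, n] += g; b[n] += g * AVD

    # Load impedances (41/51 = R0, 43/53/45/55 = 2R, 47/57 = R in the 6-bit case)
    res_to_avd(N11, R0); res_to_avd(N21, R0)
    res(N33, N13, 2 * R); res(N33, N23, 2 * R)
    res(N35, N15, 2 * R); res(N35, N25, 2 * R)
    res(N37, N17, R);     res(N37, N27, R)
    # Divider impedances (all R)
    res(N11, N13, R); res(N13, N15, R); res(N15, N17, R)
    res(N21, N23, R); res(N23, N25, R); res(N25, N27, R)
    # Common impedances 32, 34, 36 (value r); r = 0 is approximated by a tiny
    # resistance so the same equations can be reused.
    rr = max(r, 1e-6)
    res_to_avd(N33, rr); res(N33, N35, rr); res(N35, N37, rr)

    # Current sinks: a data bit of 1 steers the slice current to the first-path
    # node (in line with switch 63 being turned on by DATA in FIG. 4).
    msb = code6 >> 3                   # top 3 bits -> 0..7 MSB slices of current I
    b[N11] -= msb * I
    b[N21] -= (7 - msb) * I
    for bit, (na, nb) in zip((2, 1, 0), ((N13, N23), (N15, N25), (N17, N27))):
        if (code6 >> bit) & 1:
            b[na] -= I
        else:
            b[nb] -= I

    return np.linalg.solve(G, b)

for r in (0.0, 50.0 / 3.0, 50.0):
    v = solve(0b101011, r)
    print(f"r = {r:5.1f} ohm:  V(node17) = {v[N17]:.3f} V,  V(node27) = {v[N27]:.3f} V,"
          f"  Vout(11-21) = {v[N11] - v[N21]:+.3f} V")
```

Running this for a few codes shows the LSB0 tap voltages dropping as r increases while the differential output between the output nodes is essentially unchanged, which is the behaviour the graphs above illustrate.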
FIG.6is a graph useful for understanding the differential circuitry300(in its simpler form), and likeFIG.5focusses on LSB0. The simulation settings (e.g. the data signals) were the same as forFIG.5. The broken line shows how the DC voltage level at the load node17changes as the value of r changes and the solid line shows how the DC voltage level at the load node27changes also as the value of r changes. It will be appreciated that increasing the value of r reduces the DC voltage at the load nodes17and27. The maximum voltage (at the load node27) is 1.474 V with r=0 Ohms (i.e. without the common impedances) and is 1.283 V with r=50 Ohms. The values for the load node17are lower than those for the load node27because the circuitry300is differential and in the case of this graph, the switch corresponding to the load node17is off whilst the switch corresponding to the load node27is on. FIG.7is a graph useful for understanding the differential circuitry300(in its simpler form), and focusses on the LSBs (i.e. LSB0to LSB2). The solid lines show the voltages at the load nodes17,27,15,25,13, and23when r=0 Ohms (i.e. when there are no common impedances) and the broken lines show the voltages at the same load nodes when r=50 Ohms. This graph was obtained by toggling the values of D0, D1 and D2. It will be appreciated that the voltages are lower when r=50 Ohms compared to when r=0 Ohms and therefore the common impedances reduce the voltages at the load nodes. The maximum value of the voltage when r=0 Ohms is 1.48 V, and when r=50 Ohms it is 1.32 V. FIG.8is another graph useful for understanding the differential circuitry300(in its simpler form). In the top plot the solid line shows the voltage at the load node13when r=0 Ohms and the broken line shows the voltage at the load node13when r=50 Ohms. It will be appreciated that the voltage is lower when r=50 Ohms compared to when r=0 Ohms. In the middle plot, the solid line shows the differential voltage signal between the load nodes13and23when r=0 Ohms and the broken line shows the same differential voltage signal when r=50 Ohms. It will be appreciated that the differential voltage signal between the load nodes13and23is not affected by the change in value of r. In the bottom plot, the solid line shows the overall differential voltage output of the differential circuitry300(i.e. between the output nodes, effectively between nodes11and21) when r=0 Ohms and the broken line shows the same overall differential voltage output when r=50 Ohms. It will be appreciated, as for the middle plot, that the inclusion of the common impedances (i.e. r being non-zero or non-negligible, such as greater than 5 or 10 or 15 or 20 or 25 or 30 or 40 or 50 or 100 Ohms, depending on the application) does not affect the overall differential output voltage, as the solid and broken lines move together. The bottom plot appears “upside down” compared to the middle plot only because the differential output voltage in the bottom plot was calculated by calculating the difference between the two “sides” of differential circuitry300the other way round compared to the middle plot. FIG.9is another graph useful for understanding the differential circuitry300(in its simpler form). The solid lines show the DC levels (or common mode voltages) at the common nodes33,35, and37and the broken lines show the DC levels (or common mode voltages) at the load nodes13,23,15,25,17, and27. The top plot shows the DC levels when r=0 and the bottom plot shows the DC levels when r=20 Ohms. 
Looking at the solid lines, the DC level is the same at the common nodes33,35, and37when r=0 in the top plot (there are three traces but they are all the same so on top of each other). In the bottom plot, when r=20 Ohms, the DC levels at the common nodes33,35, and37shown by the solid lines are different from each other and are each lower than their values when r=0. Looking at the broken lines, the DC levels at the load nodes13,23,15,25,17, and27are lower when r=20 Ohms (bottom plot) compared to when r=0 (top plot), and are closer to each other (almost equal) when r=20 Ohms compared to when r=0. There are six traces in each plot for the load nodes13,23,15,25,17, and27(broken lines) but the pairs of corresponding load nodes of the same slice (i.e.13and23,15and25, and17and27) have the same DC levels so that only three traces are visible. The DC levels at the load nodes13,23,15,25,17, and27being closer together is advantageous because this means that the headroom for the current sources is closer together (and almost equal) across the different slices. This means that the current caused to flow in each slice by the current sources is closer to being equal than in the case when r=0 (because in that case the DC levels of the load nodes will be further apart from each other). The more equal currents above results in better performance. The DC levels discussed above may be considered to be common mode voltages given that the differential circuitry300is differential. The values used above for R and r are of course exemplary and there are many appropriate values that may be used. Furthermore, the values of for R and r may be different depending on the specific implementation of the differential circuitry300, e.g., the maximum allowed voltage, the magnitude of the current signals flowing in the differential circuitry, etc. Furthermore, the impedances may have different values of impedance/resistance than shown. The differential circuitry300may comprise more or fewer slices than as described above and shown inFIG.3. For example, the differential circuitry300may comprise just the slices corresponding to the common nodes31and33. Alternatively or additionally, the differential circuitry300may comprise fewer of the common impedances32,34,36, and38than as described in the examples above. In this regard,FIG.10is a schematic diagram showing three possible variations of a portion of the differential circuitry300ofFIG.3. The upper variation corresponds toFIG.3itself, and all of the common impedances32,34,36, and38are provided. However, the differential circuitry300may comprise only one of the common impedances32,34,36, and38or any number of the common impedances, for example only common impedance38as in the middle variation, or only a plurality of the common impedances such as36and38as in the lower variation. These are simply examples and other variations are of course possible looking atFIG.10. That is, only one (or each of any number) of the further common nodes33,35,37, and39may be connected to its preceding common node via a common impedance32,34,36, and38and the or each other further common node may be shorted or short-circuited to its preceding common node (i.e. a connection that is not via a resistor, or any other component). 
It may be said that the or each other further common node may be directly connected to its preceding common node, with “directly connected” meaning not connected via a resistor or a discrete impedance (or any other discrete component), or connected via a substantially zero resistance connection or a low or negligible impedance/resistance connection. That is, the common impedances32,34,36, and38are resistors in the running example (and may be referred to as discrete impedances, but having a resistance component) and are different from unavoidable (negligible) impedance in a circuit, e.g. of a wire connecting two components. Each of the common impedances32,34,36, and38, the load impedances41,51,43,53,45,55,47,57,49, and59, and the divider impedances12,22,14,24,16,26,18, and28may be a discrete impedance, or a resistor, or a discrete resistor/resistance (bearing in mind that the common impedances32,34,36, and38are or comprise resistors). These impedances are different from unavoidable impedance in a circuit, e.g. of a wire connecting two components. Any of these resistors may be implemented as polysilicon resistors, and/or as a discrete resistor or a distributed resistor. The switches of the first and second switching circuitry60and70(e.g. the switches63and73) may be transistors, in particular FET transistors. For example, they may be MOSFET transistors. The slices of differential circuitry300described above may correspond to any bit or bits of a differential digital input signal and not only the bits as described above. FIG.11is a schematic diagram of a DAC400comprising the differential circuitry300. As mentioned above, the slices of the differential circuitry300may correspond to bits of a differential digital input signal and the switches in differential circuitry300change, based on the bits of the differential digital input signal, a magnitude of the controllable current signals passing through the load nodes of differential circuitry300to ultimately output a differential analogue output signal between the first and second output nodes. FIG.12is a schematic diagram illustrating an integrated circuit (IC)500comprising the differential circuitry300or the DAC400. Circuitry of the present invention may be implemented as integrated circuitry, for example on an IC chip such as a flip chip. The present invention extends to integrated circuitry and IC chips as mentioned above, circuit boards comprising such IC chips, and communication networks (for example, internet fiber-optic networks and wireless networks) and network equipment of such networks, comprising such circuit boards. In any of the above aspects, various features may be implemented in hardware, or as software modules running on one or more processors/computers. The invention also provides a computer program or a computer program product comprising instructions which, when executed by a computer, cause the computer to carry out (as a method) any functionality described herein, and a non-transitory computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out any of the functionality described herein. A computer program embodying the invention may be stored on a non-transitory computer-readable medium, or it could, for example, be in the form of a signal such as a downloadable data signal provided from an Internet website, or it could be in any other form. | 31,171 |
11863200 | DETAILED DESCRIPTION In various embodiments of the present disclosure, "/" and "," should be interpreted as "and/or". For example, "A/B" may mean "A and/or B". Further, "A, B" may mean "A and/or B". Further, "A/B/C" may mean "at least one of A, B and/or C". Further, "A, B, C" may mean "at least one of A, B and/or C". In various embodiments of the present disclosure, "or" should be interpreted as "and/or". For example, "A or B" may include "only A", "only B", and/or "both A and B". In other words, "or" should be interpreted as "additionally or alternatively". Techniques described herein may be used in various wireless access systems such as code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), orthogonal frequency division multiple access (OFDMA), single carrier-frequency division multiple access (SC-FDMA), and so on. CDMA may be implemented as a radio technology such as universal terrestrial radio access (UTRA) or CDMA2000. TDMA may be implemented as a radio technology such as global system for mobile communications (GSM)/general packet radio service (GPRS)/Enhanced Data Rates for GSM Evolution (EDGE). OFDMA may be implemented as a radio technology such as IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, evolved-UTRA (E-UTRA), or the like. IEEE 802.16m is an evolution of IEEE 802.16e, offering backward compatibility with an IEEE 802.16e-based system. UTRA is a part of universal mobile telecommunications system (UMTS). 3rd generation partnership project (3GPP) long term evolution (LTE) is a part of evolved UMTS (E-UMTS) using evolved UTRA (E-UTRA). 3GPP LTE employs OFDMA for downlink (DL) and SC-FDMA for uplink (UL). LTE-advanced (LTE-A) is an evolution of 3GPP LTE. A successor to LTE-A, 5th generation (5G) new radio access technology (NR) is a new clean-slate mobile communication system characterized by high performance, low latency, and high availability. 5G NR may use all available spectral resources including a low frequency band below 1 GHz, an intermediate frequency band between 1 GHz and 10 GHz, and a high frequency (millimeter) band of 24 GHz or above. While the following description is given mainly in the context of LTE-A or 5G NR for the clarity of description, the technical idea of an embodiment of the present disclosure is not limited thereto. FIG.2illustrates the structure of an LTE system according to an embodiment of the present disclosure. This may also be called an evolved UMTS terrestrial radio access network (E-UTRAN) or LTE/LTE-A system. Referring toFIG.2, the E-UTRAN includes evolved Node Bs (eNBs)20which provide a control plane and a user plane to UEs10. A UE10may be fixed or mobile, and may also be referred to as a mobile station (MS), user terminal (UT), subscriber station (SS), mobile terminal (MT), or wireless device. An eNB20is a fixed station communicating with the UE10and may also be referred to as a base station (BS), a base transceiver system (BTS), or an access point. eNBs20may be connected to each other via an X2 interface. An eNB20is connected to an evolved packet core (EPC)30via an S1 interface. More specifically, the eNB20is connected to a mobility management entity (MME) via an S1-MME interface and to a serving gateway (S-GW) via an S1-U interface. The EPC30includes an MME, an S-GW, and a packet data network-gateway (P-GW). The MME has access information or capability information about UEs, which are mainly used for mobility management of the UEs.
The S-GW is a gateway having the E-UTRAN as an end point, and the P-GW is a gateway having a packet data network (PDN) as an end point. Based on the lowest three layers of the open system interconnection (OSI) reference model known in communication systems, the radio protocol stack between a UE and a network may be divided into Layer 1 (L1), Layer 2 (L2) and Layer 3 (L3). These layers are defined in pairs between a UE and an Evolved UTRAN (E-UTRAN), for data transmission via the Uu interface. The physical (PHY) layer at L1 provides an information transfer service on physical channels. The radio resource control (RRC) layer at L3 functions to control radio resources between the UE and the network. For this purpose, the RRC layer exchanges RRC messages between the UE and an eNB. FIG.3Aillustrates a user-plane radio protocol architecture according to an embodiment of the disclosure. FIG.3Billustrates a control-plane radio protocol architecture according to an embodiment of the disclosure. A user plane is a protocol stack for user data transmission, and a control plane is a protocol stack for control signal transmission. Referring toFIGS.3A and3B, the PHY layer provides an information transfer service to its higher layer on physical channels. The PHY layer is connected to the medium access control (MAC) layer through transport channels and data is transferred between the MAC layer and the PHY layer on the transport channels. The transport channels are divided according to features with which data is transmitted via a radio interface. Data is transmitted on physical channels between different PHY layers, that is, the PHY layers of a transmitter and a receiver. The physical channels may be modulated in orthogonal frequency division multiplexing (OFDM) and use time and frequencies as radio resources. The MAC layer provides services to a higher layer, radio link control (RLC) on logical channels. The MAC layer provides a function of mapping from a plurality of logical channels to a plurality of transport channels. Further, the MAC layer provides a logical channel multiplexing function by mapping a plurality of logical channels to a single transport channel. A MAC sublayer provides a data transmission service on the logical channels. The RLC layer performs concatenation, segmentation, and reassembly for RLC serving data units (SDUs). In order to guarantee various quality of service (QoS) requirements of each radio bearer (RB), the RLC layer provides three operation modes, transparent mode (TM), unacknowledged mode (UM), and acknowledged Mode (AM). An AM RLC provides error correction through automatic repeat request (ARQ). The RRC layer is defined only in the control plane and controls logical channels, transport channels, and physical channels in relation to configuration, reconfiguration, and release of RBs. An RB refers to a logical path provided by L1 (the PHY layer) and L2 (the MAC layer, the RLC layer, and the packet data convergence protocol (PDCP) layer), for data transmission between the UE and the network. The user-plane functions of the PDCP layer include user data transmission, header compression, and ciphering. The control-plane functions of the PDCP layer include control-plane data transmission and ciphering/integrity protection. RB establishment amounts to a process of defining radio protocol layers and channel features and configuring specific parameters and operation methods in order to provide a specific service. 
RBs may be classified into two types, signaling radio bearer (SRB) and data radio bearer (DRB). The SRB is used as a path in which an RRC message is transmitted on the control plane, whereas the DRB is used as a path in which user data is transmitted on the user plane. Once an RRC connection is established between the RRC layer of the UE and the RRC layer of the E-UTRAN, the UE is placed in RRC_CONNECTED state, and otherwise, the UE is placed in RRC_IDLE state. In NR, RRC_INACTIVE state is additionally defined. A UE in the RRC_INACTIVE state may maintain a connection to a core network, while releasing a connection from an eNB. DL transport channels carrying data from the network to the UE include a broadcast channel (BCH) on which system information is transmitted and a DL shared channel (DL SCH) on which user traffic or a control message is transmitted. Traffic or a control message of a DL multicast or broadcast service may be transmitted on the DL-SCH or a DL multicast channel (DL MCH). UL transport channels carrying data from the UE to the network include a random access channel (RACH) on which an initial control message is transmitted and an UL shared channel (UL SCH) on which user traffic or a control message is transmitted. The logical channels which are above and mapped to the transport channels include a broadcast control channel (BCCH), a paging control channel (PCCH), a common control channel (CCCH), a multicast control channel (MCCH), and a multicast traffic channel (MTCH). A physical channel includes a plurality of OFDM symbol in the time domain by a plurality of subcarriers in the frequency domain. One subframe includes a plurality of OFDM symbols in the time domain. An RB is a resource allocation unit defined by a plurality of OFDM symbols by a plurality of subcarriers. Further, each subframe may use specific subcarriers of specific OFDM symbols (e.g., the first OFDM symbol) in a corresponding subframe for a physical DL control channel (PDCCH), that is, an L1/L2 control channel. A transmission time interval (TTI) is a unit time for subframe transmission. FIG.4illustrates the structure of an NR system according to an embodiment of the present disclosure. Referring toFIG.4, a next generation radio access network (NG-RAN) may include a next generation Node B (gNB) and/or an eNB, which provides user-plane and control-plane protocol termination to a UE. InFIG.4, the NG-RAN is shown as including only gNBs, by way of example. A gNB and an eNB are connected to each other via an Xn interface. The gNB and the eNB are connected to a 5G core network (5GC) via an NG interface. More specifically, the gNB and the eNB are connected to an access and mobility management function (AMF) via an NG-C interface and to a user plane function (UPF) via an NG-U interface. FIG.5illustrates functional split between the NG-RAN and the 5GC according to an embodiment of the present disclosure. Referring toFIG.5, a gNB may provide functions including inter-cell radio resource management (RRM), radio admission control, measurement configuration and provision, and dynamic resource allocation. The AMF may provide functions such as non-access stratum (NAS) security and idle-state mobility processing. The UPF may provide functions including mobility anchoring and protocol data unit (PDU) processing. A session management function (SMF) may provide functions including UE Internet protocol (IP) address allocation and PDU session control. 
FIG.6illustrates a radio frame structure in NR, to which embodiment(s) of the present disclosure is applicable. Referring toFIG.6, a radio frame may be used for UL transmission and DL transmission in NR. A radio frame is 10 ms in length, and may be defined by two 5-ms half-frames. An HF may include five 1-ms subframes. A subframe may be divided into one or more slots, and the number of slots in an SF may be determined according to a subcarrier spacing (SCS). Each slot may include 12 or 14 OFDM(A) symbols according to a cyclic prefix (CP). In a normal CP (NCP) case, each slot may include 14 symbols, whereas in an extended CP (ECP) case, each slot may include 12 symbols. Herein, a symbol may be an OFDM symbol (or CP-OFDM symbol) or an SC-FDMA symbol (or DFT-s-OFDM symbol). Table 1 below lists the number of symbols per slot N_symb^slot, the number of slots per frame N_slot^frame,u, and the number of slots per subframe N_slot^subframe,u according to an SCS configuration in the NCP case.

TABLE 1
SCS (15*2^u)        N_symb^slot    N_slot^frame,u    N_slot^subframe,u
15 kHz (u = 0)      14             10                1
30 kHz (u = 1)      14             20                2
60 kHz (u = 2)      14             40                4
120 kHz (u = 3)     14             80                8
240 kHz (u = 4)     14             160               16

Table 2 below lists the number of symbols per slot, the number of slots per frame, and the number of slots per subframe according to an SCS in the ECP case.

TABLE 2
SCS (15*2^u)        N_symb^slot    N_slot^frame,u    N_slot^subframe,u
60 kHz (u = 2)      12             40                4

In the NR system, different OFDM(A) numerologies (e.g., SCSs, CP lengths, and so on) may be configured for a plurality of cells aggregated for one UE. Accordingly, the (absolute time) duration of a time resource including the same number of symbols (e.g., a subframe, slot, or TTI) (collectively referred to as a time unit (TU) for convenience) may be configured to be different for the aggregated cells. In NR, various numerologies or SCSs may be supported to support various 5G services. For example, with an SCS of 15 kHz, a wide area in traditional cellular bands may be supported, while with an SCS of 30 kHz/60 kHz, a dense urban area, a lower latency, and a wide carrier bandwidth may be supported. With an SCS of 60 kHz or higher, a bandwidth larger than 24.25 GHz may be supported to overcome phase noise. An NR frequency band may be defined by two types of frequency ranges, FR1 and FR2. The numerals in each frequency range may be changed. For example, the two types of frequency ranges may be given in Table 3. In the NR system, FR1 may be a "sub 6 GHz range" and FR2 may be an "above 6 GHz range" called millimeter wave (mmW).

TABLE 3
Frequency Range designation    Corresponding frequency range    Subcarrier Spacing (SCS)
FR1                            450 MHz-6000 MHz                 15, 30, 60 kHz
FR2                            24250 MHz-52600 MHz              60, 120, 240 kHz

As mentioned above, the numerals in a frequency range may be changed in the NR system. For example, FR1 may range from 410 MHz to 7125 MHz as listed in Table 4. That is, FR1 may include a frequency band of 6 GHz (or 5850, 5900, and 5925 MHz) or above. For example, the frequency band of 6 GHz (or 5850, 5900, and 5925 MHz) or above may include an unlicensed band. The unlicensed band may be used for various purposes, for example, vehicle communication (e.g., autonomous driving).

TABLE 4
Frequency Range designation    Corresponding frequency range    Subcarrier Spacing (SCS)
FR1                            410 MHz-7125 MHz                 15, 30, 60 kHz
FR2                            24250 MHz-52600 MHz              60, 120, 240 kHz

FIG.7illustrates a slot structure in an NR frame according to an embodiment of the present disclosure. Referring toFIG.7, a slot includes a plurality of symbols in the time domain.
For example, one slot may include 14 symbols in an NCP case and 12 symbols in an ECP case. Alternatively, one slot may include 7 symbols in an NCP case and 6 symbols in an ECP case. A carrier includes a plurality of subcarriers in the frequency domain. An RB may be defined by a plurality of (e.g., 12) consecutive subcarriers in the frequency domain. A bandwidth part (BWP) may be defined by a plurality of consecutive (physical) RBs ((P)RBs) in the frequency domain and correspond to one numerology (e.g., SCS, CP length, or the like). A carrier may include up to N (e.g., 5) BWPs. Data communication may be conducted in an activated BWP. Each element may be referred to as a resource element (RE) in a resource grid, to which one complex symbol may be mapped. A radio interface between UEs or a radio interface between a UE and a network may include L1, L2, and L3. In various embodiments of the present disclosure, L1 may refer to the PHY layer. For example, L2 may refer to at least one of the MAC layer, the RLC layer, the PDCH layer, or the SDAP layer. For example, L3 may refer to the RRC layer. Now, a description will be given of sidelink (SL) communication. FIGS.8A and8Billustrate a radio protocol architecture for SL communication according to an embodiment of the present disclosure. Specifically,FIG.8Aillustrates a user-plane protocol stack in LTE, andFIG.8Billustrates a control-plane protocol stack in LTE. FIGS.9A and9Billustrate a radio protocol architecture for SL communication according to an embodiment of the present disclosure. Specifically,FIG.9Aillustrates a user-plane protocol stack in NR, andFIG.9Billustrates a control-plane protocol stack in NR. Resource allocation in SL will be described below. FIGS.10A and10Billustrate a procedure of performing V2X or SL communication according to a transmission mode in a UE according to an embodiment of the present disclosure. In various embodiments of the present disclosure, a transmission mode may also be referred to as a mode or a resource allocation mode. For the convenience of description, a transmission mode in LTE may be referred to as an LTE transmission mode, and a transmission mode in NR may be referred to as an NR resource allocation mode. For example,FIG.10Aillustrates a UE operation related to LTE transmission mode 1 or LTE transmission mode 3. Alternatively, for example,FIG.10Aillustrates a UE operation related to NR resource allocation mode 1. For example, LTE transmission mode 1 may be applied to general SL communication, and LTE transmission mode 3 may be applied to V2X communication. For example,FIG.10Billustrates a UE operation related to LTE transmission mode 2 or LTE transmission mode 4. Alternatively, for example,FIG.10Billustrates a UE operation related to NR resource allocation mode 2. Referring toFIG.10A, in LTE transmission mode 1, LTE transmission mode 3, or NR resource allocation mode 1, a BS may schedule SL resources to be used for SL transmission of a UE. For example, the BS may perform resource scheduling for UE1through a PDCCH (more specifically, DL control information (DCI)), and UE1may perform V2X or SL communication with UE2according to the resource scheduling. For example, UE1may transmit sidelink control information (SCI) to UE2on a PSCCH, and then transmit data based on the SCI to UE2on a PSSCH. For example, in NR resource allocation mode 1, a UE may be provided with or allocated resources for one or more SL transmissions of one transport block (TB) by a dynamic grant from the BS. 
For example, the BS may provide the UE with resources for transmission of a PSCCH and/or a PSSCH by the dynamic grant. For example, a transmitting UE may report an SL hybrid automatic repeat request (SL HARQ) feedback received from a receiving UE to the BS. In this case, PUCCH resources and a timing for reporting the SL HARQ feedback to the BS may be determined based on an indication in a PDCCH, by which the BS allocates resources for SL transmission. For example, the DCI may indicate a slot offset between the DCI reception and a first SL transmission scheduled by the DCI. For example, a minimum gap between the DCI that schedules the SL transmission resources and the resources of the first scheduled SL transmission may not be smaller than a processing time of the UE. For example, in NR resource allocation mode 1, the UE may be periodically provided with or allocated a resource set for a plurality of SL transmissions through a configured grant from the BS. For example, the grant to be configured may include configured grant type 1 or configured grant type 2. For example, the UE may determine a TB to be transmitted in each occasion indicated by a given configured grant. For example, the BS may allocate SL resources to the UE in the same carrier or different carriers. For example, an NR gNB may control LTE-based SL communication. For example, the NR gNB may transmit NR DCI to the UE to schedule LTE SL resources. In this case, for example, a new RNTI may be defined to scramble the NR DCI. For example, the UE may include an NR SL module and an LTE SL module. For example, after the UE including the NR SL module and the LTE SL module receives NR SL DCI from the gNB, the NR SL module may convert the NR SL DCI into LTE DCI type 5A, and transmit LTE DCI type 5A to the LTE SL module every Xms. For example, after the LTE SL module receives LTE DCI format 5A from the NR SL module, the LTE SL module may activate and/or release a first LTE subframe after Z ms. For example, X may be dynamically indicated by a field of the DCI. For example, a minimum value of X may be different according to a UE capability. For example, the UE may report a single value according to its UE capability. For example, X may be positive. Referring toFIG.10B, in LTE transmission mode 2, LTE transmission mode 4, or NR resource allocation mode 2, the UE may determine SL transmission resources from among SL resources preconfigured or configured by the BS/network. For example, the preconfigured or configured SL resources may be a resource pool. For example, the UE may autonomously select or schedule SL transmission resources. For example, the UE may select resources in a configured resource pool on its own and perform SL communication in the selected resources. For example, the UE may select resources within a selection window on its own by a sensing and resource (re)selection procedure. For example, the sensing may be performed on a subchannel basis. UE1, which has autonomously selected resources in a resource pool, may transmit SCI to UE2on a PSCCH and then transmit data based on the SCI to UE2on a PSSCH. For example, a UE may help another UE with SL resource selection. For example, in NR resource allocation mode 2, the UE may be configured with a grant configured for SL transmission. For example, in NR resource allocation mode 2, the UE may schedule SL transmission for another UE. For example, in NR resource allocation mode 2, the UE may reserve SL resources for blind retransmission. 
For example, in NR resource allocation mode 2, UE1may indicate the priority of SL transmission to UE2by SCI. For example, UE2may decode the SCI and perform sensing and/or resource (re)selection based on the priority. For example, the resource (re)selection procedure may include identifying candidate resources in a resource selection window by UE2and selecting resources for (re)transmission from among the identified candidate resources by UE2. For example, the resource selection window may be a time interval during which the UE selects resources for SL transmission. For example, after UE2triggers resource (re)selection, the resource selection window may start at T1≥0, and may be limited by the remaining packet delay budget of UE2. For example, when specific resources are indicated by the SCI received from UE1by the second UE and an L1 SL reference signal received power (RSRP) measurement of the specific resources exceeds an SL RSRP threshold in the step of identifying candidate resources in the resource selection window by UE2, UE2may not determine the specific resources as candidate resources. For example, the SL RSRP threshold may be determined based on the priority of SL transmission indicated by the SCI received from UE1by UE2and the priority of SL transmission in the resources selected by UE2. For example, the L1 SL RSRP may be measured based on an SL demodulation reference signal (DMRS). For example, one or more PSSCH DMRS patterns may be configured or preconfigured in the time domain for each resource pool. For example, PDSCH DMRS configuration type 1 and/or type 2 may be identical or similar to a PSSCH DMRS pattern in the frequency domain. For example, an accurate DMRS pattern may be indicated by the SCI. For example, in NR resource allocation mode 2, the transmitting UE may select a specific DMRS pattern from among DMRS patterns configured or preconfigured for the resource pool. For example, in NR resource allocation mode 2, the transmitting UE may perform initial transmission of a TB without reservation based on the sensing and resource (re)selection procedure. For example, the transmitting UE may reserve SL resources for initial transmission of a second TB using SCI associated with a first TB based on the sensing and resource (re)selection procedure. For example, in NR resource allocation mode 2, the UE may reserve resources for feedback-based PSSCH retransmission through signaling related to a previous transmission of the same TB. For example, the maximum number of SL resources reserved for one transmission, including a current transmission, may be 2, 3 or 4. For example, the maximum number of SL resources may be the same regardless of whether HARQ feedback is enabled. For example, the maximum number of HARQ (re)transmissions for one TB may be limited by a configuration or preconfiguration. For example, the maximum number of HARQ (re)transmissions may be up to 32. For example, if there is no configuration or preconfiguration, the maximum number of HARQ (re)transmissions may not be specified. For example, the configuration or preconfiguration may be for the transmitting UE. For example, in NR resource allocation mode 2, HARQ feedback for releasing resources which are not used by the UE may be supported. For example, in NR resource allocation mode 2, the UE may indicate one or more subchannels and/or slots used by the UE to another UE by SCI. For example, the UE may indicate one or more subchannels and/or slots reserved for PSSCH (re)transmission by the UE to another UE by SCI. 
For example, a minimum allocation unit of SL resources may be a slot. For example, the size of a subchannel may be configured or preconfigured for the UE. SCI will be described below. While control information transmitted from a BS to a UE on a PDCCH is referred to as DCI, control information transmitted from one UE to another UE on a PSCCH may be referred to as SCI. For example, the UE may know the starting symbol of the PSCCH and/or the number of symbols in the PSCCH before decoding the PSCCH. For example, the SCI may include SL scheduling information. For example, the UE may transmit at least one SCI to another UE to schedule the PSSCH. For example, one or more SCI formats may be defined. For example, the transmitting UE may transmit the SCI to the receiving UE on the PSCCH. The receiving UE may decode one SCI to receive the PSSCH from the transmitting UE. For example, the transmitting UE may transmit two consecutive SCIs (e.g., 2-stage SCI) on the PSCCH and/or PSSCH to the receiving UE. The receiving UE may decode the two consecutive SCIs (e.g., 2-stage SCI) to receive the PSSCH from the transmitting UE. For example, when SCI configuration fields are divided into two groups in consideration of a (relatively) large SCI payload size, SCI including a first SCI configuration field group is referred to as first SCI. SCI including a second SCI configuration field group may be referred to as second SCI. For example, the transmitting UE may transmit the first SCI to the receiving UE on the PSCCH. For example, the transmitting UE may transmit the second SCI to the receiving UE on the PSCCH and/or PSSCH. For example, the second SCI may be transmitted to the receiving UE on an (independent) PSCCH or on a PSSCH in which the second SCI is piggybacked to data. For example, the two consecutive SCIs may be applied to different transmissions (e.g., unicast, broadcast, or groupcast). For example, the transmitting UE may transmit all or part of the following information to the receiving UE by SCI. For example, the transmitting UE may transmit all or part of the following information to the receiving UE by first SCI and/or second SCI.PSSCH-related and/or PSCCH-related resource allocation information, for example, the positions/number of time/frequency resources, resource reservation information (e.g. 
a periodicity), and/or an SL channel state information (CSI) report request indicator or SL (L1) RSRP (and/or SL (L1) reference signal received quality (RSRQ) and/or SL (L1) received signal strength indicator (RSSI)) report request indicator, and/or an SL CSI transmission indicator (on PSSCH) (or SL (L1) RSRP (and/or SL (L1) RSRQ and/or SL (L1) RSSI) information transmission indicator), and/or MCS information, and/or transmission power information, and/or L1 destination ID information and/or L1 source ID information, and/or SL HARQ process ID information, and/or new data indicator (NDI) information, and/or redundancy version (RV) information, and/or QoS information (related to transmission traffic/packet), for example, priority information, and/or an SL CSI-RS transmission indicator or information about the number of SL CSI-RS antenna ports (to be transmitted); location information about a transmitting UE or location (or distance area) information about a target receiving UE (requested to transmit an SL HARQ feedback), and/or RS (e.g., DMRS or the like) information related to decoding and/or channel estimation of data transmitted on a PSSCH, for example, information related to a pattern of (time-frequency) mapping resources of the DMRS, rank information, and antenna port index information. For example, the first SCI may include information related to channel sensing. For example, the receiving UE may decode the second SCI using the PSSCH DMRS. A polar code used for the PDCCH may be applied to the second SCI. For example, the payload size of the first SCI may be equal for unicast, groupcast and broadcast in a resource pool. After decoding the first SCI, the receiving UE does not need to perform blind decoding on the second SCI. For example, the first SCI may include scheduling information about the second SCI. In various embodiments of the present disclosure, since the transmitting UE may transmit at least one of the SCI, the first SCI, or the second SCI to the receiving UE on the PSCCH, the PSCCH may be replaced with at least one of the SCI, the first SCI, or the second SCI. Additionally or alternatively, for example, the SCI may be replaced with at least one of the PSCCH, the first SCI, or the second SCI. Additionally or alternatively, for example, since the transmitting UE may transmit the second SCI to the receiving UE on the PSSCH, the PSSCH may be replaced with the second SCI. The PUSCH/PDSCH sequence in the NR system may be initialized as follows according to Equations 1 and 2 below. (TS 38.211)

cinit = nRNTI · 2^15 + nID [Equation 1]

cinit = nRNTI · 2^15 + q · 2^14 + nID [Equation 2]

In the above equations, nID may be a higher-layer parameter limited to a UE-specific search space scheduled for unicast with DCI, and in this case, may have a value of {0, 1, ..., 1023}. In other cases, nID may be given by {0, 1, ..., 1007} as a Cell ID. In a conventional LTE system, nID in the case of Uu may be fixed to a Cell ID value of {0, 1, ..., 503}, and in SL, nID may be fixed to an initial value of 510 to distinguish SL from Uu. If the method used for the PD(U)CCH and the PD(U)SCH of conventional NR Uu is reused without change when the scrambling sequences of the PSCCH and the PSSCH are generated to effectively transmit a resource in NR SL, there is a problem in that it is difficult to distinguish between NR SL and NR Uu UEs. Accordingly, hereinafter, an embodiment of the present disclosure proposes a method of generating a scrambling sequence in NR SL and an apparatus for supporting the method.
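For reference, the scrambling sequence referred to by Equations 1 and 2 is, in NR, a length-31 Gold sequence initialised with cinit (TS 38.211, clause 5.2.1). The sketch below is a minimal, unoptimised rendering of that standard construction together with the Equation 1 initialisation; the function names and the example nRNTI/nID values are illustrative assumptions and are not taken from the description above.

```python
# Minimal sketch of the length-31 Gold sequence used for NR scrambling
# (TS 38.211, clause 5.2.1), initialised per Equation 1 above.
# Function names and example values are illustrative assumptions.

NC = 1600  # fixed offset defined in TS 38.211

def pseudo_random_sequence(c_init: int, length: int) -> list[int]:
    """Generate c(0)..c(length-1) for the given initialisation value."""
    n = length + NC + 31
    x1 = [0] * n
    x2 = [0] * n
    x1[0] = 1                                   # x1 initialisation
    for i in range(31):                         # x2 initialised with c_init bits
        x2[i] = (c_init >> i) & 1
    for i in range(n - 31):
        x1[i + 31] = (x1[i + 3] + x1[i]) % 2
        x2[i + 31] = (x2[i + 3] + x2[i + 2] + x2[i + 1] + x2[i]) % 2
    return [(x1[i + NC] + x2[i + NC]) % 2 for i in range(length)]

def scrambling_sequence(n_rnti: int, n_id: int, length: int) -> list[int]:
    """Equation 1: c_init = n_RNTI * 2^15 + n_ID."""
    c_init = (n_rnti << 15) + n_id
    return pseudo_random_sequence(c_init, length)

# Example: a hypothetical fixed n_RNTI = 0 for the PSCCH and a resource-pool
# specific n_ID greater than 1023, as discussed for NR SL in the following.
bits = scrambling_sequence(n_rnti=0, n_id=1030, length=32)
print(bits)
```

In an actual transmitter the coded bits would then be XORed with the generated sequence c(i); that step is omitted here for brevity.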
According to an embodiment, 1st sidelink control information (SCI) may be transmitted on a PSCCH and 2nd stage SCI may be transmitted on a PSSCH. Here, a first scrambling sequence related to the 1st stage SCI may be generated based on a fixed value, and a second scrambling sequence related to the 2nd stage SCI may be generated based on a cyclic redundancy check (CRC) related value. The CRC related value may be derived from CRC on the PSCCH. The fixed value may be related to initialization of the first scrambling sequence, and the CRC of the PSCCH may be related to initialization of the second scrambling sequence. That is, the 1st SCI may be determined using an initial value of a scrambling sequence, and although the 2nd SCI is control information, the 2nd SCI may be determined using the CRC of the scrambling sequence differently from the 1st SCI. Since the 1st SCI needs to be received by all UEs, a fixed value may be used, but since the 2nd SCI is for a specific UE, CRC needs to be used, and thus different scrambling sequences may be used according to a type of SCI. That is, the initial value of the scrambling sequence may be determined in consideration of the characteristics/target of UEs that receive different types of SCI, and thus randomization of the 2nd SCI may be effectively performed. Based on an initialization method in an LTE system and an initialization method in an NR Uu, the following may be considered during initialization in NR SL. Since nID ranges from 0 to 1023 in NR Uu, it may not be possible to distinguish a scrambling sequence between NR Uu and NR SL by fixing nID in NR SL to 510, which is a specific value in LTE SL. Thus, nID in NR SL needs to be a value equal to or greater than 1024 (2^10), i.e. greater than 1023, which is currently used in NR Uu. In Equation 1, a value equal to or less than 2^15 - 1 needs to be used. That is, some values of {1024, ..., 32767} may be (pre)configured or predefined for nID in NR SL. In this case, UEs may assume that an nID value used in NR SL is not used in NR Uu. In another example, like in LTE SL, a specific value (e.g., 1030) may be fixed as an initial value in NR SL. In another example, in NR Uu, a specific value (some values of {1008, ..., 1023}) of nID may be used for SL. In this case, UEs may assume that nID used in NR SL is not used in NR Uu. In another example, an ID (15 bits) used in NR Uu may be divided and used in NR sidelink, and a value except for the corresponding ID may be used in NR sidelink. For example, {0, ..., 16383} that is half of the IDs of {0, ..., 32767} may be used in NR Uu, and {16383, ..., 32767} that is the other half may be used in NR sidelink. Alternatively, X % of the IDs of {0, ..., 32767} may be used in NR Uu, and (100-X) % thereof may be used in NR sidelink. In another example, {0, ..., 32767} may be used in NR Uu, and {32767, ..., 65536 + 2^(the sum of bits used in the destination group ID and the destination ID)} may be used in NR sidelink. When sensing operation is considered in the case of a PSCCH, all UEs need to perform decoding, and thus an nID value may be fixed for each UE or may be (pre)configured for each resource pool. In the case of a PSSCH, all UEs may not need to perform decoding, and only a UE based on the corresponding ID (e.g. a destination ID) may perform decoding. In Equations 1 and 2, an nRNTI value may be 1) a Destination ID, 2) a concatenation of the Source ID and the Destination ID, 3) a CRC value of the corresponding SCI, or the like.
In this case, truncation may be required when the CRC is used, and zero padding may be required when a single ID is used. Alternatively, considering that a PSSCH resource is implicitly linked to the PSCCH, the n_ID value used in the PSCCH may be inherited. Alternatively, when an n_ID value is configured using a source ID, it may be assumed that the n_ID value used in NR SL is not used in NR Uu, to distinguish between NR Uu and NR SL as described above. Like in the above case, an initialization method also needs to be considered when a CSI-RS sequence is generated in NR SL. Currently, in an NR system, a CSI-RS sequence may be initialized using Equation 3 below (TS 38.211).
c_init = (2^10 · (N_symb^slot · n_s,f^μ + l + 1) · (2 · n_ID + 1) + n_ID) mod 2^31   [Equation 3]
In the above equation, n_ID may be a higher-layer signaled value (scramblingID or sequenceGenerationConfig) and may have a value of {0, 1, . . . , 1023}. First, 1) n_ID may be configured based on a source ID, and the following combinations may be considered. 2) n_ID may be configured using a Source ID and a destination ID LSB. Alternatively, 3) the ratio of the bits of the source ID and the destination ID LSB may be adjusted while the sum of the bits of the source ID and the destination ID LSB is maintained constant. Hereinafter, embodiments of the above 1), 2), and 3) will be described:
Source ID 8 bits
Source ID 8 bits + Destination ID LSB 2 bits
Source ID LSB X bits + Destination ID LSB Y bits (X + Y = 10 bits)
When an NR SL CSI-RS sequence is generated, randomization with the NR Uu CSI-RS and randomization between NR SL UEs may be required. Thus, when the NR SL CSI-RS sequence is generated, an n_ID value may be (pre)configured or predefined. For example, a value greater than 1023 may be (pre)configured or predefined as the n_ID value for randomization with NR Uu. In consideration of sequence randomization between CSI-RS transmissions of NR SL UEs, the CRC values of a PSCCH related to different PSSCH transmissions that partially or completely overlap in the same resource may be used to derive the n_ID value. That is, in order to derive the n_ID value, 1) the Destination ID, 2) a concatenation of the Source ID and the Destination ID, 3) the CRC value of the corresponding SCI, or the like may be used. In this case, truncation may be required when the CRC is used, and zero padding may be required when a single ID is used. For example, (the aforementioned) method for sequence generation may also be applied in the same or a similar way in order to generate a sequence of a PT-RS. That is, for example, for sequence generation and/or randomization, (during sequence generation, some) parameters may be (pre)configured or predefined. The CRC values of a PSCCH related to different PSSCH transmissions that partially or completely overlap in the same resource may be used for sequence randomization. That is, for PT-RS sequence generation and/or randomization, 1) the Destination ID, 2) a concatenation of the Source ID and the Destination ID, 3) the CRC value of the corresponding SCI, or the like may be used. In this case, truncation may be required when the CRC is used, and zero padding may be required when a single ID is used.
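A hedged sketch of the CSI-RS initialization in Equation 3, combined with option 2) above (an n_ID built from the source ID and the destination ID LSB), is given below. The 14-symbol slot assumes normal CP, and the helper names and the 8-bit/2-bit split are assumptions used only to make the combination concrete.

```python
# Sketch of Equation 3 together with option 2) above: n_ID is formed by concatenating
# LSBs of the source ID and the destination ID. The 8/2 bit split, the normal-CP slot
# length (14 symbols), and all names are illustrative assumptions.

def sl_csi_rs_n_id(source_id: int, dest_id: int, src_bits: int = 8, dst_bits: int = 2) -> int:
    """Concatenate src_bits LSB of the source ID with dst_bits LSB of the destination ID."""
    return ((source_id & ((1 << src_bits) - 1)) << dst_bits) | (dest_id & ((1 << dst_bits) - 1))

def csi_rs_c_init(slot_number: int, symbol_l: int, n_id: int, n_symb_slot: int = 14) -> int:
    """Equation 3: c_init = (2^10 * (N_symb_slot * n_s,f + l + 1) * (2*n_ID + 1) + n_ID) mod 2^31."""
    return (2**10 * (n_symb_slot * slot_number + symbol_l + 1) * (2 * n_id + 1) + n_id) % 2**31

print(csi_rs_c_init(slot_number=3, symbol_l=5, n_id=sl_csi_rs_n_id(0xA7, 0x2)))
```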
In this case, among the conditions used to derive the aforementioned n_RNTI and n_ID values,
1) the Destination ID,
2) a concatenation of the Source ID and the Destination ID, and
3) the CRC value of the corresponding SCI
may be interpreted extensively as the use of only destination ID information or source ID information for generation and/or randomization of a sequence (e.g., a PSCCH/PSSCH DMRS, PSCCH/PSSCH scrambling, a CSI-RS, and/or a PT-RS), the use of a combination of the source ID and the destination ID, and/or the use of some bits of the CRC (e.g., some bits of the LSB (e.g., 2 bits), or some bits of the LSB except for bits used for randomization of other sequences (e.g., a PSCCH/PSSCH DMRS, PSCCH/PSSCH scrambling, a CSI-RS, and/or a PT-RS)). As an example of (the aforementioned) PSCCH/PSSCH scrambling sequence generation, c_init = n_RNTI · 2^15 + n_ID may be used for initialization when the corresponding sequence is generated. In the case of a PSCCH in the above equation, (since all UEs need to perform decoding) the n_RNTI value may be fixed to one value (e.g., 0). Alternatively, the n_RNTI value may be (pre)configured. Alternatively, the n_RNTI value may be configured based on a PSCCH OCC index value. For example, some bits (e.g., 2, 3, or 4 bits) may be configured based on the PSCCH OCC index, and the other bits may be (pre)configured. In another example, one value may be selected among 16-bit n_RNTI value candidates that are (pre)configured based on the PSCCH OCC index. In the case of n_ID, one value of {1008, . . . , 32767} (or {1024, . . . , 32767}) may be differently (or independently) configured specifically to a resource pool (a service type/priority, a (service) QoS parameter (e.g., reliability or latency), MCS, a UE (absolute or relative) speed, a sub-channel size, and/or a scheduled frequency resource domain size) (by a network/BS). In the case of a PSSCH in the above equation, in order to derive the n_RNTI value, only destination ID information or source ID information may be used, a combination of the source ID and the destination ID may be used, some bits of the PSCCH CRC (e.g., the 16-bit LSB) may be used, a concatenation value (e.g., 1st SCI CRC 8-bit LSB + 2nd SCI CRC 8-bit LSB) of some bits of the PSCCH (1st SCI) CRC and the 2nd SCI CRC may be used, and/or an XOR value of some bits of the 1st SCI CRC and the 2nd SCI CRC may be used. Alternatively, the n_RNTI value may be (pre)configured. Alternatively, the n_RNTI value may be configured based on a PSCCH OCC index value. For example, based on the PSCCH OCC index, some bit values (e.g., 2, 3, or 4 bits) may be configured, and the other bits may be (pre)configured. In another example, one value may be selected among 16-bit n_RNTI value candidates that are (pre)configured based on the PSCCH OCC index. In the case of n_ID, one value of {1008, . . . , 32767} (or {1024, . . . , 32767}) may be differently (or independently) configured specifically to a resource pool (a service type/priority, a (service) QoS parameter (e.g., reliability or latency), MCS, a UE (absolute or relative) speed, a sub-channel size, and/or a scheduled frequency resource domain size) (by a network/BS). In the case of the 2nd SCI in the above equation, in order to derive the n_RNTI value, some bits of the PSCCH (1st SCI) CRC (e.g., the 16-bit LSB) may be used. Alternatively, the n_RNTI value may be (pre)configured. Alternatively, the n_RNTI value may be configured based on a PSCCH OCC index value. For example, one value may be selected among 16-bit n_RNTI value candidates that are (pre)configured based on the PSCCH OCC index.
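The following Python sketch illustrates the three n_RNTI derivation options listed above (a single destination ID, a source/destination concatenation, or the CRC of the corresponding SCI), including the truncation and zero-padding remarks. The 16-bit n_RNTI width and the 8-bit source-ID field width are assumptions chosen only to make the example concrete.

```python
# Hedged sketch of the n_RNTI derivation options above. Bit widths (16-bit n_RNTI,
# 8-bit source-ID field) are assumptions for illustration, not specified values.

def n_rnti_from_single_id(dest_id: int, width: int = 16) -> int:
    # A single ID shorter than the n_RNTI width is effectively zero-padded;
    # a longer one is truncated to the width.
    return dest_id & ((1 << width) - 1)

def n_rnti_from_concat(source_id: int, dest_id: int, src_bits: int = 8, width: int = 16) -> int:
    # Concatenation of the source ID and the destination ID, truncated to the n_RNTI width.
    dst_bits = width - src_bits
    return (((source_id & ((1 << src_bits) - 1)) << dst_bits)
            | (dest_id & ((1 << dst_bits) - 1)))

def n_rnti_from_sci_crc(sci_crc: int, width: int = 16) -> int:
    # CRC of the corresponding SCI, truncated so that only the LSBs are kept.
    return sci_crc & ((1 << width) - 1)

print(n_rnti_from_concat(source_id=0xA7, dest_id=0x3C))
```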
In the above description, the n_RNTI value linked by the PSSCH and/or the PSCCH OCC index applied to the 2nd SCI may be configured differently from the n_RNTI value linked by the PSCCH OCC index applied to the PSCCH. In the above description, in the case of a PSCCH, a PSSCH, and/or the 2nd SCI, configuring the derivation of n_RNTI based on the PSCCH OCC index may be interpreted extensively as a method of replacing n_RNTI with a PSCCH OCC index (e.g., 2, 3, or 4 bits) (e.g., a form in which the number of n_RNTI bits is reduced) or a method of filling the remaining bits with a predefined/fixed specific value (e.g., 0) other than the bits derived from the PSCCH OCC index (e.g., 2, 3, or 4 bits) (i.e., a method in which the remaining bits are not (pre)configured). In the present disclosure, as an example of PSCCH scrambling sequence generation, c_init = n_RNTI · 2^16 + n_ID may be used for initialization when the corresponding sequence is generated. When the PSSCH scrambling sequence is configured, the n_ID value may be derived using, for example, some bits (e.g., the 16-bit LSB) of the PSCCH CRC, and/or the n_RNTI value may be fixed (or preconfigured) to one value (e.g., 0). In another example, the n_ID value may be the same as the n_ID value configured when the PSCCH scrambling sequence is generated. According to the present disclosure, the PSSCH may be interpreted extensively as the 2nd SCI or the SL-SCH. As an example of (the aforementioned) PSCCH DMRS sequence generation, c_init = (2^17 · (N_symb^slot · n_s,f^μ + l + 1) · (2 · N_ID + 1) + 2 · N_ID + n_SCID) mod 2^31 may be used for initialization when the corresponding sequence is generated. One value of {0, 1} may be selected as n_SCID in the above equation by a TX UE. In the case of N_ID, one value of {1008, . . . , 65535} may be differently (or independently) configured specifically to a resource pool (a service type/priority, a (service) QoS parameter (e.g., reliability or latency), MCS, a UE (absolute or relative) speed, a sub-channel size, and/or a scheduled frequency resource domain size) (by a network/BS). In the above example, n_SCID is assumed to use 1 bit, but more bits (e.g., 2 bits or more) may be used. In this case, the range of the value to be selected as the N_ID value may also be changed depending on the number of bits used in n_SCID. (For example, when n_SCID uses 2 bits, one value of {0, 1, 2, 3} may be selected as n_SCID by a TX UE, and in the case of N_ID, one value of {1008, . . . , 32767} may be differently (or independently) configured specifically to a resource pool (a service type/priority, a (service) QoS parameter (e.g., reliability or latency), MCS, a UE (absolute or relative) speed, a sub-channel size, and/or a scheduled frequency resource domain size) (by a network/BS).) In this case, the c_init value also needs to be changed depending on the value (or value range) to be selected as n_SCID or N_ID. For example, in the case of (the aforementioned) n_SCID ∈ {0, 1, 2, 3} and N_ID ∈ {1008, . . . , 32767}, c_init = (2^17 · (N_symb^slot · n_s,f^μ + l + 1) · (2 · N_ID + 1) + 2^2 · N_ID + n_SCID) mod 2^31 may be satisfied. As an example of (the aforementioned) PSSCH DMRS sequence generation, c_init = (2^17 · (N_symb^slot · n_s,f^μ + l + 1) · (2 · N_ID^(n_SCID) + 1) + 2 · N_ID^(n_SCID) + n_SCID) mod 2^31 may be used for initialization when the corresponding sequence is generated. In order to derive one value of {0, 1}, n_SCID in the above equation may use some bits (e.g., the 1-bit LSB) of the PSCCH CRC. In order to derive one value of {1008, . . . , 65535}, N_ID^(0) and N_ID^(1) may use some bits (e.g., the 14-bit LSB after the 1-bit LSB used for n_SCID) of the PSCCH CRC.
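A rough Python sketch of the PSSCH DMRS initialization above is shown below, where n_SCID and N_ID^(n_SCID) are both taken from LSBs of the PSCCH CRC (1 bit for n_SCID and the following bits for N_ID). The exact bit split, the 14-symbol slot (normal CP), and the function name are assumptions for illustration only.

```python
# Sketch of the PSSCH DMRS c_init above, with n_SCID and N_ID derived from PSCCH CRC
# LSBs. The 1-bit/14-bit split and the normal-CP slot length are illustrative assumptions.

def pssch_dmrs_c_init(slot_number: int, symbol_l: int, pscch_crc: int,
                      n_symb_slot: int = 14, n_scid_bits: int = 1, n_id_bits: int = 14) -> int:
    n_scid = pscch_crc & ((1 << n_scid_bits) - 1)            # LSB(s) of the PSCCH CRC
    n_id = (pscch_crc >> n_scid_bits) & ((1 << n_id_bits) - 1)  # next bits of the CRC
    return (2**17 * (n_symb_slot * slot_number + symbol_l + 1) * (2 * n_id + 1)
            + 2 * n_id + n_scid) % 2**31

print(pssch_dmrs_c_init(slot_number=7, symbol_l=4, pscch_crc=0x3A7F1))
```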
In the above example, nSCIDis assumed to use 1 bit, but more bits (e.g., 2 bits or more) may be used. In this case, a range of a value to be selected as an NIDnSCIDvalue may also be changed depending on a bit number used in nSCID. (For example, when nSCIDuses 2 bits, nSCIDmay use some bits (e.g., 2 bit LSB) of PSCCH CRC in order to derive one value of {0, 1, 2, 3}. NID0and NID1may use some bits (e.g., after 2 bit LSB used in nSCID) of PSCCH CRC in order to derive one value of {1008, . . . , 32767}. In this case, a cinitvalue needs to be also changed depending on a value (or a value range) to be selected as nSCIDor NIDnSCID. The (aforementioned) term (PSCCH) CRC used during PSSCH scrambling and/or PSSCH DMRS (base) sequence generation and/or used during SL-CSI-RS (base) sequence and/or PT-RS (base) sequence generation may be interpreted extensively by 2ndSCI CRC (a combination of 1st SCI CRC, L1-destination ID, L1-source ID, 1stSCI CRC, 2ndSCI CRC, L1-destination ID, and/or L1-source ID). 1.1.1. Bandwidth Part To avoid RF switching delay, it is assumed that the numerology of configured SL BWP is the same as that of active UL BWP in the same carrier at a given time. Next, it can be further considered that RF retuning is not needed to switch between active UL BWP and configured SL BWP. In other words, it can be considered that UE's RF setting covers both active UL BWP and configured SL BWP. In this case even though SL BWP and active UL BWP have different center frequency of BWP and BWP size, UE may not apply the switching delay. Meanwhile, in NR Uu link, for the uplink, the higher-layer parameter txDirectCurrentLocation indicates the location of the transmitter DC subcarrier in the uplink for a bandwidth part, including whether the DC subcarrier location is offset by 7.5 kHz relative to the center of the indicated subcarrier or not. Considering that UE's RF setting covers both active UL BWP and configured SL BWP, the DC subcarrier location for the sidelink needs to be the same as that of the uplink. On the other hand, for out-of-coverage UE or idle UE, the DC subcarrier location for the sidelink could be (pre)configured per SL BWP. To avoid RF switching delay, the UE expects the same location of DC subcarrier between UL BWP and SL BWP in a given time. In this case, SL BWP and UL BWP have different (or same) RF bandwidth, and SL BWP and UL BWP may be set at different locations within the different (or same) RF bandwidth. On the other hand, when the RF bandwidth and the location of DC subcarrier of UL BWP is determined, it can be considered UE expects that the configured SL BWP is deactivated if the location of DC subcarrier of SL BWP has different location with the configured location of DC subcarrier of UL BWP. Proposal 1: TX DC Subcarrier in the Sidelink is (Pre)Configured Per SL BWP Proposal 3: UE expects to use a same DC subcarrier location in the SL BWP and in an active UL BWP in a same carrier of a same cell. If the DC subcarrier location of the active UL BWP is different than the DC subcarrier location of the SL BWP, the SL BWP is deactivated. Regarding active DL BWP, for paired spectrum, it can be taken into account that separate RF chains between active DL BWP and configured SL BWP as in LTE V2X. On the other hand, for unpaired spectrum, it can be assumed that UE's RF setting covers both active DL BWP and configured SL BWP together with active UL BWP. 
Note that, in NR Uu link, for unpaired spectrum, UE expects that the center frequency of active DL BWP is aligned with that of active UL BWP and the same numerology is used for the active DL BWP and the active UL BWP. 1.1.2. Resource Pool In RAN1 #98 bis meeting [1] and RAN1 #99 meeting [2], followings are agreed for resource pool in time domain: Agreements: A slot is the time-domain granularity for resource pool configuration.To down-select:Alt 1. Slots for a resource pool is (pre-)configured with bitmap, which is applied with periodicityAlt 2. Slots for a resource pool is (pre-)configured, wherein the slots are applied with periodicity.FFS: signaling detailsFFS: how to apply the above bitmap signaling, For example, to all slots or only to a set of slotsFFS: symbols for sidelink in the slot, how to indicate for the case when not all symbols are for SL Agreements: For Rel-16, (Normal CP)Support 7, 8, 9, . . . , 14 symbols in a slot without SL-SSB for SL operationTarget reusing Uu DM-RS patterns for each of the symbol-length, with modifications as necessaryNo other additional spec impact is expected for supporting 7, 8, . . . , 13# of DM-RS symbols2, 3, 4For a dedicated carrier, only 14-symbol is mandatory There is a single (pre-)configured length of SL symbols in a slot without SL-SSB per SL BWP. There is a single (pre-)configured starting symbol for SL in a slot without SL-SSB per SL BWP Agreements: NR supports SL transmissions at least in cell-specific UL resources in Uu. When a UE is in-coverage, cell-specific UL resources will be indicated by higher layer parameter TDD-UL-DL-ConfigCommon. For out-of-coverage UE, PSBCH transmitted by another UE will indicate information about reference sidelink resources which can be potentially used for NR sidelink transmission. Due to the signaling overhead of PSBCH, a single pattern indicating the number of UL slots will be included in PSBCH contents while TDD-UL-DL-ConfigCommon could have two patterns indicating the number of UL slots and the number of UL symbols. In other words, UL resources indicated by PSBCH could be different from TDD-UL-DL-ConfigCommon as shown inFIG.11A. When the TX UE and RX UE have the different understanding on the cell-specific UL resources or reference SL resources, resource reservation or PSFCH transmission timing would not work properly. In this case, even for the in-coverage UE, it would be necessary that higher layer indicate reference SL resources whose value is the same as reference SL resources indicated by the PBSCH. Next, Depending on the TDD-UL-DL-ConfigCommon, all the symbols in a slot could be cell-specific UL resources, or a subset of symbols in a slow could be cell-specific UL resources. Meanwhile, a UE can be provided a number of symbols in a slot, by lengthSLsymbols, starting from symbol with index of statSLsymbols for NR sidelink. In this case, for all the slots indicated by the reference SL resource configuration, lengthSLsymbols symbols from startSLsymbols of a slot are cell-specific UL resources. Proposal 4: A UE is Configured with Reference SL Resources Via Higher Layer Signaling.Reference SL resource configuration consists of following parameters:P: Periodicity of SL reference slot patternN_refSL: Number of consecutive SL reference slots with a periodUE assumes that the last N_refSL slots with the period are reference SL resource. Considering resource usage flexibility, it can be considered to use bitmap is applied to the reference SL resources to indicate SL resource pool in time domain. 
To reduce signaling overhead, a bitmap with a small size compared to the total number of slots for the reference SL resources could be applied periodically. Next, it needs to be considered that multiplexing the S-SSB with other SL channels in a slot will not be supported, since the S-SSB will occupy all the symbols in a slot. In addition, since the symbol duration of the S-SSB could be different from that of other SL channels, FDM between the S-SSB and other SL channels can cause an additional AGC period or a TX power change in a slot. From those points of view, slots available for the S-SSB would not be included in the SL resource pool in the time domain. In our view, as in LTE V2X, slots for the S-SSB can be excluded from the reference SL resources before applying the bitmap with a certain period. However, in this case, it is necessary to determine how to handle the case where the total number of slots of the reference SL resources excluding S-SSB slots within a system frame is not a multiple of the bitmap size. If the LTE principle is reused, the concept of a reserved slot can be used to resolve this issue. To be specific, among the slots of the reference SL resources excluding S-SSB slots, there can be a number of slots to which the bitmap cannot be applied, and these slots are evenly distributed over the reference SL resources excluding S-SSB slots. The bitmap will be applied to the remaining slots of the reference SL resources to indicate the SL resource pool in the time domain.
Proposal 5: The set of slots for the SL resource pool in the time domain is given by the following steps:
Step 1: The set of slots is given by the reference SL resource configuration.
Step 2: Slots configured for the S-SSB are excluded from the set in Step 1.
Step 3: Reserved slots to be excluded from the remaining set in Step 2 are determined by the following steps:
Step 3-1: Slots in the set in Step 2 are denoted by (l_0, l_1, . . . , l_(NrefSL − NS-SSB − 1)), arranged in increasing order of slot index, where NrefSL is the number of slots indicated by the reference SL resources within a radio frame and NS-SSB is the number of slots in which the S-SSB is configured within a radio frame.
Step 3-2: A slot l_r belongs to the reserved slots if r = ⌊m · (NrefSL − NS-SSB)/Nreserved⌋, where m = 0, 1, . . . , Nreserved − 1 and Nreserved = (NrefSL − NS-SSB) mod Lbitmap. Lbitmap is the length of the bitmap configured by higher layers.
Step 4: The UE determines the set of slots assigned to an SL resource pool as follows:
A bitmap (b_0, b_1, . . . , b_(Lbitmap − 1)) associated with the resource pool is used.
A slot t_k^SL in the remaining set in Step 3 belongs to the SL resource pool if b_k′ = 1, where k′ = k mod Lbitmap.
Alternatively, the bitmap can be applied to the slots indicated by the reference SL resources, and then the S-SSB slots are excluded from the set of slots indicated by the bitmap to determine the set of slots for the SL resource pool. Regarding the SL resource pool configuration for the frequency domain resource, it is necessary to clarify how to interpret the higher layer parameter startRB-Subchannel. To be specific, the reference point of the starting RB index for the SL resource pool in the frequency domain needs to be defined explicitly. Considering that the SL resource pool shall be confined within a configured SL BWP, it seems straightforward that the starting RB index is with respect to the lowest RB of the SL BWP.
Proposal 6: Higher Layer Parameter startRB-Subchannel is Defined as the Lowest RB Index of the Sub-Channel with the Lowest Index in the Resource Pool with Respect to the Lowest RB Index of the SL BWP.
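For illustration, the following Python sketch walks through the slot-set construction of Proposal 5 above under simplifying assumptions: slots are plain indices within one period, and the reference SL slots, S-SSB slots, and bitmap are toy values rather than (pre)configured parameters; the function name is hypothetical.

```python
# Toy sketch of Proposal 5: build the SL resource-pool slot set from reference SL slots,
# S-SSB slots, and a bitmap. All inputs in the example are illustrative.

def sl_resource_pool_slots(ref_sl_slots, s_ssb_slots, bitmap):
    # Step 2: remove S-SSB slots from the reference SL slots.
    candidates = sorted(set(ref_sl_slots) - set(s_ssb_slots))
    n, l_bitmap = len(candidates), len(bitmap)
    # Step 3: remove N_reserved = n mod L_bitmap reserved slots, evenly distributed
    # at indices r = floor(m * n / N_reserved), m = 0, ..., N_reserved - 1.
    n_reserved = n % l_bitmap
    reserved = {candidates[m * n // n_reserved] for m in range(n_reserved)}
    remaining = [s for s in candidates if s not in reserved]
    # Step 4: slot k of the remaining set belongs to the pool if bitmap[k mod L_bitmap] == 1.
    return [s for k, s in enumerate(remaining) if bitmap[k % l_bitmap] == 1]

print(sl_resource_pool_slots(range(20), {0, 10}, [1, 1, 0, 1]))
```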
According to TS 38.101, there are cases where the number of PRBs within a channel bandwidth is 11, 18, or 24. For instance, for an SCS of 30 kHz, when the channel bandwidth is 5 MHz, the number of PRBs will be 11. In those cases, at this moment, considering that the minimum sub-channel size is 10 PRBs, there is only one sub-channel within a resource pool and the remaining PRBs would be wasted. Alternatively, it can be considered that some portion of the sub-channels in a resource pool could have a larger size than the configured sub-channel size to utilize resources efficiently without orphan resources. For instance, for a channel bandwidth of 24 PRBs, the first sub-channel size could be 14 PRBs while the remaining sub-channel has a size of 10 PRBs.
Proposal 7: Support the Case where the Number of PRBs for a Resource Pool is not a Multiple of the Configured Sub-Channel Size.
The size of the lowest sub-channel in a resource pool is determined by (total number of PRBs for a resource pool − configured sub-channel size × (number of sub-channels in a resource pool − 1)).
The size of the remaining sub-channels is the configured sub-channel size.
1.1.3. TBS Determination
In the NR Uu link, since the symbol duration of the PDSCH/PUSCH can be dynamically changed, formula-based TB size determination is supported. In this case, one of the design principles is ensuring that the same TBS can be obtained for an initial transmission and a re-transmission with the same/different number of PRBs or the same/different number of symbols in some cases. In this case, the UE can derive the TBS even though the UE successfully decodes only the DCI scheduling the retransmission. Regarding the formula for TBS determination, the intermediate information bit size is derived from the coding rate and modulation order given by the MCS, the number of layers, the reference number of REs per RB for data mapping, and the number of PRBs. When the number of REs per RB is counted, the symbol duration of the PDSCH or PUSCH and the DMRS overhead are considered. In addition, the remaining overhead is treated by a single RRC-configured parameter. In other words, even though the PDSCH resource can be partially overlapped with other channels such as the PDCCH, SSB, CSI-RS, or PT-RS, these overheads are not directly considered since these channels would not always overlap with the PDSCH. Similarly, resources for UCI mapping on the PUSCH are not considered for TBS determination for the PUSCH. On the other hand, considering PSCCH/PSSCH multiplexing Option 3, the PSSCH resource will always overlap with the PSCCH resources. In addition, the PSSCH resource may include an AGC symbol and a TX-RX switching symbol. In this case, if these overheads are not considered for TBS determination for NR sidelink, the derived TBS would be overestimated. Alternatively, it can be considered that the TX UE intentionally decreases the MCS value. However, in this case, higher MCS values would not be used frequently. Moreover, the symbol duration of the PSSCH can be changed, but it will not be controlled by SCI. To be specific, depending on the PSFCH resource period, some slots will contain PSFCH resources, and other slots will not contain PSFCH resources. In a licensed carrier, when UL and SL can be TDMed in a slot, the symbol duration of the PSSCH can be changed depending on the number of symbols available for NR sidelink in a slot.
Since an initial transmission and a retransmission could have different PSSCH symbol durations, it would be necessary to define a reference number of REs which is independent of the actual symbol duration of the PSSCH, to ensure that the same TBS is obtained for the initial transmission and the retransmission. For instance, the symbol duration of a PSSCH transmission in a non-PSFCH slot could be used for TBS determination. In a similar manner, since the PSSCH DMRS pattern would be dynamically changed according to the SCI indication, it would be necessary to define a reference overhead for the PSSCH DMRS. For instance, the number of REs for the PSSCH DMRS per PRB would be determined based on the lowest DMRS density among the (pre)configured DMRS patterns. This would also be beneficial to express the peak data rate. Next, the actual 2nd-stage SCI overhead is derived from the sum of the code block sizes, which is given by the TB size. In other words, if the 2nd-stage SCI overhead is used to derive the TBS, it causes a chicken-and-egg problem. In other words, for TBS determination, the 2nd-stage SCI overhead will not be considered. The upper bound of the number of REs per PRB could be determined by excluding the TX-RX switching period, the 2-symbol PSSCH DMRS overhead, and the AGC symbol overhead. In this case, the upper bound of the reference number of REs for TBS determination would be 132.
Observation 1: In an NR Sidelink Resource, the AGC Symbol and the TX-RX Switching Symbol Need to be Excluded for TBS Determination.
Proposal 9: For TBS Determination, the Following Procedure is Performed:
The UE shall first determine the number of REs within the slot.
A UE first determines the number of REs allocated for the PSSCH within a PRB by N′_RE = N_SC · N_symb − N_DMRS, where
N_SC = 12 is the number of subcarriers in a PRB.
N_symb is the number of symbols of the PSSCH resource allocation within the slot, assuming that the PSFCH is not configured in this slot.
The AGC symbol and the TX-RX switching period are not included in the PSSCH resource allocation within the slot.
N_DMRS is the number of REs for the DM-RS per PRB in the PSSCH resource allocation, assuming that the PSFCH is not configured in this slot, which corresponds to the lowest DMRS density among the (pre)configured DM-RS candidate pattern(s).
N_RE = N′_RE · n_PRB − N_PSCCH, where
N_PSCCH is the number of REs for the corresponding PSCCH.
The intermediate number of information bits (N_info) is obtained by N_info = N_RE · R · Q_m · v, where
R is the coding rate given by the MCS field.
Q_m is the modulation order given by the MCS field.
v is the number of layers.
1.1.4. SCI Design
The size variation of the 2nd-stage SCI can have an impact on UE complexity. To be specific, when the size of the 2nd-stage SCI varies slot by slot, the UE needs to be ready with multiple polar decoders of different sizes. In the NR Uu link, considering UE complexity, the number of DCI format sizes for a UE is limited in a semi-static manner. The total number of different DCI format sizes is currently 4, and the total number of different DCI format sizes scrambled with the C-RNTI is 3. This kind of restriction is called the DCI format size budget. In a similar manner, if there are too many possible sizes of the 2nd-stage SCI, it may not be feasible for UE implementation. Instead, it would be necessary to perform size fitting for the 2nd-stage SCI considering UE complexity. In other words, a number of different 2nd-stage SCI candidates could have the same payload size with different contents.
Observation 2: It can be Considered to Restrict the Number of Sizes of the 2nd-Stage SCI Considering UE Complexity.
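Returning to the TBS procedure of Proposal 9 above, the following Python sketch shows the intermediate-information-bit computation. The DMRS, AGC/TX-RX switching, and PSCCH overheads are passed in as plain numbers; how they are obtained (lowest (pre)configured DMRS density, non-PSFCH-slot reference duration, and so on) is outside the sketch, and the function name and example values are assumptions for illustration.

```python
# Rough sketch of the N_info computation in Proposal 9. Example values are illustrative.

def pssch_n_info(n_symb: int, n_dmrs_per_prb: int, n_prb: int, n_pscch_re: int,
                 coding_rate: float, q_m: int, layers: int) -> float:
    n_re_per_prb = 12 * n_symb - n_dmrs_per_prb      # N'_RE = N_SC * N_symb - N_DMRS
    n_re = n_re_per_prb * n_prb - n_pscch_re         # N_RE  = N'_RE * n_PRB - N_PSCCH
    return n_re * coding_rate * q_m * layers         # N_info = N_RE * R * Q_m * v

# Example: 11 PSSCH symbols (AGC and TX-RX switching already excluded), 12 DMRS REs/PRB,
# 10 PRBs, a 2-symbol/10-PRB PSCCH area, QPSK at coding rate ~0.44, 1 layer.
print(pssch_n_info(n_symb=11, n_dmrs_per_prb=12, n_prb=10,
                   n_pscch_re=2 * 10 * 12, coding_rate=0.438, q_m=2, layers=1))
```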
The SCI fields for broadcast, unicast, and groupcast without the TX-RX distance-based HARQ-ACK feedback operation would be the same except for one or two SCI fields; therefore, it can be considered that a single 2nd-stage SCI format can be used to schedule broadcast, unicast, or groupcast without the TX-RX distance-based HARQ-ACK feedback operation. In this case, another 2nd-stage SCI format conveying the Zone ID field and the Communication range requirement field will be used to schedule groupcast with HARQ feedback Option 1 with the TX-RX distance-based HARQ-ACK feedback operation. Regarding the HARQ feedback Option indicator field, in our view, groupcast with HARQ feedback Option 1 could be used without the TX-RX distance-based HARQ-ACK feedback operation. To be specific, a resource pool would not have a sufficiently large number of PSFCH resources to support groupcast with HARQ feedback Option 2 with an acceptable PSFCH collision probability. Meanwhile, a UE can be provided with an application such as platooning. Another example is that a PSCCH/PSSCH TX UE may not be able to determine its own location for the TX-RX distance-based HARQ-ACK feedback operation. In those cases, it is necessary to support that groupcast with HARQ feedback Option 1 is scheduled by an SCI format without the Zone ID field and the Communication range requirement field. In addition, in RAN1 #98bis, it is agreed as a working assumption that "SCI explicitly indicates whether HARQ feedback is used or not for the corresponding PSSCH transmission". In this case, the SCI format also needs to indicate how the PSCCH/PSSCH RX UE transmits SL HARQ feedback for the PSSCH transmission. In our view, it can be considered to support joint indication of whether and how the RX UE transmits SL HARQ feedback, for SCI overhead saving.
Proposal 11: Support Joint Indication of SL HARQ Feedback Enabling/Disabling and Groupcast HARQ Feedback Option in the 2nd-Stage SCI
Proposal 12: Support the Following 2nd-Stage SCI Formats in Rel-16 NR Sidelink:
SCI format 0_2 (this format is used for all the cast types and groupcast HARQ feedback Options):
HARQ process ID
New data indicator
Redundancy version
Source ID
Destination ID
HARQ feedback indicator
00: no HARQ feedback request
01: HARQ feedback for groupcast Option 2 (ACK/NACK feedback)
10: HARQ feedback for groupcast Option 1 (NACK-only feedback)
11: reserved
CSI request
SCI format 0_3 (this format is used for groupcast with HARQ feedback Option 1 only):
HARQ process ID
New data indicator
Redundancy version
Source ID
Destination ID
Zone ID
Communication range requirement
In this case, the PSSCH for broadcast will be scheduled by SCI format 0_2 with HARQ feedback indicator = 00 and CSI request = 0. For groupcast with HARQ feedback Option 1, the PSSCH will be scheduled by SCI format 0_2 with HARQ feedback indicator = 00 or 10 and CSI request = 0, or by SCI format 0_3. The UE procedure for transmitting Sidelink Control Information needs to be described in the specification as in LTE V2X. For instance, the UE shall set the MCS as indicated by higher layers. A TB can consist of multiple logical channels with different priorities. In this case, the L1-priority field in the SCI will be set based on the highest priority among those priorities. All the logical channels associated with the same TB will have the same cast type, destination ID, and source ID. In this case, the UE behavior according to the cast type, L1-destination ID, and L1-source ID would be set as indicated by higher layers corresponding to the transport block.
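As a small illustration of the HARQ feedback indicator code points proposed for SCI format 0_2 above, the following Python enum simply mirrors the 00/01/10/11 values listed in Proposal 12; the enum and member names are illustrative only.

```python
# Enum mirroring the 2-bit HARQ feedback indicator code points of Proposal 12.
from enum import Enum

class HarqFeedbackIndicator(Enum):
    NO_FEEDBACK = 0b00      # no HARQ feedback request
    GROUPCAST_OPT2 = 0b01   # ACK/NACK feedback (groupcast Option 2)
    GROUPCAST_OPT1 = 0b10   # NACK-only feedback (groupcast Option 1)
    RESERVED = 0b11

# Example: broadcast PSSCH is scheduled with no HARQ feedback requested.
print(HarqFeedbackIndicator.NO_FEEDBACK.value)   # 0
```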
On the TX-RX distance-based HARQ feedback operation, the TX UE's location will be transformed into a Zone ID in higher layers, and the higher layer will give the MCR to the physical layer for a TB, as agreed in RAN2 #108. In addition, as in the agreement made in email discussion [3], the UE shall randomly select one of the frequency-domain OCCs for the PSCCH DMRS.
Proposal 14: Capture "the UE Shall Randomly Select the OCC Index n_OCC in Each PSCCH Transmission" in TS 38.213 According to the Following Agreement.
Agreements:
NR PDCCH DMRS sequence is the baseline for PSCCH DMRS sequence at least with the following modification.
n_ID is determined by a (pre-)configured value per resource pool.
Frequency-domain OCC is applied, one of the [2 or 3 or 4] OCCs is randomly selected by the Tx UE. Note: there is no (pre-)configuration on the number of OCCs.
PSCCH Design
In V2X, an OCC of length 4 is used for the PSCCH DMRS. However, since the candidate numbers of PRBs for the PSCCH are {10, 12, 15, 20, 25}, the OCC would be applied across different PRBs if the number of PRBs for the PSCCH is 10, 15, or 25. For example, when the number of PRBs for the PSCCH is 10 and the orthogonal cover code is applied to every four REs for the PSCCH DMRS in a symbol from the lowest subcarrier index, an OCC of length 4 cannot be used for the PSCCH DMRS. Thus, the candidate numbers {10, 15, 25} for the number of PRBs for the PSCCH are replaced with {8, 16, 24}.
Proposal 16: Support Frequency-Domain OCC with Length 4 for the PSCCH DMRS Sequence.
The orthogonal cover code with length 4 is defined inFIG.11B.
The orthogonal cover code is applied to every four REs for the PSCCH DMRS in a symbol from the lowest subcarrier index.
The candidate numbers {10, 15, 25} for the number of PRBs for the PSCCH are replaced with {8, 16, 24}.
Regarding PSCCH DMRS sequence generation, according to the agreement made in email discussion [3], n_ID is determined by a (pre-)configured value per resource pool.
Proposal 17: Capture "n_ID is determined by a (pre-)configured value per resource pool." in TS 38.211 for the random seed of PSCCH DMRS sequence generation according to the following agreement.
Agreements:
NR PDCCH DMRS sequence is the baseline for PSCCH DMRS sequence at least with the following modification.
n_ID is determined by a (pre-)configured value per resource pool.
Frequency-domain OCC is applied, one of the [2 or 3 or 4] OCCs is randomly selected by the Tx UE. Note: there is no (pre-)configuration on the number of OCCs.
In a similar manner, the PSCCH scrambling sequence can be designed considering the sequence randomization between the NR Uu link and NR sidelink, and all the UEs can decode the SCI conveyed on the PSCCH at least for the sensing operation. In addition, it can be considered that n_RNTI is replaced with the PSCCH DMRS OCC index for the PSCCH scrambling sequence.
Proposal 18: PSCCH Scrambling Sequence Generation is Initialized with c_init = n_RNTI · 2^16 + n_ID
n_ID ∈ {0, 1, . . . , 65535} is (pre)configured per resource pool.
n_RNTI = 0
Regarding precoding for the PSSCH, according to the agreement made in email discussion [4], for Rel-16 NR sidelink, only wideband precoding is assumed for PSSCH transmission, and it is noted that this implies that a PRG size equal to the scheduled PSSCH BW is assumed in Rel-16. In a similar manner, only wideband precoding is assumed for the PSCCH to take advantage of PSCCH coverage.
Proposal 20: Precoder Granularity of the PSCCH is the Same as the Number of PRBs for the PSCCH.
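The following Python sketch combines the two proposals above: the PSCCH scrambling initialization of Proposal 18 and the application of a length-4 frequency-domain OCC to PSCCH DMRS REs from Proposal 16. The OCC codeword used in the example and the function names are placeholders for illustration; the actual length-4 codes are those defined in the referenced figure.

```python
# Sketch of Proposal 18 (PSCCH scrambling init) and Proposal 16 (length-4 OCC applied to
# every group of four PSCCH DMRS REs). The OCC codeword below is a placeholder example.

def pscch_scrambling_c_init(n_id_pool: int, n_rnti: int = 0) -> int:
    """Proposal 18: c_init = n_RNTI * 2^16 + n_ID, with n_RNTI = 0 and n_ID per resource pool."""
    return n_rnti * 2**16 + n_id_pool

def apply_occ4(dmrs_res, occ):
    """Apply a length-4 OCC to every group of four DMRS REs, from the lowest subcarrier index."""
    assert len(occ) == 4 and len(dmrs_res) % 4 == 0
    return [x * occ[i % 4] for i, x in enumerate(dmrs_res)]

print(pscch_scrambling_c_init(n_id_pool=1030))
print(apply_occ4([1, 1, 1, 1, 1, 1, 1, 1], [1, -1, 1, -1]))
```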
According to the UE procedure related to PSSCH, there are two aspects: one is the UE procedure for transmitting PSSCH, and the other is the UE procedure for receiving PSSCH. On the other hand, in the latest version of the NR specification, it seems that the UE procedure for receiving PSCCH is missing. 1.1.5. PSSCH and PSSCH DMRS Design In NR structure, two DMRS types are supported for PDSCH/PUSCH DMRS. DMRS type 1 targets to cover up roughly 1000 ns delay spread (which cause frequency selectivity). Meanwhile, DMRS type 2 targets to support MU-MIMO and more antenna ports (12 APs). However, in NR V2X structure, the number of antenna ports will be limited (e.g. up to 2), and MU-MIMO is not a main target. Thus, for a carrier with a given numerology, there is no clear motivation/benefit to support multiple DM-RS patterns in frequency domain for PSSCH. Meanwhile, considering that PSCCH resource will be confined within PSSCH resource, for PSSCH DMRS pattern in time-domain design, it is necessary to make a decision on the form of PSCCH especially on symbol duration in advance. Proposal 23: For NR PSSCH DMRS Pattern in Frequency Domain, Support Both DMRS Type 1 and DMRS Type 2, and One of them is (Pre)Configured Per Resource Pool. In Rel-16 NR sidelink, 7, 8, 9, . . . , 14 symbols in a slot without SL_SSB for SL operation is supported with normal CP and only 14-symbol is mandatory for a dedicated carrier. In addition, the position(s) of the PSSCH DMRS symbols is given by the duration of the scheduled resources for transmission of PSSCH (i.e., 1_d=6, 7, 8, 9, . . . , 13 symbols (including AGC symbol)) and the associated PSCCH (i.e., 2 or 3 symbols). In a similar manner, in case of ECP, a clarification is required for PSSCH DMRS pattern in time domain, supported SL symbol duration in a slot, and the supported duration of the PSCCH. Since PSCCH symbol duration is related to the PSCCH coverage, PSCCH symbol duration does not need to vary with ECP or is limited to 2 symbols. In addition, 6, 7, 8, . . . , 12 symbols in a slot without SL-SSB for SL operation is supported with ECP and only 12-symbol is mandatory for a dedicated carrier. Thus, in Rel-16 NR sidelink with ECP, no additional PSSCH DMRS pattern is introduced, and less than 12 of 1_d is used for PSSCH DMRS pattern. In addition, for value of 1_d shorter than 6, 2-DMRS symbols pattern is not supported in NR Uulink. In a similar manner, in case of ECP, for value of 1_d shorter than 6, 2-DRMS symbols pattern is not supported in NR sidelink. Proposal 24: In Rel-16 NR Sidelink with ECP,Support 6, 7, 8, 9, . . . , 12 symbols in a slot without SL-SSB for SL operationFor a dedicated carrier, inly 12-symbol is mandatoryNo additional PSSCH DMRS pattern is introduced According to the agreement, DMRS pattern could be dynamically indicated by SCI. The motivation of the dynamic DMRS pattern is mainly to change DMRS density of PSSCH. In this point of view, it can be considered that dmrs-Additional Position or the target DMRS density is indicated by SCI. Considering signaling overhead, candidates of dmrs-Additional Position to be indicated by SCI can be (pre)configured. In this case, the exact DMRS pattern will be given by dmrs-Additional Position and symbol duration of PSSCH. On the other hand, since the symbol duration of the PSSCH is different in the PSFCH slot and the non-PSFCH slot, the DMRS pattern is also different for the PSFCH slot and non-PSFCH slot. 
Therefore, a parameter indicating a distinction between the PSFCH slot and the non-PSFCH slot is required, and different DMRS pattern candidates can be indicated for the PSFCH slot and the non-PSFCH slot. For example, the number of DMRS symbols in the PSFCH slot is indicated as 3 or 4, and the number of DMRS symbols in the non-PSFCH slot is indicated as 2 or 3. In this case, different DMRS patterns are indicated for the PSFCH slot and the non-PSFCH slot.
Proposal 25: For the NR PSSCH DMRS Pattern in the Time Domain, Candidates of the Number of PSSCH DM-RS Symbols are (Pre)Configured for the PSFCH Slot and the Non-PSFCH Slot Separately, and an SCI Indicates One of the (Pre)Configured Candidates.
For the scrambling sequence design for the PSSCH, the PUSCH scrambling sequence can be a baseline, with consideration of how to handle the case where multiple PSSCH transmissions are fully or partially overlapped in time-and-frequency resources. Furthermore, according to the agreement, the scrambling operation for the 2nd-stage SCI is applied separately from the SL-SCH. The scrambling sequence for the 2nd-stage SCI needs to be independent of the parameters given by the 2nd-stage SCI, while the scrambling sequence for the SL-SCH could use the parameters given by the 2nd-stage SCI, for instance the L1-source ID and/or the L1-destination ID. In such a case, the scrambling sequence for the SL-SCH may need to use the PSCCH CRC again. In the case of PSFCH or PSCCH DMRS sequence generation, multiple seed values for initialization are not needed, considering UE complexity. In a similar manner, for 2nd-stage SCI and SL-SCH scrambling sequence generation, supporting multiple seed values for initialization in the same channel may increase UE complexity.
Proposal 26: 2nd-Stage SCI and SL-SCH Scrambling Sequence Generation is Initialized with c_init = n_RNTI · 2^15 + n_ID
n_ID ∈ {0, 1, . . . , 1023} is (pre)configured per resource pool.
n_RNTI is derived from the 16-bit LSB of the PSCCH CRC for the 2nd-stage SCI and the SL-SCH.
Regarding the MCS table used for PSSCH transmission, at this moment, at least one MCS table is (pre)configured, and the 256QAM MCS table and the low-spectral-efficiency 64QAM MCS table would be optional. Meanwhile, pairs of modulation order and coding rate for MCS indices 0 to 19 in the 256QAM MCS table are already supported by the normal 64QAM MCS table with different MCS indices. Similarly, pairs of modulation order and coding rate for MCS indices 6 to 28 in the low-spectral-efficiency 64QAM MCS table are already supported by the normal 64QAM MCS table with different MCS indices. From the perspective of UE complexity, even though the 256QAM MCS table or the low-spectral-efficiency 64QAM MCS table is (pre)configured before the relevant UE capability is exchanged, the TX UE can transmit the PSSCH, and the RX UE can demodulate and decode the PSSCH by using the (pre)configured MCS table, when the MCS index is selected among the entries supported in the normal 64QAM MCS table. In this case, the only drawback would be low flexibility in the MCS selection. Alternatively, it can be considered that the MCS table can be overwritten by PC5 RRC. However, in this case, during the PC5 RRC (re)configuration period, the TX UE and the RX UE may have different understandings of the MCS table selection, which will cause PSSCH detection performance degradation. To avoid this ambiguity issue, it can be considered that the SCI indicates the MCS table actually used for PSSCH transmission.
Proposal 28: If More than One MCS Table Configuration is Introduced, SCI Indicates the MCS Table Actually Used for PSSCH Transmission.
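A minimal Python sketch of Proposal 26 above is given below: the 2nd-stage SCI and SL-SCH scrambling sequences are initialized with c_init = n_RNTI · 2^15 + n_ID, where n_RNTI is the 16-bit LSB of the PSCCH (1st-stage SCI) CRC and n_ID is (pre)configured per resource pool; the function name and example values are illustrative only.

```python
# Sketch of Proposal 26: 2nd-stage SCI / SL-SCH scrambling initialization, with n_RNTI
# taken as the 16-bit LSB of the PSCCH CRC and n_ID (pre)configured per resource pool.

def second_stage_c_init(pscch_crc: int, n_id_pool: int) -> int:
    n_rnti = pscch_crc & 0xFFFF        # 16-bit LSB of the PSCCH (1st-stage SCI) CRC
    return n_rnti * 2**15 + n_id_pool

print(second_stage_c_init(pscch_crc=0x5A3C7, n_id_pool=17))
```

1.1.6.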
PSFCH Format for SFCI In RAN1 #99 meeting [2], it is agreed that, “The number of cyclic shift pairs used for a PSFCH transmission (denoted by Y) that can be multiplexed in a PRB is (pre-)configured per resource pool among {1, 2, 3, 4, 6}”. Remaining issues is the exact values of cyclic shifts use for a PSFCH transmission. In our view, for a given number of cyclic shift pairs for a PSFCH transmission, it would be beneficial to maximize the distance between different cyclic shifts considering target delay spread value. Proposal 29: Support Cyclic Shift Values for a Given Number of Cyclic Shift Pairs Used for a PSFCH Transmission that can be Multiplexed in a PRBWhen the number of m 0 values is 1,{0, 6}When the number of m 0 values is 2,{0, 6}, {3, 9}When the number of m 0 values is 3,{0, 6}, {2, 8}, {4, 10}When the number of m 0 values is 4,{0, 6}, {2, 8}, {4, 10}, {5, 11}When the number of m 0 values is 6,{0, 6}, {1, 7}, {2, 8}, {3, 9}, {4, 10}, {5, 11} 1.1.7. Sidelink CSI-RS Design It is necessary to ensure that the sidelink CSI-RS is not overlapped with REs used for PSSCH DMRS. In a shared carrier, the symbol duration of PSSCH could be changed slot-by-slot, then the PSSCH DMRS pattern in time domain would be also changed. For some cases, it would be possible that the last symbol index of PSSCH is used for PSSCH DMRS. As described in 2.1.6, the symbol duration of PSSCH would be different for the PSFCH slot and non-PSFCH slot. On the other hand, the sidelink CSI-RS is not FDMed/CDMed with PSSCH DMRS. In those points of views, sidelink CSI-RS symbol position is a slot is configured by PC5-RRC signaling for PSFCH slot and for non-PSFCH slot separately. Proposal 30: Sidelink CSI-RS Symbol Position in a Slot is Configured by PC5-RRC Signaling for PSFCH Slot and for Non-PSFCH Slot Separately. 1.1.8. Sidelink PT-RS Design Regarding physical sequence generation for sidelink PT-RS, in NR Uu link, the sequence of the PUSCH DMRS is copied according to PT-RS RE offset. In a similar, if the PSSCH DMRS is not FDMed with 1st SCI (and sidelink PT-RS is overlapped with 1stSCI in time domain or not), the sequence of the first DMRS position at that subcarrier is used to generate the PT-RS sequence as shown inFIG.12A. However, if the PSSCH DMRS is FDMed with 1stSCI and sidelink PT-RS is overlapped with 1st SCI in time domain as shown inFIG.12B, the sequence of the first DMRS position at the subcarrier is unavailable. In this case, since the last DMRS position of the PSSCH DMRS symbols (given by the duration of the scheduled resources for transmission of PSSCH and the associated PSCCH) at the subcarrier is not always FDMed with 1stSCI as shown inFIG.12C, PT-RS sequence mapped on subcarrier k is the same as PSSCH DMRS sequence mapped on subcarrier k in the last PSSCH DMRS symbol position within a PSSCH symbol duration Proposal 31: For Sidelink PT-RS, PT-RS Sequence Mapped on Subcarrier k is the Same as PSSCH DMRS Sequence Mapped on Subcarrier k in the Last PSSCH DMRS Symbol Position within a PSSCH Symbol Duration Examples of Communication Systems Applicable to the Present Disclosure The various descriptions, functions, procedures, proposals, methods, and/or operational flowcharts of the present disclosure described in this document may be applied to, without being limited to, a variety of fields requiring wireless communication/connection (e.g., 5G) between devices. Hereinafter, a description will be given in more detail with reference to the drawings. 
In the following drawings/description, the same reference symbols may denote the same or corresponding hardware blocks, software blocks, or functional blocks unless described otherwise. FIG.13illustrates a communication system1applied to the present disclosure. Referring toFIG.13, a communication system1applied to the present disclosure includes wireless devices, BSs, and a network. Herein, the wireless devices represent devices performing communication using RAT (e.g., 5G NR or LTE) and may be referred to as communication/radio/5G devices. The wireless devices may include, without being limited to, a robot100a, vehicles100b-1and100b-2, an extended reality (XR) device100c, a hand-held device100d, a home appliance100e, an Internet of things (IoT) device100f, and an artificial intelligence (AI) device/server400. For example, the vehicles may include a vehicle having a wireless communication function, an autonomous driving vehicle, and a vehicle capable of performing communication between vehicles. Herein, the vehicles may include an unmanned aerial vehicle (UAV) (e.g., a drone). The XR device may include an augmented reality (AR)/virtual reality (VR)/mixed reality (MR) device and may be implemented in the form of a head-mounted device (HMD), a head-up display (HUD) mounted in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance device, a digital signage, a vehicle, a robot, etc. The hand-held device may include a smartphone, a smartpad, a wearable device (e.g., a smartwatch or a smartglasses), and a computer (e.g., a notebook). The home appliance may include a TV, a refrigerator, and a washing machine. The IoT device may include a sensor and a smartmeter. For example, the BSs and the network may be implemented as wireless devices and a specific wireless device200amay operate as a BS/network node with respect to other wireless devices. The wireless devices100ato100fmay be connected to the network300via the BSs200. An AI technology may be applied to the wireless devices100ato100fand the wireless devices100ato100fmay be connected to the AI server400via the network300. The network300may be configured using a 3G network, a 4G (e.g., LTE) network, or a 5G (e.g., NR) network. Although the wireless devices100ato100fmay communicate with each other through the BSs200/network300, the wireless devices100ato100fmay perform direct communication (e.g., sidelink communication) with each other without passing through the BSs/network. For example, the vehicles100b-1and100b-2may perform direct communication (e.g. V2V/V2X communication). The IoT device (e.g., a sensor) may perform direct communication with other IoT devices (e.g., sensors) or other wireless devices100ato100f. Wireless communication/connections150a,150b, or150cmay be established between the wireless devices100ato100f/BS200, or BS200/BS200. Herein, the wireless communication/connections may be established through various RATs (e.g., 5G NR) such as UL/DL communication150a, sidelink communication150b(or, D2D communication), or inter BS communication (e.g. relay, integrated access backhaul (IAB)). The wireless devices and the BSs/the wireless devices may transmit/receive radio signals to/from each other through the wireless communication/connections150aand150b. For example, the wireless communication/connections150aand150bmay transmit/receive signals through various physical channels. 
To this end, at least a part of various configuration information configuring processes, various signal processing processes (e.g., channel encoding/decoding, modulation/demodulation, and resource mapping/demapping), and resource allocating processes, for transmitting/receiving radio signals, may be performed based on the various proposals of the present disclosure. Examples of Wireless Devices Applicable to the Present Disclosure FIG.14illustrates wireless devices applicable to the present disclosure. Referring toFIG.14, a first wireless device100and a second wireless device200may transmit radio signals through a variety of RATs (e.g., LTE and NR). Herein, {the first wireless device100and the second wireless device200} may correspond to {the wireless device100xand the BS200} and/or {the wireless device100xand the wireless device100x} ofFIG.13. The first wireless device100may include one or more processors102and one or more memories104and additionally further include one or more transceivers106and/or one or more antennas108. The processor(s)102may control the memory(s)104and/or the transceiver(s)106and may be configured to implement the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document. For example, the processor(s)102may process information within the memory(s)104to generate first information/signals and then transmit radio signals including the first information/signals through the transceiver(s)106. The processor(s)102may receive radio signals including second information/signals through the transceiver106and then store information obtained by processing the second information/signals in the memory(s)104. The memory(s)104may be connected to the processor(s)102and may store a variety of information related to operations of the processor(s)102. For example, the memory(s)104may store software code including commands for performing a part or the entirety of processes controlled by the processor(s)102or for performing the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document. Herein, the processor(s)102and the memory(s)104may be a part of a communication modem/circuit/chip designed to implement RAT (e.g., LTE or NR). The transceiver(s)106may be connected to the processor(s)102and transmit and/or receive radio signals through one or more antennas108. Each of the transceiver(s)106may include a transmitter and/or a receiver. The transceiver(s)106may be interchangeably used with Radio Frequency (RF) unit(s). In the present disclosure, the wireless device may represent a communication modem/circuit/chip. The second wireless device200may include one or more processors202and one or more memories204and additionally further include one or more transceivers206and/or one or more antennas208. The processor(s)202may control the memory(s)204and/or the transceiver(s)206and may be configured to implement the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document. For example, the processor(s)202may process information within the memory(s)204to generate third information/signals and then transmit radio signals including the third information/signals through the transceiver(s)206. The processor(s)202may receive radio signals including fourth information/signals through the transceiver(s)106and then store information obtained by processing the fourth information/signals in the memory(s)204. 
The memory(s)204may be connected to the processor(s)202and may store a variety of information related to operations of the processor(s)202. For example, the memory(s)204may store software code including commands for performing a part or the entirety of processes controlled by the processor(s)202or for performing the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document. Herein, the processor(s)202and the memory(s)204may be a part of a communication modem/circuit/chip designed to implement RAT (e.g., LTE or NR). The transceiver(s)206may be connected to the processor(s)202and transmit and/or receive radio signals through one or more antennas208. Each of the transceiver(s)206may include a transmitter and/or a receiver. The transceiver(s)206may be interchangeably used with RF unit(s). In the present disclosure, the wireless device may represent a communication modem/circuit/chip. Hereinafter, hardware elements of the wireless devices100and200will be described more specifically. One or more protocol layers may be implemented by, without being limited to, one or more processors102and202. For example, the one or more processors102and202may implement one or more layers (e.g., functional layers such as PHY, MAC, RLC, PDCP, RRC, and SDAP). The one or more processors102and202may generate one or more Protocol Data Units (PDUs) and/or one or more service data unit (SDUs) according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document. The one or more processors102and202may generate messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document. The one or more processors102and202may generate signals (e.g., baseband signals) including PDUs, SDUs, messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document and provide the generated signals to the one or more transceivers106and206. The one or more processors102and202may receive the signals (e.g., baseband signals) from the one or more transceivers106and206and acquire the PDUs, SDUs, messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document. The one or more processors102and202may be referred to as controllers, microcontrollers, microprocessors, or microcomputers. The one or more processors102and202may be implemented by hardware, firmware, software, or a combination thereof. As an example, one or more application specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more digital signal processing devices (DSPDs), one or more programmable logic devices (PLDs), or one or more field programmable gate arrays (FPGAs) may be included in the one or more processors102and202. The descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document may be implemented using firmware or software and the firmware or software may be configured to include the modules, procedures, or functions. 
Firmware or software configured to perform the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document may be included in the one or more processors102and202or stored in the one or more memories104and204so as to be driven by the one or more processors102and202. The descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document may be implemented using firmware or software in the form of code, commands, and/or a set of commands. The one or more memories104and204may be connected to the one or more processors102and202and store various types of data, signals, messages, information, programs, code, instructions, and/or commands. The one or more memories104and204may be configured by read-only memories (ROMs), random access memories (RAMs), electrically erasable programmable read-only memories (EPROMs), flash memories, hard drives, registers, cash memories, computer-readable storage media, and/or combinations thereof. The one or more memories104and204may be located at the interior and/or exterior of the one or more processors102and202. The one or more memories104and204may be connected to the one or more processors102and202through various technologies such as wired or wireless connection. The one or more transceivers106and206may transmit user data, control information, and/or radio signals/channels, mentioned in the methods and/or operational flowcharts of this document, to one or more other devices. The one or more transceivers106and206may receive user data, control information, and/or radio signals/channels, mentioned in the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document, from one or more other devices. For example, the one or more transceivers106and206may be connected to the one or more processors102and202and transmit and receive radio signals. For example, the one or more processors102and202may perform control so that the one or more transceivers106and206may transmit user data, control information, or radio signals to one or more other devices. The one or more processors102and202may perform control so that the one or more transceivers106and206may receive user data, control information, or radio signals from one or more other devices. The one or more transceivers106and206may be connected to the one or more antennas108and208and the one or more transceivers106and206may be configured to transmit and receive user data, control information, and/or radio signals/channels, mentioned in the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document, through the one or more antennas108and208. In this document, the one or more antennas may be a plurality of physical antennas or a plurality of logical antennas (e.g., antenna ports). The one or more transceivers106and206may convert received radio signals/channels etc. from RF band signals into baseband signals in order to process received user data, control information, radio signals/channels, etc. using the one or more processors102and202. The one or more transceivers106and206may convert the user data, control information, radio signals/channels, etc. processed using the one or more processors102and202from the base band signals into the RF band signals. To this end, the one or more transceivers106and206may include (analog) oscillators and/or filters. 
Examples of a Vehicle or an Autonomous Driving Vehicle Applicable to the Present Disclosure FIG.15illustrates a vehicle or an autonomous driving vehicle applied to the present disclosure. The vehicle or autonomous driving vehicle may be implemented by a mobile robot, a car, a train, a manned/unmanned aerial vehicle (AV), a ship, etc. Referring toFIG.15, a vehicle or autonomous driving vehicle100may include an antenna unit108, a communication unit110, a control unit120, a driving unit140a, a power supply unit140b, a sensor unit140c, and an autonomous driving unit140d. The antenna unit108may be configured as a part of the communication unit110. The communication unit110may transmit and receive signals (e.g., data and control signals) to and from external devices such as other vehicles, BSs (e.g., gNBs and road side units), and servers. The control unit120may perform various operations by controlling elements of the vehicle or the autonomous driving vehicle100. The control unit120may include an ECU. The driving unit140amay cause the vehicle or the autonomous driving vehicle100to drive on a road. The driving unit140amay include an engine, a motor, a powertrain, a wheel, a brake, a steering device, etc. The power supply unit140bmay supply power to the vehicle or the autonomous driving vehicle100and include a wired/wireless charging circuit, a battery, etc. The sensor unit140cmay acquire a vehicle state, ambient environment information, user information, etc. The sensor unit140cmay include an inertial measurement unit (IMU) sensor, a collision sensor, a wheel sensor, a speed sensor, a slope sensor, a weight sensor, a heading sensor, a position module, a vehicle forward/backward sensor, a battery sensor, a fuel sensor, a tire sensor, a steering sensor, a temperature sensor, a humidity sensor, an ultrasonic sensor, an illumination sensor, a pedal position sensor, etc. The autonomous driving unit140dmay implement technology for maintaining a lane on which a vehicle is driving, technology for automatically adjusting speed, such as adaptive cruise control, technology for autonomously driving along a determined path, technology for driving by automatically setting a path if a destination is set, and the like. For example, the communication unit110may receive map data, traffic information data, etc. from an external server. The autonomous driving unit140dmay generate an autonomous driving path and a driving plan from the obtained data. The control unit120may control the driving unit140asuch that the vehicle or the autonomous driving vehicle100may move along the autonomous driving path according to the driving plan (e.g., speed/direction control). In the middle of autonomous driving, the communication unit110may aperiodically/periodically acquire recent traffic information data from the external server and acquire surrounding traffic information data from neighboring vehicles. In the middle of autonomous driving, the sensor unit140cmay obtain a vehicle state and/or surrounding environment information. The autonomous driving unit140dmay update the autonomous driving path and the driving plan based on the newly obtained data/information. The communication unit110may transfer information about a vehicle position, the autonomous driving path, and/or the driving plan to the external server. 
The external server may predict traffic information data using AI technology, etc., based on the information collected from vehicles or autonomous driving vehicles and provide the predicted traffic information data to the vehicles or the autonomous driving vehicles. Examples of a Vehicle and AR/VR Applicable to the Present Disclosure FIG.16illustrates a vehicle applied to the present disclosure. The vehicle may be implemented as a transport means, an aerial vehicle, a ship, etc. Referring toFIG.16, a vehicle100may include a communication unit110, a control unit120, a memory unit130, an I/O unit140a, and a positioning unit140b. The communication unit110may transmit and receive signals (e.g., data and control signals) to and from external devices such as other vehicles or BSs. The control unit120may perform various operations by controlling constituent elements of the vehicle100. The memory unit130may store data/parameters/programs/code/commands for supporting various functions of the vehicle100. The I/O unit140amay output an AR/VR object based on information within the memory unit130. The I/O unit140amay include an HUD. The positioning unit140bmay acquire information about the position of the vehicle100. The position information may include information about an absolute position of the vehicle100, information about the position of the vehicle100within a traveling lane, acceleration information, and information about the position of the vehicle100from a neighboring vehicle. The positioning unit140bmay include a GPS and various sensors. As an example, the communication unit110of the vehicle100may receive map information and traffic information from an external server and store the received information in the memory unit130. The positioning unit140bmay obtain the vehicle position information through the GPS and various sensors and store the obtained information in the memory unit130. The control unit120may generate a virtual object based on the map information, traffic information, and vehicle position information and the I/O unit140amay display the generated virtual object in a window in the vehicle (1410and1420). The control unit120may determine whether the vehicle100normally drives within a traveling lane, based on the vehicle position information. If the vehicle100abnormally exits from the traveling lane, the control unit120may display a warning on the window in the vehicle through the I/O unit140a. In addition, the control unit120may broadcast a warning message regarding the driving abnormality to neighboring vehicles through the communication unit110. Depending on the situation, the control unit120may transmit the vehicle position information and the information about driving/vehicle abnormality to related organizations. Examples of an XR Device Applicable to the Present Disclosure FIG.17illustrates an XR device applied to the present disclosure. The XR device may be implemented by an HMD, an HUD mounted in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance, a digital signage, a vehicle, a robot, etc. Referring toFIG.17, an XR device100amay include a communication unit110, a control unit120, a memory unit130, an I/O unit140a, a sensor unit140b, and a power supply unit140c. The communication unit110may transmit and receive signals (e.g., media data and control signals) to and from external devices such as other wireless devices, hand-held devices, or media servers. The media data may include video, images, and sound. 
The control unit120may perform various operations by controlling constituent elements of the XR device100a. For example, the control unit120may be configured to control and/or perform procedures such as video/image acquisition, (video/image) encoding, and metadata generation and processing. The memory unit130may store data/parameters/programs/code/commands needed to drive the XR device100aor generate an XR object. The I/O unit140amay obtain control information and data from the exterior and output the generated XR object. The I/O unit140amay include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module. The sensor unit140bmay obtain an XR device state, surrounding environment information, user information, etc. The sensor unit140bmay include a proximity sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, a light sensor, a microphone and/or a radar. The power supply unit140cmay supply power to the XR device100aand include a wired/wireless charging circuit, a battery, etc. For example, the memory unit130of the XR device100amay include information (e.g., data) needed to generate the XR object (e.g., an AR/VR/MR object). The I/O unit140amay receive a command for manipulating the XR device100afrom a user and the control unit120may drive the XR device100aaccording to a driving command of a user. For example, when a user desires to watch a film or news through the XR device100a, the control unit120transmits content request information to another device (e.g., a hand-held device100b) or a media server through the communication unit110. The communication unit110may download/stream content such as films or news from another device (e.g., the hand-held device100b) or the media server to the memory unit130. The control unit120may control and/or perform procedures such as video/image acquisition, (video/image) encoding, and metadata generation/processing with respect to the content and generate/output the XR object based on information about a surrounding space or a real object obtained through the I/O unit140a/sensor unit140b. The XR device100amay be wirelessly connected to the hand-held device100bthrough the communication unit110and the operation of the XR device100amay be controlled by the hand-held device100b. For example, the hand-held device100bmay operate as a controller of the XR device100a. To this end, the XR device100amay obtain information about a 3D position of the hand-held device100band generate and output an XR object corresponding to the hand-held device100b. Examples of a Robot Applicable to the Present Disclosure FIG.18illustrates a robot applied to the present disclosure. The robot may be categorized into an industrial robot, a medical robot, a household robot, a military robot, etc., according to its purpose or field of use. Referring toFIG.18, a robot100may include a communication unit110, a control unit120, a memory unit130, an I/O unit140a, a sensor unit140b, and a driving unit140c. Herein, the blocks110to130/140ato140ccorrespond to the blocks110to130/140ofFIG.14, respectively. The communication unit110may transmit and receive signals (e.g., driving information and control signals) to and from external devices such as other wireless devices, other robots, or control servers. The control unit120may perform various operations by controlling constituent elements of the robot100. 
The memory unit130may store data/parameters/programs/code/commands for supporting various functions of the robot100. The I/O unit140amay obtain information from the exterior of the robot100and output information to the exterior of the robot100. The I/O unit140amay include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module. The sensor unit140bmay obtain internal information of the robot100, surrounding environment information, user information, etc. The sensor unit140bmay include a proximity sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, a light sensor, a microphone, a radar, etc. The driving unit140cmay perform various physical operations such as movement of robot joints. In addition, the driving unit140cmay cause the robot100to travel on the road or to fly. The driving unit140cmay include an actuator, a motor, a wheel, a brake, a propeller, etc. Example of AI Device to which the Present Disclosure is Applied FIG.19illustrates an AI device applied to the present disclosure. The AI device may be implemented by a fixed device or a mobile device, such as a TV, a projector, a smartphone, a PC, a notebook, a digital broadcast terminal, a tablet PC, a wearable device, a Set Top Box (STB), a radio, a washing machine, a refrigerator, a digital signage, a robot, a vehicle, etc. Referring toFIG.19, an AI device100may include a communication unit110, a control unit120, a memory unit130, an I/O unit140a/140b, a learning processor unit140c, and a sensor unit140d. The blocks110to130/140ato140dcorrespond to blocks110to130/140ofFIG.14, respectively. The communication unit110may transmit and receive wired/radio signals (e.g., sensor information, user input, learning models, or control signals) to and from external devices such as other AI devices (e.g.,100x,200, or400ofFIG.13) or an AI server (e.g.,400ofFIG.13) using wired/wireless communication technology. To this end, the communication unit110may transmit information within the memory unit130to an external device and transmit a signal received from the external device to the memory unit130. The control unit120may determine at least one feasible operation of the AI device100, based on information which is determined or generated using a data analysis algorithm or a machine learning algorithm. The control unit120may perform an operation determined by controlling constituent elements of the AI device100. For example, the control unit120may request, search, receive, or use data of the learning processor unit140cor the memory unit130and control the constituent elements of the AI device100to perform a predicted operation or an operation determined to be preferred among at least one feasible operation. The control unit120may collect history information including the operation contents of the AI device100and operation feedback by a user and store the collected information in the memory unit130or the learning processor unit140cor transmit the collected information to an external device such as an AI server (400ofFIG.13). The collected history information may be used to update a learning model. The memory unit130may store data for supporting various functions of the AI device100. For example, the memory unit130may store data obtained from the input unit140a, data obtained from the communication unit110, output data of the learning processor unit140c, and data obtained from the sensor unit140. 
The memory unit130may store control information and/or software code needed to operate/drive the control unit120. The input unit140amay acquire various types of data from the exterior of the AI device100. For example, the input unit140amay acquire learning data for model learning, and input data to which the learning model is to be applied. The input unit140amay include a camera, a microphone, and/or a user input unit. The output unit140bmay generate output related to a visual, auditory, or tactile sense. The output unit140bmay include a display unit, a speaker, and/or a haptic module. The sensing unit140may obtain at least one of internal information of the AI device100, surrounding environment information of the AI device100, and user information, using various sensors. The sensor unit140may include a proximity sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, a light sensor, a microphone, and/or a radar. The learning processor unit140cmay learn a model consisting of artificial neural networks, using learning data. The learning processor unit140cmay perform AI processing together with the learning processor unit of the AI server (400ofFIG.13). The learning processor unit140cmay process information received from an external device through the communication unit110and/or information stored in the memory unit130. In addition, an output value of the learning processor unit140cmay be transmitted to the external device through the communication unit110and may be stored in the memory unit130. The above-described embodiments of the present disclosure are applicable to various mobile communication systems. | 108,238 |
11863201 | DETAILED DESCRIPTION Some wireless communications systems may support low-density parity-check (LDPC) decoding, such that a user equipment (UE) may support layered decoding of encoded signals including LDPC codes. In some cases (e.g., in new radio (NR) communications), LDPC codes may be characterized by two parity check base graph matrices (e.g., BG1 and BG2) where a row in the base graph matrix may represent a layer in layered decoding and each entry of the base graph matrix may be a submatrix block, equal to an all-zero matrix or a cyclic-shifted (e.g., non-zero) identity matrix. For a given code rate, the UE may extract a submatrix from the full base graph matrix to use as the parity check base matrix (e.g., for rate matching). In layered decoding, the UE may scan and update log likelihood ratios (LLRs) associated with each non-zero submatrix block (e.g., each node) in a parity check base graph matrix, where the LLRs associated with a single node may be scanned or updated in one hardware cycle. Traditionally, a UE uses a hardware sequence (e.g., decoder schedule) to determine which node the UE should scan and which node the UE should update at each hardware cycle, which is a computationally complex and resource-intensive procedure. Techniques described herein may support correlation-based hardware sequences for layered decoding. In some cases, to improve hardware sequence or decoding schedule selection for LDPC layered decoding, a UE may extract a submatrix from a full base graph matrix associated with an encoded signal and may partition each layer of the submatrix into two sets according to the number of punctured columns for each layer. For example, a first set of layers (e.g., Type 1) may include layers that correspond to 1 punctured column and a second set of layers (e.g., Type 2) may include layers that correspond to 2 punctured columns. After partitioning the layers, the UE may order the layers of each type and build correlation tables based on combinations of layers of the respective type. For example, the UE may determine a correlation value between each pair of layers in the first set of layers based on the quantity of overlapping non-zero columns between the pair of layers and may generate a first correlation table based on the determined correlation values. In some cases, the UE may determine a layer order by selecting a first layer from the first set of layers as a starting layer and searching the correlation table for a second layer that has the smallest correlation value when correlated with the first layer. The UE may continue the searching process for all layers from the first set of layers and may determine additional layer orders (e.g., a first set of layer orders) by using each layer from the first set of layers as the starting layer and performing the search in a similar manner. The UE may perform the aforementioned process with the second set of layers to determine a second set of layer orders. After determining the first set of layer orders and the second set of layer orders, the UE may concatenate the first set of layer orders and the second set of layer orders to obtain a combined set of layer orders. The UE may determine a decoding schedule for each layer order of the combined set of layer orders and may select a decoding schedule based on the respective schedule lengths (e.g., may select the decoding schedule with the shortest schedule length or having a length below a threshold). 
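For illustration only, and not as a limiting implementation of the techniques summarized above, the greedy layer-ordering step may be sketched as follows in Python. The data conventions used here (a correlation value keyed by an unordered pair of layers, selection of each next layer as the remaining layer least correlated with the most recently chosen layer, and pairwise concatenation of the Type 1 and Type 2 candidate orders) are assumptions made for the sketch rather than requirements recited in this description.

```python
from itertools import product

def greedy_layer_orders(layers, corr):
    """Build one candidate layer order per starting layer.

    layers: list of layer indices of a single type (e.g., Type 1).
    corr: dict mapping frozenset({a, b}) -> correlation value
          (number of overlapping non-zero columns between layers a and b).
    """
    orders = []
    for start in layers:
        order = [start]
        remaining = set(layers) - {start}
        while remaining:
            # Assumed reading: pick the unvisited layer least correlated with
            # the most recently selected layer.
            nxt = min(remaining, key=lambda l: corr[frozenset((order[-1], l))])
            order.append(nxt)
            remaining.remove(nxt)
        orders.append(order)
    return orders

def combined_layer_orders(type1_layers, type2_layers, corr1, corr2):
    """Concatenate every Type 1 candidate order with every Type 2 candidate order."""
    orders1 = greedy_layer_orders(type1_layers, corr1)
    orders2 = greedy_layer_orders(type2_layers, corr2)
    return [o1 + o2 for o1, o2 in product(orders1, orders2)]
```

Each of the resulting combined orders may then be expanded into a candidate decoding schedule and compared by schedule length, as described below.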
The UE may perform layered decoding of an encoded signal using the selected decoding schedule. Aspects of the disclosure are initially described in the context of wireless communications systems. Aspects of the disclosure are then described in the context of a flow chart and a process flow. Aspects of the disclosure are further illustrated by and described with reference to apparatus diagrams, system diagrams, and flowcharts that relate to correlation-based hardware sequences for layered decoding. FIG.1illustrates an example of a wireless communications system100that supports correlation-based hardware sequences for layered decoding in accordance with one or more aspects of the present disclosure. The wireless communications system100may include one or more network entities105, one or more UEs115, and a core network130. In some examples, the wireless communications system100may be a Long Term Evolution (LTE) network, an LTE-Advanced (LTE-A) network, an LTE-A Pro network, an NR network, or a network operating in accordance with other systems and radio technologies, including future systems and radio technologies not explicitly mentioned herein. The network entities105may be dispersed throughout a geographic area to form the wireless communications system100and may include devices in different forms or having different capabilities. In various examples, a network entity105may be referred to as a network element, a mobility element, a radio access network (RAN) node, or network equipment, among other nomenclature. In some examples, network entities105and UEs115may wirelessly communicate via one or more communication links125(e.g., a radio frequency (RF) access link). For example, a network entity105may support a coverage area110(e.g., a geographic coverage area) over which the UEs115and the network entity105may establish one or more communication links125. The coverage area110may be an example of a geographic area over which a network entity105and a UE115may support the communication of signals according to one or more radio access technologies (RATs). The UEs115may be dispersed throughout a coverage area110of the wireless communications system100, and each UE115may be stationary, or mobile, or both at different times. The UEs115may be devices in different forms or having different capabilities. Some example UEs115are illustrated inFIG.1. The UEs115described herein may be able to communicate with various types of devices, such as other UEs115or network entities105, as shown inFIG.1. As described herein, a node of the wireless communications system100, which may be referred to as a network node, or a wireless node, may be a network entity105(e.g., any network entity described herein), a UE115(e.g., any UE described herein), a network controller, an apparatus, a device, a computing system, one or more components, or another suitable processing entity configured to perform any of the techniques described herein. For example, a node may be a UE115. As another example, a node may be a network entity105. As another example, a first node may be configured to communicate with a second node or a third node. In one aspect of this example, the first node may be a UE115, the second node may be a network entity105, and the third node may be a UE115. In another aspect of this example, the first node may be a UE115, the second node may be a network entity105, and the third node may be a network entity105. 
In yet other aspects of this example, the first, second, and third nodes may be different relative to these examples. Similarly, reference to a UE115, network entity105, apparatus, device, computing system, or the like may include disclosure of the UE115, network entity105, apparatus, device, computing system, or the like being a node. For example, disclosure that a UE115is configured to receive information from a network entity105also discloses that a first node is configured to receive information from a second node. In some examples, network entities105may communicate with the core network130, or with one another, or both. For example, network entities105may communicate with the core network130via one or more backhaul communication links120(e.g., in accordance with an S1, N2, N3, or other interface protocol). In some examples, network entities105may communicate with one another over a backhaul communication link120(e.g., in accordance with an X2, Xn, or other interface protocol) either directly (e.g., directly between network entities105) or indirectly (e.g., via a core network130). In some examples, network entities105may communicate with one another via a midhaul communication link162(e.g., in accordance with a midhaul interface protocol) or a fronthaul communication link168(e.g., in accordance with a fronthaul interface protocol), or any combination thereof. The backhaul communication links120, midhaul communication links162, or fronthaul communication links168may be or include one or more wired links (e.g., an electrical link, an optical fiber link), one or more wireless links (e.g., a radio link, a wireless optical link), among other examples or various combinations thereof. A UE115may communicate with the core network130through a communication link155. One or more of the network entities105described herein may include or may be referred to as a base station140(e.g., a base transceiver station, a radio base station, an NR base station, an access point, a radio transceiver, a NodeB, an eNodeB (eNB), a next-generation NodeB or a giga-NodeB (either of which may be referred to as a gNB), a 5G NB, a next-generation eNB (ng-eNB), a Home NodeB, a Home eNodeB, or other suitable terminology). In some examples, a network entity105(e.g., a base station140) may be implemented in an aggregated (e.g., monolithic, standalone) base station architecture, which may be configured to utilize a protocol stack that is physically or logically integrated within a single network entity105(e.g., a single RAN node, such as a base station140). In some examples, a network entity105may be implemented in a disaggregated architecture (e.g., a disaggregated base station architecture, a disaggregated RAN architecture), which may be configured to utilize a protocol stack that is physically or logically distributed among two or more network entities105, such as an integrated access backhaul (IAB) network, an open RAN (O-RAN) (e.g., a network configuration sponsored by the O-RAN Alliance), or a virtualized RAN (vRAN) (e.g., a cloud RAN (C-RAN)). For example, a network entity105may include one or more of a central unit (CU)160, a distributed unit (DU)165, a radio unit (RU)170, a RAN Intelligent Controller (RIC)175(e.g., a Near-Real Time RIC (Near-RT RIC), a Non-Real Time RIC (Non-RT RIC)), a Service Management and Orchestration (SMO)180system, or any combination thereof. 
An RU170may also be referred to as a radio head, a smart radio head, a remote radio head (RRH), a remote radio unit (RRU), or a transmission reception point (TRP). One or more components of the network entities105in a disaggregated RAN architecture may be co-located, or one or more components of the network entities105may be located in distributed locations (e.g., separate physical locations). In some examples, one or more network entities105of a disaggregated RAN architecture may be implemented as virtual units (e.g., a virtual CU (VCU), a virtual DU (VDU), a virtual RU (VRU)). The split of functionality between a CU160, a DU165, and an RU170is flexible and may support different functionalities depending upon which functions (e.g., network layer functions, protocol layer functions, baseband functions, RF functions, and any combinations thereof) are performed at a CU160, a DU165, or an RU170. For example, a functional split of a protocol stack may be employed between a CU160and a DU165such that the CU160may support one or more layers of the protocol stack and the DU165may support one or more different layers of the protocol stack. In some examples, the CU160may host upper protocol layer (e.g., layer 3 (L3), layer 2 (L2)) functionality and signaling (e.g., Radio Resource Control (RRC), service data adaptation protocol (SDAP), Packet Data Convergence Protocol (PDCP)). The CU160may be connected to one or more DUs165or RUs170, and the one or more DUs165or RUs170may host lower protocol layers, such as layer 1 (L1) (e.g., physical (PHY) layer) or L2 (e.g., radio link control (RLC) layer, medium access control (MAC) layer) functionality and signaling, and may each be at least partially controlled by the CU160. Additionally, or alternatively, a functional split of the protocol stack may be employed between a DU165and an RU170such that the DU165may support one or more layers of the protocol stack and the RU170may support one or more different layers of the protocol stack. The DU165may support one or multiple different cells (e.g., via one or more RUs170). In some cases, a functional split between a CU160and a DU165, or between a DU165and an RU170may be within a protocol layer (e.g., some functions for a protocol layer may be performed by one of a CU160, a DU165, or an RU170, while other functions of the protocol layer are performed by a different one of the CU160, the DU165, or the RU170). A CU160may be functionally split further into CU control plane (CU-CP) and CU user plane (CU-UP) functions. A CU160may be connected to one or more DUs165via a midhaul communication link162(e.g., F1, F1-c, F1-u), and a DU165may be connected to one or more RUs170via a fronthaul communication link168(e.g., open fronthaul (FH) interface). In some examples, a midhaul communication link162or a fronthaul communication link168may be implemented in accordance with an interface (e.g., a channel) between layers of a protocol stack supported by respective network entities105that are in communication over such communication links. In wireless communications systems (e.g., wireless communications system100), infrastructure and spectral resources for radio access may support wireless backhaul link capabilities to supplement wired backhaul connections, providing an IAB network architecture (e.g., to a core network130). In some cases, in an IAB network, one or more network entities105(e.g., IAB nodes104) may be partially controlled by each other. One or more IAB nodes104may be referred to as a donor entity or an IAB donor. 
One or more DUs165or one or more RUs170may be partially controlled by one or more CUs160associated with a donor network entity105(e.g., a donor base station140). The one or more donor network entities105(e.g., IAB donors) may be in communication with one or more additional network entities105(e.g., IAB nodes104) via supported access and backhaul links (e.g., backhaul communication links120). IAB nodes104may include an IAB mobile termination (IAB-MT) controlled (e.g., scheduled) by DUs165of a coupled IAB donor. An IAB-MT may include an independent set of antennas for relay of communications with UEs115, or may share the same antennas (e.g., of an RU170) of an IAB node104used for access via the DU165of the IAB node104(e.g., referred to as virtual IAB-MT (vIAB-MT)). In some examples, the IAB nodes104may include DUs165that support communication links with additional entities (e.g., IAB nodes104, UEs115) within the relay chain or configuration of the access network (e.g., downstream). In such cases, one or more components of the disaggregated RAN architecture (e.g., one or more IAB nodes104or components of IAB nodes104) may be configured to operate according to the techniques described herein. In the case of the techniques described herein applied in the context of a disaggregated RAN architecture, one or more components of the disaggregated RAN architecture may be configured to support correlation-based hardware sequences for layered decoding as described herein. For example, some operations described as being performed by a UE115or a network entity105(e.g., a base station140) may additionally, or alternatively, be performed by one or more components of the disaggregated RAN architecture (e.g., IAB nodes104, DUs165, CUs160, RUs170, RIC175, SMO180). A UE115may include or may be referred to as a mobile device, a wireless device, a remote device, a handheld device, or a subscriber device, or some other suitable terminology, where the “device” may also be referred to as a unit, a station, a terminal, or a client, among other examples. A UE115may also include or may be referred to as a personal electronic device such as a cellular phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or a personal computer. In some examples, a UE115may include or be referred to as a wireless local loop (WLL) station, an Internet of Things (IoT) device, an Internet of Everything (IoE) device, or a machine type communications (MTC) device, among other examples, which may be implemented in various objects such as appliances, or vehicles, meters, among other examples. The UEs115described herein may be able to communicate with various types of devices, such as other UEs115that may sometimes act as relays as well as the network entities105and the network equipment including macro eNBs or gNBs, small cell eNBs or gNBs, or relay base stations, among other examples, as shown inFIG.1. The UEs115and the network entities105may wirelessly communicate with one another via one or more communication links125(e.g., an access link) over one or more carriers. The term “carrier” may refer to a set of RF spectrum resources having a defined physical layer structure for supporting the communication links125. For example, a carrier used for a communication link125may include a portion of a RF spectrum band (e.g., a bandwidth part (BWP)) that is operated according to one or more physical layer channels for a given radio access technology (e.g., LTE, LTE-A, LTE-A Pro, NR). 
Each physical layer channel may carry acquisition signaling (e.g., synchronization signals, system information), control signaling that coordinates operation for the carrier, user data, or other signaling. The wireless communications system100may support communication with a UE115using carrier aggregation or multi-carrier operation. A UE115may be configured with multiple downlink component carriers and one or more uplink component carriers according to a carrier aggregation configuration. Carrier aggregation may be used with both frequency division duplexing (FDD) and time division duplexing (TDD) component carriers. Communication between a network entity105and other devices may refer to communication between the devices and any portion (e.g., entity, sub-entity) of a network entity105. For example, the terms “transmitting,” “receiving,” or “communicating,” when referring to a network entity105, may refer to any portion of a network entity105(e.g., a base station140, a CU160, a DU165, a RU170) of a RAN communicating with another device (e.g., directly or via one or more other network entities105). Signal waveforms transmitted over a carrier may be made up of multiple subcarriers (e.g., using multi-carrier modulation (MCM) techniques such as orthogonal frequency division multiplexing (OFDM) or discrete Fourier transform spread OFDM (DFT-S-OFDM)). In a system employing MCM techniques, a resource element may refer to resources of one symbol period (e.g., a duration of one modulation symbol) and one subcarrier, in which case the symbol period and subcarrier spacing may be inversely related. The quantity of bits carried by each resource element may depend on the modulation scheme (e.g., the order of the modulation scheme, the coding rate of the modulation scheme, or both) such that the more resource elements that a device receives and the higher the order of the modulation scheme, the higher the data rate may be for the device. A wireless communications resource may refer to a combination of an RF spectrum resource, a time resource, and a spatial resource (e.g., a spatial layer, a beam), and the use of multiple spatial resources may increase the data rate or data integrity for communications with a UE115. The time intervals for the network entities105or the UEs115may be expressed in multiples of a basic time unit which may, for example, refer to a sampling period of Ts=1/(Δfmax·Nf) seconds, where Δfmaxmay represent the maximum supported subcarrier spacing, and Nfmay represent the maximum supported discrete Fourier transform (DFT) size. Time intervals of a communications resource may be organized according to radio frames each having a specified duration (e.g., 10 milliseconds (ms)). Each radio frame may be identified by a system frame number (SFN) (e.g., ranging from 0 to 1023). Each frame may include multiple consecutively numbered subframes or slots, and each subframe or slot may have the same duration. In some examples, a frame may be divided (e.g., in the time domain) into subframes, and each subframe may be further divided into a quantity of slots. Alternatively, each frame may include a variable quantity of slots, and the quantity of slots may depend on subcarrier spacing. Each slot may include a quantity of symbol periods (e.g., depending on the length of the cyclic prefix prepended to each symbol period). In some wireless communications systems100, a slot may further be divided into multiple mini-slots containing one or more symbols. 
Excluding the cyclic prefix, each symbol period may contain one or more (e.g., Nf) sampling periods. The duration of a symbol period may depend on the subcarrier spacing or frequency band of operation. A subframe, a slot, a mini-slot, or a symbol may be the smallest scheduling unit (e.g., in the time domain) of the wireless communications system100and may be referred to as a transmission time interval (TTI). In some examples, the TTI duration (e.g., a quantity of symbol periods in a TTI) may be variable. Additionally, or alternatively, the smallest scheduling unit of the wireless communications system100may be dynamically selected (e.g., in bursts of shortened TTIs (sTTIs)). Physical channels may be multiplexed on a carrier according to various techniques. A physical control channel and a physical data channel may be multiplexed on a downlink carrier, for example, using one or more of time division multiplexing (TDM) techniques, frequency division multiplexing (FDM) techniques, or hybrid TDM-FDM techniques. A control region (e.g., a control resource set (CORESET)) for a physical control channel may be defined by a set of symbol periods and may extend across the system bandwidth or a subset of the system bandwidth of the carrier. One or more control regions (e.g., CORESETs) may be configured for a set of the UEs115. For example, one or more of the UEs115may monitor or search control regions for control information according to one or more search space sets, and each search space set may include one or multiple control channel candidates in one or more aggregation levels arranged in a cascaded manner. An aggregation level for a control channel candidate may refer to an amount of control channel resources (e.g., control channel elements (CCEs)) associated with encoded information for a control information format having a given payload size. Search space sets may include common search space sets configured for sending control information to multiple UEs115and UE-specific search space sets for sending control information to a specific UE115. In some examples, a network entity105(e.g., a base station140, an RU170) may be movable and therefore provide communication coverage for a moving coverage area110. In some examples, different coverage areas110associated with different technologies may overlap, but the different coverage areas110may be supported by the same network entity105. In some other examples, the overlapping coverage areas110associated with different technologies may be supported by different network entities105. The wireless communications system100may include, for example, a heterogeneous network in which different types of the network entities105provide coverage for various coverage areas110using the same or different radio access technologies. The wireless communications system100may be configured to support ultra-reliable communications or low-latency communications, or various combinations thereof. For example, the wireless communications system100may be configured to support ultra-reliable low-latency communications (URLLC). The UEs115may be designed to support ultra-reliable, low-latency, or critical functions. Ultra-reliable communications may include private communication or group communication and may be supported by one or more services such as push-to-talk, video, or data. Support for ultra-reliable, low-latency functions may include prioritization of services, and such services may be used for public safety or general commercial applications. 
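As a purely numerical illustration of the basic time unit defined above, the following sketch evaluates Ts for representative NR parameter values; the specific values of Δfmax and Nf below are commonly cited figures and are assumptions of the sketch, not values recited in this description.

```python
# Basic time unit Ts = 1/(delta_f_max * N_f).
# The values below are assumed, representative NR parameters.
delta_f_max = 480e3  # maximum supported subcarrier spacing, in Hz
n_f = 4096           # maximum supported DFT size

ts = 1.0 / (delta_f_max * n_f)
print(f"Ts = {ts * 1e9:.3f} ns")  # ~0.509 ns
```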
The terms ultra-reliable, low-latency, and ultra-reliable low-latency may be used interchangeably herein. In some examples, a UE115may be able to communicate directly with other UEs115over a device-to-device (D2D) communication link135(e.g., in accordance with a peer-to-peer (P2P), D2D, or sidelink protocol). In some examples, one or more UEs115of a group that are performing D2D communications may be within the coverage area110of a network entity105(e.g., a base station140, an RU170), which may support aspects of such D2D communications being configured by or scheduled by the network entity105. In some examples, one or more UEs115in such a group may be outside the coverage area110of a network entity105or may be otherwise unable to or not configured to receive transmissions from a network entity105. In some examples, groups of the UEs115communicating via D2D communications may support a one-to-many (1:M) system in which each UE115transmits to each of the other UEs115in the group. In some examples, a network entity105may facilitate the scheduling of resources for D2D communications. In some other examples, D2D communications may be carried out between the UEs115without the involvement of a network entity105. The core network130may provide user authentication, access authorization, tracking, Internet Protocol (IP) connectivity, and other access, routing, or mobility functions. The core network130may be an evolved packet core (EPC) or 5G core (5GC), which may include at least one control plane entity that manages access and mobility (e.g., a mobility management entity (MME), an access and mobility management function (AMF)) and at least one user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW), a Packet Data Network (PDN) gateway (P-GW), or a user plane function (UPF)). The control plane entity may manage non-access stratum (NAS) functions such as mobility, authentication, and bearer management for the UEs115served by the network entities105(e.g., base stations140) associated with the core network130. User IP packets may be transferred through the user plane entity, which may provide IP address allocation as well as other functions. The user plane entity may be connected to IP services150for one or more network operators. The IP services150may include access to the Internet, Intranet(s), an IP Multimedia Subsystem (IMS), or a Packet-Switched Streaming Service. The wireless communications system100may operate using one or more frequency bands, which may be in the range of 300 megahertz (MHz) to 300 gigahertz (GHz). Generally, the region from 300 MHz to 3 GHz is known as the ultra-high frequency (UHF) region or decimeter band because the wavelengths range from approximately one decimeter to one meter in length. The UHF waves may be blocked or redirected by buildings and environmental features, which may be referred to as clusters, but the waves may penetrate structures sufficiently for a macro cell to provide service to the UEs115located indoors. The transmission of UHF waves may be associated with smaller antennas and shorter ranges (e.g., less than 100 kilometers) compared to transmission using the smaller frequencies and longer waves of the high frequency (HF) or very high frequency (VHF) portion of the spectrum below 300 MHz. 
The wireless communications system100may also operate in a super high frequency (SHF) region using frequency bands from 3 GHz to 30 GHz, also known as the centimeter band, or in an extremely high frequency (EHF) region of the spectrum (e.g., from 30 GHz to 300 GHz), also known as the millimeter band. In some examples, the wireless communications system100may support millimeter wave (mmW) communications between the UEs115and the network entities105(e.g., base stations140, RUs170), and EHF antennas of the respective devices may be smaller and more closely spaced than UHF antennas. In some examples, this may facilitate use of antenna arrays within a device. The propagation of EHF transmissions, however, may be subject to even greater atmospheric attenuation and shorter range than SHF or UHF transmissions. The techniques disclosed herein may be employed across transmissions that use one or more different frequency regions, and designated use of bands across these frequency regions may differ by country or regulating body. The wireless communications system100may utilize both licensed and unlicensed RF spectrum bands. For example, the wireless communications system100may employ License Assisted Access (LAA), LTE-Unlicensed (LTE-U) radio access technology, or NR technology in an unlicensed band such as the 5 GHz industrial, scientific, and medical (ISM) band. While operating in unlicensed RF spectrum bands, devices such as the network entities105and the UEs115may employ carrier sensing for collision detection and avoidance. In some examples, operations in unlicensed bands may be based on a carrier aggregation configuration in conjunction with component carriers operating in a licensed band (e.g., LAA). Operations in unlicensed spectrum may include downlink transmissions, uplink transmissions, P2P transmissions, or D2D transmissions, among other examples. A network entity105(e.g., a base station140, an RU170) or a UE115may be equipped with multiple antennas, which may be used to employ techniques such as transmit diversity, receive diversity, multiple-input multiple-output (MIMO) communications, or beamforming. The antennas of a network entity105or a UE115may be located within one or more antenna arrays or antenna panels, which may support MIMO operations or transmit or receive beamforming. For example, one or more base station antennas or antenna arrays may be co-located at an antenna assembly, such as an antenna tower. In some examples, antennas or antenna arrays associated with a network entity105may be located in diverse geographic locations. A network entity105may have an antenna array with a set of rows and columns of antenna ports that the network entity105may use to support beamforming of communications with a UE115. Likewise, a UE115may have one or more antenna arrays that may support various MIMO or beamforming operations. Additionally, or alternatively, an antenna panel may support RF beamforming for a signal transmitted via an antenna port. Beamforming, which may also be referred to as spatial filtering, directional transmission, or directional reception, is a signal processing technique that may be used at a transmitting device or a receiving device (e.g., a network entity105, a UE115) to shape or steer an antenna beam (e.g., a transmit beam, a receive beam) along a spatial path between the transmitting device and the receiving device. 
Beamforming may be achieved by combining the signals communicated via antenna elements of an antenna array such that some signals propagating at particular orientations with respect to an antenna array experience constructive interference while others experience destructive interference. The adjustment of signals communicated via the antenna elements may include a transmitting device or a receiving device applying amplitude offsets, phase offsets, or both to signals carried via the antenna elements associated with the device. The adjustments associated with each of the antenna elements may be defined by a beamforming weight set associated with a particular orientation (e.g., with respect to the antenna array of the transmitting device or receiving device, or with respect to some other orientation). The UEs115and the network entities105may support retransmissions of data to increase the likelihood that data is received successfully. Hybrid automatic repeat request (HARQ) feedback is one technique for increasing the likelihood that data is received correctly over a communication link (e.g., a communication link125, a D2D communication link135). HARQ may include a combination of error detection (e.g., using a cyclic redundancy check (CRC)), forward error correction (FEC), and retransmission (e.g., automatic repeat request (ARQ)). HARQ may improve throughput at the MAC layer in poor radio conditions (e.g., low signal-to-noise conditions). In some examples, a device may support same-slot HARQ feedback, where the device may provide HARQ feedback in a specific slot for data received in a previous symbol in the slot. In some other examples, the device may provide HARQ feedback in a subsequent slot, or according to some other time interval. The wireless communications system100may support correlation-based hardware sequences for layered decoding (e.g., LDPC layered decoding). For example, a UE115may receive, from a network entity105, an encoded signal associated with an LDPC decoding procedure. The UE115may identify a base graph matrix associated with the encoded signal and extract a submatrix from the base graph matrix. The UE115may partition (e.g., sort) the layers of the submatrix into two sets of layers (e.g., Type 1 and Type 2). In some cases, the UE115may partition the layers of the submatrix based on a quantity of punctured columns associated with each layer. For example, a first set of layers may include layers associated with one punctured column and a second set of layers may include layers associated with two punctured columns. In some cases, the UE115may generate a correlation table for each set of layers. For example, the UE115may determine a first set of correlation values between each possible combination of layers (e.g., each pair of layers) in the first set of layers and generate a first correlation table based on the determined first set of correlation values. In some cases, the first set of correlation values may be based on a quantity of non-zero overlapping columns between each pair of layers. The UE115may sort each set of layers into a respective set of layer orders based on an associated correlation table. In some cases, the UE115may generate each set of layer orders based on a searching procedure (e.g., greedy search). For example, the UE115may select a starting layer from the first set of layers and may sort the remaining layers of the first set of layers based on the first set of correlation values associated with the first correlation table to determine a first layer order. 
The UE115may repeat this process using each layer of the first set of layers as a starting layer to generate a first set of layer orders. Upon sorting each set of layers into the respective set of layer orders, the UE115may combine (e.g., concatenate) the two sets of layer orders to obtain a set of combined layer orders. In some cases, the UE115may generate a decoding schedule for each layer order of the set of combined layer orders to obtain a set of decoding schedules and may identify a schedule length associated with each layer order of the set of combined layer orders. The UE115may select a decoding schedule from the set of decoding schedules based on respective schedule lengths. For example, the UE115may select a decoding schedule from the set of decoding schedules with the shortest schedule length. The UE115may decode the encoded signal according to the selected decoding schedule and, in some cases, may transmit a feedback message to the network entity105indicating successful decoding of the encoded signal. FIG.2illustrates an example of a wireless communications system200that supports correlation-based hardware sequences for layered decoding in accordance with one or more aspects of the present disclosure. The wireless communications system200may implement or be implemented by aspects of the wireless communications system100. For example, the wireless communications system200may include a network entity105-aand a UE115-a. The UE115-amay represent an example of a UE115as described herein, including with reference toFIG.1. The network entity105-amay represent an example of a network entity105as described herein, including with reference toFIG.1. Additionally, the wireless communications system200may support communications between the UE115-aand the network entity105-a. For example, the UE115-amay transmit uplink messages to the network entity105-aover a communication link205(which may be an example of a communication link125described with reference toFIG.1) and may receive downlink messages on a communication link210(which may be an example of a communication link125). For example, the network entity105-amay transmit an encoded signal215to the UE115-aand the UE115-amay decode the encoded signal215according to a correlation-based decoding schedule (e.g., hardware sequence). Some wireless communications systems, such as the wireless communications system200, may support LDPC decoding, such that a UE115-amay support layered decoding (e.g., LDPC layered decoding) of an encoded signal215, the encoded signal215including LDPC codes. In layered decoding, the UE115-amay partition check nodes, or edge nodes connected to multiple variable nodes, associated with the encoded signal215into layers and implement a message passing algorithm layer by layer. That is, for each layer, the message passing algorithm may perform (e.g., be divided into) a scan phase and an update phase. In the scan phase, the message passing algorithm may scan the LLRs of the variable nodes connected to each check node in the respective layer to identify known values for the LLRs. The variable nodes may forward known values to the check nodes and the check nodes may accumulate the forwarded values. Additionally, the check nodes may use the known values for the LLRs to calculate (e.g., accumulate) the parity using a minimum sum (min-sum) algorithm and send the outputted (e.g., accumulated) parity to the variable nodes. 
In the update phase, the message passing algorithm may update the LLRs in the variable nodes based on the outputted parity. The UE115-amay continue to perform the message passing algorithm in this manner for each layer. In some cases (e.g., 5G NR communications), LDPC codes associated with the layered decoding may be characterized by two parity check base graph matrices (e.g., BG1 and BG2). A row in a base graph matrix may represent a layer in layered decoding and each entry in the base graph matrix may be a submatrix block. In some cases, the submatrix block may be equal to an all-zero matrix or a cyclic-shifted identity matrix. Further, the UE115-amay determine the size of the submatrix block based on an expansion factor. Based on a code rate, the UE115-amay extract a submatrix from a full base graph matrix as a parity check base matrix for rate matching. Additionally, in layered decoding, the UE115-amay scan and update LLRs associated with each non-zero (e.g., cyclic-shifted identity) submatrix block (e.g., node) in a parity check base matrix. In some cases, the UE115-amay identify a non-zero submatrix block based on a layer identifier (ID) and a column ID. In some cases, the UE115-amay scan or update the LLRs associated with a check node in one hardware cycle. The UE115-amay use a hardware sequence, which may be referred to as a decoding schedule or decoder schedule, to determine which node the UE115-ashould scan and which node it should update at each hardware cycle. That is, the decoding schedule may indicate a layer order in which the UE115-amay scan and update nodes. In some cases, the decoding schedule may include a scan table and an update table. In some cases, the UE115-amay fully scan a layer and wait for pipeline stage cycles before updating the layer. In some cases, the UE115-amay update a column prior to scanning the column again. In some cases, the total number of processing layers at any hardware cycle may not exceed a cache limit (e.g., a layer may be considered a processing layer if the UE115-ahas started to scan the layer but has not yet fully updated the columns in the layer). In some cases, the scan table and the update table may be repeatable for each iteration and may be the same size. In some cases, the UE115-amay use a baseline decoding schedule to decode the encoded signal215. For example, given a submatrix, the UE115-amay determine a layer order using a two-step method. In the first step, the UE115-amay partition all of the layers in the submatrix according to the number of punctured columns they connect to. In some cases, a Type 1 set of layers may include all layers that connect to one punctured column and a Type 2 set of layers may include all layers that connect to two punctured columns. In the second step, the UE115-amay further sort the layers in each type (e.g., Type 1 and Type 2) according to their row weights in an ascending manner. The UE115-amay then concatenate (e.g., combine) the sorted layers in each type to form a combined layer order. However, traditional techniques for determining a decoding schedule, such as those described previously, may be computationally complex and resource-intensive. 
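A minimal sketch of the baseline two-step ordering described above is given below. It assumes the base graph is represented as a list of rows of shift values in which a negative entry denotes an all-zero block and the punctured columns are the first two columns; these representation choices are assumptions of the sketch, not requirements of the described techniques.

```python
def baseline_layer_order(base_graph, punctured_cols=frozenset({0, 1})):
    """Two-step baseline: partition layers by punctured-column count, then
    sort each partition by ascending row weight and concatenate.

    base_graph: list of rows (layers); an entry >= 0 marks a non-zero submatrix block.
    punctured_cols: assumed set of punctured column indices.
    """
    type1, type2 = [], []
    for layer, row in enumerate(base_graph):
        nonzero_cols = {c for c, shift in enumerate(row) if shift >= 0}
        hits = len(nonzero_cols & punctured_cols)
        row_weight = len(nonzero_cols)
        # The text describes Type 1 (one punctured column) and Type 2 (two).
        (type2 if hits == 2 else type1).append((row_weight, layer))
    # Ascending row weight within each type, Type 1 layers first.
    return ([layer for _, layer in sorted(type1)] +
            [layer for _, layer in sorted(type2)])
```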
The UE115-amay extract a submatrix from the base graph matrix associated with the encoded signal215and may partition the layers of the submatrix into two sets of layers, including a first set of layers and a second set of layers. Upon partitioning the layers, the UE115-amay sort each set of layers into multiple layer orders, including a first set of layer orders associated with the first set of layers and a second set of layer orders associated with the second set of layers. The sorting may be based on a respective set of correlation values associated with each set of layers, as described further with respect toFIG.3. The UE115-amay combine (e.g., concatenate) the two sets of layer orders to obtain a set of combined layer orders. That is, the UE115-amay combine each layer order of the first set of layer orders with each layer order of the second set of layer orders to obtain the set of combined layer orders. The UE115-amay generate a decoding schedule for each layer order of the set of combined layer orders to obtain a set of decoding schedules. In some cases, the UE115-amay build each decoding schedule (e.g., initialize a schedule table) by stacking nodes according to the respective layer order of the set of combined layer orders (e.g., stacking nodes from a most prioritized layer to the beginning of the scan table). Upon stacking the nodes, the UE115-amay proceed to build the respective decoding schedule hardware cycle by hardware cycle and, at each hardware cycle, the UE115-amay determine a node to be scanned. If a cache limit has not been reached, the UE115-amay select nodes whose layer is prioritized in the layer order (e.g., higher in the layer order); otherwise, the UE115-amay select nodes that lie in the processing layers. In some cases, such nodes may not be present (e.g., do not exist) and the UE115-amay set a current cycle in the scan table as a null cycle. In some other cases, multiple such nodes may be present and the UE115-amay select a node whose column appears the most frequently in the remaining unscanned nodes. Additionally, the UE115-amay determine the node to be updated. Among all of the nodes that can be updated, the UE115-amay select a node whose column appears earliest in the remaining unscanned nodes, where the unscanned nodes are sorted according to their layer orders. In some cases, such nodes may not be present and the UE115-amay set the current cycle in the update table as a null cycle. Once the UE115-ahas scanned and updated all nodes in the given submatrix, the UE115-amay match the lengths of the scan and the update tables by appending null cycles to the shorter one. In some cases, the update table may start later than the scan table such that there may be an overlapping region at the boundary from one iteration to the next iteration. In some cases, the UE115-amay reorder nodes in the overlapping region (e.g., if any hardware constraint breaks). Upon building the set of decoding schedules, the UE115-amay select a decoding schedule for use in decoding the encoded signal215based on respective schedule lengths for the generated decoding schedules. For example, the UE115-amay select a decoding schedule with the shortest schedule length and may decode the encoded signal215according to the decoding schedule. In some cases, the UE115-amay divide decoding iterations into two phases. In phase 1 iterations, the UE115-amay ignore the first four layers and may use the schedule built for the rest of the layers (e.g., partial iteration).
In phase 2 iterations, the UE115-amay use the schedule built for all layers. In some cases, the UE115-amay transmit feedback220to the network entity105-aindicating successful decoding of the encoded signal215. Techniques described herein may result in the design of efficient schedules for LDPC layered decoding (e.g., under hardware constraints). Designing efficient layered-decoding schedules may result in efficient decoding operations, improved schedule performance, improved error performance (e.g., considering complexity tradeoffs), and shorter schedule lengths. Though described in the context of the UE115-a, it is understood that any wireless device may support the techniques described herein to perform correlation-based layered decoding of an encoded signal215or message. In one example, the network entity105-amay receive an encoded signal215from the UE115-aand perform correlation-based layered decoding using the techniques described herein. Additionally or alternatively, though described in the context of partitioning the layers of the submatrix into two sets of layers, it is understood that any quantity of sets may be supported. FIG.3illustrates an example of a flow chart300that supports correlation-based hardware sequences for layered decoding in accordance with one or more aspects of the present disclosure. The flow chart300may implement or be implemented by aspects of the wireless communications system100and the wireless communications system200. For example, a UE115may implement the flow chart300to decode an encoded signal. In some cases, a UE115may extract a submatrix from a base graph matrix associated with an encoded signal and, at305, may partition the layers of the submatrix into two sets of layers, including a first set of layers, at310-a, and a second set of layers, at310-b. In some cases, the UE115may partition the layers according to the number of punctured columns each layer connects to. For example, the first set of layers may contain layers that connect to one punctured column (e.g., Type 1) and the second set of layers may contain layers that connect to two punctured columns (e.g., Type 2). At320, the UE115may generate (e.g., build) a correlation table for each set of layers. For example, at320-a, the UE115may generate a first correlation table for the first set of layers and, at320-b, the UE115may generate a second correlation table for the second set of layers. The correlation tables may include a respective set of correlation values between each possible combination of layers in the respective set of layers. That is, the first correlation table may include a first set of correlation values associated with the first set of layers and the second correlation table may include a second set of correlation values associated with the second set of layers. Correlation between a pair of layers (e.g., combination of layers) may be associated with the number of overlapping (non-zero) columns between the pair of layers in the base graph matrix. For example, in a base graph matrix (e.g., BG1), a layer 5 may be non-zero at columns 1, 2, and 27 and a layer 8 may be non-zero at columns 1, 2, 5, 8, 9, 15, and 30. The correlation value between layer 5 and layer 8 may be two due to overlapping columns 1 and 2 (e.g., non-zero columns). A sum correlation of a layer order may be the sum of correlations between consecutive layers in the layer order. For example, the sum correlation of a layer order [A, B, C, D] may be the sum of correlations between AB, BC, and CD.
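The per-pair correlation and the sum correlation of a layer order described above may be expressed, for illustration only, by the following Python sketch; the dictionary mapping each layer to its non-zero column indices and the function names are illustrative assumptions of the sketch rather than details of any particular implementation.

def correlation(cols_a, cols_b):
    # Correlation between a pair of layers: the number of overlapping
    # non-zero columns between the two layers in the base graph matrix.
    return len(set(cols_a) & set(cols_b))

def sum_correlation(order, nonzero_cols):
    # Sum correlation of a layer order: the sum of correlations between
    # consecutive layers, e.g., for [A, B, C, D]: AB + BC + CD.
    return sum(correlation(nonzero_cols[order[k]], nonzero_cols[order[k + 1]])
               for k in range(len(order) - 1))

# Example from the description: layer 5 non-zero at columns 1, 2, and 27; layer 8
# non-zero at columns 1, 2, 5, 8, 9, 15, and 30; the correlation is 2.
nonzero_cols = {5: [1, 2, 27], 8: [1, 2, 5, 8, 9, 15, 30]}
assert correlation(nonzero_cols[5], nonzero_cols[8]) == 2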
A correlation table for a set of layers may store the correlation between any two layers (e.g., any pair or combination of layers) in the set. For example, an entry in the i-th row and the j-th column may represent the correlation between the i-th layer and the j-th layer in the base graph matrix. In some cases, at325, the UE115may select a starting layer from a set of layers for a layer order and may perform a search, such as a greedy search, to determine the remainder of the layer order. In some cases, the complexity of searching a layer order that leads to a minimum sum correlation may increase (e.g., factorially) with the cardinality of the layer order (e.g., quantity of layers in the layer order) and, as such, the UE115may search (e.g., greedy search) the correlation tables to reduce the complexity. That is, given a current layer, the UE115may select, as the next layer, the remaining layer that has a minimum correlation with the current layer. For example, at325-a, the UE115may select a first layer from the first set of layers to be the starting layer of a first layer order. At330-a, the UE115may search a first correlation table associated with the first set of layers to determine a second layer from a set of remaining layers from the first set of layers that has a minimum correlation with the first layer. Upon selecting the second layer, the UE115may search the first correlation table to determine a third layer from the set of remaining layers that has a minimum correlation with the second layer. The UE115may continue this process until all layers of the first set of layers have been sorted (e.g., ordered), resulting in the first layer order. Additionally, the UE115may repeat this process using each layer of the first set of layers as a starting layer at least once, resulting in a first set of layer orders (e.g., producing a quantity of layer orders equal to the Type 1 cardinality). Further, the UE115, at325-band330-b, may repeat the process with the second set of layers to determine the second set of layer orders (e.g., producing a quantity of layer orders equal to the Type 2 cardinality). Upon determining the first set of layer orders and the second set of layer orders, the UE115, at335, may concatenate (e.g., combine) the first set of layer orders with the second set of layer orders. That is, the UE115may combine each layer order of the first set of layer orders with each layer order of the second set of layer orders to determine a set of combined layer orders (e.g., producing a quantity of combined layer orders equal to the Type 1 cardinality times the Type 2 cardinality). In some cases, at340, the UE115may generate (e.g., build) a decoding schedule table (e.g., using a baseline algorithm) for each layer order (e.g., candidate layer order) of the set of combined layer orders and, at345, may select a decoding schedule that has the shortest schedule length (e.g., phase 2 schedule length). In some cases, sum correlation may be positively correlated with schedule length. That is, a smaller sum correlation may lead to a decreased quantity of hardware cycles performed by the UE115while decoding the encoded signal. In such cases, the UE115may determine whether a combined layer order will lead to a short schedule length based on the sum correlation (e.g., rather than building a decoding schedule). The selected decoding schedule may be the correlation-based decoding schedule (e.g., hardware sequence) for the given submatrix. The UE115may decode an encoded signal associated with the given submatrix using the selected decoding schedule.
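The greedy layer ordering, the combination of the Type 1 and Type 2 layer orders, and the selection of a candidate expected to yield a short schedule, as described with reference toFIG.3, may be illustrated by the following Python sketch. The sketch reuses the correlation and sum_correlation functions from the previous example and uses sum correlation as a proxy for schedule length (as noted above) rather than building the full scan and update tables; the function names and data layout are illustrative assumptions, not a definitive implementation.

from itertools import product

def build_table(layer_set, nonzero_cols):
    # Correlation table: entry (i, j) holds the correlation between layers i and j.
    return {(i, j): correlation(nonzero_cols[i], nonzero_cols[j])
            for i in layer_set for j in layer_set if i != j}

def greedy_order(start, layer_set, table):
    # Greedy search: from the starting layer, repeatedly pick the remaining layer
    # with minimum correlation to the current (most recently chosen) layer.
    order, remaining = [start], set(layer_set) - {start}
    while remaining:
        nxt = min(remaining, key=lambda layer: table[(order[-1], layer)])
        order.append(nxt)
        remaining.remove(nxt)
    return order

def candidate_orders(layer_set, nonzero_cols):
    # One layer order per possible starting layer (the cardinality of the set).
    table = build_table(layer_set, nonzero_cols)
    return [greedy_order(start, layer_set, table) for start in layer_set]

def select_combined_order(type1_layers, type2_layers, nonzero_cols):
    # Combine every Type 1 order with every Type 2 order, then keep the candidate
    # expected to lead to the shortest schedule (sum correlation used as a proxy).
    combined = [o1 + o2
                for o1, o2 in product(candidate_orders(type1_layers, nonzero_cols),
                                      candidate_orders(type2_layers, nonzero_cols))]
    return min(combined, key=lambda order: sum_correlation(order, nonzero_cols))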
The techniques described herein may support prioritizing layers with higher reliability to support increased error performance and may leverage correlations to search (e.g., greedily search) a layer order that may lead to a short schedule length. Such techniques may result in increased performance compared to traditional techniques. FIG.4illustrates an example of a process flow400that supports correlation-based hardware sequences for layered decoding in accordance with one or more aspects of the present disclosure. The process flow400may implement or be implemented by aspects of the wireless communications system100, the wireless communications system200, and the flow chart300. For example, the process flow400may include a network entity105-band a UE115-b. The UE115-bmay represent an example of a UE115as described herein, including with reference toFIG.1. The network entity105-bmay represent an example of a network entity105as described herein, including with reference toFIG.1. For example, the network entity105-bmay transmit an encoded signal to the UE115-band the UE115-bmay decode the encoded signal using a correlation-based hardware sequence. In some cases, at405, the UE115-bmay receive an encoded signal associated with a parity check decoding procedure. The encoded signal may include LDPC codes which may be characterized by two parity check base graph matrices. The UE115-bmay identify a parity check base graph matrix (e.g., a full parity check base graph matrix) associated with the encoded signal and may extract a submatrix from the parity check base graph matrix (e.g., given a code rate). In some cases, the extracted submatrix may be for rate match. At410, the UE115-bmay partition the layers of the extracted submatrix associated with the parity check decoding procedure into a first set of layers and a second set of layers. In some cases, the UE115-bmay partition the layers of the submatrix based on a punctured column for each layer. For example, the first set of layers may be associated with one punctured column and the second set of layers may be associated with two punctured columns. In some cases, at415, the UE115-bmay determine correlation values and generate correlation tables for each set of layers (e.g., the first set of layers and the second set of layers). For example, the UE115-bmay determine a first set of correlation values associated with the first set of layers based on one or more combinations of the layers of the first set of layers. That is, the UE115-bmay determine a correlation value for each possible combination of two layers of the first set of layers. The UE115-bmay determine the first set of correlation values by identifying a quantity of overlapping non-zero columns in each combination of layers (e.g., each combination of two layers) of the first set of layers. In some cases, the UE115-bmay generate a first correlation table including the first set of correlation values. Additionally, the UE115-bmay perform this process with the second set of layers. For example, the UE115-bmay determine a second set of correlation values associated with the second set of layers based on one or more combinations of the layers of the second set of layers. That is, the UE115-bmay determine a correlation value for each possible combination of two layers of the second set of layers.
The UE115-bmay determine the second set of correlation values by identifying a quantity of overlapping non-zero columns in each combination of layers (e.g., each combination of two layers) of the second set of layers. In some cases, the UE115-bmay generate a second correlation table including the second set of correlation values. At420, the UE115-bmay sort each set of layers into a respective set of layer orders. For example, the UE115-bmay sort the first set of layers into a first set of layer orders based on the first set of correlation values (e.g., in the first correlation table) associated with the first set of layers. Additionally, the UE115-bmay sort the second set of layers into a second set of layer orders based on the second set of correlation values (e.g., in the second correlation table) associated with the second set of layers. In some cases, the UE115-bmay determine a layer order in a given set of layer orders (e.g., in the first set of layer orders or the second set of layer orders) based on a searching procedure (e.g., greedy search procedure). For example, the UE115-bmay select a starting layer from the first set of layers for a first layer order associated with the first set of layers and may sort a set of remaining layers of the first set of layers for the first layer order based on the first set of correlation values. In some cases, the UE115-bmay sort the set of remaining layers by performing the searching procedure. That is, the UE115-bmay select the starting layer from the first set of layers and may search (e.g., greedy search) the first correlation table to identify a minimum correlation value associated with the starting layer. For example, the UE115-bmay identify a second layer from the first set of layers that has a minimum correlation value to the starting layer (e.g., has the smallest correlation value out of a set of correlation values associated with the starting layer). The UE115-bmay then search the first correlation table to identify a third layer with a minimum correlation value associated with the second layer (e.g., out of a remaining set of layers which may include the first set of layers except the starting layer and the second layer). The UE115-bmay repeat this process until all layers of the first set of layers have been ordered and the first layer order associated with the first set of layers has been determined (e.g., generated). Further, the UE115-bmay repeat this process with each layer of the first set of layers acting as a starting layer (e.g., being a starting layer once). In such cases, the first set of layer orders may include a quantity of layer orders equal to the quantity of layers in the first set of layers (e.g., cardinality of the first set of layers). Additionally, the UE115-bmay perform this process with the second set of layers. For example, the UE115-bmay select a starting layer from the second set of layers for a first layer order associated with the second set of layers and may sort a set of remaining layers of the second set of layers for the first layer order based on the second set of correlation values. In some cases, the UE115-bmay sort the set of remaining layers by performing the searching procedure. That is, the UE115-bmay select the starting layer from the second set of layers and may search (e.g., greedy search) the second correlation table to identify a second layer from the second set of layers with a minimum correlation value associated with the starting layer. 
The UE115-bmay repeat this process until all layers of the second set of layers have been ordered and the first layer order associated with the second set of layers has been determined (e.g., generated). Further, the UE115-bmay repeat this process with each layer of the second set of layers acting as a starting layer (e.g., being a starting layer once). In such cases, the second set of layer orders may include a quantity of layer orders equal to the quantity of layers in the second set of layers (e.g., cardinality of the second set of layers). At425, the UE115-bmay combine (e.g., concatenate) the first set of layer orders and the second set of layer orders to obtain a set of combined layer orders. That is, the UE115-bmay concatenate each layer order of the first set of layer orders with each layer order of the second set of layer orders to obtain the set of combined layer orders. In such cases, the set of combined layer orders may contain a quantity of layer orders equal to the quantity of layer orders in the first set of layer orders times the quantity of layer orders in the second set of layer orders. In some cases, at430, the UE115-bmay generate a respective decoding schedule for each layer order of the set of combined layer orders to obtain a set of decoding schedules. The set of decoding schedules may be used to decode each of the set of combined layer orders. At435, the UE115-bmay select a decoding schedule from the set of decoding schedules based on respective schedule lengths for the set of decoding schedules. For example, the UE115-bmay determine a respective schedule length for each decoding schedule of the set of decoding schedules and select the decoding schedule having a length below a threshold. In some cases, the selected decoding schedule may be a decoding schedule with the shortest schedule length (e.g., a duration associated with decoding an encoded signal according to the selected decoding schedule is the shortest). In some cases, at440, the UE115-bmay decode the encoded signal based on (e.g., according to) the selected decoding schedule and, at445, may transmit feedback to the network entity105-bindicating successful decoding of the encoded signal. FIG.5shows a block diagram500of a device505that supports correlation-based hardware sequences for layered decoding in accordance with one or more aspects of the present disclosure. The device505may be an example of aspects of a UE115as described herein. The device505may include a receiver510, a transmitter515, and a communications manager520. The device505may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver510may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to correlation-based hardware sequences for layered decoding). Information may be passed on to other components of the device505. The receiver510may utilize a single antenna or a set of multiple antennas. The transmitter515may provide a means for transmitting signals generated by other components of the device505.
For example, the transmitter515may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to correlation-based hardware sequences for layered decoding). In some examples, the transmitter515may be co-located with a receiver510in a transceiver module. The transmitter515may utilize a single antenna or a set of multiple antennas. The communications manager520, the receiver510, the transmitter515, or various combinations thereof or various components thereof may be examples of means for performing various aspects of correlation-based hardware sequences for layered decoding as described herein. For example, the communications manager520, the receiver510, the transmitter515, or various combinations or components thereof may support a method for performing one or more of the functions described herein. In some examples, the communications manager520, the receiver510, the transmitter515, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry). The hardware may include a processor, a digital signal processor (DSP), a central processing unit (CPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a microcontroller, discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure. In some examples, a processor and memory coupled with the processor may be configured to perform one or more of the functions described herein (e.g., by executing, by the processor, instructions stored in the memory). Additionally, or alternatively, in some examples, the communications manager520, the receiver510, the transmitter515, or various combinations or components thereof may be implemented in code (e.g., as communications management software or firmware) executed by a processor. If implemented in code executed by a processor, the functions of the communications manager520, the receiver510, the transmitter515, or various combinations or components thereof may be performed by a general-purpose processor, a DSP, a CPU, an ASIC, an FPGA, a microcontroller, or any combination of these or other programmable logic devices (e.g., configured as or otherwise supporting a means for performing the functions described in the present disclosure). In some examples, the communications manager520may be configured to perform various operations (e.g., receiving, obtaining, monitoring, outputting, transmitting) using or otherwise in cooperation with the receiver510, the transmitter515, or both. For example, the communications manager520may receive information from the receiver510, send information to the transmitter515, or be integrated in combination with the receiver510, the transmitter515, or both to obtain information, output information, or perform various other operations as described herein. The communications manager520may support wireless communications at a UE in accordance with examples as disclosed herein. For example, the communications manager520may be configured as or otherwise support a means for partitioning a set of multiple layers of a submatrix associated with a parity check decoding procedure into a first set of layers and a second set of layers. 
The communications manager520may be configured as or otherwise support a means for sorting the first set of layers into a first set of layer orders based on a first set of correlation values associated with the first set of layers. The communications manager520may be configured as or otherwise support a means for sorting the second set of layers into a second set of layer orders based on a second set of correlation values associated with the second set of layers. The communications manager520may be configured as or otherwise support a means for combining the first set of layer orders and the second set of layer orders to obtain a set of combined layer orders. The communications manager520may be configured as or otherwise support a means for selecting a decoding schedule from a set of decoding schedules for decoding each of the set of combined layer orders based on respective schedule lengths for the set of decoding schedules. By including or configuring the communications manager520in accordance with examples as described herein, the device505(e.g., a processor controlling or otherwise coupled with the receiver510, the transmitter515, the communications manager520, or a combination thereof) may support techniques for correlation-based hardware sequences for layered decoding which may result in reduced processing, reduced power consumption, and more efficient utilization of communication resources, among other advantages. FIG.6shows a block diagram600of a device605that supports correlation-based hardware sequences for layered decoding in accordance with one or more aspects of the present disclosure. The device605may be an example of aspects of a device505or a UE115as described herein. The device605may include a receiver610, a transmitter615, and a communications manager620. The device605may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver610may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to correlation-based hardware sequences for layered decoding). Information may be passed on to other components of the device605. The receiver610may utilize a single antenna or a set of multiple antennas. The transmitter615may provide a means for transmitting signals generated by other components of the device605. For example, the transmitter615may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to correlation-based hardware sequences for layered decoding). In some examples, the transmitter615may be co-located with a receiver610in a transceiver module. The transmitter615may utilize a single antenna or a set of multiple antennas. The device605, or various components thereof, may be an example of means for performing various aspects of correlation-based hardware sequences for layered decoding as described herein. For example, the communications manager620may include a partitioning component625, a sorting component630, a combining component635, a decoding schedule component640, or any combination thereof. The communications manager620may be an example of aspects of a communications manager520as described herein. 
In some examples, the communications manager620, or various components thereof, may be configured to perform various operations (e.g., receiving, obtaining, monitoring, outputting, transmitting) using or otherwise in cooperation with the receiver610, the transmitter615, or both. For example, the communications manager620may receive information from the receiver610, send information to the transmitter615, or be integrated in combination with the receiver610, the transmitter615, or both to obtain information, output information, or perform various other operations as described herein. The communications manager620may support wireless communications at a UE in accordance with examples as disclosed herein. The partitioning component625may be configured as or otherwise support a means for partitioning a set of multiple layers of a submatrix associated with a parity check decoding procedure into a first set of layers and a second set of layers. The sorting component630may be configured as or otherwise support a means for sorting the first set of layers into a first set of layer orders based on a first set of correlation values associated with the first set of layers. The sorting component630may be configured as or otherwise support a means for sorting the second set of layers into a second set of layer orders based on a second set of correlation values associated with the second set of layers. The combining component635may be configured as or otherwise support a means for combining the first set of layer orders and the second set of layer orders to obtain a set of combined layer orders. The decoding schedule component640may be configured as or otherwise support a means for selecting a decoding schedule from a set of decoding schedules for decoding each of the set of combined layer orders based on respective schedule lengths for the set of decoding schedules. FIG.7shows a block diagram700of a communications manager720that supports correlation-based hardware sequences for layered decoding in accordance with one or more aspects of the present disclosure. The communications manager720may be an example of aspects of a communications manager520, a communications manager620, or both, as described herein. The communications manager720, or various components thereof, may be an example of means for performing various aspects of correlation-based hardware sequences for layered decoding as described herein. For example, the communications manager720may include a partitioning component725, a sorting component730, a combining component735, a decoding schedule component740, a correlation component745, an encoded signal component750, a decoding component755, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The communications manager720may support wireless communications at a UE in accordance with examples as disclosed herein. The partitioning component725may be configured as or otherwise support a means for partitioning a set of multiple layers of a submatrix associated with a parity check decoding procedure into a first set of layers and a second set of layers. The sorting component730may be configured as or otherwise support a means for sorting the first set of layers into a first set of layer orders based on a first set of correlation values associated with the first set of layers. 
In some examples, the sorting component730may be configured as or otherwise support a means for sorting the second set of layers into a second set of layer orders based on a second set of correlation values associated with the second set of layers. The combining component735may be configured as or otherwise support a means for combining the first set of layer orders and the second set of layer orders to obtain a set of combined layer orders. The decoding schedule component740may be configured as or otherwise support a means for selecting a decoding schedule from a set of decoding schedules for decoding each of the set of combined layer orders based on respective schedule lengths for the set of decoding schedules. In some examples, the correlation component745may be configured as or otherwise support a means for determining the first set of correlation values based on one or more combinations of layers of the first set of layers. In some examples, the correlation component745may be configured as or otherwise support a means for generating a first correlation table including the first set of correlation values. In some examples, the sorting component730may be configured as or otherwise support a means for selecting a starting layer from the first set of layers for a first layer order. In some examples, the sorting component730may be configured as or otherwise support a means for sorting a set of remaining layers of the first set of layers for the first layer order based on the first set of correlation values, where the first set of layer orders includes the first layer order. In some examples, the correlation component745may be configured as or otherwise support a means for identifying a quantity of overlapping non-zero columns in each combination of the one or more combinations of layers of the first set of layers, where the first set of correlation values is based on the quantity of overlapping non-zero columns. In some examples, the correlation component745may be configured as or otherwise support a means for determining the second set of correlation values based on one or more combinations of layers of the second set of layers. In some examples, the correlation component745may be configured as or otherwise support a means for generating a second correlation table including the second set of correlation values. In some examples, the sorting component730may be configured as or otherwise support a means for selecting a starting layer from the second set of layers for a first layer order. In some examples, the sorting component730may be configured as or otherwise support a means for sorting a set of remaining layers of the second set of layers for the first layer order based on the second set of correlation values, where the second set of layer orders includes the first layer order. In some examples, the correlation component745may be configured as or otherwise support a means for identifying a quantity of overlapping non-zero columns in each combination of the one or more combinations of layers of the first set of layers, where the first set of correlation values is based on the quantity of overlapping non-zero columns. In some examples, to support combining the first set of layer orders and the second set of layer orders, the combining component735may be configured as or otherwise support a means for concatenating each layer order of the first set of layer orders with each layer order of the second set of layer orders to obtain the set of combined layer orders. 
In some examples, partitioning the set of multiple layers of the submatrix is based on a punctured column for each layer of the set of multiple layers. In some examples, the first set of layers are associated with one punctured column and the second set of layers are associated with two punctured columns. In some examples, the decoding schedule component740may be configured as or otherwise support a means for generating a respective decoding schedule for each layer order of the set of combined layer orders to obtain the set of decoding schedules. In some examples, the decoding schedule component740may be configured as or otherwise support a means for determining a respective schedule length for each decoding schedule of the set of decoding schedules. In some examples, the decoding schedule component740may be configured as or otherwise support a means for selecting the decoding schedule having a length below a threshold. In some examples, the encoded signal component750may be configured as or otherwise support a means for receiving an encoded signal associated with the parity check decoding procedure. In some examples, the encoded signal component750may be configured as or otherwise support a means for identifying a base graph matrix associated with the encoded signal. In some examples, the encoded signal component750may be configured as or otherwise support a means for extracting the submatrix from the base graph matrix associated with the encoded signal. In some examples, the decoding component755may be configured as or otherwise support a means for decoding the encoded signal based on the selected decoding schedule. FIG.8shows a diagram of a system800including a device805that supports correlation-based hardware sequences for layered decoding in accordance with one or more aspects of the present disclosure. The device805may be an example of or include the components of a device505, a device605, or a UE115as described herein. The device805may communicate (e.g., wirelessly) with one or more network entities105, one or more UEs115, or any combination thereof. The device805may include components for bi-directional voice and data communications including components for transmitting and receiving communications, such as a communications manager820, an input/output (I/O) controller810, a transceiver815, an antenna825, a memory830, code835, and a processor840. These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more buses (e.g., a bus845). The I/O controller810may manage input and output signals for the device805. The I/O controller810may also manage peripherals not integrated into the device805. In some cases, the I/O controller810may represent a physical connection or port to an external peripheral. In some cases, the I/O controller810may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system. Additionally or alternatively, the I/O controller810may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some cases, the I/O controller810may be implemented as part of a processor, such as the processor840. In some cases, a user may interact with the device805via the I/O controller810or via hardware components controlled by the I/O controller810. In some cases, the device805may include a single antenna825.
However, in some other cases, the device805may have more than one antenna825, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The transceiver815may communicate bi-directionally, via the one or more antennas825, wired, or wireless links as described herein. For example, the transceiver815may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver815may also include a modem to modulate the packets, to provide the modulated packets to one or more antennas825for transmission, and to demodulate packets received from the one or more antennas825. The transceiver815, or the transceiver815and one or more antennas825, may be an example of a transmitter515, a transmitter615, a receiver510, a receiver610, or any combination thereof or component thereof, as described herein. The memory830may include random access memory (RAM) and read-only memory (ROM). The memory830may store computer-readable, computer-executable code835including instructions that, when executed by the processor840, cause the device805to perform various functions described herein. The code835may be stored in a non-transitory computer-readable medium such as system memory or another type of memory. In some cases, the code835may not be directly executable by the processor840but may cause a computer (e.g., when compiled and executed) to perform functions described herein. In some cases, the memory830may contain, among other things, a basic I/O system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor840may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor840may be configured to operate a memory array using a memory controller. In some other cases, a memory controller may be integrated into the processor840. The processor840may be configured to execute computer-readable instructions stored in a memory (e.g., the memory830) to cause the device805to perform various functions (e.g., functions or tasks supporting correlation-based hardware sequences for layered decoding). For example, the device805or a component of the device805may include a processor840and memory830coupled with or to the processor840, the processor840and memory830configured to perform various functions described herein. The communications manager820may support wireless communications at a UE in accordance with examples as disclosed herein. For example, the communications manager820may be configured as or otherwise support a means for partitioning a set of multiple layers of a submatrix associated with a parity check decoding procedure into a first set of layers and a second set of layers. The communications manager820may be configured as or otherwise support a means for sorting the first set of layers into a first set of layer orders based on a first set of correlation values associated with the first set of layers. The communications manager820may be configured as or otherwise support a means for sorting the second set of layers into a second set of layer orders based on a second set of correlation values associated with the second set of layers. 
The communications manager820may be configured as or otherwise support a means for combining the first set of layer orders and the second set of layer orders to obtain a set of combined layer orders. The communications manager820may be configured as or otherwise support a means for selecting a decoding schedule from a set of decoding schedules for decoding each of the set of combined layer orders based on respective schedule lengths for the set of decoding schedules. By including or configuring the communications manager820in accordance with examples as described herein, the device805may support techniques for correlation-based hardware sequences for layered decoding which may result in improved communication reliability, reduced latency, improved user experience related to reduced processing, reduced power consumption, more efficient utilization of communication resources, improved coordination between devices, longer battery life, and improved utilization of processing capability, among other advantages. In some examples, the communications manager820may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the transceiver815, the one or more antennas825, or any combination thereof. Although the communications manager820is illustrated as a separate component, in some examples, one or more functions described with reference to the communications manager820may be supported by or performed by the processor840, the memory830, the code835, or any combination thereof. For example, the code835may include instructions executable by the processor840to cause the device805to perform various aspects of correlation-based hardware sequences for layered decoding as described herein, or the processor840and the memory830may be otherwise configured to perform or support such operations. FIG.9shows a flowchart illustrating a method900that supports correlation-based hardware sequences for layered decoding in accordance with one or more aspects of the present disclosure. The operations of the method900may be implemented by a UE or its components as described herein. For example, the operations of the method900may be performed by a UE115as described with reference toFIGS.1through8. In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the described functions. Additionally, or alternatively, the UE may perform aspects of the described functions using special-purpose hardware. At905, the method may include partitioning a set of multiple layers of a submatrix associated with a parity check decoding procedure into a first set of layers and a second set of layers. The operations of905may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of905may be performed by a partitioning component725as described with reference toFIG.7. At910, the method may include sorting the first set of layers into a first set of layer orders based on a first set of correlation values associated with the first set of layers. The operations of910may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of910may be performed by a sorting component730as described with reference toFIG.7. At915, the method may include sorting the second set of layers into a second set of layer orders based on a second set of correlation values associated with the second set of layers. 
The operations of915may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of915may be performed by a sorting component730as described with reference toFIG.7. At920, the method may include combining the first set of layer orders and the second set of layer orders to obtain a set of combined layer orders. The operations of920may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of920may be performed by a combining component735as described with reference toFIG.7. At925, the method may include selecting a decoding schedule from a set of decoding schedules for decoding each of the set of combined layer orders based on respective schedule lengths for the set of decoding schedules. The operations of925may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of925may be performed by a decoding schedule component740as described with reference toFIG.7. FIG.10shows a flowchart illustrating a method1000that supports correlation-based hardware sequences for layered decoding in accordance with one or more aspects of the present disclosure. The operations of the method1000may be implemented by a UE or its components as described herein. For example, the operations of the method1000may be performed by a UE115as described with reference toFIGS.1through8. In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the described functions. Additionally, or alternatively, the UE may perform aspects of the described functions using special-purpose hardware. At1005, the method may include partitioning a set of multiple layers of a submatrix associated with a parity check decoding procedure into a first set of layers and a second set of layers. The operations of1005may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1005may be performed by a partitioning component725as described with reference toFIG.7. At1010, the method may include determining the first set of correlation values based on one or more combinations of layers of the first set of layers. The operations of1010may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1010may be performed by a correlation component745as described with reference toFIG.7. At1015, the method may include generating a first correlation table including the first set of correlation values. The operations of1015may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1015may be performed by a correlation component745as described with reference toFIG.7. At1020, the method may include sorting the first set of layers into a first set of layer orders based on a first set of correlation values associated with the first set of layers. The operations of1020may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1020may be performed by a sorting component730as described with reference toFIG.7. At1025, the method may include sorting the second set of layers into a second set of layer orders based on a second set of correlation values associated with the second set of layers. The operations of1025may be performed in accordance with examples as disclosed herein. 
In some examples, aspects of the operations of1025may be performed by a sorting component730as described with reference toFIG.7. At1030, the method may include combining the first set of layer orders and the second set of layer orders to obtain a set of combined layer orders. The operations of1030may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1030may be performed by a combining component735as described with reference toFIG.7. At1035, the method may include selecting a decoding schedule from a set of decoding schedules for decoding each of the set of combined layer orders based on respective schedule lengths for the set of decoding schedules. The operations of1035may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1035may be performed by a decoding schedule component740as described with reference toFIG.7. FIG.11shows a flowchart illustrating a method1100that supports correlation-based hardware sequences for layered decoding in accordance with one or more aspects of the present disclosure. The operations of the method1100may be implemented by a UE or its components as described herein. For example, the operations of the method1100may be performed by a UE115as described with reference toFIGS.1through8. In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the described functions. Additionally, or alternatively, the UE may perform aspects of the described functions using special-purpose hardware. At1105, the method may include partitioning a set of multiple layers of a submatrix associated with a parity check decoding procedure into a first set of layers and a second set of layers. The operations of1105may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1105may be performed by a partitioning component725as described with reference toFIG.7. At1110, the method may include determining the second set of correlation values based on one or more combinations of layers of the second set of layers. The operations of1110may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1110may be performed by a correlation component745as described with reference toFIG.7. At1115, the method may include generating a second correlation table including the second set of correlation values. The operations of1115may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1115may be performed by a correlation component745as described with reference toFIG.7. At1120, the method may include sorting the first set of layers into a first set of layer orders based on a first set of correlation values associated with the first set of layers. The operations of1120may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1120may be performed by a sorting component730as described with reference toFIG.7. At1125, the method may include sorting the second set of layers into a second set of layer orders based on a second set of correlation values associated with the second set of layers. The operations of1125may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1125may be performed by a sorting component730as described with reference toFIG.7. 
At1130, the method may include combining the first set of layer orders and the second set of layer orders to obtain a set of combined layer orders. The operations of1130may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1130may be performed by a combining component735as described with reference toFIG.7. At1135, the method may include selecting a decoding schedule from a set of decoding schedules for decoding each of the set of combined layer orders based on respective schedule lengths for the set of decoding schedules. The operations of1135may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1135may be performed by a decoding schedule component740as described with reference toFIG.7. The following provides an overview of aspects of the present disclosure: Aspect 1: A method for wireless communications at a UE, comprising: partitioning a plurality of layers of a submatrix associated with a parity check decoding procedure into a first set of layers and a second set of layers; sorting the first set of layers into a first set of layer orders based at least in part on a first set of correlation values associated with the first set of layers; sorting the second set of layers into a second set of layer orders based at least in part on a second set of correlation values associated with the second set of layers; combining the first set of layer orders and the second set of layer orders to obtain a set of combined layer orders; and selecting a decoding schedule from a set of decoding schedules for decoding each of the set of combined layer orders based at least in part on respective schedule lengths for the set of decoding schedules. Aspect 2: The method of aspect 1, further comprising: determining the first set of correlation values based at least in part on one or more combinations of layers of the first set of layers; and generating a first correlation table comprising the first set of correlation values. Aspect 3: The method of aspect 2, further comprising: selecting a starting layer from the first set of layers for a first layer order; and sorting a set of remaining layers of the first set of layers for the first layer order based at least in part on the first set of correlation values, wherein the first set of layer orders comprises the first layer order. Aspect 4: The method of any of aspects 2 through 3, further comprising: identifying a quantity of overlapping non-zero columns in each combination of the one or more combinations of layers of the first set of layers, wherein the first set of correlation values is based at least in part on the quantity of overlapping non-zero columns. Aspect 5: The method of any of aspects 1 through 4, further comprising: determining the second set of correlation values based at least in part on one or more combinations of layers of the second set of layers; and generating a second correlation table comprising the second set of correlation values. Aspect 6: The method of aspect 5, further comprising: selecting a starting layer from the second set of layers for a first layer order; and sorting a set of remaining layers of the second set of layers for the first layer order based at least in part on the second set of correlation values, wherein the second set of layer orders comprises the first layer order. 
Aspect 7: The method of any of aspects 5 through 6, further comprising: identifying a quantity of overlapping non-zero columns in each combination of the one or more combinations of layers of the first set of layers, wherein the first set of correlation values is based at least in part on the quantity of overlapping non-zero columns. Aspect 8: The method of any of aspects 1 through 7, wherein combining the first set of layer orders and the second set of layer orders comprises: concatenating each layer order of the first set of layer orders with each layer order of the second set of layer orders to obtain the set of combined layer orders. Aspect 9: The method of any of aspects 1 through 8, wherein partitioning the plurality of layers of the submatrix is based at least in part on a punctured column for each layer of the plurality of layers. Aspect 10: The method of aspect 9, wherein the first set of layers are associated with one punctured column and the second set of layers are associated with two punctured columns. Aspect 11: The method of any of aspects 1 through 10, further comprising: generating a respective decoding schedule for each layer order of the set of combined layer orders to obtain the set of decoding schedules. Aspect 12: The method of any of aspects 1 through 11, further comprising: determining a respective schedule length for each decoding schedule of the set of decoding schedules; and selecting the decoding schedule having a length below a threshold. Aspect 13: The method of any of aspects 1 through 12, further comprising: receiving an encoded signal associated with the parity check decoding procedure; identifying a base graph matrix associated with the encoded signal; extracting the submatrix from the base graph matrix associated with the encoded signal; and decoding the encoded signal based at least in part on the selected decoding schedule. Aspect 14: An apparatus for wireless communications at a UE, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 1 through 13. Aspect 15: An apparatus for wireless communications at a UE, comprising at least one means for performing a method of any of aspects 1 through 13. Aspect 16: A non-transitory computer-readable medium storing code for wireless communications at a UE, the code comprising instructions executable by a processor to perform a method of any of aspects 1 through 13. It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, aspects from two or more of the methods may be combined. Although aspects of an LTE, LTE-A, LTE-A Pro, or NR system may be described for purposes of example, and LTE, LTE-A, LTE-A Pro, or NR terminology may be used in much of the description, the techniques described herein are applicable beyond LTE, LTE-A, LTE-A Pro, or NR networks. For example, the described techniques may be applicable to various other wireless communications systems such as Ultra Mobile Broadband (UMB), Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, as well as other systems and radio technologies not explicitly mentioned herein. Information and signals described herein may be represented using any of a variety of different technologies and techniques.
For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and components described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, a CPU, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein may be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media may include RAM, ROM, electrically erasable programmable ROM (EEPROM), flash memory, compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that may be used to carry or store desired program code means in the form of instructions or data structures and that may be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of computer-readable medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. 
As used herein, including in the claims, “or” as used in a list of items (e.g., a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an example step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” The term “determine” or “determining” encompasses a variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (such as via looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (such as receiving information), accessing (such as accessing data in a memory) and the like. Also, “determining” can include resolving, obtaining, selecting, choosing, establishing and other such similar actions. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label, or other subsequent reference label. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “example” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples. The description herein is provided to enable a person having ordinary skill in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to a person having ordinary skill in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein. | 107,328 |
11863202 | DETAILED DESCRIPTION Polar codes are a new approach to maximizing the rate and reliability of data transmissions. In an example, polar codes have been adopted to improve coding performance for control channels in 5G. At the same time, they reduce the complexity of design and ensure service quality. Polar codes are a type of linear block error correcting code, whose code construction is based on a multiple recursive concatenation of a short kernel code which transforms the physical channel into virtual outer channels. When the number of recursive concatenations becomes large, the virtual channels tend to either have very high reliability or very low reliability (in other words, they polarize), and the data bits are allocated to the most reliable channels. Prior to the technology described herein, polar codes were never considered to be capable of efficiently correcting synchronization errors (such as deletion and/or insertion errors). Section headings are used in the present document to improve readability of the description and do not in any way limit the discussion or embodiments (and/or implementations) to the respective sections only. Furthermore, the present document uses examples from the 3GPP New Radio (NR) network architecture and 5G protocol only to facilitate understanding and the disclosed techniques and embodiments may be practiced in other communication systems that use different communication protocols. Symbol deletion. In the present document, n denotes the code-length and d denotes the number of deletions. In some embodiments, the number of deletions d could be a constant, a function of the code-length, or a random number generated from a binomial distribution. In some embodiments, locations of the deleted symbols could be selected uniformly at random. Alternatively, they could be selected according to another probability distribution 𝒫(n, d) over all C(n, d) (i.e., n choose d) possible scenarios. The transmitted symbols are denoted by X_1^n ∈ {0,1}^n, while Y_1^n ∈ 𝒴^n correspond to the received symbols prior to the deletion effect. The final n−d symbols are denoted as Ỹ_1^{n−d} ∈ 𝒴^{n−d}, wherein the d randomly chosen symbols go through a deletion transformation (e.g., from the symbol x_i to the empty word/symbol). Channel models. Embodiments and examples in the present document are described in the context of the noiseless deletion channel, but are equally applicable to general noisy channels with deletions. The noisy d-deletion channel can be considered as the cascade of a binary-input discrete memoryless channel (B-DMC) and a d-deletion channel. There is no commonly known channel model for the noisy d-deletion channel. However, since the synchronization problems often happen at the receiver and after the noise effect of the wireless channel, the DMC is placed prior to the d-deletion channel.FIGS.1A and1Billustrate the noiseless and noisy deletion channels as described above, respectively. In addition to the d-deletion channel, embodiments of the disclosed technology are equally applicable to an insertion channel, which adds a symbol with a predefined probability, and is similarly characterized. Polar codes. In the present document, embodiments of the disclosed technology use an (n, k) polar encoder, where n = 2^m corresponds to m levels of polarization and k denotes the number of information bits in the polar code, resulting in n−k frozen bits. In some embodiments, the frozen bits are set to zero.
In other embodiments, the frozen bits are computed such that they are known a priori at both the transmitter and the receiver. The relationship between the transmitted symbols {x_i} and the uncoded information bits {u_i} is given by: x = u·G_n, with G_n = B_n·[1 0; 1 1]^{⊗m}. Herein, G_n is the polar code generator (or generating) matrix. In some embodiments, the polar code generator matrix is the m-th Kronecker power of the 2×2 kernel matrix that is multiplied by a length-n bit reversal matrix B_n, and u denotes the uncoded information bit vector that includes k information bits and n−k frozen bits. In an example, the 2×2 kernel matrix (denoted F) and the m-th Kronecker power of the kernel matrix for m=3 are given by:

F =
[ 1 0 ]
[ 1 1 ]

and

F^{⊗3} =
[ 1 0 0 0 0 0 0 0 ]
[ 1 1 0 0 0 0 0 0 ]
[ 1 0 1 0 0 0 0 0 ]
[ 1 1 1 1 0 0 0 0 ]
[ 1 0 0 0 1 0 0 0 ]
[ 1 1 0 0 1 1 0 0 ]
[ 1 0 1 0 1 0 1 0 ]
[ 1 1 1 1 1 1 1 1 ]

In some embodiments, different 2×2 kernel matrices (e.g., [1 1; 1 0]) may be used to generate the polar code. In other embodiments, the bit reversal matrix B_n may be omitted in the generation of the polar code. Successive cancellation decoding. In some embodiments, a receiver (illustrated inFIG.2) includes a front-end210, a demodulator220and a polar code decoder230, which implements successive cancellation decoding. In an example, successive cancellation decoding starts by using the symbols received over the channel to decode a first bit, and then subsequent bits are decoded based on the symbols received over the channel and one or more previously decoded bits. For example, a tenth bit is decoded as a function of the symbols received over the channel and the values determined (e.g., “0” or “1”) for at least one of the nine previous bits that have already been decoded. In some embodiments, successive cancellation decoding comprises decoding u_i for i = 1, 2, . . . , n sequentially while assuming the values of the previous u_i's. In other words, to decode u_i, it is assumed that the previous bits u_1, . . . , u_{i−1} are all known (or correctly decoded) and hence available to the decoder similar to the channel observation vector after deletions ỹ. For example, u_1 is determined based on the channel observation vector after deletions ỹ, whereas u_2 is determined based on ỹ and the value of u_1 (e.g., the value of u_1 determines whether a function at a node inFIG.3implements a multiplication or a division). The process continues with the decoding of u_i being based on ỹ and the previous bits u_1, . . . , u_{i−1} (e.g., decoding u_5 requires ỹ and at least one of {u_1, u_2, u_3, u_4}). Additional details regarding successive cancellation decoding of polar codes may be found in U.S. Pat. No. 9,176,927, which is hereby incorporated by reference, in its entirety, as part of this application. SC decoding in deletion channels. In some embodiments, the construction of the polar code generator matrix based on Kronecker products allows its encoding circuit to be realized over an FFT-like graph that is sometimes referred to as the Tanner graph of polar codes, or more commonly as just the polar computation graph. The butterfly structure of this graph allows the encoding algorithm to be realized with O(n log n) computational complexity. Moreover, the very same graph can be used in the decoder to successively estimate the u_i's, again with O(n log n) computational complexity.FIG.3depicts the graph of a length-4 polar code (with m=2, n=4). The polar computation graph has m+1 layers that correspond to m steps of the polarization. Each layer also has n nodes in it.
These nodes are labeled with ⟨φ, β⟩_λ, where φ denotes their phase number, β denotes their branch number and λ denotes the layer number. InFIG.3, the bit reversal permutation (e.g., the effect of the matrix B_n) is absorbed in between the layers, and for nodes within layer λ, it is assumed: 0 ≤ φ < 2^λ and 0 ≤ β < 2^{m−λ}. In some embodiments, the decoding of polar codes based on successive cancellation decoding uses two data structures, denoted B, which stores a binary value ∈ {0,1} for each node in the polar computation graph that corresponds to its hard value, and P, which stores the probabilities (or the likelihoods) of each node being equal to 0 or 1 given the received vector y. The structure of the polar computation graph enables the efficient calculation of these values in a recursive fashion. In the decoding of a polar code, embodiments of the disclosed technology do not fix a deletion pattern from the beginning of the decoding process, but rather, limit the deletion pattern gradually during the recursive process of the successive cancellation decoder. In an example, the first step of this process is elucidated in the context ofFIG.4, and includes an attempt at decoding the first information bit, u_1. This is based on the evaluation of two intermediate bit-channels in layer λ = m−1 that are labeled with ⟨φ, β⟩ = ⟨0, 0⟩ and ⟨0, 1⟩ (e.g., nodes V_1 and V_2 on the polar computation graph illustrated inFIG.4). These two bit-channels are independent and are looking at two disjoint sub-vectors of length-(n/2) from the ŷ_i's as the output, where the ŷ_i's are the reconstructed vector of received symbols in which the deleted symbols are treated as erasures. However, in presence of deletions, it is unclear which of the ỹ_i symbols should be mapped to the top half, and which belong to the bottom half. Instead of distinguishing between all C(n, d) different deletion patterns at this stage, the number of deletions that belong to each half is decided, and further calculations are postponed till a later stage. In other words, the original nodes in the polar computation graph are replaced with multiple copies for decoding purposes, where each replacement addresses a different set of mappings from the received symbols to its corresponding output span.FIG.5illustrates an example of the modified decoding graph for a polar code of length 4 in presence of 1 deletion error. A node that was originally labeled with ⟨φ, β⟩ is now replaced with a group of nodes each labeled with ⟨d_0, d_1⟩, wherein d_0 and d_1 denote the number of deletions prior to and inside the output span of the original ⟨φ, β⟩_λ node. As discussed previously, replacing each node with the group of nodes enables the successive cancellation decoding implementation to reduce the overall computational complexity by not considering all the mappings, but rather using the frozen bits to determine the correct mapping as each step of the recursive successive cancellation decoding implementation is traversed. In another example, a length-N codeword (denoted x_0^{N−1}) is received through a deletion channel, and for a node in the polar computation graph (e.g., a node inFIGS.3-5), it is assumed that the coded bits corresponding to this node are x_a^b, where b − a ≥ d. When d coded bits are deleted, there are at most (d+1)(d+2)/2 mapping rules between x_a^b and the received symbols. Each mapping rule is termed as a scenario. Continuing with this example, the received length-N codeword is first split into three parts, as illustrated inFIG.6A.
As shown therein, x_0^{N−1} = (x_0^{a−1}, x_a^b, x_{b+1}^{N−1}) and it is assumed that d_1, d_2 and d_3 are the number of deleted symbols among the segments x_0^{a−1}, x_a^b, and x_{b+1}^{N−1}, respectively. This implies d_1 + d_2 ≤ d_1 + d_2 + d_3 = d. All possible scenarios are listed in the table shown inFIG.6B. To label each of these scenarios, d_2 is first fixed, and all possible values for d_1 are traversed. Then, d_2 is changed, and all the possible values of d_1 are traversed again. Based on this enumeration, the number of scenarios is (d+1)(d+2)/2. In other words, at each node, only (i) the number of deletions in an interval and (ii) the number of deletions before that interval need to be tracked. Thus, in an example, the received length-N codeword may be split into two parts (which is equivalent to the number of deletions in the first part or the third part being zero). In some embodiments, the successive cancellation decoding framework described in this document can be mathematically represented as Equations (1) and (2), which capture the recursive construction of layers of the polar bit-channels (e.g., the layer of the bit-channel illustrated inFIG.4) in a successive cancellation decoder in the presence of deletion and/or insertion errors.

W_Λ^{(2ψ+1)}(ỹ_1^{n−d}⟨d_0, d_1, β, λ⟩, u_1^{2ψ} | u_{2ψ+1})
  = [1 / C(Λ, d_1)] · Σ_{t=0}^{d_1} C(Λ/2, t) · C(Λ/2, d_1−t)
    × { (1/2) · Σ_{u_{2ψ+2}} W_{Λ/2}^{(ψ+1)}(ỹ_1^{n−d}⟨d_0, t, 2β, λ−1⟩, u_{1,even}^{2ψ} ⊕ u_{1,odd}^{2ψ} | u_{2ψ+1} + u_{2ψ+2})
      · W_{Λ/2}^{(ψ+1)}(ỹ_1^{n−d}⟨d_0 + t, d_1 − t, 2β+1, λ−1⟩, u_{1,even}^{2ψ} | u_{2ψ+2}) }      Equation (1)

W_Λ^{(2ψ+2)}(ỹ_1^{n−d}⟨d_0, d_1, β, λ⟩, u_1^{2ψ+1} | u_{2ψ+2})
  = [1 / C(Λ, d_1)] · Σ_{t=0}^{d_1} C(Λ/2, t) · C(Λ/2, d_1−t)
    × { (1/2) · W_{Λ/2}^{(ψ+1)}(ỹ_1^{n−d}⟨d_0, t, 2β, λ−1⟩, u_{1,even}^{2ψ} ⊕ u_{1,odd}^{2ψ} | u_{2ψ+1} + u_{2ψ+2})
      · W_{Λ/2}^{(ψ+1)}(ỹ_1^{n−d}⟨d_0 + t, d_1 − t, 2β+1, λ−1⟩, u_{1,even}^{2ψ} | u_{2ψ+2}) }      Equation (2)

Herein, the left-hand side of each equation corresponds to branch β at layer λ, the first W_{Λ/2} factor inside the braces corresponds to branch 2β and the second to branch 2β+1 at layer λ−1, and the binomial-weighted sum over t is the weighted average (the “WA” term) over the ways in which the d_1 in-span deletions split between the two halves. As described in the context ofFIGS.2-4, each processing node within the polar computation graph can be represented by four parameters—a phase number (φ), a branch number (β), a layer number (λ) and a state number (denoted by, e.g., ⟨d_0, d_1⟩). In some embodiments, each processing node is further configured to store and forward soft information about its bit value. For example, the soft information can be represented by a pair of probabilities, a likelihood ratio, or the logarithm of the likelihood ratio (LLR). In some embodiments, each processed node is connected to a sub-vector (or sub string) of polar-coded symbols, which is defined as a subset of consecutively indexed polar coded symbols. The state (e.g., ⟨d_0, d_1⟩) of the processing node determines the start (d_0) and end (d_1) of the sub string connected to that processing node, or equivalently, the number of insertion and/or deletion errors that occurred before and within the sub string. For example, Equation (1) represents a branch β at a layer λ as the product of a first branch 2β and a second branch 2β+1, wherein the first and second branches are in the previous layer λ−1. This corresponds to the processing node ⟨φ, β⟩_λ combining the soft information from the nodes in the immediately lower layer (at ⟨φ, 2β⟩_{λ−1} and ⟨φ, 2β+1⟩_{λ−1}). For example, the soft information is combined using a weighted average (the sum and combinatorial terms denoted “WA” in Equation (1)) that accounts for the various combinations of d_0 and d_1 deletion or insertion errors that occurred before and within the substring of polar-coded symbols associated with the node that is being processed at that stage, respectively. In some embodiments, the successive cancellation decoding of polar codes over the modified polar computation graph is performed by a predetermined schedule of soft-information updates.
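A compact way to picture the “WA” weighting in Equations (1) and (2) and the scenario bookkeeping ofFIG.6B is the following Python sketch. It is a toy illustration under stated assumptions: the function names, the stand-in child_likelihood callback, and the use of plain probabilities instead of the decoder's soft-information structures are choices made here for clarity, not the patented implementation.

```python
# Toy sketch of the deletion-split bookkeeping behind Equations (1) and (2):
# a node responsible for Lambda coded bits with d1 deletions inside its span
# averages the products of its two children's likelihoods over every way the
# d1 deletions can split between the halves. Names are illustrative assumptions.
from math import comb

def enumerate_scenarios(d):
    """All (d1, d2) pairs with d1 + d2 <= d, i.e. the (d+1)(d+2)/2 scenarios
    of FIG. 6B (deletions before the span, deletions inside the span)."""
    return [(d1, d2) for d2 in range(d + 1) for d1 in range(d - d2 + 1)]

def combine_halves(Lambda, d1, child_likelihood):
    """Weighted average over the ways the d1 in-span deletions split between
    the upper half (t deletions) and the lower half (d1 - t deletions).

    child_likelihood(half, t) returns the likelihood contributed by the given
    half ('upper' or 'lower') under that split; here it is a caller-supplied
    stand-in for the recursive W_{Lambda/2} terms."""
    total, weight = 0.0, comb(Lambda, d1)
    for t in range(d1 + 1):
        w = comb(Lambda // 2, t) * comb(Lambda // 2, d1 - t)
        total += w * child_likelihood('upper', t) * child_likelihood('lower', d1 - t)
    return total / weight

if __name__ == "__main__":
    print(len(enumerate_scenarios(3)))                  # 10 == (3+1)(3+2)/2
    # Dummy children that ignore the split, so the weights must average to 1.
    print(combine_halves(8, 2, lambda half, t: 1.0))    # -> 1.0
```

By Vandermonde's identity the weights C(Λ/2, t)·C(Λ/2, d_1−t) sum to C(Λ, d_1), which is why the final print returns exactly 1.0 for constant child likelihoods.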
FIG.7illustrates the performance of an exemplary polar code in a deletion channel, for different numbers of deletions (e.g., d=0, 1, 2, 4, 6, 8). The top figure depicts the error probability of polar bit-channels for a polar code of length n=1024 and with a binary erasure channel (BEC) with probability ½. It is noted that the top figure only depicts the middle half of the polarized bit-channels upon them being sorted. The lower figure inFIG.7shows the frame error rate of a rate-½ polar code with length n=2048 for an AWGN channel with deletions. FIG.8illustrates a flowchart of an exemplary method800for using polar codes in a deletion and/or insertion channel. The method800includes, at operation810, receiving a portion of a block of polar-coded symbols that includes d ≥ 2 insertion or deletion symbol errors, the block comprising N symbols, the received portion of the block comprising M symbols, wherein d, M and N are integers. The method800includes, at operation820, estimating, based on one or more recursive calculations in a successive cancellation decoder, a location or a value corresponding to each of the d insertion or deletion symbol errors. The method800includes, at operation830, decoding, based on estimated locations or values, the portion of the block of polar-coded symbols to generate an estimate of information bits that correspond to the block of polar-coded symbols. In some embodiments, the successive cancellation decoder comprises at least log2(N)+1 layers, each layer comprising up to d²N processing nodes arranged as N groups, each of the N groups comprising up to d² processing nodes. In some embodiments, a complexity of estimating the location or the value scales polynomially with d. In some embodiments, at least one bit of the information bits has a known value at the data processing system prior to receiving the portion of the block of polar-coded symbols. In some embodiments, the d insertion or deletion symbol errors comprise (a) two or more deleted symbols, (b) two or more inserted symbols, or (c) at least one deleted symbol and at least one inserted symbol. In some embodiments, and with reference toFIGS.3and4, the up to d² processing nodes have replaced a corresponding one of N processing nodes of a conventional successive cancellation decoder. In some embodiments, generating the estimate of the information bits excludes using a combinatorial search of the d insertion or deletion symbol errors. For example, the combinatorial search has a complexity governed by O(N^{d+1} log N), and wherein the complexity of generating the estimate is governed by O(d³N log N). In some embodiments, generating the estimate is based on a polar computation graph structure. In an example, a node of the polar computation graph corresponds to a sub string of symbols in the block of polar-coded symbols and is characterized by a phase number, a branch number, a layer number of the at least log2(N)+1 layers, and a state number. In another example, the nodes in a first layer of the at least log2(N)+1 layers are initialized based on a corresponding channel observation for a symbol associated with the nodes. In some embodiments, the block of polar-coded symbols corresponds to the information bits encoded using a polar code generator matrix having N rows and N columns. For example, G_2 = [1 0; 1 1] is a polarizing matrix, wherein the polar code generator matrix is G_N = G_2^{⊗n}, which is an n-th Kronecker power of the polarizing matrix.
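To make the encoder-side construction referenced above concrete, the following Python sketch builds G_N = G_2^{⊗n} (bit-reversal permutation omitted), encodes an information vector with the frozen bits set to zero, and passes the codeword through a noiseless d-deletion channel. The frozen-set choice, block length and names are assumptions made for illustration only; this is not the code construction or decoder claimed herein.

```python
# Minimal sketch: polar generator matrix as a Kronecker power of G2 = [[1,0],[1,1]],
# GF(2) encoding, and a noiseless d-deletion channel. Illustrative only.
import numpy as np

def polar_generator(m):
    """G_N = G_2^{(kron m)} for N = 2^m (bit-reversal permutation omitted)."""
    G2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        G = np.kron(G, G2)
    return G

def polar_encode(info_bits, frozen_mask, G):
    """Place information bits on non-frozen positions (frozen bits = 0) and
    multiply by G over GF(2): x = u G (mod 2)."""
    u = np.zeros(len(frozen_mask), dtype=np.uint8)
    u[~frozen_mask] = info_bits
    return (u @ G) % 2

def deletion_channel(x, d, rng):
    """Noiseless d-deletion channel: drop d positions chosen uniformly at random."""
    keep = np.ones(len(x), dtype=bool)
    keep[rng.choice(len(x), size=d, replace=False)] = False
    return x[keep]

if __name__ == "__main__":
    m, k, d = 4, 8, 2                      # N = 16, 8 information bits, 2 deletions
    rng = np.random.default_rng(1)
    G = polar_generator(m)
    frozen = np.ones(2 ** m, dtype=bool)   # toy frozen set: last k positions carry data;
    frozen[-k:] = False                    # a real design freezes the least reliable bit-channels
    x = polar_encode(rng.integers(0, 2, k, dtype=np.uint8), frozen, G)
    y = deletion_channel(x, d, rng)
    print(len(x), len(y))                  # 16 14
```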
In some embodiments, each group of up to d2processing nodes corresponds to a sub string of the block of polar-coded symbols. In an example, a number of processing nodes within each group is based on a number of deletion-type errors occurring before and after the corresponding sub string of the block of polar-coded symbols. FIG.9illustrates a flowchart of another exemplary method900for using polar codes in a deletion and/or insertion channel. The method900includes, at operation910, receiving a portion of a block of symbols that includes a plurality of errors, the symbols corresponding to information bits encoded using a polar code, the plurality of errors comprising (a) two or more deleted symbols, (b) two or more inserted symbols, or (c) at least one deleted symbol and at least one inserted symbol, the block comprising N symbols and the received portion of the block comprising M symbols, and M and N being integers. The method900includes, at operation920, using the received portion of the block of symbols to perform successive cancellation decoding to generate an estimate of the information bits corresponding to the block of symbols. In some embodiments, at least one bit of the information bits has a known value at the data processing system prior to receiving the portion of the block of polar-coded symbols. In some embodiments, a complexity of generating the estimate scales polynomially with a number of the plurality of errors and a number of symbols in the block (N). Embodiments of the disclosed technology can be applied in any communication system affected by synchronization errors, such as insertions and deletions. Since the disclosed technology for correcting such errors is based on polar coding, the described embodiments are particularly attractive in systems that already use polar coding. Additionally, the disclosed embodiments boost the value of polar coding for use in future systems that can benefit from correcting “conventional” errors (e.g., caused by noise) and synchronization errors using the same coding technique. In the context of wireless communication systems, the described embodiments can advantageously improve communication technology. In this context, many services running on modern digital telecommunications networks require accurate synchronization for correct operation. For example, if switches do not operate with the same clock rates, then slips will occur and degrade performance. Embodiments of the disclosed technology significantly reduce the complexity required for synchronization, thereby enabling higher data rates and more resilient communications between wireless devices, which in turn can be used to provide higher quality voice, video, data, or other types of information in wired and/or wireless communication systems, thereby improving the area of communication technology. Additionally, the described embodiments can advantageously improve networked and storage systems. In this context, synchronization algorithms find numerous areas of use, including data storage, file sharing, source code control systems, and cloud applications. For example, cloud storage services such as Dropbox synchronize between local copies and cloud backups each time users make changes to local versions. For example, two nodes (servers) may have copies of the same file but one is obtained from the other over a noiseless (or noisy) link that is affected by deletions. 
Thus, a file obtained from one of the servers by the other server may have d bits missing, and which may be corrected using embodiments of the disclosed technology. This improves the resiliency of redundant storage systems, cloud-based services and applications, and so on. Additional applications of the disclosed technology include implementations in mobile devices, such as unmanned aerial devices (UAVs), and autonomous vehicles, as well as smartphones and portable computing devices, which can benefit from the improved error correction capabilities of the disclosed embodiments that result in better quality of communications, as well as a reduction in cost, weight, and/or energy consumption of the components. FIG.10is a block diagram representation of a portion of an apparatus, in accordance with some embodiments of the presently disclosed technology. An apparatus1005, such as a base station or a wireless device (or UE), can include processor electronics1010such as a microprocessor that implements one or more of the techniques (including, but not limited to, methods800and900) presented in this document. The apparatus1005can include transceiver electronics1015to send and/or receive wireless signals over one or more communication interfaces such as antenna(s)1020. The apparatus1005can include other communication interfaces for transmitting and receiving data. Apparatus1005can include one or more memories (not explicitly shown) configured to store information such as data and/or instructions. In some implementations, the processor electronics1010can include at least a portion of the transceiver electronics1015. In some embodiments, at least some of the disclosed techniques, modules or functions are implemented using the apparatus1005. Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. 
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. It is intended that the specification, together with the drawings, be considered exemplary only, where exemplary means an example. As used herein, the singular forms“a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Additionally, the use of “or” is intended to include “and/or”, unless the context clearly indicates otherwise. While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. 
Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub combination or variation of a sub combination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments. Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document. | 28,671 |
11863203 | BEST MODE Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. The detailed description, which will be given below with reference to the accompanying drawings, is intended to explain exemplary embodiments of the present invention, rather than to show the only embodiments that can be implemented according to the present invention. The following detailed description includes specific details in order to provide a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without such specific details. Although most terms used in the present invention have been selected from general ones widely used in the art, some terms have been arbitrarily selected by the applicant and their meanings are explained in detail in the following description as needed. Thus, the present invention should be understood based upon the intended meanings of the terms rather than their simple names or meanings. The present invention provides apparatuses and methods for transmitting and receiving broadcast signals for future broadcast services. Future broadcast services according to an embodiment of the present invention include a terrestrial broadcast service, a mobile broadcast service, a UHDTV service, etc. The present invention may process broadcast signals for the future broadcast services through non-MIMO (Multiple Input Multiple Output) or MIMO according to one embodiment. A non-MIMO scheme according to an embodiment of the present invention may include a MISO (Multiple Input Single Output) scheme, a SISO (Single Input Single Output) scheme, etc. While MISO or MIMO uses two antennas in the following for convenience of description, the present invention is applicable to systems using two or more antennas. The present invention may defines three physical layer (PL) profiles—base, handheld and advanced profiles—each optimized to minimize receiver complexity while attaining the performance required for a particular use case. The physical layer (PHY) profiles are subsets of all configurations that a corresponding receiver should implement. The three PHY profiles share most of the functional blocks but differ slightly in specific blocks and/or parameters. Additional PHY profiles can be defined in the future. For the system evolution, future profiles can also be multiplexed with the existing profiles in a single RF channel through a future extension frame (FEF). The details of each PHY profile are described below. 1. Base Profile The base profile represents a main use case for fixed receiving devices that are usually connected to a roof-top antenna. The base profile also includes portable devices that could be transported to a place but belong to a relatively stationary reception category. Use of the base profile could be extended to handheld devices or even vehicular by some improved implementations, but those use cases are not expected for the base profile receiver operation. Target SNR range of reception is from approximately 10 to 20 dB, which includes the 15 dB SNR reception capability of the existing broadcast system (e.g. ATSC A/53). The receiver complexity and power consumption is not as critical as in the battery-operated handheld devices, which will use the handheld profile. Key system parameters for the base profile are listed in below table 1. 
TABLE 1

LDPC codeword length: 16K, 64K bits
Constellation size: 4~10 bpcu (bits per channel use)
Time de-interleaving memory size: ≤ 2^19 data cells
Pilot patterns: Pilot pattern for fixed reception
FFT size: 16K, 32K points

2. Handheld Profile The handheld profile is designed for use in handheld and vehicular devices that operate with battery power. The devices can be moving with pedestrian or vehicle speed. The power consumption as well as the receiver complexity is very important for the implementation of the devices of the handheld profile. The target SNR range of the handheld profile is approximately 0 to 10 dB, but can be configured to reach below 0 dB when intended for deeper indoor reception. In addition to low SNR capability, resilience to the Doppler Effect caused by receiver mobility is the most important performance attribute of the handheld profile. Key system parameters for the handheld profile are listed in the below table 2.

TABLE 2

LDPC codeword length: 16K bits
Constellation size: 2~8 bpcu
Time de-interleaving memory size: ≤ 2^18 data cells
Pilot patterns: Pilot patterns for mobile and indoor reception
FFT size: 8K, 16K points

3. Advanced Profile The advanced profile provides highest channel capacity at the cost of more implementation complexity. This profile requires using MIMO transmission and reception, and UHDTV service is a target use case for which this profile is specifically designed. The increased capacity can also be used to allow an increased number of services in a given bandwidth, e.g., multiple SDTV or HDTV services. The target SNR range of the advanced profile is approximately 20 to 30 dB. MIMO transmission may initially use existing elliptically-polarized transmission equipment, with extension to full-power cross-polarized transmission in the future. Key system parameters for the advanced profile are listed in below table 3.

TABLE 3

LDPC codeword length: 16K, 64K bits
Constellation size: 8~12 bpcu
Time de-interleaving memory size: ≤ 2^18 data cells
Pilot patterns: Pilot pattern for fixed reception
FFT size: 16K, 32K points

The following terms and definitions may apply to the present invention.
The following terms and definitions can be changed according to design.

auxiliary stream: sequence of cells carrying data of as yet undefined modulation and coding, which may be used for future extensions or as required by broadcasters or network operators
base data pipe: data pipe that carries service signaling data
baseband frame (or BBFRAME): set of Kbch bits which form the input to one FEC encoding process (BCH and LDPC encoding)
cell: modulation value that is carried by one carrier of the OFDM transmission
coded block: LDPC-encoded block of PLS1 data or one of the LDPC-encoded blocks of PLS2 data
data pipe: logical channel in the physical layer that carries service data or related metadata, which may carry one or multiple service(s) or service component(s).
data pipe unit: a basic unit for allocating data cells to a DP in a frame.
data symbol: OFDM symbol in a frame which is not a preamble symbol (the frame signaling symbol and frame edge symbol is included in the data symbol)
DP_ID: this 8-bit field identifies uniquely a DP within the system identified by the SYSTEM_ID
dummy cell: cell carrying a pseudo-random value used to fill the remaining capacity not used for PLS signaling, DPs or auxiliary streams
emergency alert channel: part of a frame that carries EAS information data
frame: physical layer time slot that starts with a preamble and ends with a frame edge symbol
frame repetition unit: a set of frames belonging to same or different physical layer profile including a FEF, which is repeated eight times in a super-frame
fast information channel: a logical channel in a frame that carries the mapping information between a service and the corresponding base DP
FECBLOCK: set of LDPC-encoded bits of a DP data
FFT size: nominal FFT size used for a particular mode, equal to the active symbol period T_S expressed in cycles of the elementary period T
frame signaling symbol: OFDM symbol with higher pilot density used at the start of a frame in certain combinations of FFT size, guard interval and scattered pilot pattern, which carries a part of the PLS data
frame edge symbol: OFDM symbol with higher pilot density used at the end of a frame in certain combinations of FFT size, guard interval and scattered pilot pattern
frame-group: the set of all the frames having the same PHY profile type in a super-frame.
future extension frame: physical layer time slot within the super-frame that could be used for future extension, which starts with a preamble
Futurecast UTB system: proposed physical layer broadcasting system, of which the input is one or more MPEG2-TS or IP or general stream(s) and of which the output is an RF signal
input stream: A stream of data for an ensemble of services delivered to the end users by the system.
normal data symbol: data symbol excluding the frame signaling symbol and the frame edge symbol
PHY profile: subset of all configurations that a corresponding receiver should implement
PLS: physical layer signaling data consisting of PLS1 and PLS2
PLS1: a first set of PLS data carried in the FSS symbols having a fixed size, coding and modulation, which carries basic information about the system as well as the parameters needed to decode the PLS2
NOTE: PLS1 data remains constant for the duration of a frame-group.
PLS2: a second set of PLS data transmitted in the FSS symbol, which carries more detailed PLS data about the system and the DPs
PLS2 dynamic data: PLS2 data that may dynamically change frame-by-frame
PLS2 static data: PLS2 data that remains static for the duration of a frame-group
preamble signaling data:
signaling data carried by the preamble symbol and used to identify the basic mode of the system
preamble symbol: fixed-length pilot symbol that carries basic PLS data and is located in the beginning of a frame
NOTE: The preamble symbol is mainly used for fast initial band scan to detect the system signal, its timing, frequency offset, and FFT-size.
reserved for future use: not defined by the present document but may be defined in future
super-frame: set of eight frame repetition units
time interleaving block (TI block): set of cells within which time interleaving is carried out, corresponding to one use of the time interleaver memory
TI group: unit over which dynamic capacity allocation for a particular DP is carried out, made up of an integer, dynamically varying number of XFECBLOCKs
NOTE: The TI group may be mapped directly to one frame or may be mapped to multiple frames. It may contain one or more TI blocks.
Type 1 DP: DP of a frame where all DPs are mapped into the frame in TDM fashion
Type 2 DP: DP of a frame where all DPs are mapped into the frame in FDM fashion
XFECBLOCK: set of N_cells cells carrying all the bits of one LDPC FECBLOCK

FIG.1illustrates a structure of an apparatus for transmitting broadcast signals for future broadcast services according to an embodiment of the present invention. The apparatus for transmitting broadcast signals for future broadcast services according to an embodiment of the present invention can include an input formatting block1000, a BICM (Bit interleaved coding & modulation) block1010, a frame structure block1020, an OFDM (Orthogonal Frequency Division Multiplexing) generation block1030and a signaling generation block1040. A description will be given of the operation of each module of the apparatus for transmitting broadcast signals. IP stream/packets and MPEG2-TS are the main input formats; other stream types are handled as General Streams. In addition to these data inputs, Management Information is input to control the scheduling and allocation of the corresponding bandwidth for each input stream. One or multiple TS stream(s), IP stream(s) and/or General Stream(s) inputs are simultaneously allowed. The input formatting block1000can demultiplex each input stream into one or multiple data pipe(s), to each of which an independent coding and modulation is applied. The data pipe (DP) is the basic unit for robustness control, thereby affecting quality-of-service (QoS). One or multiple service(s) or service component(s) can be carried by a single DP. Details of operations of the input formatting block1000will be described later. The data pipe is a logical channel in the physical layer that carries service data or related metadata, which may carry one or multiple service(s) or service component(s). Also, the data pipe unit is a basic unit for allocating data cells to a DP in a frame. In the BICM block1010, parity data is added for error correction and the encoded bit streams are mapped to complex-value constellation symbols. The symbols are interleaved across a specific interleaving depth that is used for the corresponding DP. For the advanced profile, MIMO encoding is performed in the BICM block1010and the additional data path is added at the output for MIMO transmission. Details of operations of the BICM block1010will be described later. The Frame Building block1020can map the data cells of the input DPs into the OFDM symbols within a frame.
After mapping, the frequency interleaving is used for frequency-domain diversity, especially to combat frequency-selective fading channels. Details of operations of the Frame Building block1020will be described later. After inserting a preamble at the beginning of each frame, the OFDM Generation block1030can apply conventional OFDM modulation having a cyclic prefix as guard interval. For antenna space diversity, a distributed MISO scheme is applied across the transmitters. In addition, a Peak-to-Average Power Reduction (PAPR) scheme is performed in the time domain. For flexible network planning, this proposal provides a set of various FFT sizes, guard interval lengths and corresponding pilot patterns. Details of operations of the OFDM Generation block1030will be described later. The Signaling Generation block1040can create physical layer signaling information used for the operation of each functional block. This signaling information is also transmitted so that the services of interest are properly recovered at the receiver side. Details of operations of the Signaling Generation block1040will be described later. FIGS.2,3and4illustrate the input formatting block1000according to embodiments of the present invention. A description will be given of each figure. FIG.2illustrates an input formatting block according to one embodiment of the present invention.FIG.2shows an input formatting module when the input signal is a single input stream. The input formatting block illustrated inFIG.2corresponds to an embodiment of the input formatting block1000described with reference toFIG.1. The input to the physical layer may be composed of one or multiple data streams. Each data stream is carried by one DP. The mode adaptation modules slice the incoming data stream into data fields of the baseband frame (BBF). The system supports three types of input data streams: MPEG2-TS, Internet protocol (IP) and Generic stream (GS). MPEG2-TS is characterized by fixed length (188 byte) packets with the first byte being a sync-byte (0x47). An IP stream is composed of variable length IP datagram packets, as signaled within IP packet headers. The system supports both IPv4 and IPv6 for the IP stream. GS may be composed of variable length packets or constant length packets, signaled within encapsulation packet headers. (a) shows a mode adaptation block2000and a stream adaptation2010for signal DP and (b) shows a PLS generation block2020and a PLS scrambler2030for generating and processing PLS data. A description will be given of the operation of each block. The Input Stream Splitter splits the input TS, IP, GS streams into multiple service or service component (audio, video, etc.) streams. The mode adaptation module2010is comprised of a CRC Encoder, BB (baseband) Frame Slicer, and BB Frame Header Insertion block. The CRC Encoder provides three kinds of CRC encoding for error detection at the user packet (UP) level, i.e., CRC-8, CRC-16, and CRC-32. The computed CRC bytes are appended after the UP. CRC-8 is used for TS stream and CRC-32 for IP stream. If the GS stream doesn't provide the CRC encoding, the proposed CRC encoding should be applied. BB Frame Slicer maps the input into an internal logical-bit format. The first received bit is defined to be the MSB. The BB Frame Slicer allocates a number of input bits equal to the available data field capacity. To allocate a number of input bits equal to the BBF payload, the UP packet stream is sliced to fit the data field of BBF. 
BB Frame Header Insertion block can insert fixed length BBF header of 2 bytes is inserted in front of the BB Frame. The BBF header is composed of STUFFI (1 bit), SYNCD (13 bits), and RFU (2 bits). In addition to the fixed 2-Byte BBF header, BBF can have an extension field (1 or 3 bytes) at the end of the 2-byte BBF header. The stream adaptation2010is comprised of stuffing insertion block and BB scrambler. The stuffing insertion block can insert stuffing field into a payload of a BB frame. If the input data to the stream adaptation is sufficient to fill a BB-Frame, STUFFI is set to ‘0’ and the BBF has no stuffing field. Otherwise STUFFI is set to ‘1’ and the stuffing field is inserted immediately after the BBF header. The stuffing field comprises two bytes of the stuffing field header and a variable size of stuffing data. The BB scrambler scrambles complete BBF for energy dispersal. The scrambling sequence is synchronous with the BBF. The scrambling sequence is generated by the feed-back shift register. The PLS generation block2020can generate physical layer signaling (PLS) data. The PLS provides the receiver with a means to access physical layer DPs. The PLS data consists of PLS1data and PLS2data. The PLS1data is a first set of PLS data carried in the FSS symbols in the frame having a fixed size, coding and modulation, which carries basic information about the system as well as the parameters needed to decode the PLS2data. The PLS1data provides basic transmission parameters including parameters required to enable the reception and decoding of the PLS2data. Also, the PLS1data remains constant for the duration of a frame-group. The PLS2data is a second set of PLS data transmitted in the FSS symbol, which carries more detailed PLS data about the system and the DPs. The PLS2contains parameters that provide sufficient information for the receiver to decode the desired DP. The PLS2signaling further consists of two types of parameters, PLS2Static data (PLS2-STAT data) and PLS2dynamic data (PLS2-DYN data). The PLS2Static data is PLS2data that remains static for the duration of a frame-group and the PLS2dynamic data is PLS2data that may dynamically change frame-by-frame. Details of the PLS data will be described later. The PLS scrambler2030can scramble the generated PLS data for energy dispersal. The above-described blocks may be omitted or replaced by blocks having similar or identical functions. FIG.3illustrates an input formatting block according to another embodiment of the present invention. The input formatting block illustrated inFIG.3corresponds to an embodiment of the input formatting block1000described with reference toFIG.1. FIG.3shows a mode adaptation block of the input formatting block when the input signal corresponds to multiple input streams. The mode adaptation block of the input formatting block for processing the multiple input streams can independently process the multiple input streams. Referring toFIG.3, the mode adaptation block for respectively processing the multiple input streams can include an input stream splitter3000, an input stream synchronizer3010, a compensating delay block3020, a null packet deletion block3030, a head compression block3040, a CRC encoder3050, a BB frame slicer3060and a BB header insertion block3070. Description will be given of each block of the mode adaptation block. 
Operations of the CRC encoder3050, BB frame slicer3060and BB header insertion block3070correspond to those of the CRC encoder, BB frame slicer and BB header insertion block described with reference toFIG.2and thus description thereof is omitted. The input stream splitter3000can split the input TS, IP, GS streams into multiple service or service component (audio, video, etc.) streams. The input stream synchronizer3010may be referred as ISSY. The ISSY can provide suitable means to guarantee Constant Bit Rate (CBR) and constant end-to-end transmission delay for any input data format. The ISSY is always used for the case of multiple DPs carrying TS, and optionally used for multiple DPs carrying GS streams. The compensating delay block3020can delay the split TS packet stream following the insertion of ISSY information to allow a TS packet recombining mechanism without requiring additional memory in the receiver. The null packet deletion block3030, is used only for the TS input stream case. Some TS input streams or split TS streams may have a large number of null-packets present in order to accommodate VBR (variable bit-rate) services in a CBR TS stream. In this case, in order to avoid unnecessary transmission overhead, null-packets can be identified and not transmitted. In the receiver, removed null-packets can be re-inserted in the exact place where they were originally by reference to a deleted null-packet (DNP) counter that is inserted in the transmission, thus guaranteeing constant bit-rate and avoiding the need for time-stamp (PCR) updating. The head compression block3040can provide packet header compression to increase transmission efficiency for TS or IP input streams. Because the receiver can have a priori information on certain parts of the header, this known information can be deleted in the transmitter. For Transport Stream, the receiver has a-priori information about the sync-byte configuration (0x47) and the packet length (188 Byte). If the input TS stream carries content that has only one PID, i.e., for only one service component (video, audio, etc.) or service sub-component (SVC base layer, SVC enhancement layer, MVC base view or MVC dependent views), TS packet header compression can be applied (optionally) to the Transport Stream. IP packet header compression is used optionally if the input steam is an IP stream. The above-described blocks may be omitted or replaced by blocks having similar or identical functions. FIG.4illustrates an input formatting block according to another embodiment of the present invention. The input formatting block illustrated inFIG.4corresponds to an embodiment of the input formatting block1000described with reference toFIG.1. FIG.4illustrates a stream adaptation block of the input formatting module when the input signal corresponds to multiple input streams. Referring toFIG.4, the mode adaptation block for respectively processing the multiple input streams can include a scheduler4000, an 1-Frame delay block4010, a stuffing insertion block4020, an in-band signaling4030, a BB Frame scrambler4040, a PLS generation block4050and a PLS scrambler4060. Description will be given of each block of the stream adaptation block. Operations of the stuffing insertion block4020, the BB Frame scrambler4040, the PLS generation block4050and the PLS scrambler4060correspond to those of the stuffing insertion block, BB scrambler, PLS generation block and the PLS scrambler described with reference toFIG.2and thus description thereof is omitted. 
The scheduler4000can determine the overall cell allocation across the entire frame from the amount of FECBLOCKs of each DP. Including the allocation for PLS, EAC and FIC, the scheduler generate the values of PLS2-DYN data, which is transmitted as in-band signaling or PLS cell in FSS of the frame. Details of FECBLOCK, EAC and FIC will be described later. The 1-Frame delay block4010can delay the input data by one transmission frame such that scheduling information about the next frame can be transmitted through the current frame for in-band signaling information to be inserted into the DPs. The in-band signaling4030can insert un-delayed part of the PLS2data into a DP of a frame. The above-described blocks may be omitted or replaced by blocks having similar or identical functions. FIG.5illustrates a BICM block according to an embodiment of the present invention. The BICM block illustrated inFIG.5corresponds to an embodiment of the BICM block1010described with reference toFIG.1. As described above, the apparatus for transmitting broadcast signals for future broadcast services according to an embodiment of the present invention can provide a terrestrial broadcast service, mobile broadcast service, UHDTV service, etc. Since QoS (quality of service) depends on characteristics of a service provided by the apparatus for transmitting broadcast signals for future broadcast services according to an embodiment of the present invention, data corresponding to respective services needs to be processed through different schemes. Accordingly, the a BICM block according to an embodiment of the present invention can independently process DPs input thereto by independently applying SISO, MISO and MIMO schemes to the data pipes respectively corresponding to data paths. Consequently, the apparatus for transmitting broadcast signals for future broadcast services according to an embodiment of the present invention can control QoS for each service or service component transmitted through each DP. (a) shows the BICM block shared by the base profile and the handheld profile and (b) shows the BICM block of the advanced profile. The BICM block shared by the base profile and the handheld profile and the BICM block of the advanced profile can include plural processing blocks for processing each DP. A description will be given of each processing block of the BICM block for the base profile and the handheld profile and the BICM block for the advanced profile. A processing block5000of the BICM block for the base profile and the handheld profile can include a Data FEC encoder5010, a bit interleaver5020, a constellation mapper5030, an SSD (Signal Space Diversity) encoding block5040and a time interleaver5050. The Data FEC encoder5010can perform the FEC encoding on the input BBF to generate FECBLOCK procedure using outer coding (BCH), and inner coding (LDPC). The outer coding (BCH) is optional coding method. Details of operations of the Data FEC encoder5010will be described later. The bit interleaver5020can interleave outputs of the Data FEC encoder5010to achieve optimized performance with combination of the LDPC codes and modulation scheme while providing an efficiently implementable structure. Details of operations of the bit interleaver5020will be described later. 
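The per-DP encoding chain described above (outer BCH coding, inner LDPC coding, then bit interleaving) can be summarized by the following Python sketch. The bch_encode, ldpc_encode and bit_interleave callables are placeholders for the code-rate-dependent encoders detailed later in this description, not actual library functions.

    # Structural sketch of the Data FEC encoder 5010 followed by the bit
    # interleaver 5020. The three callables are placeholders.

    from typing import Callable, Sequence

    Bits = Sequence[int]

    def make_fecblock(bbf: Bits,
                      bch_encode: Callable[[Bits], Bits],
                      ldpc_encode: Callable[[Bits], Bits],
                      bit_interleave: Callable[[Bits], Bits]) -> Bits:
        # Outer code: BCH parity is appended to the Kbch-bit baseband frame.
        bch_coded = bch_encode(bbf)          # Kbch -> Nbch (= Kldpc) bits
        # Inner code: LDPC parity is appended systematically (see the FECBLOCK
        # expression later in this description).
        fecblock = ldpc_encode(bch_coded)    # Kldpc -> Nldpc bits
        # Bit interleaving matches the LDPC code to the modulation scheme.
        return bit_interleave(fecblock)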
The constellation mapper5030can modulate each cell word from the bit interleaver5020in the base and the handheld profiles, or a cell word from the Cell-word demultiplexer5010-1in the advanced profile, using either QPSK, QAM-16, non-uniform QAM (NUQ-64, NUQ-256, NUQ-1024) or non-uniform constellation (NUC-16, NUC-64, NUC-256, NUC-1024) to give a power-normalized constellation point, e1. This constellation mapping is applied only for DPs. Observe that QAM-16 and NUQs are square shaped, while NUCs have arbitrary shape. When each constellation is rotated by any multiple of 90 degrees, the rotated constellation exactly overlaps with its original one. This “rotation-sense” symmetric property makes the capacities and the average powers of the real and imaginary components equal to each other. Both NUQs and NUCs are defined specifically for each code rate, and the particular one used is signaled by the parameter DP_MOD field in the PLS2data.
The SSD encoding block5040can precode cells in two (2D), three (3D), and four (4D) dimensions to increase the reception robustness under difficult fading conditions.
The time interleaver5050can operate at the DP level. The parameters of time interleaving (TI) may be set differently for each DP. Details of operations of the time interleaver5050will be described later.
A processing block5000-1of the BICM block for the advanced profile can include the Data FEC encoder, bit interleaver, constellation mapper, and time interleaver. However, the processing block5000-1is distinguished from the processing block5000in that it further includes a cell-word demultiplexer5010-1and a MIMO encoding block5020-1. Also, the operations of the Data FEC encoder, bit interleaver, constellation mapper, and time interleaver in the processing block5000-1correspond to those of the Data FEC encoder5010, bit interleaver5020, constellation mapper5030, and time interleaver5050described above, and thus description thereof is omitted.
The cell-word demultiplexer5010-1is used for the DP of the advanced profile to divide the single cell-word stream into dual cell-word streams for MIMO processing. Details of operations of the cell-word demultiplexer5010-1will be described later.
The MIMO encoding block5020-1can process the output of the cell-word demultiplexer5010-1using a MIMO encoding scheme. The MIMO encoding scheme was optimized for broadcasting signal transmission. MIMO technology is a promising way to get a capacity increase, but it depends on channel characteristics. Especially for broadcasting, the strong LOS component of the channel or a difference in the received signal power between two antennas caused by different signal propagation characteristics makes it difficult to get capacity gain from MIMO. The proposed MIMO encoding scheme overcomes this problem using a rotation-based pre-coding and phase randomization of one of the MIMO output signals.
MIMO encoding is intended for a 2×2 MIMO system requiring at least two antennas at both the transmitter and the receiver. Two MIMO encoding modes are defined in this proposal: full-rate spatial multiplexing (FR-SM) and full-rate full-diversity spatial multiplexing (FRFD-SM). The FR-SM encoding provides capacity increase with relatively small complexity increase at the receiver side, while the FRFD-SM encoding provides capacity increase and additional diversity gain with a great complexity increase at the receiver side. The proposed MIMO encoding scheme has no restriction on the antenna polarity configuration.
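As a rough illustration of the rotation-based pre-coding with phase randomization mentioned above, the following Python sketch maps one pair of constellation-mapper outputs to the pair transmitted on the two TX antennas. The rotation angle and the phase value are arbitrary placeholders; the actual FR-SM encoding matrix and phase-randomization sequence are defined by the system, not by this sketch.

    # Illustrative sketch of rotation-based 2x2 pre-coding followed by phase
    # randomization of one output. theta and phi are placeholders.

    import cmath
    import math

    def frsm_encode_pair(e1: complex, e2: complex, theta: float, phi: float):
        """Map one pair of constellation-mapper outputs (e1, e2) to the pair
        (g1, g2) transmitted on the same carrier and OFDM symbol of the two
        TX antennas: a 2x2 rotation, then phase randomization of g2."""
        g1 = math.cos(theta) * e1 + math.sin(theta) * e2
        g2 = (-math.sin(theta) * e1 + math.cos(theta) * e2) * cmath.exp(1j * phi)
        return g1, g2

    # Example: one pair of cells, an illustrative 45-degree rotation and a
    # per-cell pseudo-random phase.
    g1, g2 = frsm_encode_pair(1 + 1j, 1 - 1j, math.pi / 4, math.pi / 3)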
MIMO processing is required for the advanced profile frame, which means all DPs in the advanced profile frame are processed by the MIMO encoder. MIMO processing is applied at the DP level. Pairs of the Constellation Mapper outputs, NUQ (e1,i and e2,i), are fed to the input of the MIMO Encoder. The paired MIMO Encoder output (g1,i and g2,i) is transmitted by the same carrier k and OFDM symbol l of their respective TX antennas.
The above-described blocks may be omitted or replaced by blocks having similar or identical functions.
FIG.6illustrates a BICM block according to another embodiment of the present invention. The BICM block illustrated inFIG.6corresponds to an embodiment of the BICM block1010described with reference toFIG.1.
FIG.6illustrates a BICM block for protection of physical layer signaling (PLS), emergency alert channel (EAC) and fast information channel (FIC). EAC is a part of a frame that carries EAS information data and FIC is a logical channel in a frame that carries the mapping information between a service and the corresponding base DP. Details of the EAC and FIC will be described later.
Referring toFIG.6, the BICM block for protection of PLS, EAC and FIC can include a PLS FEC encoder6000, a bit interleaver6010, a constellation mapper6020and a time interleaver6030. Also, the PLS FEC encoder6000can include a scrambler, BCH encoding/zero insertion block, LDPC encoding block and LDPC parity puncturing block. Description will be given of each block of the BICM block.
The PLS FEC encoder6000can encode the scrambled PLS1/2data, EAC and FIC section. The scrambler can scramble PLS1data and PLS2data before BCH encoding and shortened and punctured LDPC encoding.
The BCH encoding/zero insertion block can perform outer encoding on the scrambled PLS1/2data using the shortened BCH code for PLS protection and insert zero bits after the BCH encoding. For PLS1data only, the output bits of the zero insertion may be permuted before LDPC encoding.
The LDPC encoding block can encode the output of the BCH encoding/zero insertion block using LDPC code. To generate a complete coded block Cldpc, the parity bits Pldpc are encoded systematically from each zero-inserted PLS information block Ildpc and appended after it.
$C_{ldpc} = [I_{ldpc}\ P_{ldpc}] = [i_0, i_1, \ldots, i_{K_{ldpc}-1}, p_0, p_1, \ldots, p_{N_{ldpc}-K_{ldpc}-1}]$  [Expression 1]
The LDPC code parameters for PLS1and PLS2are given in table 4 below.
TABLE 4
Signaling Type | Ksig | Kbch | Nbch_parity | Kldpc (=Nbch) | Nldpc | Nldpc_parity | code rate | Qldpc
PLS1 | 342 | 1020 | 60 | 1080 | 4320 | 3240 | 1/4 | 36
PLS2 | <1021 | 1020 | 60 | 1080 | 4320 | 3240 | 1/4 | 36
PLS2 | >1020 | 2100 | 60 | 2160 | 7200 | 5040 | 3/10 | 56
The LDPC parity puncturing block can perform puncturing on the PLS1data and PLS2data. When shortening is applied to the PLS1data protection, some LDPC parity bits are punctured after LDPC encoding. Also, for the PLS2data protection, the LDPC parity bits of PLS2are punctured after LDPC encoding. These punctured bits are not transmitted.
The bit interleaver6010can interleave each shortened and punctured PLS1data and PLS2data.
The constellation mapper6020can map the bit-interleaved PLS1data and PLS2data onto constellations.
The time interleaver6030can interleave the mapped PLS1data and PLS2data. The time interleaver6030can be omitted.
The above-described blocks may be omitted or replaced by blocks having similar or identical functions.
FIG.7illustrates a frame building block according to one embodiment of the present invention. The frame building block illustrated inFIG.7corresponds to an embodiment of the frame building block1020described with reference toFIG.1.
Referring toFIG.7, the frame building block can include a delay compensation block7000, a cell mapper7010and a frequency interleaver7020. Description will be given of each block of the frame building block. The delay compensation block7000can adjust the timing between the data pipes and the corresponding PLS data to ensure that they are co-timed at the transmitter end. The PLS data is delayed by the same amount as data pipes are by addressing the delays of data pipes caused by the Input Formatting block and BICM block. The delay of the BICM block is mainly due to the time interleaver5050. In-band signaling data carries information of the next TI group so that they are carried one frame ahead of the DPs to be signaled. The Delay Compensating block delays in-band signaling data accordingly. The cell mapper7010can map PLS, EAC, FIC, DPs, auxiliary streams and dummy cells into the active carriers of the OFDM symbols in the frame. The basic function of the cell mapper7010is to map data cells produced by the TIs for each of the DPs, PLS cells, and EAC/FIC cells, if any, into arrays of active OFDM cells corresponding to each of the OFDM symbols within a frame. Service signaling data (such as PSI (program specific information)/SI) can be separately gathered and sent by a data pipe. The Cell Mapper operates according to the dynamic information produced by the scheduler and the configuration of the frame structure. Details of the frame will be described later. The frequency interleaver7020can randomly interleave data cells received from the cell mapper7010to provide frequency diversity. Also, the frequency interleaver7020can operate on very OFDM symbol pair comprised of two sequential OFDM symbols using a different interleaving-seed order to get maximum interleaving gain in a single frame. Details of operations of the frequency interleaver7020will be described later. The above-described blocks may be omitted or replaced by blocks having similar or identical functions. FIG.8illustrates an OFMD generation block according to an embodiment of the present invention. The OFMD generation block illustrated inFIG.8corresponds to an embodiment of the OFMD generation block1030described with reference toFIG.1. The OFDM generation block modulates the OFDM carriers by the cells produced by the Frame Building block, inserts the pilots, and produces the time domain signal for transmission. Also, this block subsequently inserts guard intervals, and applies PAPR (Peak-to-Average Power Radio) reduction processing to produce the final RF signal. Referring toFIG.8, the frame building block can include a pilot and reserved tone insertion block8000, a 2D-eSFN encoding block8010, an IFFT (Inverse Fast Fourier Transform) block8020, a PAPR reduction block8030, a guard interval insertion block8040, a preamble insertion block8050, other system insertion block8060and a DAC block8070. Description will be given of each block of the frame building block. The pilot and reserved tone insertion block8000can insert pilots and the reserved tone. Various cells within the OFDM symbol are modulated with reference information, known as pilots, which have transmitted values known a priori in the receiver. The information of pilot cells is made up of scattered pilots, continual pilots, edge pilots, FSS (frame signaling symbol) pilots and FES (frame edge symbol) pilots. Each pilot is transmitted at a particular boosted power level according to pilot type and pilot pattern. 
The value of the pilot information is derived from a reference sequence, which is a series of values, one for each transmitted carrier on any given symbol. The pilots can be used for frame synchronization, frequency synchronization, time synchronization, channel estimation, and transmission mode identification, and also can be used to follow the phase noise. Reference information, taken from the reference sequence, is transmitted in scattered pilot cells in every symbol except the preamble, FSS and FES of the frame. Continual pilots are inserted in every symbol of the frame. The number and location of continual pilots depends on both the FFT size and the scattered pilot pattern. The edge carriers are edge pilots in every symbol except for the preamble symbol. They are inserted in order to allow frequency interpolation up to the edge of the spectrum. FSS pilots are inserted in FSS(s) and FES pilots are inserted in FES. They are inserted in order to allow time interpolation up to the edge of the frame. The system according to an embodiment of the present invention supports the SFN network, where distributed MISO scheme is optionally used to support very robust transmission mode. The 2D-eSFN is a distributed MISO scheme that uses multiple TX antennas, each of which is located in the different transmitter site in the SFN network. The 2D-eSFN encoding block8010can process a 2D-eSFN processing to distorts the phase of the signals transmitted from multiple transmitters, in order to create both time and frequency diversity in the SFN configuration. Hence, burst errors due to low flat fading or deep-fading for a long time can be mitigated. The IFFT block8020can modulate the output from the 2D-eSFN encoding block8010using OFDM modulation scheme. Any cell in the data symbols which has not been designated as a pilot (or as a reserved tone) carries one of the data cells from the frequency interleaver. The cells are mapped to OFDM carriers. The PAPR reduction block8030can perform a PAPR reduction on input signal using various PAPR reduction algorithm in the time domain. The guard interval insertion block8040can insert guard intervals and the preamble insertion block8050can insert preamble in front of the signal. Details of a structure of the preamble will be described later. The other system insertion block8060can multiplex signals of a plurality of broadcast transmission/reception systems in the time domain such that data of two or more different broadcast transmission/reception systems providing broadcast services can be simultaneously transmitted in the same RF signal bandwidth. In this case, the two or more different broadcast transmission/reception systems refer to systems providing different broadcast services. The different broadcast services may refer to a terrestrial broadcast service, mobile broadcast service, etc. Data related to respective broadcast services can be transmitted through different frames. The DAC block8070can convert an input digital signal into an analog signal and output the analog signal. The signal output from the DAC block7800can be transmitted through multiple output antennas according to the physical layer profiles. A Tx antenna according to an embodiment of the present invention can have vertical or horizontal polarity. The above-described blocks may be omitted or replaced by blocks having similar or identical functions according to design. 
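The core operations of the IFFT block8020and the guard interval insertion block8040described above can be sketched as follows in Python. The FFT size and guard-interval fraction used in the example are illustrative values rather than the signaled FFT_SIZE and GI_FRACTION, and the cyclic-prefix form of the guard interval is assumed here for illustration.

    # Minimal sketch of OFDM symbol generation: frequency-domain cells (data and
    # pilots already mapped to carriers) are transformed to the time domain and a
    # cyclic prefix is prepended as the guard interval.

    import numpy as np

    def ofdm_symbol(carriers: np.ndarray, gi_fraction: float = 1.0 / 8) -> np.ndarray:
        """carriers: complex cells for one OFDM symbol (length = FFT size)."""
        time_domain = np.fft.ifft(carriers)              # IFFT block 8020
        gi_len = int(len(carriers) * gi_fraction)
        guard = time_domain[-gi_len:]                     # cyclic prefix
        return np.concatenate([guard, time_domain])       # GI insertion block 8040

    # Example with an 8K-sized symbol of random unit-magnitude cells:
    symbol = ofdm_symbol(np.exp(2j * np.pi * np.random.rand(8192)), gi_fraction=1.0 / 16)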
FIG.9illustrates a structure of an apparatus for receiving broadcast signals for future broadcast services according to an embodiment of the present invention. The apparatus for receiving broadcast signals for future broadcast services according to an embodiment of the present invention can correspond to the apparatus for transmitting broadcast signals for future broadcast services, described with reference toFIG.1. The apparatus for receiving broadcast signals for future broadcast services according to an embodiment of the present invention can include a synchronization & demodulation module9000, a frame parsing module9010, a demapping & decoding module9020, an output processor9030and a signaling decoding module9040. A description will be given of operation of each module of the apparatus for receiving broadcast signals. The synchronization & demodulation module9000can receive input signals through m Rx antennas, perform signal detection and synchronization with respect to a system corresponding to the apparatus for receiving broadcast signals and carry out demodulation corresponding to a reverse procedure of the procedure performed by the apparatus for transmitting broadcast signals. The frame parsing module9100can parse input signal frames and extract data through which a service selected by a user is transmitted. If the apparatus for transmitting broadcast signals performs interleaving, the frame parsing module9100can carry out deinterleaving corresponding to a reverse procedure of interleaving. In this case, the positions of a signal and data that need to be extracted can be obtained by decoding data output from the signaling decoding module9400to restore scheduling information generated by the apparatus for transmitting broadcast signals. The demapping & decoding module9200can convert the input signals into bit domain data and then deinterleave the same as necessary. The demapping & decoding module9200can perform demapping for mapping applied for transmission efficiency and correct an error generated on a transmission channel through decoding. In this case, the demapping & decoding module9200can obtain transmission parameters necessary for demapping and decoding by decoding the data output from the signaling decoding module9400. The output processor9300can perform reverse procedures of various compression/signal processing procedures which are applied by the apparatus for transmitting broadcast signals to improve transmission efficiency. In this case, the output processor9300can acquire necessary control information from data output from the signaling decoding module9400. The output of the output processor8300corresponds to a signal input to the apparatus for transmitting broadcast signals and may be MPEG-TSs, IP streams (v4or v6) and generic streams. The signaling decoding module9400can obtain PLS information from the signal demodulated by the synchronization & demodulation module9000. As described above, the frame parsing module9100, demapping & decoding module9200and output processor9300can execute functions thereof using the data output from the signaling decoding module9400. FIG.10illustrates a frame structure according to an embodiment of the present invention. FIG.10shows an example configuration of the frame types and FRUs in a super-frame. 
(a) shows a super frame according to an embodiment of the present invention, (b) shows FRU (Frame Repetition Unit) according to an embodiment of the present invention, (c) shows frames of variable PHY profiles in the FRU and (d) shows a structure of a frame. A super-frame may be composed of eight FRUs. The FRU is a basic multiplexing unit for TDM of the frames, and is repeated eight times in a super-frame. Each frame in the FRU belongs to one of the PHY profiles, (base, handheld, advanced) or FEF. The maximum allowed number of the frames in the FRU is four and a given PHY profile can appear any number of times from zero times to four times in the FRU (e.g., base, base, handheld, advanced). PHY profile definitions can be extended using reserved values of the PHY_PROFILE in the preamble, if required. The FEF part is inserted at the end of the FRU, if included. When the FEF is included in the FRU, the minimum number of FEFs is 8 in a super-frame. It is not recommended that FEF parts be adjacent to each other. One frame is further divided into a number of OFDM symbols and a preamble. As shown in (d), the frame comprises a preamble, one or more frame signaling symbols (FSS), normal data symbols and a frame edge symbol (FES). The preamble is a special symbol that enables fast Futurecast UTB system signal detection and provides a set of basic transmission parameters for efficient transmission and reception of the signal. The detailed description of the preamble will be will be described later. The main purpose of the FSS(s) is to carry the PLS data. For fast synchronization and channel estimation, and hence fast decoding of PLS data, the FSS has more dense pilot pattern than the normal data symbol. The FES has exactly the same pilots as the FSS, which enables frequency-only interpolation within the FES and temporal interpolation, without extrapolation, for symbols immediately preceding the FES. FIG.11illustrates a signaling hierarchy structure of the frame according to an embodiment of the present invention. FIG.11illustrates the signaling hierarchy structure, which is split into three main parts: the preamble signaling data11000, the PLS1data11010and the PLS2data11020. The purpose of the preamble, which is carried by the preamble symbol in every frame, is to indicate the transmission type and basic transmission parameters of that frame. The PLS1enables the receiver to access and decode the PLS2data, which contains the parameters to access the DP of interest. The PLS2is carried in every frame and split into two main parts: PLS2-STAT data and PLS2-DYN data. The static and dynamic portion of PLS2data is followed by padding, if necessary. FIG.12illustrates preamble signaling data according to an embodiment of the present invention. Preamble signaling data carries 21 bits of information that are needed to enable the receiver to access PLS data and trace DPs within the frame structure. Details of the preamble signaling data are as follows: PHY_PROFILE: This 3-bit field indicates the PHY profile type of the current frame. The mapping of different PHY profile types is given in below table 5. TABLE 5ValuePHY Profile000Base profile001Handheld profile010Advanced profiled011~110Reserved111FEF FFT_SIZE: This 2 bit field indicates the FFT size of the current frame within a frame-group, as described in below table 6. TABLE 6ValueFFT size008K FFT0116K FFT1032K FFT11Reserved GI_FRACTION: This 3 bit field indicates the guard interval fraction value in the current super-frame, as described in below table 7. 
TABLE 7ValueGI_FRACTION0001/50011/100101/200111/401001/801011/160110~111Reserved EAC_FLAG: This 1 bit field indicates whether the EAC is provided in the current frame. If this field is set to ‘1’, emergency alert service (EAS) is provided in the current frame. If this field set to ‘0’, EAS is not carried in the current frame. This field can be switched dynamically within a super-frame. PILOT_MODE: This 1-bit field indicates whether the pilot mode is mobile mode or fixed mode for the current frame in the current frame-group. If this field is set to ‘0’, mobile pilot mode is used. If the field is set to ‘1’, the fixed pilot mode is used. PAPR_FLAG: This 1-bit field indicates whether PAPR reduction is used for the current frame in the current frame-group. If this field is set to value ‘1’, tone reservation is used for PAPR reduction. If this field is set to ‘0’, PAPR reduction is not used. FRU_CONFIGURE: This 3-bit field indicates the PHY profile type configurations of the frame repetition units (FRU) that are present in the current super-frame. All profile types conveyed in the current super-frame are identified in this field in all preambles in the current super-frame. The 3-bit field has a different definition for each profile, as show in below table 8. TABLE 8CurrentCurrentCurrentCurrentPHY_PROFILE =PHY_PROFILE =PHY_PROFILE =PHY_PROFILE =‘000’ (base)‘001’ (handheld)‘010’ (advanced)‘111’ (FEF)FRU_CONFIGURE =Only baseOnly handheldOnly advancedOnly FEF000profile presentprofile presentprofile presentpresentFRU_CONFIGURE =HandheldBase profileBase profileBase profile1XXprofile presentpresentpresentpresentFRU_CONFIGURE =AdvancedAdvancedHandheldHandheldX1Xprofile presentprofile presentprofile presentprofile presentFRU_CONFIGURE =FEF presentFEF presentFEF presentAdvancedXX1profile present RESERVED: This 7-bit field is reserved for future use. FIG.13illustrates PLS1data according to an embodiment of the present invention. PLS1data provides basic transmission parameters including parameters required to enable the reception and decoding of the PLS2. As above mentioned, the PLS1data remain unchanged for the entire duration of one frame-group. The detailed definition of the signaling fields of the PLS1data are as follows: PREAMBLE_DATA: This 20-bit field is a copy of the preamble signaling data excluding the EAC_FLAG. NUM_FRAME_FRU: This 2-bit field indicates the number of the frames per FRU. PAYLOAD_TYPE: This 3-bit field indicates the format of the payload data carried in the frame-group. PAYLOAD_TYPE is signaled as shown in table 9. TABLE 9valuePayload type1XXTS stream is transmittedX1XIP stream is transmittedXX1GS stream is transmitted NUM_FSS: This 2-bit field indicates the number of FSS symbols in the current frame. SYSTEM_VERSION: This 8-bit field indicates the version of the transmitted signal format. The SYSTEM_VERSION is divided into two 4-bit fields, which are a major version and a minor version. Major version: The MSB four bits of SYSTEM_VERSION field indicate major version information. A change in the major version field indicates a non-backward-compatible change. The default value is ‘0000’. For the version described in this standard, the value is set to ‘0000’. Minor version: The LSB four bits of SYSTEM_VERSION field indicate minor version information. A change in the minor version field is backward-compatible. CELL_ID: This is a 16-bit field which uniquely identifies a geographic cell in an ATSC network. 
An ATSC cell coverage area may consist of one or more frequencies, depending on the number of frequencies used per Futurecast UTB system. If the value of the CELL_ID is not known or unspecified, this field is set to ‘0’. NETWORK_ID: This is a 16-bit field which uniquely identifies the current ATSC network. SYSTEM_ID: This 16-bit field uniquely identifies the Futurecast UTB system within the ATSC network. The Futurecast UTB system is the terrestrial broadcast system whose input is one or more input streams (TS, IP, GS) and whose output is an RF signal. The Futurecast UTB system carries one or more PHY profiles and FEF, if any. The same Futurecast UTB system may carry different input streams and use different RF frequencies in different geographical areas, allowing local service insertion. The frame structure and scheduling is controlled in one place and is identical for all transmissions within a Futurecast UTB system. One or more Futurecast UTB systems may have the same SYSTEM_ID meaning that they all have the same physical layer structure and configuration. The following loop consists of FRU_PHY_PROFILE, FRU_FRAME_LENGTH, FRU_GI_FRACTION, and RESERVED which are used to indicate the FRU configuration and the length of each frame type. The loop size is fixed so that four PHY profiles (including a FEF) are signaled within the FRU. If NUM_FRAME_FRU is less than 4, the unused fields are filled with zeros. FRU_PHY_PROFILE: This 3-bit field indicates the PHY profile type of the (i+1)th(i is the loop index) frame of the associated FRU. This field uses the same signaling format as shown in the table 8. FRU_FRAME_LENGTH: This 2-bit field indicates the length of the (i+1)thframe of the associated FRU. Using FRU_FRAME_LENGTH together with FRU_GI_FRACTION, the exact value of the frame duration can be obtained. FRU_GI_FRACTION: This 3-bit field indicates the guard interval fraction value of the (i+1)thframe of the associated FRU. FRU_GI_FRACTION is signaled according to the table 7. RESERVED: This 4-bit field is reserved for future use. The following fields provide parameters for decoding the PLS2data. PLS2_FEC_TYPE: This 2-bit field indicates the FEC type used by the PLS2protection. The FEC type is signaled according to table 10. The details of the LDPC codes will be described later. TABLE 10ContentPLS2 FEC type004K-1/4 and 7K-3/10 LDPC codes01~11Reserved PLS2_MOD: This 3-bit field indicates the modulation type used by the PLS2. The modulation type is signaled according to table 11. TABLE 11ValuePLS2_MODE000BPSK001QPSK010QAM-16011NUQ-64100~111Reserved PLS2_SIZE_CELL: This 15 bit field indicates Ctotal_partial_block, the size (specified as the number of QAM cells) of the collection of full coded blocks for PLS2that is carried in the current frame-group. This value is constant during the entire duration of the current frame-group. PLS2_STAT_SIZE_BIT: This 14-bit field indicates the size, in bits, of the PLS2-STAT for the current frame-group. This value is constant during the entire duration of the current frame-group. PLS2_DYN_SIZE_BIT: This 14-bit field indicates the size, in bits, of the PLS2-DYN for the current frame-group. This value is constant during the entire duration of the current frame-group. PLS2_REP_FLAG: This 1-bit flag indicates whether the PLS2repetition mode is used in the current frame-group. When this field is set to value ‘1’, the PLS2repetition mode is activated. When this field is set to value ‘0’, the PLS2repetition mode is deactivated. 
PLS2_REP_SIZE_CELL: This 15-bit field indicates Ctotal_partial_block, the size (specified as the number of QAM cells) of the collection of partial coded blocks for PLS2carried in every frame of the current frame-group, when PLS2repetition is used. If repetition is not used, the value of this field is equal to 0. This value is constant during the entire duration of the current frame-group. PLS2_NEXT_FEC_TYPE: This 2-bit field indicates the FEC type used for PLS2that is carried in every frame of the next frame-group. The FEC type is signaled according to the table 10. PLS2_NEXT_MOD: This 3-bit field indicates the modulation type used for PLS2that is carried in every frame of the next frame-group. The modulation type is signaled according to the table 11. PLS2_NEXT_REP_FLAG: This 1-bit flag indicates whether the PLS2repetition mode is used in the next frame-group. When this field is set to value ‘1’, the PLS2repetition mode is activated. When this field is set to value ‘0’, the PLS2repetition mode is deactivated. PLS2_NEXT_REP_SIZE_CELL: This 15-bit field indicates Ctotal_full_block, The size (specified as the number of QAM cells) of the collection of full coded blocks for PLS2that is carried in every frame of the next frame-group, when PLS2repetition is used. If repetition is not used in the next frame-group, the value of this field is equal to 0. This value is constant during the entire duration of the current frame-group. PLS2_NEXT_REP_STAT_SIZE_BIT: This 14-bit field indicates the size, in bits, of the PLS2-STAT for the next frame-group. This value is constant in the current frame-group. PLS2_NEXT_REP_DYN_SIZE_BIT: This 14-bit field indicates the size, in bits, of the PLS2-DYN for the next frame-group. This value is constant in the current frame-group. PLS2_AP_MODE: This 2-bit field indicates whether additional parity is provided for PLS2in the current frame-group. This value is constant during the entire duration of the current frame-group. The below table 12 gives the values of this field. When this field is set to ‘00’, additional parity is not used for the PLS2in the current frame-group. TABLE 12ValuePLS2-AP mode00AP is not provided01AP1 mode10~11Reserved PLS2_AP_SIZE_CELL: This 15-bit field indicates the size (specified as the number of QAM cells) of the additional parity bits of the PLS2. This value is constant during the entire duration of the current frame-group. PLS2_NEXT_AP_MODE: This 2-bit field indicates whether additional parity is provided for PLS2signaling in every frame of next frame-group. This value is constant during the entire duration of the current frame-group. The table 12 defines the values of this field PLS2_NEXT_AP_SIZE_CELL: This 15-bit field indicates the size (specified as the number of QAM cells) of the additional parity bits of the PLS2in every frame of the next frame-group. This value is constant during the entire duration of the current frame-group. RESERVED: This 32-bit field is reserved for future use. CRC_32: A 32-bit error detection code, which is applied to the entire PLS1signaling. FIG.14illustrates PLS2data according to an embodiment of the present invention. FIG.14illustrates PLS2-STAT data of the PLS2data. The PLS2-STAT data are the same within a frame-group, while the PLS2-DYN data provide information that is specific for the current frame. The details of fields of the PLS2-STAT data are as follows: FIC_FLAG: This 1-bit field indicates whether the FIC is used in the current frame-group. 
If this field is set to ‘1’, the FIC is provided in the current frame. If this field set to ‘0’, the FIC is not carried in the current frame. This value is constant during the entire duration of the current frame-group. AUX_FLAG: This 1-bit field indicates whether the auxiliary stream(s) is used in the current frame-group. If this field is set to ‘1’, the auxiliary stream is provided in the current frame. If this field set to ‘0’, the auxiliary stream is not carried in the current frame. This value is constant during the entire duration of current frame-group. NUM_DP: This 6-bit field indicates the number of DPs carried within the current frame. The value of this field ranges from 1 to 64, and the number of DPs is NUM_DP+1. DP_ID: This 6-bit field identifies uniquely a DP within a PHY profile. DP_TYPE: This 3-bit field indicates the type of the DP. This is signaled according to the below table 13. TABLE 13ValueDP Type000DP Type 1001DP Type 2010~111reserved DP_GROUP_ID: This 8-bit field identifies the DP group with which the current DP is associated. This can be used by a receiver to access the DPs of the service components associated with a particular service, which will have the same DP_GROUP_ID. BASE_DP_ID: This 6-bit field indicates the DP carrying service signaling data (such as PSI/SI) used in the Management layer. The DP indicated by BASE_DP_ID may be either a normal DP carrying the service signaling data along with the service data or a dedicated DP carrying only the service signaling data DP_FEC_TYPE: This 2-bit field indicates the FEC type used by the associated DP. The FEC type is signaled according to the below table 14. TABLE 14ValueFEC_TYPE0016K LDPC0164K LDPC10~11Reserved DP_COD: This 4-bit field indicates the code rate used by the associated DP. The code rate is signaled according to the below table 15. TABLE 15ValueCode rate00005/1500016/1500107/1500118/1501009/15010110/15011011/15011112/15100013/151001~1111Reserved DP_MOD: This 4-bit field indicates the modulation used by the associated OP. The modulation is signaled according to the below table 16. TABLE 16ValueModulation0000QPSK0001QAM-160010NUQ-640011NUQ-2560100NUQ-10240101NUC-160110NUC-640111NUC-2561000NUC-10241001~1111reserved DP_SSD_FLAG: This 1-bit field indicates whether the SSD mode is used in the associated OP. If this field is set to value ‘1’, SSD is used. If this field is set to value ‘0’, SSD is not used. The following field appears only if PHY_PROFILE is equal to ‘010’, which indicates the advanced profile: DP_MIMO: This 3-bit field indicates which type of MIMO encoding process is applied to the associated DP. The type of MIMO encoding process is signaled according to the table 17. TABLE 17ValueMIMO encoding0000FR-SM0001FRFD-SM010~111reserved DP_TI_TYPE: This 1-bit field indicates the type of time-interleaving. A value of ‘0’ indicates that one TI group corresponds to one frame and contains one or more TI-blocks. A value of ‘1’ indicates that one TI group is carried in more than one frame and contains only one TI-block. DP_TI_LENGTH: The use of this 2-bit field (the allowed values are only 1, 2, 4, 8) is determined by the values set within the DP_TI_TYPE field as follows: If the DP_TI_TYPE is set to the value ‘1’, this field indicates PI, the number of the frames to which each TI group is mapped, and there is one TI-block per TI group (NTI,=1). The allowed PIvalues with 2-bit field are defined in the below table 18. 
If the DP_TI_TYPE is set to the value ‘0’, this field indicates the number of TI-blocks NTIper TI group, and there is one TI group per frame (PI=1). The allowed PIvalues with 2-bit field are defined in the below table 18. TABLE 182-bit fieldPINTI0011012210431184 DP_FRAME_INTERVAL: This 2-bit field indicates the frame interval (IJump) within the frame-group for the associated DP and the allowed values are 1, 2, 4, 8 (the corresponding 2-bit field is ‘00’, ‘01’, ‘10’, or ‘11’, respectively). For DPs that do not appear every frame of the frame-group, the value of this field is equal to the interval between successive frames. For example, if a DP appears on the frames1,5,9,13, etc., this field is set to 4. For DPs that appear in every frame, this field is set to ‘1’. DP_TI_BYPASS: This 1-bit field determines the availability of time interleaver5050. If time interleaving is not used for a DP, it is set to ‘1’. Whereas if time interleaving is used it is set to ‘0’. DP_FIRST_FRAME_IDX: This 5-bit field indicates the index of the first frame of the super-frame in which the current DP occurs. The value of DP_FIRST_FRAME_IDX ranges from 0 to 31 DP_NUM_BLOCK_MAX: This 10-bit field indicates the maximum value of DP_NUM_BLOCKS for this DP. The value of this field has the same range as DP_NUM_BLOCKS. DP_PAYLOAD_TYPE: This 2-bit field indicates the type of the payload data carried by the given DP. DP_PAYLOAD_TYPE is signaled according to the below table 19. TABLE 19ValuePayload Type00TS.01IP10GS11reserved DP_INBAND_MODE: This 2-bit field indicates whether the current DP carries in-band signaling information. The in-band signaling type is signaled according to the below table 20. TABLE 20ValueIn-band mode00In-band signaling is not carried.01INBAND-PLS is carried only10INBAND-ISSY is carried only11INBAND-PLS and INBAND-ISSY are carried DP_PROTOCOL_TYPE: This 2-bit field indicates the protocol type of the payload carried by the given DP. It is signaled according to the below table 21 when input payload types are selected. TABLE 21If DP_PAY-If DP_PAY-If DP_PAY-LOAD_TYPELOAD_TYPELOAD_TYPEValueIs TSIs IPIs GS00MPEG2-TSIPv4(Note)01ReservedIPv6Reserved10ReservedReservedReserved11ReservedReservedReserved DP_CRC_MODE: This 2-bit field indicates whether CRC encoding is used in the Input Formatting block. The CRC mode is signaled according to the below table 22. TABLE 22ValueCRC mode00Not used01CRC-810CRC-1611CRC-32 DNP_MODE: This 2-bit field indicates the null-packet deletion mode used by the associated DP when DP_PAYLOAD_TYPE is set to TS (‘00’). DNP_MODE is signaled according to the below table 23. If DP_PAYLOAD_TYPE is not TS (‘00’), DNP_MODE is set to the value ‘00’. TABLE 23ValueNull-packet deletion mode00Not used01DNP-NORMAL10DNP-OFFSET11reserved ISSY_MODE: This 2-bit field indicates the ISSY mode use by the associated DP when DP_PAYLOAD_TYPE is set to TS (‘00’). The ISSY_MODE is signaled according to the below table 24 If DP_PAYLOAD_TYPE is not TS (‘00’), ISSY_MODE is set to the value ‘00’. TABLE 24ValueISSY mode00Not used01ISSY-UP10ISSY-BBF11reserved HC_MODE_TS: This 2-bit field indicates the TS header compression mode used by the associated DP when DP_PAYLOAD_TYPE is set to TS (‘00’). The HC_MODE_TS is signaled according to the below table 25. TABLE 25ValueHeader compression mode00HC_MODE_TS 101HC_MODE_TS 210HC_MODE_TS 311HC_MODE_TS 4 HC_MODE_IP: This 2-bit field indicates the IP header compression mode when DP_PAYLOAD_TYPE is set to IP (‘01’). The HC_MODE_IP is signaled according to the below table 26. 
TABLE 26ValueHeader compression mode00No compression01HC_MODE_IP 110-11reserved PID: This 13-bit field indicates the PID number for TS header compression when DP_PAYLOAD_TYPE is set to TS (‘00’) and HC_MODE_TS is set to ‘01’ or ‘10’. RESERVED: This 8-bit field is reserved for future use. The following field appears only if FIC_FLAG is equal to ‘1’: FIC_VERSION: This 8-bit field indicates the version number of the FIC. FIC_LENGTH_BYTE: This 13-bit field indicates the length, in bytes, of the FIC. RESERVED: This 8-bit field is reserved for future use. The following field appears only if AUX_FLAG is equal to ‘1’: NUM_AUX: This 4-bit field indicates the number of auxiliary streams. Zero means no auxiliary streams are used. AUX_CONFIG_RFU: This 8-bit field is reserved for future use. AUX_STREAM_TYPE: This 4-bit is reserved for future use for indicating the type of the current auxiliary stream. AUX_PRIVATE_CONFIG: This 28-bit field is reserved for future use for signaling auxiliary streams. FIG.15illustrates PLS2data according to another embodiment of the present invention. FIG.15illustrates PLS2-DYN data of the PLS2data. The values of the PLS2-DYN data may change during the duration of one frame-group, while the size of fields remains constant. The details of fields of the PLS2-DYN data are as follows: FRAME_INDEX: This 5-bit field indicates the frame index of the current frame within the super-frame. The index of the first frame of the super-frame is set to ‘0’. PLS_CHANGE_COUNTER: This 4-bit field indicates the number of super-frames ahead where the configuration will change. The next super-frame with changes in the configuration is indicated by the value signaled within this field. If this field is set to the value ‘0000’, it means that no scheduled change is foreseen: e.g., value ‘1’ indicates that there is a change in the next super-frame. FIC_CHANGE_COUNTER: This 4-bit field indicates the number of super-frames ahead where the configuration (i.e., the contents of the FIC) will change. The next super-frame with changes in the configuration is indicated by the value signaled within this field. If this field is set to the value ‘0000’, it means that no scheduled change is foreseen: e.g. value ‘0001’ indicates that there is a change in the next super-frame. RESERVED: This 16-bit field is reserved for future use. The following fields appear in the loop over NUM_DP, which describe the parameters associated with the DP carried in the current frame. (a) DP_ID: This 6-bit field indicates uniquely the DP within a PHY profile. DP_START: This 15-bit (or 13-bit) field indicates the start position of the first of the DPs using the DPU addressing scheme. The DP_START field has differing length according to the PHY profile and FFT size as shown in the below table 27. TABLE 27DP_START field sizePHY profile64K16KBase13 bit15 bitHandheld—13 bitAdvanced13 bit15 bit DP_NUM_BLOCK: This 10-bit field indicates the number of FEC blocks in the current TI group for the current DP. The value of DP_NUM_BLOCK ranges from 0 to 1023 (b) RESERVED: This 8-bit field is reserved for future use. The following fields indicate the FIC parameters associated with the EAC. EAC_FLAG: This 1-bit field indicates the existence of the EAC in the current frame. This bit is the same value as the EAC_FLAG in the preamble. EAS_WAKE_UP_VERSION_NUM: This 8-bit field indicates the version number of a wake-up indication. If the EAC_FLAG field is equal to ‘1’, the following 12 bits are allocated for EAC_LENGTH_BYTE field. 
If the EAC_FLAG field is equal to ‘0’, the following 12 bits are allocated for EAC_COUNTER. EAC_LENGTH_BYTE: This 12-bit field indicates the length, in byte, of the EAC. EAC_COUNTER: This 12-bit field indicates the number of the frames before the frame where the EAC arrives. The following field appears only if the AUX_FLAG field is equal to ‘1’: (c) AUX_PRIVATE_DYN: This 48-bit field is reserved for future use for signaling auxiliary streams. The meaning of this field depends on the value of AUX_STREAM_TYPE in the configurable PLS2-STAT. CRC_32: A 32-bit error detection code, which is applied to the entire PLS2. FIG.16illustrates a logical structure of a frame according to an embodiment of the present invention. As above mentioned, the PLS, EAC, FIC, DPs, auxiliary streams and dummy cells are mapped into the active carriers of the OFDM symbols in the frame. The PLS1and PLS2are first mapped into one or more FSS(s). After that, EAC cells, if any, are mapped immediately following the PLS field, followed next by FIC cells, if any. The DPs are mapped next after the PLS or EAC, FIC, if any. Type 1 DPs follows first, and Type 2 DPs next. The details of a type of the DP will be described later. In some case, DPs may carry some special data for EAS or service signaling data. The auxiliary stream or streams, if any, follow the DPs, which in turn are followed by dummy cells. Mapping them all together in the above mentioned order, i.e. PLS, EAC, FIC, DPs, auxiliary streams and dummy data cells exactly fill the cell capacity in the frame. FIG.17illustrates PLS mapping according to an embodiment of the present invention. PLS cells are mapped to the active carriers of FSS(s). Depending on the number of cells occupied by PLS, one or more symbols are designated as FSS(s), and the number of FSS(s) NFSS is signaled by NUM_FSS in PLS1. The FSS is a special symbol for carrying PLS cells. Since robustness and latency are critical issues in the PLS, the FSS(s) has higher density of pilots allowing fast synchronization and frequency-only interpolation within the FSS. PLS cells are mapped to active carriers of the NFSS FSS(s) in a top-down manner as shown in an example inFIG.17. The PLS1cells are mapped first from the first cell of the first FSS in an increasing order of the cell index. The PLS2cells follow immediately after the last cell of the PLS1and mapping continues downward until the last cell index of the first FSS. If the total number of required PLS cells exceeds the number of active carriers of one FSS, mapping proceeds to the next FSS and continues in exactly the same manner as the first FSS. After PLS mapping is completed, DPs are carried next. If EAC, FIC or both are present in the current frame, they are placed between PLS and “normal” DPs. FIG.18illustrates EAC mapping according to an embodiment of the present invention. EAC is a dedicated channel for carrying EAS messages and links to the DPs for EAS. EAS support is provided but EAC itself may or may not be present in every frame. EAC, if any, is mapped immediately after the PLS2cells. EAC is not preceded by any of the FIC, DPs, auxiliary streams or dummy cells other than the PLS cells. The procedure of mapping the EAC cells is exactly the same as that of the PLS. The EAC cells are mapped from the next cell of the PLS2in increasing order of the cell index as shown in the example inFIG.18. Depending on the EAS message size, EAC cells may occupy a few symbols, as shown inFIG.18. 
EAC cells follow immediately after the last cell of the PLS2, and mapping continues downward until the last cell index of the last FSS. If the total number of required EAC cells exceeds the number of remaining active carriers of the last FSS mapping proceeds to the next symbol and continues in exactly the same manner as FSS(s). The next symbol for mapping in this case is the normal data symbol, which has more active carriers than a FSS. After EAC mapping is completed, the FIC is carried next, if any exists. If FIC is not transmitted (as signaled in the PLS2field), DPs follow immediately after the last cell of the EAC. FIG.19illustrates FIC mapping according to an embodiment of the present invention. (d) shows an example mapping of FIC cell without EAC and (b) shows an example mapping of FIC cell with EAC. FIC is a dedicated channel for carrying cross-layer information to enable fast service acquisition and channel scanning. This information primarily includes channel binding information between DPs and the services of each broadcaster. For fast scan, a receiver can decode FIC and obtain information such as broadcaster ID, number of services, and BASE_DP_ID. For fast service acquisition, in addition to FIC, base DP can be decoded using BASE_DP_ID. Other than the content it carries, a base DP is encoded and mapped to a frame in exactly the same way as a normal DP. Therefore, no additional description is required for a base DP. The FIC data is generated and consumed in the Management Layer. The content of FIC data is as described in the Management Layer specification. The FIC data is optional and the use of FIC is signaled by the FIC_FLAG parameter in the static part of the PLS2. If FIC is used, FIC_FLAG is set to ‘1’ and the signaling field for FIC is defined in the static part of PLS2. Signaled in this field are FIC_VERSION, and FIC_LENGTH_BYTE. FIC uses the same modulation, coding and time interleaving parameters as PLS2. FIC shares the same signaling parameters such as PLS2_MOD and PLS2_FEC. FIC data, if any, is mapped immediately after PLS2or EAC if any. FIC is not preceded by any normal DPs, auxiliary streams or dummy cells. The method of mapping FIC cells is exactly the same as that of EAC which is again the same as PLS. Without EAC after PLS, FIC cells are mapped from the next cell of the PLS2in an increasing order of the cell index as shown in an example in (a). Depending on the FIC data size, FIC cells may be mapped over a few symbols, as shown in (b). FIC cells follow immediately after the last cell of the PLS2, and mapping continues downward until the last cell index of the last FSS. If the total number of required FIC cells exceeds the number of remaining active carriers of the last FSS, mapping proceeds to the next symbol and continues in exactly the same manner as FSS(s). The next symbol for mapping in this case is the normal data symbol which has more active carriers than a FSS. If EAS messages are transmitted in the current frame, EAC precedes FIC, and FIC cells are mapped from the next cell of the EAC in an increasing order of the cell index as shown in (b). After FIC mapping is completed, one or more DPs are mapped, followed by auxiliary streams, if any, and dummy cells. FIG.20illustrates a type of DP according to an embodiment of the present invention. (e) shows type 1 DP and (b) shows type 2 DP. After the preceding channels, i.e., PLS, EAC and FIC, are mapped, cells of the DPs are mapped. 
A DP is categorized into one of two types according to mapping method: Type 1 DP: DP is mapped by TDM Type 2 DP: DP is mapped by FDM The type of DP is indicated by DP_TYPE field in the static part of PLS2.FIG.20illustrates the mapping orders of Type 1 DPs and Type 2 DPs. Type 1 DPs are first mapped in the increasing order of cell index, and then after reaching the last cell index, the symbol index is increased by one. Within the next symbol, the DP continues to be mapped in the increasing order of cell index starting from p=0. With a number of DPs mapped together in one frame, each of the Type 1 DPs are grouped in time, similar to TDM multiplexing of DPs. Type 2 DPs are first mapped in the increasing order of symbol index, and then after reaching the last OFDM symbol of the frame, the cell index increases by one and the symbol index rolls back to the first available symbol and then increases from that symbol index. After mapping a number of DPs together in one frame, each of the Type 2 DPs are grouped in frequency together, similar to FDM multiplexing of DPs. Type 1 DPs and Type 2 DPs can coexist in a frame if needed with one restriction; Type 1 DPs always precede Type 2 DPs. The total number of OFDM cells carrying Type 1 and Type 2 DPs cannot exceed the total number of OFDM cells available for transmission of DPs: DDP1+DDP2≤DDP[Expression 2] where DDP1is the number of OFDM cells occupied by Type 1 DPs, DDP2is the number of cells occupied by Type 2 DPs. Since PLS, EAC, FIC are all mapped in the same way as Type 1 DP, they all follow “Type 1 mapping rule”. Hence, overall, Type 1 mapping always precedes Type 2 mapping. FIG.21illustrates DP mapping according to an embodiment of the present invention. (f) shows an addressing of OFDM cells for mapping type 1 DPs and (b) shows an an addressing of OFDM cells for mapping for type 2 DPs. Addressing of OFDM cells for mapping Type 1 DPs (0, . . . , DDP1−1) is defined for the active data cells of Type 1 DPs. The addressing scheme defines the order in which the cells from the TIs for each of the Type 1 DPs are allocated to the active data cells. It is also used to signal the locations of the DPs in the dynamic part of the PLS2. Without EAC and FIC, address0refers to the cell immediately following the last cell carrying PLS in the last FSS. If EAC is transmitted and FIC is not in the corresponding frame, address0refers to the cell immediately following the last cell carrying EAC. If FIC is transmitted in the corresponding frame, address0refers to the cell immediately following the last cell carrying FIC. Address0for Type 1 DPs can be calculated considering two different cases as shown in (a). In the example in (a), PLS, EAC and FIC are assumed to be all transmitted. Extension to the cases where either or both of EAC and FIC are omitted is straightforward. If there are remaining cells in the FSS after mapping all the cells up to FIC as shown on the left side of (a). Addressing of OFDM cells for mapping Type 2 DPs (0, . . . , DDP2−1) is defined for the active data cells of Type 2 DPs. The addressing scheme defines the order in which the cells from the TIs for each of the Type 2 DPs are allocated to the active data cells. It is also used to signal the locations of the DPs in the dynamic part of the PLS2. Three slightly different cases are possible as shown in (b). For the first case shown on the left side of (b), cells in the last FSS are available for Type 2 DP mapping. 
For the second case shown in the middle, FIC occupies cells of a normal symbol, but the number of FIC cells on that symbol is not larger than CFSS. The third case, shown on the right side in (b), is the same as the second case except that the number of FIC cells mapped on that symbol exceeds CFSS. The extension to the case where Type 1 DP(s) precede Type 2 DP(s) is straightforward since PLS, EAC and FIC follow the same “Type 1 mapping rule” as the Type 1 DP(s).
A data pipe unit (DPU) is a basic unit for allocating data cells to a DP in a frame. A DPU is defined as a signaling unit for locating DPs in a frame. A Cell Mapper7010may map the cells produced by the TIs for each of the DPs. A Time interleaver5050outputs a series of TI-blocks, and each TI-block comprises a variable number of XFECBLOCKs, each of which is in turn composed of a set of cells. The number of cells in an XFECBLOCK, Ncells, is dependent on the FECBLOCK size, Nldpc, and the number of transmitted bits per constellation symbol. A DPU is defined as the greatest common divisor of all possible values of the number of cells in an XFECBLOCK, Ncells, supported in a given PHY profile. The length of a DPU in cells is defined as LDPU. Since each PHY profile supports different combinations of FECBLOCK size and a different number of bits per constellation symbol, LDPU is defined on a PHY profile basis.
FIG.22illustrates an FEC structure according to an embodiment of the present invention. FIG.22illustrates an FEC structure according to an embodiment of the present invention before bit interleaving.
As mentioned above, the Data FEC encoder may perform FEC encoding on the input BBF to generate the FECBLOCK using outer coding (BCH) and inner coding (LDPC). The illustrated FEC structure corresponds to the FECBLOCK. Also, the FECBLOCK and the FEC structure have the same value, corresponding to the length of the LDPC codeword.
The BCH encoding is applied to each BBF (Kbch bits), and then LDPC encoding is applied to the BCH-encoded BBF (Kldpc bits = Nbch bits), as illustrated inFIG.22. The value of Nldpc is either 64800 bits (long FECBLOCK) or 16200 bits (short FECBLOCK).
Table 28 and table 29 below show FEC encoding parameters for a long FECBLOCK and a short FECBLOCK, respectively.
TABLE 28
LDPC Rate | Nldpc | Kldpc | Kbch | BCH error correction capability | Nbch − Kbch
5/15 | 64800 | 21600 | 21408 | 12 | 192
6/15 | 64800 | 25920 | 25728 | 12 | 192
7/15 | 64800 | 30240 | 30048 | 12 | 192
8/15 | 64800 | 34560 | 34368 | 12 | 192
9/15 | 64800 | 38880 | 38688 | 12 | 192
10/15 | 64800 | 43200 | 43008 | 12 | 192
11/15 | 64800 | 47520 | 47328 | 12 | 192
12/15 | 64800 | 51840 | 51648 | 12 | 192
13/15 | 64800 | 56160 | 55968 | 12 | 192
TABLE 29
LDPC Rate | Nldpc | Kldpc | Kbch | BCH error correction capability | Nbch − Kbch
5/15 | 16200 | 5400 | 5232 | 12 | 168
6/15 | 16200 | 6480 | 6312 | 12 | 168
7/15 | 16200 | 7560 | 7392 | 12 | 168
8/15 | 16200 | 8640 | 8472 | 12 | 168
9/15 | 16200 | 9720 | 9552 | 12 | 168
10/15 | 16200 | 10800 | 10632 | 12 | 168
11/15 | 16200 | 11880 | 11712 | 12 | 168
12/15 | 16200 | 12960 | 12792 | 12 | 168
13/15 | 16200 | 14040 | 13872 | 12 | 168
The details of operations of the BCH encoding and LDPC encoding are as follows:
A 12-error correcting BCH code is used for outer encoding of the BBF. The BCH generator polynomials for the short FECBLOCK and the long FECBLOCK are obtained by multiplying together all polynomials.
LDPC code is used to encode the output of the outer BCH encoding. To generate a completed Bldpc (FECBLOCK), Pldpc (parity bits) is encoded systematically from each Ildpc (BCH-encoded BBF) and appended to Ildpc. The completed Bldpc (FECBLOCK) is expressed by the following expression.
$B_{ldpc} = [I_{ldpc}\ P_{ldpc}] = [i_0, i_1, \ldots, i_{K_{ldpc}-1}, p_0, p_1, \ldots, p_{N_{ldpc}-K_{ldpc}-1}]$  [Expression 3]
The parameters for long FECBLOCK and short FECBLOCK are given in tables 28 and 29 above, respectively.
The detailed procedure to calculate the Nldpc − Kldpc parity bits for the long FECBLOCK is as follows:
1) Initialize the parity bits:
The detailed procedure to calculate the Nldpc − Kldpc parity bits for the long FECBLOCK is as follows:
1) Initialize the parity bits:
p0 = p1 = p2 = . . . = pNldpc−Kldpc−1 = 0   [Expression 4]
2) Accumulate the first information bit, i0, at the parity bit addresses specified in the first row of the addresses of the parity check matrix. The details of the addresses of the parity check matrix will be described later. For example, for rate 13/15:
p983 = p983 ⊕ i0    p2815 = p2815 ⊕ i0    p4837 = p4837 ⊕ i0    p4989 = p4989 ⊕ i0
p6138 = p6138 ⊕ i0    p6458 = p6458 ⊕ i0    p6921 = p6921 ⊕ i0    p6974 = p6974 ⊕ i0
p7572 = p7572 ⊕ i0    p8260 = p8260 ⊕ i0    p8496 = p8496 ⊕ i0   [Expression 5]
3) For the next 359 information bits is, s = 1, 2, . . . , 359, accumulate is at the parity bit addresses given by the following expression:
{x + (s mod 360) × Qldpc} mod (Nldpc − Kldpc)   [Expression 6]
where x denotes the address of the parity bit accumulator corresponding to the first bit i0, and Qldpc is a code-rate-dependent constant specified in the addresses of the parity check matrix. Continuing with the example, Qldpc = 24 for rate 13/15, so for information bit i1, the following operations are performed:
p1007 = p1007 ⊕ i1    p2839 = p2839 ⊕ i1    p4861 = p4861 ⊕ i1    p5013 = p5013 ⊕ i1
p6162 = p6162 ⊕ i1    p6482 = p6482 ⊕ i1    p6945 = p6945 ⊕ i1    p6998 = p6998 ⊕ i1
p7596 = p7596 ⊕ i1    p8284 = p8284 ⊕ i1    p8520 = p8520 ⊕ i1   [Expression 7]
4) For the 361st information bit i360, the addresses of the parity bit accumulators are given in the second row of the addresses of the parity check matrix. In a similar manner, the addresses of the parity bit accumulators for the following 359 information bits is, s = 361, 362, . . . , 719 are obtained using Expression 6, where x denotes the address of the parity bit accumulator corresponding to the information bit i360, i.e., the entries in the second row of the addresses of the parity check matrix.
5) In a similar manner, for every group of 360 new information bits, a new row from the addresses of the parity check matrix is used to find the addresses of the parity bit accumulators.
After all of the information bits are exhausted, the final parity bits are obtained as follows:
6) Sequentially perform the following operations, starting with i = 1:
pi = pi ⊕ pi−1, i = 1, 2, . . . , Nldpc − Kldpc − 1   [Expression 8]
where the final content of pi, i = 0, 1, . . . , Nldpc − Kldpc − 1, is equal to the parity bit pi.
TABLE 30
Code Rate   Qldpc
5/15        120
6/15        108
7/15        96
8/15        84
9/15        72
10/15       60
11/15       48
12/15       36
13/15       24
The LDPC encoding procedure for a short FECBLOCK is in accordance with the LDPC encoding procedure for the long FECBLOCK, except that Table 30 is replaced with Table 31, and the addresses of the parity check matrix for the long FECBLOCK are replaced with the addresses of the parity check matrix for the short FECBLOCK.
TABLE 31
Code Rate   Qldpc
5/15        30
6/15        27
7/15        24
8/15        21
9/15        18
10/15       15
11/15       12
12/15       9
13/15       6
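The accumulation procedure of Expressions 4 to 8 can be sketched compactly. The address table and parameters in the sketch below are tiny made-up values, not the normative "addresses of parity check matrix" of any code rate; only the accumulation and final chaining steps follow the text.

```python
# Illustrative sketch of the systematic QC-LDPC parity accumulation
# (Expressions 4-8).  addr_rows and q_ldpc are invented toy values.

def ldpc_parity(info_bits, addr_rows, q_ldpc, n_parity):
    """Compute n_parity parity bits; addr_rows[g] holds the accumulator
    addresses for the first bit of the g-th group of 360 information bits."""
    p = [0] * n_parity                                   # Expression 4
    for s, bit in enumerate(info_bits):
        row = addr_rows[s // 360]                        # one row per 360 bits
        for x in row:                                    # Expressions 5 and 7
            addr = (x + (s % 360) * q_ldpc) % n_parity   # Expression 6
            p[addr] ^= bit
    for i in range(1, n_parity):                         # Expression 8
        p[i] ^= p[i - 1]
    return p

# Toy usage with invented parameters: 720 information bits, 2 address rows, Q = 2.
bits = [1, 0, 1, 1] * 180
print(ldpc_parity(bits, addr_rows=[[0, 3, 7], [1, 4, 6]], q_ldpc=2, n_parity=12))
```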
FIG. 23 illustrates a bit interleaving according to an embodiment of the present invention. The outputs of the LDPC encoder are bit-interleaved, which consists of parity interleaving followed by Quasi-Cyclic Block (QCB) interleaving and inner-group interleaving. (a) shows Quasi-Cyclic Block (QCB) interleaving and (b) shows inner-group interleaving. The FECBLOCK may be parity interleaved. At the output of the parity interleaving, the LDPC codeword consists of 180 adjacent QC blocks in a long FECBLOCK and 45 adjacent QC blocks in a short FECBLOCK. Each QC block in either a long or a short FECBLOCK consists of 360 bits. The parity-interleaved LDPC codeword is interleaved by QCB interleaving. The unit of QCB interleaving is a QC block. The QC blocks at the output of parity interleaving are permuted by QCB interleaving as illustrated in FIG. 23, where Ncells = 64800/ηmod or 16200/ηmod according to the FECBLOCK length. The QCB interleaving pattern is unique to each combination of modulation type and LDPC code rate.
After QCB interleaving, inner-group interleaving is performed according to the modulation type and order (ηmod), which is defined in Table 32 below. The number of QC blocks for one inner-group, NQCB_IG, is also defined.
TABLE 32
Modulation type   ηmod   NQCB_IG
QAM-16            4      2
NUC-16            4      4
NUQ-64            6      3
NUC-64            6      6
NUQ-256           8      4
NUC-256           8      8
NUQ-1024          10     5
NUC-1024          10     10
The inner-group interleaving process is performed with NQCB_IG QC blocks of the QCB interleaving output. Inner-group interleaving is a process of writing and reading the bits of the inner-group using 360 columns and NQCB_IG rows. In the write operation, the bits from the QCB interleaving output are written row-wise. The read operation is performed column-wise to read out m bits from each row, where m is equal to 1 for NUC and 2 for NUQ.
FIG. 24 illustrates a cell-word demultiplexing according to an embodiment of the present invention. (a) shows a cell-word demultiplexing for 8 and 12 bpcu MIMO and (b) shows a cell-word demultiplexing for 10 bpcu MIMO. Each cell word (c0,l, c1,l, . . . , cηmod−1,l) of the bit interleaving output is demultiplexed into (d1,0,m, d1,1,m, . . . , d1,ηmod−1,m) and (d2,0,m, d2,1,m, . . . , d2,ηmod−1,m) as shown in (a), which describes the cell-word demultiplexing process for one XFECBLOCK. For the 10 bpcu MIMO case, which uses different types of NUQ for MIMO encoding, the Bit Interleaver for NUQ-1024 is re-used. Each cell word (c0,l, c1,l, . . . , c9,l) of the Bit Interleaver output is demultiplexed into (d1,0,m, d1,1,m, . . . , d1,3,m) and (d2,0,m, d2,1,m, . . . , d2,5,m), as shown in (b).
FIG. 25 illustrates a time interleaving according to an embodiment of the present invention. (a) to (c) show examples of the TI mode. The time interleaver operates at the DP level. The parameters of time interleaving (TI) may be set differently for each DP. The following parameters, which appear in part of the PLS2-STAT data, configure the TI:
DP_TI_TYPE (allowed values: 0 or 1): Represents the TI mode; '0' indicates the mode with multiple TI blocks (more than one TI block) per TI group. In this case, one TI group is directly mapped to one frame (no inter-frame interleaving). '1' indicates the mode with only one TI block per TI group. In this case, the TI block may be spread over more than one frame (inter-frame interleaving).
DP_TI_LENGTH: If DP_TI_TYPE = '0', this parameter is the number of TI blocks NTI per TI group. For DP_TI_TYPE = '1', this parameter is the number of frames PI spread from one TI group.
DP_NUM_BLOCK_MAX (allowed values: 0 to 1023): Represents the maximum number of XFECBLOCKs per TI group.
DP_FRAME_INTERVAL (allowed values: 1, 2, 4, 8): Represents the number of frames IJUMP between two successive frames carrying the same DP of a given PHY profile.
DP_TI_BYPASS (allowed values: 0 or 1): If time interleaving is not used for a DP, this parameter is set to '1'. It is set to '0' if time interleaving is used.
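One plausible reading of how these PLS2-STAT fields translate into the interleaver configuration (NTI, PI) is sketched below; the dictionary-based signaling container is only an illustration, not a structure defined by the specification.

```python
# Small sketch interpreting the PLS2-STAT TI parameters listed above for one DP.

def ti_config(pls2_stat):
    """Return (n_ti, p_i, bypass) derived from the DP's PLS2-STAT TI fields."""
    if pls2_stat["DP_TI_BYPASS"] == 1:
        return 1, 1, True                         # time interleaving skipped
    if pls2_stat["DP_TI_TYPE"] == 0:
        # Multiple TI blocks per TI group, mapped to one frame (no inter-frame TI).
        return pls2_stat["DP_TI_LENGTH"], 1, False
    # DP_TI_TYPE == 1: one TI block per TI group, spread over PI frames.
    return 1, pls2_stat["DP_TI_LENGTH"], False

# Example matching Option-2 of Table 33 below: one TI block spread over two frames.
print(ti_config({"DP_TI_BYPASS": 0, "DP_TI_TYPE": 1, "DP_TI_LENGTH": 2}))  # (1, 2, False)
```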
Additionally, the parameter DP_NUM_BLOCK from the PLS2-DYN data is used to represent the number of XFECBLOCKs carried by one TI group of the DP. When time interleaving is not used for a DP, the following TI group, time interleaving operation, and TI mode are not considered. However, the Delay Compensation block for the dynamic configuration information from the scheduler will still be required. In each DP, the XFECBLOCKs received from the SSD/MIMO encoding are grouped into TI groups. That is, each TI group is a set of an integer number of XFECBLOCKs and will contain a dynamically variable number of XFECBLOCKs.
The number of XFECBLOCKs in the TI group of index n is denoted by NxBLOCK_Group(n) and is signaled as DP_NUM_BLOCK in the PLS2-DYN data. Note that NxBLOCK_Group(n) may vary from the minimum value of 0 to the maximum value NxBLOCK_Group_MAX (corresponding to DP_NUM_BLOCK_MAX), of which the largest value is 1023. Each TI group is either mapped directly onto one frame or spread over PI frames. Each TI group is also divided into one or more TI blocks (NTI), where each TI block corresponds to one usage of time interleaver memory. The TI blocks within the TI group may contain slightly different numbers of XFECBLOCKs. If the TI group is divided into multiple TI blocks, it is directly mapped to only one frame. There are three options for time interleaving (in addition to the extra option of skipping the time interleaving), as shown in Table 33 below.
TABLE 33
Modes      Descriptions
Option-1   Each TI group contains one TI block and is mapped directly to one frame as shown in (a). This option is signaled in the PLS2-STAT by DP_TI_TYPE = '0' and DP_TI_LENGTH = '1' (NTI = 1).
Option-2   Each TI group contains one TI block and is mapped to more than one frame. (b) shows an example, where one TI group is mapped to two frames, i.e., DP_TI_LENGTH = '2' (PI = 2) and DP_FRAME_INTERVAL (IJUMP) = 2. This provides greater time diversity for low data-rate services. This option is signaled in the PLS2-STAT by DP_TI_TYPE = '1'.
Option-3   Each TI group is divided into multiple TI blocks and is mapped directly to one frame as shown in (c). Each TI block may use the full TI memory, so as to provide the maximum bit-rate for a DP. This option is signaled in the PLS2-STAT signaling by DP_TI_TYPE = '0' and DP_TI_LENGTH = NTI, while PI = 1.
In each DP, the TI memory stores the input XFECBLOCKs (output XFECBLOCKs from the SSD/MIMO encoding block). Assume that the input XFECBLOCKs are defined as
(dn,s,0,0, dn,s,0,1, . . . , dn,s,0,Ncells−1, dn,s,1,0, . . . , dn,s,1,Ncells−1, . . . , dn,s,NxBLOCK_TI(n,s)−1,0, . . . , dn,s,NxBLOCK_TI(n,s)−1,Ncells−1),
where dn,s,r,q is the qth cell of the rth XFECBLOCK in the sth TI block of the nth TI group and represents the outputs of the SSD and MIMO encoding as follows:
dn,s,r,q = fn,s,r,q, the output of SSD encoding, or
dn,s,r,q = gn,s,r,q, the output of MIMO encoding.
In addition, assume that the output XFECBLOCKs from the time interleaver 5050 are defined as
(hn,s,0, hn,s,1, . . . , hn,s,i, . . . , hn,s,NxBLOCK_TI(n,s)×Ncells−1),
where hn,s,i is the ith output cell (for i = 0, . . . , NxBLOCK_TI(n,s)×Ncells−1) in the sth TI block of the nth TI group.
Typically, the time interleaver will also act as a buffer for DP data prior to the process of frame building. This is achieved by means of two memory banks for each DP. The first TI-block is written to the first bank. The second TI-block is written to the second bank while the first bank is being read from, and so on. The TI is a twisted row-column block interleaver. For the sth TI block of the nth TI group, the number of rows Nr of a TI memory is equal to the number of cells Ncells, i.e., Nr = Ncells, while the number of columns Nc is equal to the number NxBLOCK_TI(n,s).
FIG. 26 illustrates the basic operation of a twisted row-column block interleaver according to an embodiment of the present invention. (a) shows a writing operation in the time interleaver 5050 and (b) shows a reading operation in the time interleaver 5050. The first XFECBLOCK is written column-wise into the first column of the TI memory, the second XFECBLOCK is written into the next column, and so on, as shown in (a). Then, in the interleaving array, cells are read out diagonal-wise.
During diagonal-wise reading from the first row (rightwards along the row beginning with the left-most column) to the last row, Nr cells are read out, as shown in (b). In detail, assuming zn,s,i (i = 0, . . . , NrNc−1) as the TI memory cell position to be read sequentially, the reading process in such an interleaving array is performed by calculating the row index Rn,s,i, the column index Cn,s,i, and the associated twisting parameter Tn,s,i as in the following expression:
GENERATE(Rn,s,i, Cn,s,i) =
{
Rn,s,i = mod(i, Nr),
Tn,s,i = mod(Sshift × Rn,s,i, Nc),
Cn,s,i = mod(Tn,s,i + ⌊i/Nr⌋, Nc)
}   [Expression 9]
where Sshift is a common shift value for the diagonal-wise reading process regardless of NxBLOCK_TI(n,s), and it is determined by NxBLOCK_TI_MAX given in the PLS2-STAT as in the following expression:
N′xBLOCK_TI_MAX = NxBLOCK_TI_MAX + 1, if NxBLOCK_TI_MAX mod 2 = 0
N′xBLOCK_TI_MAX = NxBLOCK_TI_MAX, if NxBLOCK_TI_MAX mod 2 = 1
Sshift = (N′xBLOCK_TI_MAX − 1)/2   [Expression 10]
As a result, the cell positions to be read are calculated by the coordinate zn,s,i = NrCn,s,i + Rn,s,i.
FIG. 27 illustrates an operation of a twisted row-column block interleaver according to another embodiment of the present invention. More specifically, FIG. 27 illustrates the interleaving array in the TI memory for each TI group, including virtual XFECBLOCKs, when NxBLOCK_TI(0,0) = 3, NxBLOCK_TI(1,0) = 6, and NxBLOCK_TI(2,0) = 5. The variable number NxBLOCK_TI(n,s) will be less than or equal to N′xBLOCK_TI_MAX. Thus, in order to achieve single-memory deinterleaving at the receiver side regardless of NxBLOCK_TI(n,s), the interleaving array for use in a twisted row-column block interleaver is set to the size of Nr × Nc = Ncells × N′xBLOCK_TI_MAX by inserting the virtual XFECBLOCKs into the TI memory, and the reading process is accomplished as in the following expression:
p = 0;
for i = 0; i < Ncells N′xBLOCK_TI_MAX; i = i + 1
{
GENERATE(Rn,s,i, Cn,s,i);
Vi = NrCn,s,i + Rn,s,i;
if Vi < Ncells NxBLOCK_TI(n,s)
{
Zn,s,p = Vi; p = p + 1;
}
}
[Expression 11]
As an example, the number of TI groups is set to 3. The option of the time interleaver is signaled in the PLS2-STAT data by DP_TI_TYPE = '0', DP_FRAME_INTERVAL = '1', and DP_TI_LENGTH = '1', i.e., NTI = 1, IJUMP = 1, and PI = 1. The number of XFECBLOCKs, each of which has Ncells = 30 cells, per TI group is signaled in the PLS2-DYN data by NxBLOCK_TI(0,0) = 3, NxBLOCK_TI(1,0) = 6, and NxBLOCK_TI(2,0) = 5, respectively. The maximum number of XFECBLOCKs is signaled in the PLS2-STAT data by NxBLOCK_Group_MAX, which leads to └NxBLOCK_Group_MAX/NTI┘ = NxBLOCK_TI_MAX = 6.
FIG. 28 illustrates a diagonal-wise reading pattern of a twisted row-column block interleaver according to an embodiment of the present invention. More specifically, FIG. 28 shows the diagonal-wise reading pattern from each interleaving array with parameters of N′xBLOCK_TI_MAX = 7 and Sshift = (7−1)/2 = 3. Note that in the reading process shown as pseudocode above, if Vi ≥ Ncells NxBLOCK_TI(n,s), the value of Vi is skipped and the next calculated value of Vi is used. (An illustrative sketch of this reading process is given at the end of this passage.)
FIG. 29 illustrates the interleaved XFECBLOCKs from each interleaving array with parameters of N′xBLOCK_TI_MAX = 7 and Sshift = 3 according to an embodiment of the present invention.
Hereinafter, a frequency interleaving procedure according to an embodiment of the present invention will be described. The purpose of the frequency interleaver 7020 in the present invention, which operates on a single OFDM symbol, is to provide frequency diversity by randomly interleaving data cells received from the cell mapper 7010.
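The sketch referred to above ties together Expressions 9 to 11 using the worked example of FIG. 28 (Ncells = 30 and NxBLOCK_TI_MAX = 6, hence N′xBLOCK_TI_MAX = 7 and Sshift = 3). It is illustrative only and does not claim to be the normative time interleaver implementation.

```python
# Diagonal-wise reading of the twisted row-column block interleaver
# (Expressions 9-11), skipping the cells of virtual XFECBLOCKs.

def twisted_read_addresses(n_cells, n_xblock_ti, n_xblock_ti_max):
    # Expression 10: force the maximum column count to be odd.
    n_max = n_xblock_ti_max + 1 if n_xblock_ti_max % 2 == 0 else n_xblock_ti_max
    s_shift = (n_max - 1) // 2
    n_r, n_c = n_cells, n_max                # rows = cells, columns incl. virtual blocks
    addresses = []
    for i in range(n_r * n_c):               # loop of Expression 11
        r = i % n_r                          # Expression 9: row index
        t = (s_shift * r) % n_c              #               twisting parameter
        c = (t + i // n_r) % n_c             #               column index
        v = n_r * c + r
        if v < n_cells * n_xblock_ti:        # memory-index check: skip virtual XFECBLOCKs
            addresses.append(v)
    return addresses

# TI block of the first TI group in the example: 3 XFECBLOCKs of 30 cells each.
addrs = twisted_read_addresses(n_cells=30, n_xblock_ti=3, n_xblock_ti_max=6)
print(len(addrs), addrs[:8])                 # 90 addresses, read out diagonal-wise
```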
In order to get maximum interleaving gain in a single signal frame (or frame), a different interleaving-seed is used for every OFDM symbol pair comprised of two sequential OFDM symbols. The frequency interleaver7020may interleave cells in a transport block as a unit of a signal frame to acquire additional diversity gain. According to an embodiment of the present invention, the frequency interleaver7020may apply different interleaving seeds to at least one OFDM sysmbol or apply different interleaving seeds to a frame including a plurality of OFDM symbols. In the present invention, the aforementioned frequency interleaving method may be referred to as random frequency interleaving (random FI). In addition, according to an embodiment of the present invention, the random FI may be applied to a super-frame structure including a plurality of signal frames with a plurality of OFDM symbols. As described above, a broadcast signal transmitting apparatus or a frequency interleaver7020therein according to an embodiment of the present invention may apply different interleaving seeds (or interleaving patterns) for at least one OFDM symbol, that is, for each OFDM symbol or each of pair-wise OFDM symbols (or each OFDM symbol pair) and perform the random FI, thereby acquiring frequency diversity. In addition, the frequency interleaver7020according to an embodiment of the present invention may apply different interleaving seed for each respective signal frame and perform the random FI, thereby acquiring additional frequency diversity. Accordingly, a broadcast transmitting apparatus or a frequency interleaver7020according to an embodiment of the present invention may have a ping-pong frequency interleaver7020structure that perform frequency interleaving in units of one pair of consecutive OFDM symbols (pair-wise OFDM symbol) using two memory banks. Hereinafter, an interleaving operation of the frequency interleaver7020according to an embodiment of the present invention may be referred to as pair-wise symbol FI (or pair-wise FI) or ping-pong FI (ping-pong interleaving). The aforementioned interleaving operation corresponds to an embodiment of the random FI, which can be changed according to a designer's intention. Even-indexed pair-wise OFDM symbols and odd pair-wise OFDM symbols may be intermittently interleaved via different FI memory banks. In addition, the frequency interleaver7020according to an embodiment of the present invention may simultaneously perform reading and writing operations on one pair of consecutive OFDM symbols input to each memory bank using an arbitrary interleaving seed. A detailed operation will be described below. In addition, according to an embodiment of the present invention, as a logical frequency interleaving operation for logically and effectively interleaving all OFDM symbols in a super-frame, an interleaving seed is basically changed in units of one pair of OFDM symbols. In this case, according to an embodiment of the present invention, the interleaving seed may be generated by an arbitrary random generator or a random generator formed by a combination of various random generators. In addition, according to an embodiment of the present invention, various interleaving seeds may be generated by cyclic-shifting one main interleaving seed in order to effectively change an interleaving seed. In this case, a cyclic-shifting rule may be hierarchically defined in consideration of OFDM symbol and signal frame units. 
That is, a different interleaving seed to be used for every OFDM symbol pair can be generated by cyclic-shifting one interleaving seed (the main interleaving seed). Therefore, the symbol offset according to the present invention may be referred to as a cyclic shifting value. This can be changed according to a designer's intention, which will be described in detail. A broadcast signal receiving apparatus according to an embodiment of the present invention may perform an inverse procedure of the aforementioned random frequency interleaving. In this case, the broadcast signal receiving apparatus or a frequency deinterleaver thereof according to an embodiment of the present invention may not use a ping-pong structure using a double memory and may perform deinterleaving on consecutive input OFDM symbols via a single memory. Accordingly, memory use efficiency can be enhanced. Reading and writing operations are still required; this is called a single-memory deinterleaving operation. Such a deinterleaving scheme is very efficient in terms of memory use.
FIG. 30 is a view illustrating an operation of a frequency interleaver 7020 according to an embodiment of the present invention. FIG. 30 illustrates the basic operation of the frequency interleaver 7020 using two memory banks at the transmitter, which enables single-memory deinterleaving at the receiver. As described above, the frequency interleaver 7020 according to an embodiment of the present invention may perform a ping-pong interleaving operation. Typically, a ping-pong interleaving operation is accomplished by means of two memory banks. In the proposed FI operation, the two memory banks are used for each pair-wise OFDM symbol. The maximum memory size for interleaving is approximately two times the maximum FFT size. At the transmit side, the memory size increase is rather less critical compared to the receiver side. As described above, even-indexed pair-wise OFDM symbols and odd-indexed pair-wise OFDM symbols may be intermittently interleaved via different FI memory banks. That is, the second (odd-indexed) pair-wise OFDM symbol is interleaved in the second bank, while the first (even-indexed) pair-wise OFDM symbol is interleaved in the first bank, and so on. For each pair-wise OFDM symbol, a single interleaving seed is used. Based on the interleaving seed and the reading-writing (or writing-reading) operation, two OFDM symbols are sequentially interleaved. Reading-writing operations according to an embodiment of the present invention are simultaneously accomplished without a collision. Writing-reading operations according to an embodiment of the present invention are simultaneously accomplished without a collision.
FIG. 30 illustrates an operation of the aforementioned frequency interleaver 7020. As illustrated in FIG. 30, the frequency interleaver 7020 may include a demux 16000, two memory banks, a memory bank-A 16100 and a memory bank-B 16200, and a mux 16300. First, the frequency interleaver 7020 according to an embodiment of the present invention may perform demultiplexing processing on the input sequential OFDM symbols for the pair-wise OFDM symbol FI. Then the frequency interleaver 7020 according to an embodiment of the present invention performs a reading-writing FI operation in each of memory banks A and B with a single interleaving seed. As shown in FIG. 30, two memory banks are used for each OFDM symbol pair. Operationally, the first (even-indexed) OFDM symbol pair is interleaved in memory bank-A, while the second (odd-indexed) OFDM symbol pair is interleaved in memory bank-B, and so on, alternating between A and B. Then the frequency interleaver 7020 according to an embodiment of the present invention may perform multiplexing processing on the ping-pong FI outputs for sequential OFDM symbol transmission.
FIG. 31 illustrates a basic switch model for the MUX and DEMUX procedures according to an embodiment of the present invention. FIG. 31 illustrates simple operations of the DEMUX and MUX applied to the input and output of memory bank-A/-B in the aforementioned ping-pong FI structure. The DEMUX and MUX may control the input sequential OFDM symbols to be interleaved, and the output OFDM symbol pair to be transmitted, respectively. Different interleaving seeds are used for every OFDM symbol pair.
Hereinafter, reading-writing operations of frequency interleaving according to an embodiment of the present invention will be described. A frequency interleaver 7020 according to an embodiment of the present invention may select or use a single interleaving seed and use that interleaving seed in the writing and reading operations for the first and second OFDM symbols, respectively. That is, the frequency interleaver 7020 according to an embodiment of the present invention may use the one selected arbitrary interleaving seed in an operation of writing the first OFDM symbol of a pair-wise OFDM symbol, and use it in an operation of reading the second OFDM symbol, thereby achieving effective interleaving. Virtually, it appears as if two different interleaving seeds are applied to the two OFDM symbols, respectively. Details of the reading-writing operation according to an embodiment of the present invention are as follows. For the first OFDM symbol, the frequency interleaver 7020 according to an embodiment of the present invention may perform random writing into the memory (according to an interleaving seed) and then perform linear reading. For the second OFDM symbol, the frequency interleaver 7020 according to an embodiment of the present invention may perform linear writing into the memory simultaneously with the linear reading operation for the first OFDM symbol. Also, the frequency interleaver 7020 according to an embodiment of the present invention may then perform random reading (according to the interleaving seed). A simplified sketch of this pair-wise two-bank operation follows.
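The sketch below is a heavily simplified model of the ping-pong, pair-wise operation just described, assuming one permutation ("seed") per OFDM symbol pair supplied by a caller-provided function; it is not the normative interleaver and omits the hardware memory scheduling.

```python
# Ping-pong pair-wise frequency interleaving: even-indexed pairs use bank A,
# odd-indexed pairs use bank B; within a pair, the first symbol is written
# randomly and read linearly, the second is written linearly and read randomly.

def pingpong_frequency_interleave(symbols, perm_for_pair):
    """symbols: list of OFDM symbols (lists of cells); perm_for_pair(p): seed of pair p."""
    banks = [None, None]                       # bank A (index 0) and bank B (index 1)
    out = []
    for pair_idx in range(0, len(symbols), 2):
        bank = (pair_idx // 2) % 2             # alternate between the two banks
        perm = perm_for_pair(pair_idx // 2)    # one interleaving seed per symbol pair
        first, second = symbols[pair_idx], symbols[pair_idx + 1]
        mem = [None] * len(first)
        for k, cell in enumerate(first):       # first symbol: random write ...
            mem[perm[k]] = cell
        banks[bank] = mem
        out.append(list(mem))                  # ... then linear read
        banks[bank] = list(second)             # second symbol: linear write ...
        out.append([banks[bank][perm[k]] for k in range(len(second))])  # ... random read
    return out

# Toy usage: 4 symbols of 4 cells, the same swap permutation for every pair.
syms = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
print(pingpong_frequency_interleave(syms, lambda p: [1, 0, 3, 2]))
```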
As described above, the broadcast signal transmitting apparatus according to an embodiment of the present invention may continuously transmit a plurality of frames on the time axis. In the present invention, a set of signal frames transmitted for a predetermined period of time may be referred to as a super-frame. Accordingly, one super-frame may include N signal frames, and each signal frame may include a plurality of OFDM symbols.
FIG. 32 is a view illustrating a concept of frequency interleaving applied to a single super-frame according to an embodiment of the present invention. A frequency interleaver 7020 according to an embodiment of the present invention may change the interleaving seed every pair-wise OFDM symbol in a single signal frame (symbol index reset) and change the interleaving seed to be used in a single signal frame every frame (frame index reset). Consequently, the frequency interleaver 7020 according to an embodiment of the present invention may change the interleaving seed in a super-frame (super-frame index reset).
Accordingly, the frequency interleaver7020according to an embodiment of the present may logically and effectively interleave all OFDM symbols in a super-frame. FIG.33is a view illustrating logical operation mechanism of frequency interleaving applied to a single super-frame according to an embodiment of the present invention. FIG.33illustrates logical operation mechanism of a frequency interleaver7020and related parameter thereof, for effectively changing interleaving seeds to be used the one super-frame described with reference toFIG.32. As described above, in the present invention, various interleaving seeds may be effectively generated by cyclic-shifting one main interleaving seed by as much as an arbitrary offset. As illustrated inFIG.33, according to an embodiment of the present invention, the aforementioned offset may be differently generated for each frame and each of pair-wise OFDM symbol to generate different interleaving seeds. Hereinafter, the logical operation mechanism will be described. As illustrated in a lower block ofFIG.33, a frequency interleaver7020according to an embodiment of the present invention may randomly generate a frame offset for each signal frame using an input frame index. The frame offset according to an embodiment of the present invention may be generated by a frame offset generator included in a frequency interleaver7020. In this case, when super-frame index is reset, a frame offset applied to each frame is generated for each signal frame in each super-frame identified according to a super-frame index. As illustrated in a middle block ofFIG.33, a frequency interleaver7020according to an embodiment of the present invention may randomly generate a symbol offset to be applied to each OFDM symbol included in each signal frame using an input symbol index. The symbol offset according to an embodiment of the present invention may be generated by a symbol offset generator included in a frequency interleaver7020. In this case, when a frame index is reset, a symbol offset for each symbol is generated for symbols in each signal frame identified according to a frame index. In addition, the frequency interleaver7020according to an embodiment of the present invention may generate various interleaving seeds by cyclic-shifting a main interleaving seed on each OFDM symbol by as much as a symbol offset. Then, as illustrated in an upper block ofFIG.33, a frequency interleaver7020according to an embodiment of the present invention may perform random FI on cells included in each OFDM symbol using an input cell index. A random FI parameter according to an embodiment of the present invention may be generated by a random FI generator included in the frequency interleaver7020. FIG.34illustrates expressions of logical operation mechanism of frequency interleaving applied to a single super-frame according to an embodiment of the present invention. In detail,FIG.34illustrates a correlation of the aforementioned frame offset parameter, symbol offset, parameter, and random FI applied to a cell included in each OFDM. As illustrated inFIG.34, an offset to be used in an OFDM symbol may be generated through a hierarchical structure of the aforementioned frame offset generator and the aforementioned symbol offset generator. In this case, the frame offset generator and the symbol offset generator may be designed using an arbitrary random generator. FIG.35illustrates an operation of a memory bank according to an embodiment of the present invention. 
As described above, two memory banks according to an embodiment of the present invention may apply an arbitrary interleaving seed generated via the aforementioned procedure to each pair-wise OFDM symbol. In addition, each memory bank may change interleaving seed every pair-wise OFDM symbol. FIG.36illustrates a frequency deinterleaving procedure according to an embodiment of the present invention. A broadcast signal receiving apparatus according to an embodiment of the present invention may perform an inverse procedure of the aforementioned frequency interleaving procedure.FIG.36illustrates single-memory deinterleaving (FDI) for input sequential OFDM symbols. Basically, frequency deinterleaving operation follows to the inverse processing of frequency interleaving operation. For a single-memory use, no further processing is required. When pair-wise OFDM symbols illustrated in a left portion ofFIG.36are input, the broadcast signal receiving apparatus according to an embodiment of the present invention may perform the aforementioned reading and writing operation using a single memory, as illustrated in a right portion ofFIG.36. In this case, the broadcast signal receiving apparatus according to an embodiment of the present invention may generate a memory-index and perform frequency deinterleaving (reading and writing) corresponding to an inverse procedure of frequency interleaving (writing and reading) performed by a broadcast signal transmitting apparatus. The benefit is inherently caused by the proposed pair-wise ping-pong interleaving architecture. The following mathematical formulae show the aforementioned reading-writing operation. for j=0, 1, . . . , Nsymand k=0, 1, . . . , Ndata Fj(Cj(k))=Xj(k) [Expression 12]where Cj(k) is a random seed generated by a random generator,in the ith pair−wise OFDM symbol Fj=[Fj(0), Fj(1), . . . , Fj(Ndata−2), Fj(Ndata−1)], where Ndatais the number of data cells Xj=[Xj(0),Xj(1), . . . ,Xj(Ndata−2),Xj(Ndata−1)] for j=0, 1, . . . , Nsymand k=0, 1, . . . , Ndata Fj(k)=Xj(Cj(k)) [Expression 13]where Cj(k) is the same random seed used for the first symbol Fj=[Fj(0), Fj(1), . . . , Fj(Ndata−2), Fj(Ndata−1)], where Ndatais the number of data cells Xj=[Xj(0),Xj(1), . . . ,Xj(Ndata−2),Xj(Ndata−1)] The above expression 12 is for the first OFDM symbol, i.e., (j mod 2)=0 of the ith pair-wise OFDM symbol. The above expression 13 is for the second OFDM symbol, i.e., (j mod 2)=1 of the ith pair-wise OFDM symbol. Fj denotes an interleaved vector of the jth OFDM symbol (vector) and Xjdenotes an input vector of the jth OFDM symbol (vector). As shown in the expressions, the reading-writing operation according to an embodiment of the present invention may be performed by applying one random seed generated by an arbitrary random generator to a pair-wise OFDM symbol. FIG.37is a view illustrates concept of frequency interleaving applied to a single signal frame according to an embodiment of the present invention. As described above, a frequency interleaver7020according to an embodiment of the present invention may change interleaving seed every pair-wise OFDM symbol in a single frame. Details thereof will be described below. FIG.38is a view illustrating logical operation mechanism of frequency interleaving applied to a single signal frame according to an embodiment of the present invention. 
FIG. 38 illustrates a logical operation mechanism of the frequency interleaver 7020 and related parameters thereof, for effectively changing the interleaving seeds to be used in the single signal frame described with reference to FIG. 37. As described above, in the present invention, various interleaving seeds can be effectively generated by cyclic-shifting one main interleaving seed by as much as an arbitrary symbol offset. As illustrated in FIG. 38, according to an embodiment of the present invention, the aforementioned symbol offset may be differently generated for each pair-wise OFDM symbol to generate different interleaving seeds. In this case, the symbol offset may be differently generated for each pair-wise OFDM symbol using an arbitrary random symbol offset generator. Hereinafter, the logical operation mechanism will be described.
As illustrated in a lower block of FIG. 38, a frequency interleaver 7020 according to an embodiment of the present invention may randomly generate a symbol offset to be applied to each OFDM symbol included in each signal frame using an input symbol index. The symbol offset (or random symbol offset) according to an embodiment of the present invention may be generated by an arbitrary random generator (or a symbol offset generator) included in the frequency interleaver 7020. In this case, when a frame index is reset, the symbol offset for each symbol is generated for the symbols in each signal frame identified according to the frame index. In addition, the frequency interleaver 7020 according to an embodiment of the present invention may generate various interleaving seeds by cyclic-shifting a main interleaving seed for each OFDM symbol by as much as the generated symbol offset. Then, as illustrated in an upper block of FIG. 38, a frequency interleaver 7020 according to an embodiment of the present invention may perform random FI on cells included in each OFDM symbol using an input cell index. A random FI parameter according to an embodiment of the present invention may be generated by a random FI generator included in the frequency interleaver 7020.
FIG. 39 illustrates expressions of the logical operation mechanism of frequency interleaving applied to a single signal frame according to an embodiment of the present invention. FIG. 39 illustrates a correlation of the aforementioned symbol offset parameter and a parameter of the random FI applied to a cell included in each OFDM symbol. As illustrated in FIG. 39, an offset to be used in each OFDM symbol may be generated through a hierarchical structure of the aforementioned symbol offset generator. In this case, the symbol offset generator may be designed using an arbitrary random generator. The following expressions show a change procedure of the interleaving seed in each of the aforementioned memory banks:
for j = 0, 1, . . . , Nsym and k = 0, 1, . . . , Ndata,
Fj(Cj(k)) = Xj(k)   [Expression 14]
where Cj(k) = (T(k) + S⌊j/2⌋) mod Ndata,
T(k) is a main interleaving seed generated by a random generator, used in the main FI, and
S⌊j/2⌋ is a random symbol offset generated by a random generator, used in the jth pair-wise OFDM symbol.
for j = 0, 1, . . . , Nsym and k = 0, 1, . . . , Ndata,
Fj(k) = Xj(Cj(k))   [Expression 15]
where Cj(k) is the same random seed used for the first symbol.
The above Expression 14 is for the first OFDM symbol, i.e., (j mod 2) = 0 of the ith pair-wise OFDM symbol, and the above Expression 15 is for the second OFDM symbol, i.e., (j mod 2) = 1 of the ith pair-wise OFDM symbol. (An illustrative sketch of this seed change is given at the end of this passage.)
FIG. 40 is a view illustrating single-memory deinterleaving for input sequential OFDM symbols.
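Referring back to Expression 14, the seed used for a given OFDM symbol pair is simply the main seed T cyclically shifted by that pair's symbol offset, modulo Ndata. The sketch below uses a hand-picked main seed and fixed offsets purely for illustration; in the described system both are produced by the PRBS-based random generators discussed later.

```python
# Expression 14 in miniature: Cj(k) = (T(k) + S[j//2]) mod Ndata.

def seed_for_pair(main_seed, symbol_offsets, pair_index, n_data):
    offset = symbol_offsets[pair_index]
    return [(t + offset) % n_data for t in main_seed]   # cyclic shift of the main seed

main_seed = [0, 3, 1, 4, 2]          # hypothetical main interleaving seed T(k), Ndata = 5
offsets = [0, 2, 4]                  # hypothetical symbol offsets S for three symbol pairs
print([seed_for_pair(main_seed, offsets, p, 5) for p in range(3)])
```

Because adding a constant modulo Ndata maps a permutation to another permutation, every shifted seed remains a valid interleaving pattern.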
FIG.40is a view illustrating concept of a broadcast signal receiving apparatus or a frequency deinterleaver thereof, for applying interleaving seed used in a broadcast signal transmitting apparatus (or a frequency interleaver7020) to each pair-wise OFDM symbol to perform deinterleaving. As described above, the broadcast signal receiving apparatus according to an embodiment of the present invention may perform an inverse procedure of the aforementioned frequency interleaving procedure using a single memory.FIG.40illustrates an operation of the broadcast signal receiving apparatus for processing single-memory deinterleaving (FDI) for input sequential OFDM symbols. The broadcast signal receiving apparatus according to an embodiment of the present invention may perform an inverse procedure of the aforementioned operation of a frequency interleaver7020. Thus, deinterleaving seeds correspond to the aforementioned interleaving seed. As described above, an OFDM generation block1030may perform FFT transformation on input data. According to an embodiment of the present invention, an FFT size may be 4K, 8K, 16K, 32K, or the like, and an FFT mode indicating the FFT size may be defined. The aforementioned FFT mode may be signaled via a preamble (or a preamble signal, a preamble symbol) in a signal frame or signal via PLS-pre or PLS-prost. The FFT size may be changed according to a designer's intention. A frequency interleaver7020or an interleaving seed generator included therein according to an embodiment of the present invention may perform an operation according to the aforementioned FFT mode. In addition, an interleaving seed generator according to an embodiment of the present invention may include a random seed generator or a quasi-random interleaving seed generator. The quasi-random interleaving seed generator may be an embodiment of the random seed generator. The random seed generator and the quasi-random interleaving seed generator may be referred as an interleaving address generator and it may be changed by the designer's intention. Also, both of the random seed generator and the quasi-random interleaving seed generator may include a first generator and a second generator. The first generator is for generating a main interleaving seed generator and the second generator is for generating a symbol offset. The name of the first generator and the second generator can be changed according to the designer's indention. Hereinafter, an operation of the interleaving seed generator according to each FFT mode is divided into an operation of the random seed generator and an operation of the quasi-random interleaving seed generator and will be described. Hereinafter, the random seed generator for a 4K FFT mode will be described. As described above, the random seed generator according to an embodiment of the present invention may apply different interleaving seeds to respective OFDM symbols to acquire frequency diversity. Logical composition of the random seed generator may include a random main-seed generator (or a random main interleaving-seed generator) (Cj(K)) for interleaving cells in a single OFDM symbol and a random symbol-offset generator (S└j/2┘) for changing a symbol offset. The random main-seed generator may generate the aforementioned random FI parameter. That is, the random main-seed generator may generate seed for interleaving cells in a single OFDM symbol. 
The random main-seed generator according to an embodiment of the present invention may include a spreader and a randomizer and perform rendering a full randomness in frequency-domain. According to an embodiment of the present invention, in the case of 4K FFT mode, the random main-seed generator may include a 1 bit spreader and an 11 bit-randomizer. The random main-seed generator or the randomizer according to an embodiment of the present invention may be referred as a main-PRBS (Pseudo Random Bit Stream) generator which is defined based on the 11-bit binary word sequence (or binary sequence). The random symbol-offset generator according to an embodiment of the present invention may change a symbol offset of each OFDM symbol. That is, the random symbol-offset generator may generate the aforementioned symbol offset. The random symbol-offset generator according to an embodiment of the present invention may include k bits-spreader and (X-k) bits-randomizer and perform rendering a spreading as much as 2kcases, in time-domain. X may be differently set for the respective FFT modes. According to an embodiment of the present invention, in the case of 4K FFT mode, a (12-k) bit-randomizer may be used. The (X-k) bits-randomizer according to an embodiment of the present invention may be referred as a sub-PRBS generator which is defined based on (12-k) bit binary word sequence (or binary sequence). The aforementioned spreader and randomizer may be used to achieve spreading and random effects during generation of the interleaving seed. FIG.41is a view illustrating an output signal of a time interleaver according to an embodiment of the present invention. As above described, the time interleaver according to an embodiment of the present invention may perform a column-wise writing operation and a row-wise reading operation on one FEC block, as illustrated in a left portion ofFIG.41. A right block ofFIG.41indicates an output signal of the time interleaver and the output signal is input to a frequency interleaver7020according to an embodiment of the present invention. Thus, one FEC block is periodically spread in each FI block. Accordingly, in order to increase the robustness of a channel with strong periodic properties, the aforementioned random interleaving seed generator may be used. FIG.42is a view of a 4K FFT mode random seed generator according to an embodiment of the present invention. The 4K FFT mode random seed generator according to an embodiment of the present invention may include a spreader (1-bit toggling), a randomizer, a memory-index check, a random symbol-offset generator, and a modulo operator. As described above, the random main-seed generator may include a spreader and a randomizer. Hereinafter, an operation of each block will be described. The (cell) spreader may be operated using an upper portion of n-bit of total 12-bit and may function as a multiplexer based on a look-up table. In the case of 4K FFT mode, the (cell) spreader may be a 1-bit multiplexer (or toggling). The randomizer may be operated via a PN generator and may provide full randomness during interleaving. As described above, in the case of 4K FFT mode, the randomizer may be a PN generator that considers 11-bit. This can be changed according to a designer's intention. Also the spreader and the randomizer are operated through multiplexer and PN generator, respectively. 
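The fragment below sketches how the 1-bit spreader (modeled here as simple toggling of the most significant bit) and the 11-bit PN randomizer just described could combine into a 12-bit candidate memory index, together with the rejection of indices not smaller than Ndata performed by the memory-index check described in the next paragraph. The LFSR taps, initial state, bit ordering and toggling rule are placeholders, not the generator defined for the 4K FFT mode, and the per-pair symbol offset and modulo operator are omitted.

```python
# Heavily simplified sketch of the 4K FFT mode random main-seed structure.

def lfsr11(state):
    """One step of a generic 11-bit Fibonacci LFSR (placeholder feedback taps)."""
    fb = ((state >> 10) ^ (state >> 1)) & 1
    return ((state << 1) | fb) & 0x7FF

def main_seed_4k(n_data, count, state=0x00D):
    """Return the first `count` accepted interleaving addresses for one symbol."""
    out, toggle = [], 0
    while len(out) < count:
        state = lfsr11(state)
        toggle ^= 1                      # 1-bit spreader (MSB toggling)
        idx = (toggle << 11) | state     # 12-bit candidate memory index
        if idx < n_data:                 # memory-index check: discard out-of-range indices
            out.append(idx)
    return out

print(main_seed_4k(n_data=3400, count=10))
```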
The memory-index check may not use seed when a memory-index generated by the spreader and the randomizer is greater than Ndataand may repeatedly operate the spreader and the randomizer to adjust the output memory-index such that the output memory-index does not exceed Ndata. The Ndataaccording to the embodiment of the present invention is equal to the number of the data cells. The random symbol-offset generator may generate a symbol-offset for cyclic-shifting main interleaving-seed generated by the main-interleaving seed generator for each pair-wise OFDM symbol. A detailed operation will be described below. The modulo operator may be operated when a result value, obtained by adding a symbol-offset output by the random symbol-offset generator for each pair-wise OFDM symbol to the memory-index output by the memory-index check, exceeds Ndata. Locations of the illustrated memory-index check and modulo operator can be changed according to a designer's intention. FIG.43illustrates expressions representing an operation of a 4K FFT mode random seed generator according to an embodiment of the present invention. The expressions illustrated in an upper portion ofFIG.43show initial value setting and primitive polynomial of a randomizer. In this case, the primitive polynomial may be 11thprimitive polynomial and the initial value may be changed by arbitrary values. The expressions illustrated in a lower portion ofFIG.43show procedures of calculating and outputting main-interleaving seed for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset may be applied to each pair-wise OFDM in the same way. FIG.44is a view illustrating a 4K FFT mode random symbol-offset generator according to an embodiment of the present invention. As above described, the random symbol-offset generator according to an embodiment of the present invention may include k bits-spreader and (X-k) bits-randomizer. Hereinafter, each block will be described. The k bits-spreader may be operated through a 2kmultiplexer and may be optimally designed to maximize inter-symbol spreading properties (or to minimize correlation properties). The randomizer may be operated through a N bits-PN generator and designed to provide randomness. The 4K FFT mode random symbol-offset generator may include a 0/1/2 bits-spreader and a 12/11/10 bits-random generator (or a PN generator). Details will be described below. FIG.45illustrates expressions showing operations of a random symbol-offset generator and a random Symbol-offset generator for 4K FFT mode including a 0 bits-spreader and a 12 bits-PN generator according to an embodiment of the present invention. (a) illustrates a random symbol-offset generator including a 0 bits-spreader and a 12 bits-PN generator. (b) illustrates an operation of a 4K FFT mode random Symbol-offset generator. The random symbol-offset generator illustrated in (a) may be operated for each pair-wise OFDM symbol. An expression illustrated in an upper portion of (b) shows initial value setting and primitive polynomial of the randomizer. In this case, the primitive polynomial may be 12thprimitive polynomial and the initial value may be changed by arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure for calculating and outputting a symbol-offset for output signals of a spreader and a randomizer. As illustrated in the expression, the random symbol-offset generator may be operated for each pair-wise OFDM symbol. 
Accordingly, the length of an entire output offset may correspond to half of the length of an entire OFDM symbol. FIG.46illustrates expressions illustrating operations of a random symbol-offset generator and a random Symbol-offset generator for 4K FFT mode including a 1 bits-spreader and an 11 bits-PN generator according to an embodiment of the present invention. (a) shows the random symbol-offset generator including a 1 bits-spreader and an 11 bits-PN generator. (b) shows an expression representing an operation of a 4K FFT mode random symbol-offset generator. The random symbol-offset generator illustrated in (a) may be operated for each pair-wise OFDM symbol. An expression illustrated in an upper portion of (b) shows initial value setting and primitive polynomial of a randomizer. In this case, the primitive polynomial may be 11thprimitive polynomial and the initial value may be changed by arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure of calculating and outputting a symbol-offset for an output signal of the spreader and the randomizer. As illustrated in the expression, the random symbol-offset generator may be operated for each pair-wise OFDM symbol. Accordingly, the length of an entire output offset may correspond to half of the length of an entire OFDM symbol. FIG.47illustrates expressions illustrating operations of a random symbol-offset generator and a random Symbol-offset generator for 4K FFT mode including a 2 bits-spreader and a 10 bits-PN generator according to an embodiment of the present invention. (a) shows the random Symbol-offset generator including a 2 bits-spreader and a 10 bits-PN generator. (b) shows an expression representing an operation of a 4K FFT mode random symbol-offset generator. The random symbol-offset generator illustrated in (a) may be operated for each pair-wise OFDM symbol. An expression illustrated in an upper portion of (b) shows initial value setting and primitive polynomial of a randomizer. In this case, the primitive polynomial may be 10thprimitive polynomial and the initial value may include arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure of calculating and outputting a symbol-offset for an output signal of the spreader and the randomizer. As illustrated in the expression, the random symbol-offset generator may be operated for each pair-wise OFDM symbol. Accordingly, the length of an entire output offset may correspond to half of the length of an entire OFDM symbol. FIG.48is a view illustrating logical composition of a 4K FFT mode random seed generator according to an embodiment of the present invention. As described above, the 4K FFT mode random seed generator according to an embodiment of the present invention may include a random main interleaving-seed generator, a random symbol-offset generator, a memory index check, and a modulo operator. FIG.48illustrates the logical composition of a 4K FFT mode random seed generator formed by combining a random main interleaving-seed generator and a random symbol-offset generator.FIG.48illustrates an embodiment of the random main interleaving-seed generator including a 1 bit-spreader and an 11 bits-randomizer, and an embodiment of the random symbol-offset generator including a 2 bits-spreader and a 10 bits-randomizer. Details thereof have been described above and thus will be omitted here. Hereinafter, a quasi-random interleaving seed generator for 4K FFT mode will be described. 
As described above, the quasi-random interleaving seed generator according to an embodiment of the present invention may apply different interleaving seeds to respective OFDM symbols to acquire frequency diversity. The logical composition of the quasi-random interleaving seed generator may include a main quasi-random seed generator ((or quasi-random main interleaving-seed generator) (Cj(K)) for interleaving cells in a single OFDM symbol and a random symbol-offset generator (S└j/2┘) for changing a symbol offset. The main quasi-random seed generator may generate the aforementioned random FI parameter. That is, the main quasi-random seed generator may generate seed for interleaving cells in a single OFDM symbol. The main quasi-random seed generator according to an embodiment of the present invention may include a spreader and a randomizer and perform rendering a full randomness in frequency-domain. According to an embodiment of the present invention, in the case of 4K FFT mode, the main quasi-random seed generator may include a 3 bit spreader and a 9 bit-randomizer. The main quasi-random seed generator or the randomizer according to an embodiment of the present invention may be referred as a main-PRBS generator which is defined based on the 11-bit binary word sequence (or binary sequence). The random symbol-offset generator according to an embodiment of the present invention may change a symbol offset for each OFDM symbol. That is, the random symbol-offset generator may generate the aforementioned symbol offset. The random symbol-offset generator according to an embodiment of the present invention may include k bits-spreader and (X-k) bits-randomizer and perform rendering a spreading as much as 2kcases, in time-domain. X may be differently set for respective FFT modes. According to an embodiment of the present invention, in the case of 4K FFT mode, a (12-k) bits-randomizer may be used. The (X-k) bits-randomizer according to an embodiment of the present invention may be referred as a sub-PRBS generator which is defined based on (12-k) bit binary word sequence (or binary sequence). The main roles of the spreader and the randomizer are as follows. Spreader: rendering a spreading effect to frequency interleaving (FI) Randomizer: rendering a random effect to FI FIG.49is a view illustrating an output signal of a time interleaver according to another embodiment of the present invention. The time interleaver according to an embodiment of the present invention may perform a column-wise writing operation and a row-wise reading operation on each FEC block with a size of 5, as illustrated in a left portion ofFIG.49. A right block ofFIG.49indicates an output signal of the time interleaver and the output signal is input to a frequency interleaver7020according to an embodiment of the present invention. Thus, one FEC block has a length of 5 in each FI block and agglomerate in a burst form. Thus, in order to increase the robustness of a channel with strong burst error properties, interleaving seed having high spreading properties as well as high randomness is required. Accordingly, the aforementioned quasi-random interleaving seed generator may be used. FIG.50is a view illustrating a 4K FFT mode quasi-random interleaving-seed generator according to an embodiment of the present invention. 
The 4K FFT mode quasi-random interleaving-seed generator according to an embodiment of the present invention may include a spreader (3-bit toggling), a randomizer, a memory-index check, a random symbol-offset generator, and a modulo operator. As described above, the quasi-random main interleaving-seed generator may include a spreader and a randomizer. Hereinafter, an operation of each block will be described. The spreader may be operated through an n-bit multiplexer and may maximize (or minimize inter-cell correlation) inter-cell spreading. In the case of 4K FFT mode, the spreader may use a look-up table that considers 3-bit. The randomizer may be operated as a (12-n) bits-PN generator and may provide randomness (or correlation properties). The randomizer according to an embodiment of the present invention may include bit shuffling. The bit shuffling optimizes spreading properties or random properties and is designed in consideration of Ndata. In the case of 4K FFT mode, the bit shuffling may use a 9-bit PN generator, which can be changed. The memory-index check may not use seed when a memory-index generated by the spreader and the randomizer is greater than Ndataand may repeatedly operate the spreader and the randomizer to adjust the output memory-index such that the output memory-index does not exceed Ndata. The random symbol-offset generator may generate a symbol-offset for cyclic-shifting main interleaving-seed generated by the main-interleaving seed generator for each pair-wise OFDM symbol. A detailed operation has been described with regard to the 4K FFT mode random main-seed generator and is not described again here. The modulo operator may be operated when a result value, obtained by adding a symbol-offset output by the random symbol-offset generator for each pair-wise OFDM symbol to the memory-index output by the memory-index check, exceeds Ndata. Locations of the illustrated memory-index check and modulo operator can be changed according to a designer's intention. FIG.51is expressions representing operations of 4K FFT mode bit shuffling and 4K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention. (a) illustrates an expression representing an operation of the 4K FFT mode bit shuffling and (b) illustrates an expression representing an operation of the 4K FFT mode quasi-random interleaving seed generator. As illustrated in (a), the 4K FFT mode bit shuffling may mix bits of registers of a PN generator during calculation of a memory-index. An expression illustrated in an upper portion of (b) shows initial value setting and primitive polynomial of a randomizer. In this case, the primitive polynomial may be 9thprimitive polynomial and the initial value may be changed by arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure of calculating and outputting main-interleaving seed for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset may be applied to each pair-wise OFDM symbol in the same way. FIG.52is a view illustrating logical composition of a 4K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention. As described above, the 4K FFT mode quasi-random main interleaving-seed generator according to an embodiment of the present invention may include a quasi-random main interleaving-seed generator, a random symbol-offset generator, a memory index check, and a modulo operator. 
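A rough structural sketch of this quasi-random variant is given below: a 3-bit spreader driven by a small look-up table supplies the upper bits, a 9-bit PN generator with bit shuffling supplies the lower bits, and the memory-index check rejects indices not smaller than Ndata. The look-up table, LFSR taps and shuffling pattern are invented placeholders; only the block structure follows the description, and the symbol offset and modulo operator are omitted.

```python
# Rough sketch of the 4K FFT mode quasi-random interleaving-seed structure.

SPREAD_LUT = [0, 5, 2, 7, 1, 6, 3, 4]           # placeholder 3-bit spreader table

def lfsr9(state):
    fb = ((state >> 8) ^ (state >> 3)) & 1      # placeholder taps for a 9-bit LFSR
    return ((state << 1) | fb) & 0x1FF

def bit_shuffle9(state):
    """Placeholder bit shuffling: rotate the 9-bit register by 4 positions."""
    return ((state << 4) | (state >> 5)) & 0x1FF

def quasi_seed_4k(n_data, count, state=0x0A5):
    out, k = [], 0
    while len(out) < count:
        state = lfsr9(state)
        idx = (SPREAD_LUT[k % 8] << 9) | bit_shuffle9(state)   # 12-bit candidate index
        k += 1
        if idx < n_data:                         # memory-index check
            out.append(idx)
    return out

print(quasi_seed_4k(n_data=3400, count=10))
```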
FIG.52illustrates the logical composition of a 4K FFT mode quasi-random interleaving seed generator formed by combining a quasi-random main interleaving-seed generator and a random symbol-offset generator.FIG.52illustrates an embodiment of the quasi-random main interleaving-seed generator including a 3 bit-spreader and a 9 bits-randomizer and an embodiment of the random symbol-offset generator including a 2 bits-spreader and a 10 bits-randomizer. Details thereof have been described above and thus will be omitted here. Hereinafter, the random seed generator for an 8K FFT mode will be described. As described above, the random seed generator according to an embodiment of the present invention may apply different interleaving seeds to respective OFDM symbols to acquire frequency diversity. Logical composition of the random seed generator may include a random main-seed generator (or a random main interleaving-seed generator) (Cj(K)) for interleaving cells in a single OFDM symbol and a random symbol-offset generator (S└j/2┘) for changing a symbol offset. The random main-seed generator may generate the aforementioned random FI parameter. That is, the random main-seed generator may generate seed for interleaving cells in a single OFDM symbol. The random main-seed generator according to an embodiment of the present invention may include a spreader and a randomizer and perform rendering a full randomness in frequency-domain. According to an embodiment of the present invention, in the case of 8K FFT mode, the random main-seed generator may include a 1 bit spreader and an 12 bit-randomizer. The random main-seed generator or the randomizer according to an embodiment of the present invention may be referred as a main-PRBS generator which is defined based on the 12-bit binary word sequence (or binary sequence). The random symbol-offset generator according to an embodiment of the present invention may change a symbol offset of each OFDM symbol. That is, the random symbol-offset generator may generate the aforementioned symbol offset. The random symbol-offset generator according to an embodiment of the present invention may include k bits-spreader and (X-k) bits-randomizer and perform rendering a spreading as much as 2kcases, in time-domain. X may be differently set for the respective FFT modes. According to an embodiment of the present invention, in the case of 8K FFT mode, a (13-k) bit-randomizer may be used. The (X-k) bits-randomizer according to an embodiment of the present invention may be referred as a sub-PRBS generator which is defined based on (13-k) bit binary word sequence (or binary sequence). The aforementioned spreader and randomizer may be used to achieve spreading and random effects during generation of the interleaving seed. Details of the output signal of a time interleaver according to an embodiment of the present invention have been described above. FIG.53is a view of an 8K FFT mode random seed generator according to an embodiment of the present invention. The 8K FFT mode random seed generator according to an embodiment of the present invention may include a spreader (1-bit toggling), a randomizer, a memory-index check, a random symbol-offset generator, and a modulo operator. As described above, the random main-seed generator may include a spreader and a randomizer. Hereinafter, an operation of each block will be described. The (cell) spreader may be operated using an upper portion of n-bit of total 13-bit and may function as a multiplexer based on a look-up table. 
In the case of 8K FFT mode, the (cell) spreader may be a 1-bit multiplexer (or toggling). The randomizer may be operated via a PN generator and may provide full randomness during interleaving. As described above, in the case of 8K FFT mode, the randomizer may be a PN generator that considers 12 bits. This can be changed according to a designer's intention. Also, the spreader and the randomizer are operated through a multiplexer and a PN generator, respectively.

The memory-index check may not use the seed when a memory-index generated by the spreader and the randomizer is greater than Ndata and may repeatedly operate the spreader and the randomizer to adjust the output memory-index such that the output memory-index does not exceed Ndata. The Ndata according to the embodiment of the present invention is equal to the number of the data cells. The random symbol-offset generator may generate a symbol-offset for cyclic-shifting the main interleaving-seed generated by the main-interleaving seed generator for each pair-wise OFDM symbol. A detailed operation will be described below. The modulo operator may be operated when a result value, obtained by adding a symbol-offset output by the random symbol-offset generator for each pair-wise OFDM symbol to the memory-index output by the memory-index check, exceeds Ndata. Locations of the illustrated memory-index check and modulo operator can be changed according to a designer's intention.

FIG.54 illustrates expressions representing an operation of an 8K FFT mode random seed generator according to an embodiment of the present invention. The expressions illustrated in an upper portion of FIG.54 show the initial value setting and the primitive polynomial of a randomizer. In this case, the primitive polynomial may be a 12th primitive polynomial and the initial value may be changed to arbitrary values. The expressions illustrated in a lower portion of FIG.54 show procedures of calculating and outputting the main-interleaving seed for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset may be applied to each pair-wise OFDM symbol in the same way.

FIG.55 is a view illustrating an 8K FFT mode random symbol-offset generator according to an embodiment of the present invention. As described above, the random symbol-offset generator according to an embodiment of the present invention may include a k-bit spreader and an (X-k)-bit randomizer. Hereinafter, each block will be described. The k-bit spreader may be operated through a 2^k multiplexer and may be optimally designed to maximize inter-symbol spreading properties (or to minimize correlation properties). The randomizer may be operated through an N-bit PN generator and designed to provide randomness. The 8K FFT mode random symbol-offset generator may include a 0/1/2-bit spreader and a 13/12/11-bit random generator (or a PN generator). Details will be described below.

FIG.56 illustrates expressions showing operations of a random symbol-offset generator and a random symbol-offset generator for 8K FFT mode including a 0-bit spreader and a 13-bit PN generator according to an embodiment of the present invention. (a) illustrates a random symbol-offset generator including a 0-bit spreader and a 13-bit PN generator. (b) illustrates an operation of an 8K FFT mode random symbol-offset generator. The random symbol-offset generator illustrated in (a) may be operated for each pair-wise OFDM symbol.
An expression illustrated in an upper portion of (b) shows the initial value setting and the primitive polynomial of the randomizer. In this case, the primitive polynomial may be a 13th primitive polynomial and the initial value may be changed to arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure for calculating and outputting a symbol-offset for output signals of a spreader and a randomizer. As illustrated in the expression, the random symbol-offset generator may be operated for each pair-wise OFDM symbol. Accordingly, the length of an entire output offset may correspond to half of the length of an entire OFDM symbol.

FIG.57 illustrates expressions illustrating operations of a random symbol-offset generator and a random symbol-offset generator for 8K FFT mode including a 1-bit spreader and a 12-bit PN generator according to an embodiment of the present invention. (a) shows the random symbol-offset generator including a 1-bit spreader and a 12-bit PN generator. (b) shows an expression representing an operation of an 8K FFT mode random symbol-offset generator. The random symbol-offset generator illustrated in (a) may be operated for each pair-wise OFDM symbol. An expression illustrated in an upper portion of (b) shows the initial value setting and the primitive polynomial of a randomizer. In this case, the primitive polynomial may be a 12th primitive polynomial and the initial value may be changed to arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure of calculating and outputting a symbol-offset for an output signal of the spreader and the randomizer. As illustrated in the expression, the random symbol-offset generator may be operated for each pair-wise OFDM symbol. Accordingly, the length of an entire output offset may correspond to half of the length of an entire OFDM symbol.

FIG.58 illustrates expressions illustrating operations of a random symbol-offset generator and a random symbol-offset generator for 8K FFT mode including a 2-bit spreader and an 11-bit PN generator according to an embodiment of the present invention. (a) shows the random symbol-offset generator including a 2-bit spreader and an 11-bit PN generator. (b) shows an expression representing an operation of an 8K FFT mode random symbol-offset generator. The random symbol-offset generator illustrated in (a) may be operated for each pair-wise OFDM symbol. An expression illustrated in an upper portion of (b) shows the initial value setting and the primitive polynomial of a randomizer. In this case, the primitive polynomial may be an 11th primitive polynomial and the initial value may be set to arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure of calculating and outputting a symbol-offset for an output signal of the spreader and the randomizer. As illustrated in the expression, the random symbol-offset generator may be operated for each pair-wise OFDM symbol. Accordingly, the length of an entire output offset may correspond to half of the length of an entire OFDM symbol.

FIG.59 is a view illustrating the logical composition of an 8K FFT mode random seed generator according to an embodiment of the present invention. As described above, the 8K FFT mode random seed generator according to an embodiment of the present invention may include a random main interleaving-seed generator, a random symbol-offset generator, a memory index check, and a modulo operator.
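As an illustration of the pair-wise behaviour just described, the following sketch assumes the 2-bit spreader / 11-bit PN generator variant and produces one offset per OFDM symbol pair, so that the number of offsets is half the number of OFDM symbols. The spreading table and polynomial taps below are hypothetical placeholders, not the values of FIG.56 to FIG.58.

# Illustrative 8K FFT mode random symbol-offset generator: a 2-bit spreader
# (a 2^2-way multiplexer over a small table) combined with an 11-bit PN
# generator, advanced once per pair-wise OFDM symbol. OFFSET_LUT and TAPS
# are hypothetical placeholders, not the values defined in FIG.56 to FIG.58.

K = 2                                   # spreader bits
WIDTH = 13 - K                          # randomizer bits for 8K FFT mode
OFFSET_LUT = [0, 2, 1, 3]               # hypothetical 2-bit spreading table
TAPS = (10, 2)                          # hypothetical 11-bit primitive-polynomial taps

def symbol_offsets_8k(n_sym, state=0b00000000001):
    """Return one 13-bit symbol-offset per OFDM symbol pair (ceil(n_sym / 2) values)."""
    mask = (1 << WIDTH) - 1
    offsets = []
    for pair in range((n_sym + 1) // 2):
        spread = OFFSET_LUT[pair % (1 << K)] << WIDTH   # upper K bits from the spreader
        offsets.append(spread | (state & mask))
        feedback = ((state >> TAPS[0]) ^ (state >> TAPS[1])) & 1
        state = ((state << 1) | feedback) & mask        # advance the PN generator
    return offsets

# The same offset offsets[j // 2] is applied to both symbols of the j-th pair.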
FIG.59 illustrates the logical composition of an 8K FFT mode random seed generator formed by combining a random main interleaving-seed generator and a random symbol-offset generator. FIG.59 illustrates an embodiment of the random main interleaving-seed generator including a 1-bit spreader and a 12-bit randomizer, and an embodiment of the random symbol-offset generator including a 2-bit spreader and an 11-bit randomizer. Details thereof have been described above and thus will be omitted here.

Hereinafter, a quasi-random interleaving seed generator for 8K FFT mode will be described. As described above, the quasi-random interleaving seed generator according to an embodiment of the present invention may apply different interleaving seeds to respective OFDM symbols to acquire frequency diversity. The logical composition of the quasi-random interleaving seed generator may include a main quasi-random seed generator (or quasi-random main interleaving-seed generator) (Cj(K)) for interleaving cells in a single OFDM symbol and a random symbol-offset generator (S└j/2┘) for changing a symbol offset.

The main quasi-random seed generator may generate the aforementioned random FI parameter. That is, the main quasi-random seed generator may generate the seed for interleaving cells in a single OFDM symbol. The main quasi-random seed generator according to an embodiment of the present invention may include a spreader and a randomizer and render full randomness in the frequency domain. According to an embodiment of the present invention, in the case of 8K FFT mode, the main quasi-random seed generator may include a 3-bit spreader and a 10-bit randomizer. The quasi-random seed generator or the randomizer according to an embodiment of the present invention may be referred to as a main-PRBS generator, which is defined based on the 10-bit binary word sequence (or binary sequence).

The random symbol-offset generator according to an embodiment of the present invention may change a symbol offset for each OFDM symbol. That is, the random symbol-offset generator may generate the aforementioned symbol offset. The random symbol-offset generator according to an embodiment of the present invention may include a k-bit spreader and an (X-k)-bit randomizer and render spreading over as many as 2^k cases in the time domain. X may be differently set for respective FFT modes. According to an embodiment of the present invention, in the case of 8K FFT mode, a (13-k)-bit randomizer may be used. The (X-k)-bit randomizer according to an embodiment of the present invention may be referred to as a sub-PRBS generator, which is defined based on a (13-k)-bit binary word sequence (or binary sequence).

The main roles of the spreader and the randomizer are as follows.
Spreader: rendering a spreading effect to frequency interleaving (FI)
Randomizer: rendering a random effect to FI
Details of the output signal of a time interleaver according to an embodiment of the present invention have been described above.

FIG.60 is a view illustrating an 8K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention. The 8K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention may include a spreader (3-bit toggling), a randomizer, a memory-index check, a random symbol-offset generator, and a modulo operator. As described above, the quasi-random main interleaving-seed generator may include a spreader and a randomizer.
Hereinafter, an operation of each block will be described. The spreader may be operated through an n-bit multiplexer and may maximize inter-cell spreading (or minimize inter-cell correlation). In the case of 8K FFT mode, the spreader may use a look-up table that considers 3 bits. The randomizer may be operated as a (13-n)-bit PN generator and may provide randomness (or correlation properties). The randomizer according to an embodiment of the present invention may include bit shuffling. The bit shuffling optimizes spreading properties or random properties and is designed in consideration of Ndata. In the case of 8K FFT mode, the bit shuffling may use a 10-bit PN generator, which can be changed.

The memory-index check may not use the seed when a memory-index generated by the spreader and the randomizer is greater than Ndata and may repeatedly operate the spreader and the randomizer to adjust the output memory-index such that the output memory-index does not exceed Ndata. The random symbol-offset generator may generate a symbol-offset for cyclic-shifting the main interleaving-seed generated by the main-interleaving seed generator for each pair-wise OFDM symbol. A detailed operation has been described with regard to the 8K FFT mode random main-seed generator and is not described again here. The modulo operator may be operated when a result value, obtained by adding a symbol-offset output by the random symbol-offset generator for each pair-wise OFDM symbol to the memory-index output by the memory-index check, exceeds Ndata. Locations of the illustrated memory-index check and modulo operator can be changed according to a designer's intention.

FIG.61 shows expressions representing operations of the 8K FFT mode bit shuffling and the 8K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention. (a) illustrates an expression representing an operation of the 8K FFT mode bit shuffling and (b) illustrates an expression representing an operation of the 8K FFT mode quasi-random interleaving seed generator. As illustrated in (a), the 8K FFT mode bit shuffling may mix bits of registers of a PN generator during calculation of a memory-index. An expression illustrated in an upper portion of (b) shows the initial value setting and the primitive polynomial of a randomizer. In this case, the primitive polynomial may be a 10th primitive polynomial and the initial value may be changed to arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure of calculating and outputting the main-interleaving seed for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset may be applied to each pair-wise OFDM symbol in the same way.

FIG.62 is a view illustrating the logical composition of an 8K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention. As described above, the 8K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention may include a quasi-random main interleaving-seed generator, a random symbol-offset generator, a memory index check, and a modulo operator.
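The randomizers referred to throughout this description are PN (PRBS) generators whose register width, primitive polynomial, and initial value differ per FFT mode and per variant. As a generic, hedged illustration of such a building block, a width-parameterized LFSR can be sketched as follows; the taps shown are placeholders, and the authoritative polynomial and initial value for this case are those in the upper expressions of FIG.61.

# Generic sketch of the PN (PRBS) randomizer building block: an LFSR of
# configurable width whose feedback taps come from a primitive polynomial
# and whose initial value may be set to an arbitrary non-zero word.

def pn_states(width, taps, init, count):
    """Return `count` successive register states of a `width`-bit PN generator."""
    mask = (1 << width) - 1
    state = init & mask
    states = []
    for _ in range(count):
        states.append(state)
        feedback = 0
        for tap in taps:                 # XOR of the tapped register bits
            feedback ^= (state >> tap) & 1
        state = ((state << 1) | feedback) & mask
    return states

# Example for the 10-bit randomizer of the 8K quasi-random generator
# (placeholder taps, not the polynomial of FIG.61):
# states = pn_states(width=10, taps=(9, 2), init=0b0000000001, count=100)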
FIG.62 illustrates the logical composition of an 8K FFT mode quasi-random interleaving seed generator formed by combining a quasi-random main interleaving-seed generator and a random symbol-offset generator. FIG.62 illustrates an embodiment of the quasi-random main interleaving-seed generator including a 3-bit spreader and a 10-bit randomizer and an embodiment of the random symbol-offset generator including a 2-bit spreader and an 11-bit randomizer. Details thereof have been described above and thus will be omitted here.

Hereinafter, the random seed generator for a 16K FFT mode will be described. As described above, the random seed generator according to an embodiment of the present invention may apply different interleaving seeds to respective OFDM symbols to acquire frequency diversity. The logical composition of the random seed generator may include a random main-seed generator (or a random main interleaving-seed generator) (Cj(K)) for interleaving cells in a single OFDM symbol and a random symbol-offset generator (S└j/2┘) for changing a symbol offset.

The random main-seed generator may generate the aforementioned random FI parameter. That is, the random main-seed generator may generate the seed for interleaving cells in a single OFDM symbol. The random main-seed generator according to an embodiment of the present invention may include a spreader and a randomizer and render full randomness in the frequency domain. According to an embodiment of the present invention, in the case of 16K FFT mode, the random main-seed generator may include a 1-bit spreader and a 13-bit randomizer. The random main-seed generator or the randomizer according to an embodiment of the present invention may be referred to as a main-PRBS generator, which is defined based on the 13-bit binary word sequence (or binary sequence).

The random symbol-offset generator according to an embodiment of the present invention may change a symbol offset of each OFDM symbol. That is, the random symbol-offset generator may generate the aforementioned symbol offset. The random symbol-offset generator according to an embodiment of the present invention may include a k-bit spreader and an (X-k)-bit randomizer and render spreading over as many as 2^k cases in the time domain. X may be differently set for the respective FFT modes. According to an embodiment of the present invention, in the case of 16K FFT mode, a (14-k)-bit randomizer may be used. The (X-k)-bit randomizer according to an embodiment of the present invention may be referred to as a sub-PRBS generator, which is defined based on a (14-k)-bit binary word sequence (or binary sequence). The aforementioned spreader and randomizer may be used to achieve spreading and random effects during generation of the interleaving seed. Details of the output signal of a time interleaver according to an embodiment of the present invention have been described above.

FIG.63 is a view of a 16K FFT mode random seed generator according to an embodiment of the present invention. The 16K FFT mode random seed generator according to an embodiment of the present invention may include a spreader (1-bit toggling), a randomizer, a memory-index check, a random symbol-offset generator, and a modulo operator. As described above, the random main-seed generator may include a spreader and a randomizer. Hereinafter, an operation of each block will be described. The (cell) spreader may be operated using the upper n bits of the total 14 bits and may function as a multiplexer based on a look-up table.
In the case of 16K FFT mode, the (cell) spreader may be a 1-bit multiplexer (or toggling). The randomizer may be operated via a PN generator and may provide full randomness during interleaving. As described above, in the case of 16K FFT mode, the randomizer may be a PN generator that considers 13 bits. This can be changed according to a designer's intention. Also, the spreader and the randomizer are operated through a multiplexer and a PN generator, respectively.

The memory-index check may not use the seed when a memory-index generated by the spreader and the randomizer is greater than Ndata and may repeatedly operate the spreader and the randomizer to adjust the output memory-index such that the output memory-index does not exceed Ndata. The Ndata according to the embodiment of the present invention is equal to the number of the data cells. The random symbol-offset generator may generate a symbol-offset for cyclic-shifting the main interleaving-seed generated by the main-interleaving seed generator for each pair-wise OFDM symbol. A detailed operation will be described below. The modulo operator may be operated when a result value, obtained by adding a symbol-offset output by the random symbol-offset generator for each pair-wise OFDM symbol to the memory-index output by the memory-index check, exceeds Ndata. Locations of the illustrated memory-index check and modulo operator can be changed according to a designer's intention.

FIG.64 illustrates expressions representing an operation of a 16K FFT mode random seed generator according to an embodiment of the present invention. The expressions illustrated in an upper portion of FIG.64 show the initial value setting and the primitive polynomial of a randomizer. In this case, the primitive polynomial may be a 13th primitive polynomial and the initial value may be changed to arbitrary values. The expressions illustrated in a lower portion of FIG.64 show procedures of calculating and outputting the main-interleaving seed for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset may be applied to each pair-wise OFDM symbol in the same way.

FIG.65 is a view illustrating a 16K FFT mode random symbol-offset generator according to an embodiment of the present invention. As described above, the random symbol-offset generator according to an embodiment of the present invention may include a k-bit spreader and an (X-k)-bit randomizer. Hereinafter, each block will be described. The k-bit spreader may be operated through a 2^k multiplexer and may be optimally designed to maximize inter-symbol spreading properties (or to minimize correlation properties). The randomizer may be operated through an N-bit PN generator and designed to provide randomness. The 16K FFT mode random symbol-offset generator may include a 0/1/2-bit spreader and a 14/13/12-bit random generator (or a PN generator). Details will be described below.

FIG.66 illustrates expressions showing operations of a random symbol-offset generator and a random symbol-offset generator for 16K FFT mode including a 0-bit spreader and a 14-bit PN generator according to an embodiment of the present invention. (a) illustrates a random symbol-offset generator including a 0-bit spreader and a 14-bit PN generator. (b) illustrates an operation of a 16K FFT mode random symbol-offset generator. The random symbol-offset generator illustrated in (a) may be operated for each pair-wise OFDM symbol.
An expression illustrated in an upper portion of (b) shows the initial value setting and the primitive polynomial of the randomizer. In this case, the primitive polynomial may be a 14th primitive polynomial and the initial value may be changed to arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure for calculating and outputting a symbol-offset for output signals of a spreader and a randomizer. As illustrated in the expression, the random symbol-offset generator may be operated for each pair-wise OFDM symbol. Accordingly, the length of an entire output offset may correspond to half of the length of an entire OFDM symbol.

FIG.67 illustrates expressions illustrating operations of a random symbol-offset generator and a random symbol-offset generator for 16K FFT mode including a 1-bit spreader and a 13-bit PN generator according to an embodiment of the present invention. (a) shows the random symbol-offset generator including a 1-bit spreader and a 13-bit PN generator. (b) shows an expression representing an operation of a 16K FFT mode random symbol-offset generator. The random symbol-offset generator illustrated in (a) may be operated for each pair-wise OFDM symbol. An expression illustrated in an upper portion of (b) shows the initial value setting and the primitive polynomial of a randomizer. In this case, the primitive polynomial may be a 13th primitive polynomial and the initial value may be changed to arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure of calculating and outputting a symbol-offset for an output signal of the spreader and the randomizer. As illustrated in the expression, the random symbol-offset generator may be operated for each pair-wise OFDM symbol. Accordingly, the length of an entire output offset may correspond to half of the length of an entire OFDM symbol.

FIG.68 illustrates expressions illustrating operations of a random symbol-offset generator and a random symbol-offset generator for 16K FFT mode including a 2-bit spreader and a 12-bit PN generator according to an embodiment of the present invention. (a) shows the random symbol-offset generator including a 2-bit spreader and a 12-bit PN generator. (b) shows an expression representing an operation of a 16K FFT mode random symbol-offset generator. The random symbol-offset generator illustrated in (a) may be operated for each pair-wise OFDM symbol. An expression illustrated in an upper portion of (b) shows the initial value setting and the primitive polynomial of a randomizer. In this case, the primitive polynomial may be a 12th primitive polynomial and the initial value may be set to arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure of calculating and outputting a symbol-offset for an output signal of the spreader and the randomizer. As illustrated in the expression, the random symbol-offset generator may be operated for each pair-wise OFDM symbol. Accordingly, the length of an entire output offset may correspond to half of the length of an entire OFDM symbol.

FIG.69 is a view illustrating the logical composition of a 16K FFT mode random seed generator according to an embodiment of the present invention. As described above, the 16K FFT mode random seed generator according to an embodiment of the present invention may include a random main interleaving-seed generator, a random symbol-offset generator, a memory index check, and a modulo operator.
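The combination of memory-index and symbol-offset described above can be summarized in a short sketch. This is an illustration only: it assumes the per-pair offset is simply added to every memory-index of the main seed and reduced modulo Ndata, with the same offset reused for both symbols of an OFDM symbol pair.

# Illustrative combination of main-seed memory-indices and a per-pair
# symbol-offset: the offset is added to every index and a modulo is applied
# whenever the sum exceeds Ndata.

def interleaving_addresses(main_seed, offset, n_data):
    """main_seed: memory-indices for one OFDM symbol; offset: the pair's symbol-offset."""
    return [(index + offset) % n_data for index in main_seed]

# Both symbols j and j + 1 of a pair reuse the same offset offsets[j // 2]:
# addresses_even = interleaving_addresses(seed, offsets[j // 2], n_data)
# addresses_odd  = interleaving_addresses(seed, offsets[j // 2], n_data)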
FIG.69 illustrates the logical composition of a 16K FFT mode random seed generator formed by combining a random main interleaving-seed generator and a random symbol-offset generator. FIG.69 illustrates an embodiment of the random main interleaving-seed generator including a 1-bit spreader and a 13-bit randomizer, and an embodiment of the random symbol-offset generator including a 2-bit spreader and a 12-bit randomizer. Details thereof have been described above and thus will be omitted here.

Hereinafter, a quasi-random interleaving seed generator for 16K FFT mode will be described. As described above, the quasi-random interleaving seed generator according to an embodiment of the present invention may apply different interleaving seeds to respective OFDM symbols to acquire frequency diversity. The logical composition of the quasi-random interleaving seed generator may include a main quasi-random seed generator (or quasi-random main interleaving-seed generator) (Cj(K)) for interleaving cells in a single OFDM symbol and a random symbol-offset generator (S└j/2┘) for changing a symbol offset.

The main quasi-random seed generator may generate the aforementioned random FI parameter. That is, the main quasi-random seed generator may generate the seed for interleaving cells in a single OFDM symbol. The main quasi-random seed generator according to an embodiment of the present invention may include a spreader and a randomizer and render full randomness in the frequency domain. According to an embodiment of the present invention, in the case of 16K FFT mode, the main quasi-random seed generator may include a 3-bit spreader and an 11-bit randomizer. The main quasi-random seed generator or randomizer according to an embodiment of the present invention may be referred to as a main-PRBS generator, which is defined based on the 11-bit binary word sequence (or binary sequence).

The random symbol-offset generator according to an embodiment of the present invention may change a symbol offset for each OFDM symbol. That is, the random symbol-offset generator may generate the aforementioned symbol offset. The random symbol-offset generator according to an embodiment of the present invention may include a k-bit spreader and an (X-k)-bit randomizer and render spreading over as many as 2^k cases in the time domain. X may be differently set for respective FFT modes. According to an embodiment of the present invention, in the case of 16K FFT mode, a (14-k)-bit randomizer may be used. The (X-k)-bit randomizer according to an embodiment of the present invention may be referred to as a sub-PRBS generator, which is defined based on a (14-k)-bit binary word sequence (or binary sequence).

The main roles of the spreader and the randomizer are as follows.
Spreader: rendering a spreading effect to frequency interleaving (FI)
Randomizer: rendering a random effect to FI
Details of the output signal of a time interleaver according to an embodiment of the present invention have been described above.

FIG.70 is a view illustrating a 16K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention. The 16K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention may include a spreader (3-bit toggling), a randomizer, a memory-index check, a random symbol-offset generator, and a modulo operator. As described above, the quasi-random main-seed generator may include a spreader and a randomizer. Hereinafter, an operation of each block will be described.
The spreader may be operated through an n-bit multiplexer and may maximize inter-cell spreading (or minimize inter-cell correlation). In the case of 16K FFT mode, the spreader may use a look-up table that considers 3 bits. The randomizer may be operated as a (14-n)-bit PN generator and may provide randomness (or correlation properties). The randomizer according to an embodiment of the present invention may include bit shuffling. The bit shuffling optimizes spreading properties or random properties and is designed in consideration of Ndata. In the case of 16K FFT mode, the bit shuffling may use an 11-bit PN generator, which can be changed.

The memory-index check may not use the seed when a memory-index generated by the spreader and the randomizer is greater than Ndata and may repeatedly operate the spreader and the randomizer to adjust the output memory-index such that the output memory-index does not exceed Ndata. The random symbol-offset generator may generate a symbol-offset for cyclic-shifting the main interleaving-seed generated by the main-interleaving seed generator for each pair-wise OFDM symbol. A detailed operation has been described with regard to the 16K FFT mode random main-seed generator and is not described again here. The modulo operator may be operated when a result value, obtained by adding a symbol-offset output by the random symbol-offset generator for each pair-wise OFDM symbol to the memory-index output by the memory-index check, exceeds Ndata. Locations of the illustrated memory-index check and modulo operator can be changed according to a designer's intention.

FIG.71 shows expressions representing operations of the 16K FFT mode bit shuffling and the 16K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention. (a) illustrates an expression representing an operation of the 16K FFT mode bit shuffling and (b) illustrates an expression representing an operation of the 16K FFT mode quasi-random interleaving seed generator. As illustrated in (a), the 16K FFT mode bit shuffling may mix bits of registers of a PN generator during calculation of a memory-index. An expression illustrated in an upper portion of (b) shows the initial value setting and the primitive polynomial of a randomizer. In this case, the primitive polynomial may be an 11th primitive polynomial and the initial value may be changed to arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure of calculating and outputting the main-interleaving seed for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset may be applied to each pair-wise OFDM symbol in the same way.

FIG.72 is a view illustrating the logical composition of a 16K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention. As described above, the 16K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention may include a quasi-random main interleaving-seed generator, a random symbol-offset generator, a memory index check, and a modulo operator.
FIG.72 illustrates the logical composition of a 16K FFT mode quasi-random interleaving seed generator formed by combining a quasi-random main interleaving-seed generator and a random symbol-offset generator. FIG.72 illustrates an embodiment of the quasi-random main interleaving-seed generator including a 3-bit spreader and an 11-bit randomizer and an embodiment of the random symbol-offset generator including a 2-bit spreader and a 12-bit randomizer. Details thereof have been described above and thus will be omitted here.

Hereinafter, the random seed generator for a 32K FFT mode will be described. As described above, the random seed generator according to an embodiment of the present invention may apply different interleaving seeds to respective OFDM symbols to acquire frequency diversity. The logical composition of the random seed generator may include a random main-seed generator (or a random main interleaving-seed generator) (Cj(K)) for interleaving cells in a single OFDM symbol and a random symbol-offset generator (S└j/2┘) for changing a symbol offset.

The random main-seed generator may generate the aforementioned random FI parameter. That is, the random main-seed generator may generate the seed for interleaving cells in a single OFDM symbol. The random main-seed generator according to an embodiment of the present invention may include a spreader and a randomizer and render full randomness in the frequency domain. According to an embodiment of the present invention, in the case of 32K FFT mode, the random main-seed generator may include a 1-bit spreader and a 14-bit randomizer. The random main-seed generator or the randomizer according to an embodiment of the present invention may be referred to as a main-PRBS generator, which is defined based on the 14-bit binary word sequence (or binary sequence).

The random symbol-offset generator according to an embodiment of the present invention may change a symbol offset of each OFDM symbol. That is, the random symbol-offset generator may generate the aforementioned symbol offset. The random symbol-offset generator according to an embodiment of the present invention may include a k-bit spreader and an (X-k)-bit randomizer and render spreading over as many as 2^k cases in the time domain. X may be differently set for the respective FFT modes. According to an embodiment of the present invention, in the case of 32K FFT mode, a (15-k)-bit randomizer may be used. The (X-k)-bit randomizer according to an embodiment of the present invention may be referred to as a sub-PRBS generator, which is defined based on a (15-k)-bit binary word sequence (or binary sequence). The aforementioned spreader and randomizer may be used to achieve spreading and random effects during generation of the interleaving seed. Details of the output signal of a time interleaver according to an embodiment of the present invention have been described above.

FIG.73 is a view of a 32K FFT mode random seed generator according to an embodiment of the present invention. The 32K FFT mode random seed generator according to an embodiment of the present invention may include a spreader (1-bit toggling), a randomizer, a memory-index check, a random symbol-offset generator, and a modulo operator. As described above, the random main-seed generator may include a spreader and a randomizer. Hereinafter, an operation of each block will be described. The (cell) spreader may be operated using the upper n bits of the total 15 bits and may function as a multiplexer based on a look-up table.
In the case of 32K FFT mode, the (cell) spreader may be a 1-bit multiplexer (or toggling). The randomizer may be operated via a PN generator and may provide full randomness during interleaving. As described above, in the case of 32K FFT mode, the randomizer may be a PN generator that considers 14 bits. This can be changed according to a designer's intention. Also, the spreader and the randomizer are operated through a multiplexer and a PN generator, respectively.

The memory-index check may not use the seed when a memory-index generated by the spreader and the randomizer is greater than Ndata and may repeatedly operate the spreader and the randomizer to adjust the output memory-index such that the output memory-index does not exceed Ndata. The Ndata according to the embodiment of the present invention is equal to the number of the data cells. The random symbol-offset generator may generate a symbol-offset for cyclic-shifting the main interleaving-seed generated by the main-interleaving seed generator for each pair-wise OFDM symbol. A detailed operation will be described below. The modulo operator may be operated when a result value, obtained by adding a symbol-offset output by the random symbol-offset generator for each pair-wise OFDM symbol to the memory-index output by the memory-index check, exceeds Ndata. Locations of the illustrated memory-index check and modulo operator can be changed according to a designer's intention.

FIG.74 illustrates expressions representing an operation of a 32K FFT mode random seed generator according to an embodiment of the present invention. The expressions illustrated in an upper portion of FIG.74 show the initial value setting and the primitive polynomial of a randomizer. In this case, the primitive polynomial may be a 14th primitive polynomial and the initial value may be changed to arbitrary values. The expressions illustrated in a lower portion of FIG.74 show procedures of calculating and outputting the main-interleaving seed for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset may be applied to each pair-wise OFDM symbol in the same way.

FIG.75 is a view illustrating a 32K FFT mode random symbol-offset generator according to an embodiment of the present invention. As described above, the random symbol-offset generator according to an embodiment of the present invention may include a k-bit spreader and an (X-k)-bit randomizer. Hereinafter, each block will be described. The k-bit spreader may be operated through a 2^k multiplexer and may be optimally designed to maximize inter-symbol spreading properties (or to minimize correlation properties). The randomizer may be operated through an N-bit PN generator and designed to provide randomness. The 32K FFT mode random symbol-offset generator may include a 0/1/2-bit spreader and a 15/14/13-bit random generator (or a PN generator). Details will be described below.

FIG.76 illustrates expressions showing operations of a random symbol-offset generator and a random symbol-offset generator for 32K FFT mode including a 0-bit spreader and a 15-bit PN generator according to an embodiment of the present invention. (a) illustrates a random symbol-offset generator including a 0-bit spreader and a 15-bit PN generator. (b) illustrates an operation of a 32K FFT mode random symbol-offset generator. The random symbol-offset generator illustrated in (a) may be operated for each pair-wise OFDM symbol.
An expression illustrated in an upper portion of (b) shows the initial value setting and the primitive polynomial of the randomizer. In this case, the primitive polynomial may be a 12th primitive polynomial and the initial value may be changed to arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure for calculating and outputting a symbol-offset for output signals of a spreader and a randomizer. As illustrated in the expression, the random symbol-offset generator may be operated for each pair-wise OFDM symbol. Accordingly, the length of an entire output offset may correspond to half of the length of an entire OFDM symbol.

FIG.77 illustrates expressions illustrating operations of a random symbol-offset generator and a random symbol-offset generator for 32K FFT mode including a 1-bit spreader and a 14-bit PN generator according to an embodiment of the present invention. (a) shows the random symbol-offset generator including a 1-bit spreader and a 14-bit PN generator. (b) shows an expression representing an operation of a 32K FFT mode random symbol-offset generator. The random symbol-offset generator illustrated in (a) may be operated for each pair-wise OFDM symbol. An expression illustrated in an upper portion of (b) shows the initial value setting and the primitive polynomial of a randomizer. In this case, the primitive polynomial may be a 14th primitive polynomial and the initial value may be changed to arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure of calculating and outputting a symbol-offset for an output signal of the spreader and the randomizer. As illustrated in the expression, the random symbol-offset generator may be operated for each pair-wise OFDM symbol. Accordingly, the length of an entire output offset may correspond to half of the length of an entire OFDM symbol.

FIG.78 illustrates expressions illustrating operations of a random symbol-offset generator and a random symbol-offset generator for 32K FFT mode including a 2-bit spreader and a 13-bit PN generator according to an embodiment of the present invention. (a) shows the random symbol-offset generator including a 2-bit spreader and a 13-bit PN generator. (b) shows an expression representing an operation of a 32K FFT mode random symbol-offset generator. The random symbol-offset generator illustrated in (a) may be operated for each pair-wise OFDM symbol. An expression illustrated in an upper portion of (b) shows the initial value setting and the primitive polynomial of a randomizer. In this case, the primitive polynomial may be a 13th primitive polynomial and the initial value may be set to arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure of calculating and outputting a symbol-offset for an output signal of the spreader and the randomizer. As illustrated in the expression, the random symbol-offset generator may be operated for each pair-wise OFDM symbol. Accordingly, the length of an entire output offset may correspond to half of the length of an entire OFDM symbol.

FIG.79 is a view illustrating the logical composition of a 32K FFT mode random seed generator according to an embodiment of the present invention. As described above, the 32K FFT mode random seed generator according to an embodiment of the present invention may include a random main interleaving-seed generator, a random symbol-offset generator, a memory index check, and a modulo operator.
FIG.79 illustrates the logical composition of a 32K FFT mode random seed generator formed by combining a random main interleaving-seed generator and a random symbol-offset generator. FIG.79 illustrates an embodiment of the random main interleaving-seed generator including a 1-bit spreader and a 14-bit randomizer, and an embodiment of the random symbol-offset generator including a 2-bit spreader and a 13-bit randomizer. Details thereof have been described above and thus will be omitted here.

Hereinafter, a quasi-random interleaving seed generator for 32K FFT mode will be described. As described above, the quasi-random interleaving seed generator according to an embodiment of the present invention may apply different interleaving seeds to respective OFDM symbols to acquire frequency diversity. The logical composition of the quasi-random interleaving seed generator may include a main quasi-random seed generator (or quasi-random main interleaving-seed generator) (Cj(K)) for interleaving cells in a single OFDM symbol and a random symbol-offset generator (S└j/2┘) for changing a symbol offset.

The main quasi-random seed generator may generate the aforementioned random FI parameter. That is, the main quasi-random seed generator may generate the seed for interleaving cells in a single OFDM symbol. The main quasi-random seed generator according to an embodiment of the present invention may include a spreader and a randomizer and render full randomness in the frequency domain. According to an embodiment of the present invention, in the case of 32K FFT mode, the main quasi-random seed generator may include a 3-bit spreader and a 12-bit randomizer. The main quasi-random seed generator or the randomizer according to an embodiment of the present invention may be referred to as a main-PRBS generator, which is defined based on the 12-bit binary word sequence (or binary sequence).

The random symbol-offset generator according to an embodiment of the present invention may change a symbol offset for each OFDM symbol. That is, the random symbol-offset generator may generate the aforementioned symbol offset. The random symbol-offset generator according to an embodiment of the present invention may include a k-bit spreader and an (X-k)-bit randomizer and render spreading over as many as 2^k cases in the time domain. X may be differently set for respective FFT modes. According to an embodiment of the present invention, in the case of 32K FFT mode, a (15-k)-bit randomizer may be used. The (X-k)-bit randomizer according to an embodiment of the present invention may be referred to as a sub-PRBS generator, which is defined based on a (15-k)-bit binary word sequence (or binary sequence).

The main roles of the spreader and the randomizer are as follows.
Spreader: rendering a spreading effect to frequency interleaving (FI)
Randomizer: rendering a random effect to FI
Details of the output signal of a time interleaver according to an embodiment of the present invention have been described above.

FIG.80 is a view illustrating a 32K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention. The 32K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention may include a spreader (3-bit toggling), a randomizer, a memory-index check, a random symbol-offset generator, and a modulo operator. As described above, the quasi-random main-seed generator may include a spreader and a randomizer. Hereinafter, an operation of each block will be described.
The spreader may be operated through an n-bit multiplexer and may maximize inter-cell spreading (or minimize inter-cell correlation). In the case of 32K FFT mode, the spreader may use a look-up table that considers 3 bits. The randomizer may be operated as a (15-n)-bit PN generator and may provide randomness (or correlation properties). The randomizer according to an embodiment of the present invention may include bit shuffling. The bit shuffling optimizes spreading properties or random properties and is designed in consideration of Ndata. In the case of 32K FFT mode, the bit shuffling may use a 9-bit PN generator, which can be changed.

The memory-index check may not use the seed when a memory-index generated by the spreader and the randomizer is greater than Ndata and may repeatedly operate the spreader and the randomizer to adjust the output memory-index such that the output memory-index does not exceed Ndata. The random symbol-offset generator may generate a symbol-offset for cyclic-shifting the main interleaving-seed generated by the main-interleaving seed generator for each pair-wise OFDM symbol. A detailed operation has been described with regard to the 32K FFT mode random main-seed generator and is not described again here. The modulo operator may be operated when a result value, obtained by adding a symbol-offset output by the random symbol-offset generator for each pair-wise OFDM symbol to the memory-index output by the memory-index check, exceeds Ndata. Locations of the illustrated memory-index check and modulo operator can be changed according to a designer's intention.

FIG.81 shows expressions representing operations of the 32K FFT mode bit shuffling and the 32K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention. (a) illustrates an expression representing an operation of the 32K FFT mode bit shuffling and (b) illustrates an expression representing an operation of the 32K FFT mode quasi-random interleaving seed generator. As illustrated in (a), the 32K FFT mode bit shuffling may mix bits of registers of a PN generator during calculation of a memory-index. An expression illustrated in an upper portion of (b) shows the initial value setting and the primitive polynomial of a randomizer. In this case, the primitive polynomial may be a 12th primitive polynomial and the initial value may be changed to arbitrary values. An expression illustrated in a lower portion of (b) shows a procedure of calculating and outputting the main-interleaving seed for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset may be applied to each pair-wise OFDM symbol in the same way.

FIG.82 is a view illustrating the logical composition of a 32K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention. As described above, the 32K FFT mode quasi-random interleaving seed generator according to an embodiment of the present invention may include a quasi-random main interleaving-seed generator, a random symbol-offset generator, a memory index check, and a modulo operator.
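For reference, the spreader/randomizer bit splits stated in this description for the random and quasi-random variants of each FFT mode can be collected into one small table; the sketch below only restates those splits and introduces no new values.

# Spreader / randomizer splits stated above, per FFT mode:
# "random" uses a 1-bit spreader, "quasi_random" uses a 3-bit spreader.

FI_GENERATOR_BITS = {
    "4K":  {"total": 12, "random": (1, 11), "quasi_random": (3, 9)},
    "8K":  {"total": 13, "random": (1, 12), "quasi_random": (3, 10)},
    "16K": {"total": 14, "random": (1, 13), "quasi_random": (3, 11)},
    "32K": {"total": 15, "random": (1, 14), "quasi_random": (3, 12)},
}

def randomizer_width(fft_mode, variant="random"):
    """e.g. randomizer_width("32K", "quasi_random") returns 12."""
    spreader_bits, randomizer_bits = FI_GENERATOR_BITS[fft_mode][variant]
    return randomizer_bits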
FIG.82 illustrates the logical composition of a 32K FFT mode quasi-random interleaving seed generator formed by combining a quasi-random main interleaving-seed generator and a random symbol-offset generator. FIG.82 illustrates an embodiment of the quasi-random main interleaving-seed generator including a 3-bit spreader and a 12-bit randomizer and an embodiment of the random symbol-offset generator including a 2-bit spreader and a 13-bit randomizer. Details thereof have been described above and thus will be omitted here.

FIG.83 illustrates a change procedure for an interleaving seed in each memory bank according to another embodiment of the present invention. The block illustrated in an upper portion of FIG.83 shows expressions for the first OFDM symbol, i.e., (j mod 2)=0, of the ith OFDM symbol pair. The block illustrated in a lower portion of FIG.83 shows expressions for the second OFDM symbol, i.e., (j mod 2)=1, of the ith OFDM symbol pair. The term "random generator" illustrated in each portion of FIG.83 may be the random interleaving-sequence generator described as follows. The random interleaving-sequence generator according to an embodiment of the present invention may be included in the frequency interleaver 7020.

T(k), illustrated in the upper portion of FIG.83, is a random sequence; it can be used in the same sense as a main random interleaving sequence or a single interleaving seed (or an interleaving seed). The random sequence may be generated in a random interleaving-sequence generator or a random main-sequence generator, which will be described later. S└j/2┘ is a symbol offset and may be referred to as a cyclic shifting value. The cyclic shifting value can be generated based on a sub-PRBS sequence. The details will be described later.

The interleaving process for the OFDM symbol pair in each memory bank A/B is described above, exploiting a single interleaving seed. Ndata denotes the available data cells (the output cells from the cell mapper 7010) to be interleaved in one OFDM symbol. The Ndata according to the embodiment of the present invention is equal to the number of the data cells. The maximum value of Ndata can be referred to as Nmax, and Nmax is defined differently according to each FFT mode. For the OFDM symbol pair in each memory bank, the interleaved OFDM symbol pair is shown in FIG.83. Hj(k) is the interleaving address for the interleaving seed generated by a random interleaving-sequence generator for each FFT mode. The composition of the random interleaving-sequence generator will be described later.

As described above, the purpose of the frequency interleaver 7020, which operates on a single OFDM symbol, is to provide frequency diversity by randomly interleaving data cells. In order to get the maximum interleaving gain in a single frame, a different interleaving seed is used for every OFDM symbol pair composed of two sequential OFDM symbols. As shown in FIG.83, a different interleaving seed can be generated based on the interleaving address generated by a random interleaving-sequence generator. Also, the different interleaving seed can be generated based on the cyclic shifting value mentioned above. That is, the different interleaving address to be used for every symbol pair may be generated by using the cyclic shifting value for every OFDM symbol pair. As described above, an OFDM generation block 1030 may perform FFT transformation on input data.

Hereinafter, an operation of the frequency interleaver 7020 having the random interleaving-sequence generator according to another embodiment will be described.
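Before that, the pair-wise use of a single interleaving seed in a memory bank, outlined above with reference to FIG.83, can be pictured with the following simplified sketch. The write/read convention used here (permuted write then linear read for the first symbol of a pair, and the reverse for the second) is an assumption chosen for illustration; the authoritative formulas, including the cyclic shifting by S└j/2┘, are those of FIG.83.

# Simplified sketch of interleaving one OFDM symbol pair in a memory bank
# with a single set of interleaving addresses Hj(k). The write/read
# convention below is an illustrative assumption, not the exact FIG.83 rule.

def interleave_pair(sym_even, sym_odd, addresses):
    """addresses: interleaving addresses for this pair, assumed to be a
    permutation of 0..Ndata-1 (main seed already cyclically shifted)."""
    n = len(addresses)
    bank = [None] * n
    for k in range(n):                       # first symbol: permuted write
        bank[addresses[k]] = sym_even[k]
    out_even = list(bank)                    # ... then linear read
    out_odd = [sym_odd[addresses[k]] for k in range(n)]  # second symbol: permuted read
    return out_even, out_odd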
The random interleaving-sequence generator may be another embodiment of the interleaving seed generator described above. Therefore, the random interleaving-sequence generator may be referred to as the random seed generator, the quasi-random interleaving seed generator, or the interleaving address generator, and the name may be changed according to the designer's intention. The random interleaving-sequence generator may include a first generator and a second generator. The first generator is for generating a main interleaving seed, and the second generator is for generating a symbol offset. The names of the first generator and the second generator can be changed according to the designer's intention.

As described above, an FFT size according to an embodiment of the present invention may be 4K, 8K, 16K, 32K, or the like, and it can be changed according to the designer's intention. Hereinafter, the random interleaving-sequence generator for a 4K FFT mode will be described.

The random interleaving-sequence generator according to an embodiment of the present invention may be included in the frequency interleaver 7020 and is similar to the random seed generator mentioned above, but the random interleaving-sequence generator has a different structure from the random seed generator. The random interleaving-sequence generator according to an embodiment of the present invention may apply different interleaving seeds to respective OFDM symbols to acquire frequency diversity. The logical composition of the random interleaving-sequence generator may include a random main-sequence generator (or a random main interleaving-sequence generator or a random main-interleaving seed generator) (Cj(K)) for interleaving cells in a single OFDM symbol in one OFDM symbol pair and a random symbol-offset generator (S└j/2┘) for changing a symbol offset (this parameter can be referred to as a cyclic shifting value).

The random main-sequence generator according to an embodiment of the present invention is similar to the random seed generator mentioned above, but the random main-sequence generator has a different structure from the random main-seed generator. Also, the random main-sequence generator or a randomizer in the random main-sequence generator may be referred to as a main-PRBS generator, and this may be changed according to the designer's intention. The random main-sequence generator may generate the aforementioned random FI parameter. That is, the random main-sequence generator may generate the seed for interleaving cells in a single OFDM symbol. The random main-sequence generator according to an embodiment of the present invention may include a spreader and a randomizer and render full randomness in the frequency domain. According to an embodiment of the present invention, in the case of 4K FFT mode, the random main-sequence generator may include a 1-bit spreader and an 11-bit randomizer. The random main-sequence generator or the randomizer according to an embodiment of the present invention may be referred to as a main-PRBS generator, which is defined based on the 11-bit binary word sequence (or binary sequence).

The random symbol-offset generator according to an embodiment of the present invention may change a symbol offset of each OFDM symbol. That is, the random symbol-offset generator may generate the aforementioned symbol offset.
The random symbol-offset generator according to an embodiment of the present invention may include a k-bit spreader and an (X-k)-bit randomizer and render spreading over as many as 2^k cases in the time domain. X may be differently set for the respective FFT modes. According to an embodiment of the present invention, in the case of 4K FFT mode, a (12-k)-bit randomizer may be used. The (X-k)-bit randomizer according to an embodiment of the present invention may be referred to as a sub-PRBS generator, which is defined based on a (12-k)-bit binary word sequence (or binary sequence). The aforementioned spreader and randomizer may be used to achieve spreading and random effects during generation of the interleaving seed. In this embodiment, in generating the interleaving values, the PRBS operation order is modified to cope with the case in which the number of active carriers varies at the first and last OFDM symbols within a single frame.

FIG.84 is a view of a 4K FFT mode random interleaving-sequence generator according to an embodiment of the present invention. The 4K FFT mode random interleaving-sequence generator according to an embodiment of the present invention may include a spreader (1-bit toggling), a randomizer, a random symbol-offset generator, a modulo operator and a memory-index check. As described above, the random main-sequence generator may include a spreader and a randomizer. As shown in FIG.84, the locations of the modulo operator and the memory-index check are changed as compared with the 4K FFT mode random main-seed generator described above. The changed locations of the modulo operator and the memory-index check shown in FIG.84 serve to increase the frequency deinterleaving performance of a frequency deinterleaver having a single memory.

As described above, a signal frame (or frame) according to the present invention may have normal data symbols, frame edge symbols, and frame signaling symbols, and the length of the frame edge symbol and the frame signaling symbol may be shorter than that of the normal data symbol. For this reason, the frequency deinterleaving performance of a frequency deinterleaver having a single memory can be decreased. In order to increase the frequency deinterleaving performance of the frequency deinterleaver with a single memory, the present invention may provide the changed locations of the modulo operator and the memory-index check.

Hereinafter, an operation of each block will be described. The (cell) spreader may be operated using the upper n bits of the total 12 bits and may function as a multiplexer based on a look-up table. In the case of 4K FFT mode, the (cell) spreader may be a 1-bit multiplexer (or toggling). The randomizer may be operated via a PN generator and may provide full randomness during interleaving. As described above, in the case of 4K FFT mode, the randomizer may be a PN generator that considers 11 bits. This can be changed according to a designer's intention. Also, the spreader and the randomizer are operated through a multiplexer and a PN generator, respectively. The random symbol-offset generator may generate a symbol-offset for cyclic-shifting the main interleaving-sequence generated by the main-interleaving sequence generator for each pair-wise OFDM symbol. A detailed operation is the same as described above and thus is not described here. The modulo operator may be operated when the input value exceeds Ndata or Nmax. The maximum value of Ndata (Nmax) for 4K FFT mode may be 4096.
The memory-index check may not use output from the modulo operator when a memory-index generated by the spreader and the randomizer is greater than Ndataor the maximum value of the Ndata(Nmax) and may repeatedly operate the spreader and the randomizer to adjust the output memory-index such that the output memory-index does not exceed Ndataor the maximum value of the Ndata(Nmax). Locations of the illustrated memory-index check and modulo operator can be changed according to a designer's intention. FIG.85illustrates expressions representing an operation of a 4K FFT mode random interleaving-sequence generator according to an embodiment of the present invention. The expressions illustrated in an upper portion ofFIG.85show initial value setting and primitive polynomial of a randomizer. In this case, the primitive polynomial may be 11thprimitive polynomial and the initial value may be changed by arbitrary values. That is, the expressions illustrated in an upper portion shows binary word sequences or binary bits used to define the main-PRBS generator which can generate main-PRBS sequence. The expressions illustrated in a lower portion ofFIG.85show procedures of calculating and outputting the interleaving address for different interleaving sequence for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset (or a symbol offset or cyclic shifting value) is used to calculate the different interleaving sequence and the cyclic shifting value may be applied to each OFDM symbol pair in the same way. The expressions illustrated in a lower portion ofFIG.85show procedures of calculating and outputting main-interleaving sequence for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset may be applied to each OFDM symbol pair in the same way. As above described, the random symbol-offset generator according to an embodiment of the present invention may include k bits-spreader and (X-k) bits-randomizer. The k bits-spreader may be operated through a 2kmultiplexer and may be optimally designed to maximize inter-symbol spreading properties (or to minimize correlation properties). The randomizer may be operated through an N bits-PN generator (or N bits-sub-PRBS generator) and designed to provide randomness. The 4K FFT mode random symbol-offset generator may include a 0/1/2 bits-spreader and a 12/11/10 bits-random generator (or a PN generator). It can be changed according to the designer's intention. FIG.86is a view illustrating logical composition of a 4K FFT mode random interleaving-sequence generator according to an embodiment of the present invention. As described above, the 4K FFT mode random interleaving-sequence generator according to an embodiment of the present invention may include a random main interleaving-seed generator, a random symbol-offset generator, a memory index check, and a modulo operator. FIG.86illustrates the logical composition of a 4K FFT mode random interleaving-sequence generator formed by combining a random main interleaving-seed generator and a random symbol-offset generator.FIG.86illustrates an embodiment of the random main interleaving-seed generator including a 1 bit-spreader and an 11 bits-randomizer, and an embodiment of the random symbol-offset generator including a 2 bits-spreader and a 10 bits-randomizer. Details thereof have been described above and thus will be omitted here. 
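The role of the modulo operator and the memory-index check described above can be pictured with the following hedged Python sketch: candidate indices come from the spreader/randomizer, a symbol offset (cyclic shift) is added, and any index that is not smaller than Ndata is simply discarded by clocking the generator again. The helper name, the toy candidate stream, and the offset value are assumptions of this sketch, not the actual generator of FIG.84/FIG.86.

# Illustrative sketch of the interleaving-address generation loop for one OFDM symbol.
def interleaving_addresses(main_sequence_candidates, symbol_offset, n_data, n_max=4096):
    """Return n_data distinct interleaving addresses in the range [0, n_data)."""
    addresses = []
    used = set()
    for candidate in main_sequence_candidates:      # output of spreader + randomizer
        addr = (candidate + symbol_offset) % n_max  # cyclic shift by the symbol offset
        if addr >= n_data or addr in used:          # memory-index check: reject and retry
            continue
        used.add(addr)
        addresses.append(addr)
        if len(addresses) == n_data:
            break
    return addresses

# Example: a toy candidate stream stands in for the real PRBS output.
toy_candidates = [(37 * i + 11) % 4096 for i in range(20000)]
print(interleaving_addresses(toy_candidates, symbol_offset=5, n_data=3000)[:10])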
Hereinafter, a random interleaving-sequence generator for 4K FFT mode according to another embodiment of the present invention will be described. the random interleaving-sequence generator for 4K FFT mode according to another embodiment of the present invention includes random main interleaving-sequence generator which have a randomizer including bit shuffling. FIG.87is a view illustrating a 4K FFT mode random interleaving-sequence generator according to another embodiment of the present invention. The 4K FFT mode random interleaving-sequence generator according to another embodiment of the present invention may include a spreader (1-bit toggling), a randomizer, a random symbol-offset generator, a modulo operator and a memory-index check. As described above, the random main interleaving-sequence generator may include a spreader and a randomizer. Details thereof except bit shuffling have been described above and thus will be omitted here. The randomizer may be operated via a PN generator and may provide full randomness during interleaving. As described above, the randomizer according to an embodiment of the present invention may include bit shuffling. The bit shuffling optimizes spreading properties or random properties and is designed in consideration of Ndata. In the case of 4K FFT mode, the bit shuffling may use a 11-bit PN generator, which can be changed. FIG.88is expressions representing operations of 4K FFT mode bit shuffling and 4K FFT mode random interleaving-sequence generator according to another embodiment of the present invention. (a) illustrates an expression representing an operation of the 4K FFT mode bit shuffling and (b) illustrates an expression representing an operation of the 4K FFT random interleaving-sequence generator. The upper portion of (a) shows an operation of the 4K FFT mode bit shuffling and the lower portion of (a) shows an embodiment of the 4K FFT mode bit shuffling for 11 bits. As illustrated in (a), the 4K FFT mode bit shuffling may mix bits of registers of a PN generator during calculation of a memory-index. An expression illustrated in an upper portion of (b) shows initial value setting and primitive polynomial of a randomizer. In this case, the primitive polynomial may be 12thprimitive polynomial and the initial value may be changed by arbitrary values. That is, the expressions illustrated in an upper portion shows binary word sequences or binary bits used to define the main-PRBS generator which can generate main-PRBS sequence. An expression illustrated in a lower portion of (b) shows procedures of calculating and outputting the interleaving address for different interleaving sequence for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset (or a symbol offset or cyclic shifting value) is used to calculate the different interleaving sequence and the cyclic shifting value may be applied to each OFDM symbol pair in the same way. Hereinafter, the random interleaving-sequence generator for an 8K FFT mode will be described. The random interleaving-sequence generator according to an embodiment of the present invention may be included in the frequency interleaver7020and is similar to the random seed generator mentioned (mentioned above), the random interleaving-sequence generator has a different structure from the random seed generator. 
The random main-sequence generator according to an embodiment of the present invention may include a spreader and a randomizer and perform rendering a full randomness in frequency-domain. According to an embodiment of the present invention, in the case of 8K FFT mode, the random main-sequence generator may include a 1 bit spreader and an 12 bit-randomizer. The random main-sequence generator or the randomizer according to an embodiment of the present invention may be referred as a main-PRBS generator which is defined based on the 12-bit binary word sequence (or binary sequence). The random symbol-offset generator according to an embodiment of the present invention may change a symbol offset of each OFDM symbol. That is, the random symbol-offset generator may generate the aforementioned symbol offset. The random symbol-offset generator according to an embodiment of the present invention may include k bits-spreader and (X-k) bits-randomizer and perform rendering a spreading as much as 2kcases, in time-domain. X may be differently set for the respective FFT modes. According to an embodiment of the present invention, in the case of 8K FFT mode, a (13-k) bit-randomizer may be used. The (X-k) bits-randomizer according to an embodiment of the present invention may be referred as a sub-PRBS generator which is defined based on (13-k) bit binary word sequence (or binary sequence). The aforementioned spreader and randomizer may be used to achieve spreading and random effects during generation of the interleaving seed. In this embodiment, in generating of interleaving-value, PRBS operation order is modified to cope with the case of that the number of active carriers vary at start and last OFDM symbols within a single frame. FIG.89is a view of an 8K FFT mode random interleaving-sequence generator according to an embodiment of the present invention. The 8K FFT mode random interleaving-sequence generator according to an embodiment of the present invention may include a spreader (1-bit toggling), a randomizer, a random symbol-offset generator, a modulo operator and a memory-index check. As described above, the random main-sequence generator may include a spreader and a randomizer. As shown inFIG.89, the locations of the modulo operator and the memory-index check is changed as compared with the 8K FFT mode random main-seed generator as described above. The changed locations of the modulo operator and the memory-index check as shown inFIG.89is to increase a frequency deinterleaving performance of the frequency deinterleaver having single memory. As above described, a signal frame (or frame) according to the present invention may have normal data symbol (normal data symbol), frame edge symbol and frame signaling symbol and a length of the frame edge symbol and the frame signaling symbol may be shorter than the normal data symbol. For this reason, a frequency deinterleaving performance of the frequency deinterleaver having single memory can be decreased. In order to increase the frequency deinterleaving performance of the frequency deinterleaver with a single memory, the present invention may provide the changed locations of the modulo operator and the memory-index check. Hereinafter, an operation of each block will be described. The (cell) spreader may be operated using an upper portion of n-bit of total 13-bit and may function as a multiplexer based on a look-up table. In the case of 8K FFT mode, the (cell) spreader may be a 1-bit multiplexer (or toggling). 
The randomizer may be operated via a PN generator and may provide full randomness during interleaving. As described above, in the case of 8K FFT mode, the randomizer may be a PN generator that considers 12-bit. This can be changed according to a designer's intention. Also the spreader and the randomizer are operated through multiplexer and PN generator, respectively. The random symbol-offset generator may generate a symbol-offset for cyclic-shifting main interleaving-sequence generated by the main-interleaving sequence generator for each pair-wise OFDM symbol. A detailed operation is the same as those describe above and thus are not described here. The modulo operator may be operated when input value exceeds Ndataor Nmax. The maximum value of the Ndata(Nmax) for 8K FFT mode may be 8192. The memory-index check may not use output from the modulo operator when a memory-index generated by the spreader and the randomizer is greater than Ndataor the maximum value of the Ndata(Nmax) and may repeatedly operate the spreader and the randomizer to adjust the output memory-index such that the output memory-index does not exceed Ndataor the maximum value of the Ndata(Nmax). Locations of the illustrated memory-index check and modulo operator can be changed according to a designer's intention. FIG.90illustrates expressions representing an operation of an 8K FFT mode random interleaving-sequence generator according to an embodiment of the present invention. The expressions illustrated in an upper portion ofFIG.90show initial value setting and primitive polynomial of a randomizer. In this case, the primitive polynomial may be 12thprimitive polynomial and the initial value may be changed by arbitrary values. That is, the expressions illustrated in an upper portion shows binary word sequences or binary bits used to define the main-PRBS generator which can generate main-PRBS sequence. The expressions illustrated in a lower portion ofFIG.90show procedures of calculating and outputting the interleaving address for different interleaving sequence for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset (or a symbol offset or cyclic shifting value) is used to calculate the different interleaving sequence and the cyclic shifting value may be applied to each OFDM symbol pair in the same way. As above described, the random symbol-offset generator according to an embodiment of the present invention may include k bits-spreader and (X-k) bits-randomizer. The k bits-spreader may be operated through a 2kmultiplexer and may be optimally designed to maximize inter-symbol spreading properties (or to minimize correlation properties). The randomizer may be operated through an N bits-PN generator (or N bits-sub-PRBS generator) and designed to provide randomness. The 8K FFT mode random symbol-offset generator may include a 0/1/2 bits-spreader and a 13/12/11 bits-random generator (or a PN generator). It can be changed according to the designer's intention. FIG.91is a view illustrating logical composition of an 8K FFT mode random interleaving-sequence generator according to an embodiment of the present invention. As described above, the 8K FFT mode random interleaving-sequence generator according to an embodiment of the present invention may include a random main interleaving-seed generator, a random symbol-offset generator, a memory index check, and a modulo operator. 
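As a rough illustration of how such a random symbol-offset generator could be composed, the sketch below combines a k-bit spreader with an (X-k)-bit sub-PRBS into one cyclic-shift value per OFDM symbol pair, using X = 13 and k = 2 as an 8K-mode-like setting. The tap positions, the seed, and the way the two parts are concatenated are assumptions of this sketch and are not taken from the figures.

# Illustrative sketch of a random symbol-offset generator (one offset per OFDM symbol pair;
# the same offset is applied to both symbols of the pair).
def sub_prbs(width, seed=1, taps=(0, 2)):
    """Tiny Fibonacci LFSR of the given width (hypothetical taps)."""
    state, mask = seed, (1 << width) - 1
    while True:
        yield state
        fb = ((state >> taps[0]) ^ (state >> taps[1])) & 1
        state = ((state << 1) | fb) & mask

def symbol_offsets(num_pairs, k=2, x=13):
    """One cyclic-shift value per OFDM symbol pair for an 8K-mode-like setup."""
    prbs = sub_prbs(x - k)
    offsets = []
    for pair_index in range(num_pairs):
        spread = pair_index % (1 << k)            # k-bit spreader: 2**k cases
        offsets.append((spread << (x - k)) | next(prbs))
    return offsets

print(symbol_offsets(6))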
FIG.91illustrates the logical composition of an 8K FFT mode random interleaving-sequence generator formed by combining a random main interleaving-seed generator and a random symbol-offset generator.FIG.91illustrates an embodiment of the random main interleaving-seed generator including a 1 bit-spreader and a 12 bits-randomizer, and an embodiment of the random symbol-offset generator including a 2 bits-spreader and an 11 bits-randomizer. Details thereof have been described above and thus will be omitted here. Hereinafter, a random interleaving-sequence generator for 8K FFT mode according to another embodiment of the present invention will be described. The random interleaving-sequence generator for 8K FFT mode according to another embodiment of the present invention includes random main interleaving-sequence generator which have a randomizer including bit shuffling. FIG.92is a view illustrating an 8K FFT mode random interleaving-sequence generator according to another embodiment of the present invention. The 8K FFT mode random interleaving-sequence generator according to another embodiment of the present invention may include a spreader (1-bit toggling), a randomizer, a random symbol-offset generator, a modulo operator and a memory-index check. As described above, the random main interleaving-sequence generator may include a spreader and a randomizer. Details thereof except bit shuffling have been described above and thus will be omitted here. The randomizer may be operated via a PN generator and may provide full randomness during interleaving. As described above, the randomizer according to an embodiment of the present invention may include bit shuffling. The bit shuffling optimizes spreading properties or random properties and is designed in consideration of Ndata. In the case of 8K FFT mode, the bit shuffling may use a 12-bit PN generator, which can be changed. FIG.93is expressions representing operations of 8K FFT mode bit shuffling and 8K FFT mode random interleaving-sequence generator according to another embodiment of the present invention. (a) illustrates an expression representing an operation of the 8K FFT mode bit shuffling and (b) illustrates an expression representing an operation of the 8K FFT random interleaving-sequence generator. The upper portion of (a) shows an operation of the 8K FFT mode bit shuffling and the lower portion of (a) shows an embodiment of the 8K FFT mode bit shuffling for 12 bits. As illustrated in (a), the 8K FFT mode bit shuffling may mix bits of registers of a PN generator during calculation of a memory-index. An expression illustrated in an upper portion of (b) shows initial value setting and primitive polynomial of a randomizer. In this case, the primitive polynomial may be 12thprimitive polynomial and the initial value may be changed by arbitrary values. That is, the expressions illustrated in an upper portion shows binary word sequences or binary bits used to define the main-PRBS generator which can generate main-PRBS sequence. An expression illustrated in a lower portion of (b) shows procedures of calculating and outputting the interleaving address for different interleaving sequence for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset (or a symbol offset or cyclic shifting value) is used to calculate the different interleaving sequence and the cyclic shifting value may be applied to each OFDM symbol pair in the same way. 
An expression illustrated in a lower portion of (b) shows a procedure of calculating and outputting main-interleaving sequence for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset may be applied to each OFDM symbol pair in the same way. Hereinafter, the random interleaving-sequence generator for a 16K FFT mode will be described. The random interleaving-sequence generator according to an embodiment of the present invention may be included in the frequency interleaver7020and is similar to the random seed generator mentioned (mentioned above), the random interleaving-sequence generator has a different structure from the random seed generator. The random main-sequence generator according to an embodiment of the present invention may include a spreader and a randomizer and perform rendering a full randomness in frequency-domain. According to an embodiment of the present invention, in the case of 16K FFT mode, the random main-sequence generator may include a 1 bit spreader and an 13 bit-randomizer. The random main-sequence generator or the randomizer according to an embodiment of the present invention may be referred as a main-PRBS generator which is defined based on the 13-bit binary word sequence (or binary sequence). The random symbol-offset generator according to an embodiment of the present invention may change a symbol offset of each OFDM symbol. That is, the random symbol-offset generator may generate the aforementioned symbol offset. The random symbol-offset generator according to an embodiment of the present invention may include k bits-spreader and (X-k) bits-randomizer and perform rendering a spreading as much as 2kcases, in time-domain. X may be differently set for the respective FFT modes. According to an embodiment of the present invention, in the case of 16K FFT mode, a (14-k) bit-randomizer may be used. The (X-k) bits-randomizer according to an embodiment of the present invention may be referred as a sub-PRBS generator which is defined based on (14-k) bit binary word sequence (or binary sequence). The aforementioned spreader and randomizer may be used to achieve spreading and random effects during generation of the interleaving seed. In this embodiment, in generating of interleaving-value, PRBS operation order is modified to cope with the case of that the number of active carriers vary at start and last OFDM symbols within a single frame. FIG.94is a view of a 16K FFT mode random interleaving-sequence generator according to an embodiment of the present invention. The 16K FFT mode random interleaving-sequence generator according to an embodiment of the present invention may include a spreader (1-bit toggling), a randomizer, a random symbol-offset generator, a modulo operator and a memory-index check. As described above, the random main-sequence generator may include a spreader and a randomizer. As shown inFIG.94, the locations of the modulo operator and the memory-index check is changed as compared with the 16K FFT mode random main-seed generator as described above. The changed locations of the modulo operator and the memory-index check as shown inFIG.94is to increase a frequency deinterleaving performance of the frequency deinterleaver having single memory. 
As above described, a signal frame (or frame) according to the present invention may have normal data symbol (normal data symbol), frame edge symbol and frame signaling symbol and a length of the frame edge symbol and the frame signaling symbol may be shorter than the normal data symbol. For this reason, a frequency deinterleaving performance of the frequency deinterleaver having single memory can be decreased. In order to increase the frequency deinterleaving performance of the frequency deinterleaver with a single memory, the present invention may provide the changed locations of the modulo operator and the memory-index check. Hereinafter, an operation of each block will be described. The (cell) spreader may be operated using an upper portion of n-bit of total 14-bit and may function as a multiplexer based on a look-up table. In the case of 16K FFT mode, the (cell) spreader may be a 1-bit multiplexer (or toggling). The randomizer may be operated via a PN generator and may provide full randomness during interleaving. As described above, in the case of 16K FFT mode, the randomizer may be a PN generator that considers 13-bit. This can be changed according to a designer's intention. Also the spreader and the randomizer are operated through multiplexer and PN generator, respectively. The random symbol-offset generator may generate a symbol-offset for cyclic-shifting main interleaving-sequence generated by the main-interleaving sequence generator for each pair-wise OFDM symbol. A detailed operation is the same as those describe above and thus are not described here. The modulo operator may be operated when input value exceeds Ndataor Nmax. The maximum value of the Ndata(Nmax) for 16K FFT mode may be 16384. The memory-index check may not use output from the modulo operator when a memory-index generated by the spreader and the randomizer is greater than Ndataor the maximum value of the Ndata(Nmax) and may repeatedly operate the spreader and the randomizer to adjust the output memory-index such that the output memory-index does not exceed Ndataor the maximum value of the Ndata(Nmax). Locations of the illustrated memory-index check and modulo operator can be changed according to a designer's intention. FIG.95illustrates expressions representing an operation of a 16K FFT mode random interleaving-sequence generator according to an embodiment of the present invention. The expressions illustrated in an upper portion ofFIG.95show initial value setting and primitive polynomial of a randomizer. In this case, the primitive polynomial may be 13thprimitive polynomial and the initial value may be changed by arbitrary values. That is, the expressions illustrated in an upper portion shows binary word sequences or binary bits used to define the main-PRBS generator which can generate main-PRBS sequence. The expressions illustrated in a lower portion ofFIG.95show procedures of calculating and outputting the interleaving address for different interleaving sequence for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset (or a symbol offset or cyclic shifting value) is used to calculate the different interleaving sequence and the cyclic shifting value may be applied to each OFDM symbol pair in the same way. As above described, the random symbol-offset generator according to an embodiment of the present invention may include k bits-spreader and (X-k) bits-randomizer. 
The k bits-spreader may be operated through a 2kmultiplexer and may be optimally designed to maximize inter-symbol spreading properties (or to minimize correlation properties). The randomizer may be operated through an N bits-PN generator (or N bits-sub-PRBS generator) and designed to provide randomness. The 16K FFT mode random symbol-offset generator may include a 0/1/2 bits-spreader and a 14/13/12 bits-random generator (or a PN generator). It can be changed according to the designer's intention. FIG.96is a view illustrating logical composition of a 16K FFT mode random interleaving-sequence generator according to an embodiment of the present invention. As described above, the 16K FFT mode random interleaving-sequence generator according to an embodiment of the present invention may include a random main interleaving-seed generator, a random symbol-offset generator, a memory index check, and a modulo operator. FIG.96illustrates the logical composition of a 16K FFT mode random interleaving-sequence generator formed by combining a random main interleaving-seed generator and a random symbol-offset generator.FIG.96illustrates an embodiment of the random main interleaving-seed generator including a 1 bit-spreader and a 13 bits-randomizer, and an embodiment of the random symbol-offset generator including a 2 bits-spreader and a 12 bits-randomizer. Details thereof have been described above and thus will be omitted here. Hereinafter, a random interleaving-sequence generator for 16K FFT mode according to another embodiment of the present invention will be described. the random interleaving-sequence generator for 16K FFT mode according to another embodiment of the present invention includes random main interleaving-sequence generator which have a randomizer including bit shuffling. FIG.97is a view illustrating a 16K FFT mode random interleaving-sequence generator according to another embodiment of the present invention. The 16K FFT mode random interleaving-sequence generator according to another embodiment of the present invention may include a spreader (1-bit toggling), a randomizer, a random symbol-offset generator, a modulo operator and a memory-index check. As described above, the random main interleaving-sequence generator may include a spreader and a randomizer. Details thereof except bit shuffling have been described above and thus will be omitted here. The randomizer may be operated via a PN generator and may provide full randomness during interleaving. As described above, the randomizer according to an embodiment of the present invention may include bit shuffling. The bit shuffling optimizes spreading properties or random properties and is designed in consideration of Ndata. In the case of 16K FFT mode, the bit shuffling may use a 13-bit PN generator, which can be changed. FIG.98is expressions representing operations of 16K FFT mode bit shuffling and 16K FFT mode random interleaving-sequence generator according to another embodiment of the present invention. (a) illustrates an expression representing an operation of the 16K FFT mode bit shuffling and (b) illustrates an expression representing an operation of the 16K FFT random interleaving-sequence generator. The upper portion of (a) shows an operation of the 16K FFT mode bit shuffling and the lower portion of (a) shows an embodiment of the 16K FFT mode bit shuffling for 13 bits. As illustrated in (a), the 16K FFT mode bit shuffling may mix bits of registers of a PN generator during calculation of a memory-index. 
An expression illustrated in an upper portion of (b) shows initial value setting and primitive polynomial of a randomizer. In this case, the primitive polynomial may be 13thprimitive polynomial and the initial value may be changed by arbitrary values. That is, the expressions illustrated in an upper portion shows binary word sequences or binary bits used to define the main-PRBS generator which can generate main-PRBS sequence. An expression illustrated in a lower portion of (b) shows procedures of calculating and outputting the interleaving address for different interleaving sequence for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset (or a symbol offset or cyclic shifting value) is used to calculate the different interleaving sequence and the cyclic shifting value may be applied to each OFDM symbol pair in the same way. An expression illustrated in a lower portion of (b) shows a procedure of calculating and outputting main-interleaving sequence for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset may be applied to each OFDM symbol pair in the same way. Hereinafter, the random interleaving-sequence generator for a 32K FFT mode will be described. The random interleaving-sequence generator according to an embodiment of the present invention may be included in the frequency interleaver7020and is similar to the random seed generator mentioned (mentioned above), the random interleaving-sequence generator has a different structure from the random seed generator. The random main-sequence generator according to an embodiment of the present invention may include a spreader and a randomizer and perform rendering a full randomness in frequency-domain. According to an embodiment of the present invention, in the case of 32K FFT mode, the random main-sequence generator may include a 1 bit spreader and a 14 bit-randomizer. The random main-sequence generator or the randomizer according to an embodiment of the present invention may be referred as a main-PRBS generator which is defined based on the 14-bit binary word sequence (or binary sequence). The random symbol-offset generator according to an embodiment of the present invention may change a symbol offset of each OFDM symbol. That is, the random symbol-offset generator may generate the aforementioned symbol offset. The random symbol-offset generator according to an embodiment of the present invention may include k bits-spreader and (X-k) bits-randomizer and perform rendering a spreading as much as 2kcases, in time-domain. X may be differently set for the respective FFT modes. According to an embodiment of the present invention, in the case of 32K FFT mode, a (15-k) bit-randomizer may be used. The (X-k) bits-randomizer according to an embodiment of the present invention may be referred as a sub-PRBS generator which is defined based on (15-k) bit binary word sequence (or binary sequence). The aforementioned spreader and randomizer may be used to achieve spreading and random effects during generation of the interleaving seed. In this embodiment, in generating of interleaving-value, PRBS operation order is modified to cope with the case of that the number of active carriers vary at start and last OFDM symbols within a single frame. FIG.99is a view of a 32K FFT mode random interleaving-sequence generator according to an embodiment of the present invention. 
The 32K FFT mode random interleaving-sequence generator according to an embodiment of the present invention may include a spreader (1-bit toggling), a randomizer, a random symbol-offset generator, a modulo operator and a memory-index check. As described above, the random main-sequence generator may include a spreader and a randomizer. As shown inFIG.99, the locations of the modulo operator and the memory-index check is changed as compared with the 32K FFT mode random main-seed generator as described above. The changed locations of the modulo operator and the memory-index check as shown inFIG.99is to increase a frequency deinterleaving performance of the frequency deinterleaver having single memory. As above described, a signal frame (or frame) according to the present invention may have normal data symbol (normal data symbol), frame edge symbol and frame signaling symbol and a length of the frame edge symbol and the frame signaling symbol may be shorter than the normal data symbol. For this reason, a frequency deinterleaving performance of the frequency deinterleaver having single memory can be decreased. In order to increase the frequency deinterleaving performance of the frequency deinterleaver with a single memory, the present invention may provide the changed locations of the modulo operator and the memory-index check. Hereinafter, an operation of each block will be described. The (cell) spreader may be operated using an upper portion of n-bit of total 15-bit and may function as a multiplexer based on a look-up table. In the case of 32K FFT mode, the (cell) spreader may be a 1-bit multiplexer (or toggling). The randomizer may be operated via a PN generator and may provide full randomness during interleaving. As described above, in the case of 32K FFT mode, the randomizer may be a PN generator that considers 14-bit. This can be changed according to a designer's intention. Also the spreader and the randomizer are operated through multiplexer and PN generator, respectively. The random symbol-offset generator may generate a symbol-offset for cyclic-shifting main interleaving-sequence generated by the main-interleaving sequence generator for each pair-wise OFDM symbol. A detailed operation is the same as those describe above and thus are not described here. The modulo operator may be operated when input value exceeds Ndataor Nmax. The maximum value of the Ndata(Nmax) for 32K FFT mode may be 32768. The memory-index check may not use output from the modulo operator when a memory-index generated by the spreader and the randomizer is greater than Ndataor the maximum value of the Ndata(Nmax) and may repeatedly operate the spreader and the randomizer to adjust the output memory-index such that the output memory-index does not exceed Ndataor the maximum value of the Ndata(Nmax). Locations of the illustrated memory-index check and modulo operator can be changed according to a designer's intention. FIG.100illustrates expressions representing an operation of a 32K FFT mode random interleaving-sequence generator according to an embodiment of the present invention. The expressions illustrated in an upper portion ofFIG.100show initial value setting and primitive polynomial of a randomizer. In this case, the primitive polynomial may be 14thprimitive polynomial and the initial value may be changed by arbitrary values. That is, the expressions illustrated in an upper portion shows binary word sequences or binary bits used to define the main-PRBS generator which can generate main-PRBS sequence. 
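The mode-dependent sizes quoted in the preceding passages can be collected into a small lookup, shown below purely as a summary of numbers already stated in this description (main-PRBS randomizer width, spreader width, and the maximum value Nmax of Ndata per FFT mode); nothing in it is new information.

# Summary of the per-FFT-mode parameters stated in this description.
FFT_MODE_PARAMS = {
    "4K":  {"main_randomizer_bits": 11, "spreader_bits": 1, "n_max": 4096},
    "8K":  {"main_randomizer_bits": 12, "spreader_bits": 1, "n_max": 8192},
    "16K": {"main_randomizer_bits": 13, "spreader_bits": 1, "n_max": 16384},
    "32K": {"main_randomizer_bits": 14, "spreader_bits": 1, "n_max": 32768},
}

for mode, params in FFT_MODE_PARAMS.items():
    # Nmax equals 2 ** (spreader bits + randomizer bits) in every mode.
    assert params["n_max"] == 1 << (params["main_randomizer_bits"] + params["spreader_bits"])
    print(mode, params)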
The expressions illustrated in a lower portion ofFIG.100show procedures of calculating and outputting the interleaving address for a different interleaving sequence for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset (or a symbol offset or cyclic shifting value) is used to calculate the different interleaving sequence, and the cyclic shifting value may be applied to each OFDM symbol pair in the same way. As described above, the random symbol-offset generator according to an embodiment of the present invention may include a k-bit spreader and an (X-k)-bit randomizer. The k-bit spreader may be operated through a 2^k multiplexer and may be optimally designed to maximize inter-symbol spreading properties (or to minimize correlation properties). The randomizer may be operated through an N-bit PN generator (or N-bit sub-PRBS generator) and designed to provide randomness. The 32K FFT mode random symbol-offset generator may include a 0/1/2-bit spreader and a 15/14/13-bit random generator (or a PN generator). This can be changed according to the designer's intention. FIG.101is a view illustrating logical composition of a 32K FFT mode random interleaving-sequence generator according to an embodiment of the present invention. As described above, the 32K FFT mode random interleaving-sequence generator according to an embodiment of the present invention may include a random main interleaving-seed generator, a random symbol-offset generator, a memory index check, and a modulo operator. FIG.101illustrates the logical composition of a 32K FFT mode random interleaving-sequence generator formed by combining a random main interleaving-seed generator and a random symbol-offset generator.FIG.101illustrates an embodiment of the random main interleaving-seed generator including a 1-bit spreader and a 14-bit randomizer, and an embodiment of the random symbol-offset generator including a 2-bit spreader and a 13-bit randomizer. Details thereof have been described above and thus will be omitted here. Hereinafter, a random interleaving-sequence generator for 32K FFT mode according to another embodiment of the present invention will be described. The random interleaving-sequence generator for 32K FFT mode according to another embodiment of the present invention includes a random main interleaving-sequence generator which has a randomizer including bit shuffling. FIG.102is a view illustrating a 32K FFT mode random interleaving-sequence generator according to another embodiment of the present invention. The 32K FFT mode random interleaving-sequence generator according to another embodiment of the present invention may include a spreader (1-bit toggling), a randomizer, a random symbol-offset generator, a modulo operator and a memory-index check. As described above, the random main interleaving-sequence generator may include a spreader and a randomizer. Details thereof except bit shuffling have been described above and thus will be omitted here. The randomizer may be operated via a PN generator and may provide full randomness during interleaving. As described above, the randomizer according to an embodiment of the present invention may include bit shuffling. The bit shuffling optimizes spreading properties or random properties and is designed in consideration of Ndata. In the case of 32K FFT mode, the bit shuffling may use a 14-bit PN generator, which can be changed.
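The bit-shuffling idea can be sketched as follows for a 14-bit register matching the 32K case just mentioned: the register bits of the PN generator are re-ordered (mixed) when the memory index is formed. The permutation used below is a hypothetical example for illustration; the actual wiring is the one given by the expressions described below with reference toFIG.103.

# Illustrative sketch only: mixing the 14 register bits of a PN generator when
# forming the memory index. The permutation 'perm' is invented for this example.
def shuffle_bits_14(state, perm=(5, 11, 0, 8, 13, 2, 10, 4, 7, 1, 12, 6, 9, 3)):
    """Place register bit perm[k] of 'state' at position k of the shuffled output."""
    shuffled = 0
    for k, src in enumerate(perm):
        shuffled |= ((state >> src) & 1) << k
    return shuffled

print(format(shuffle_bits_14(0b10011010110101), '014b'))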
FIG.103shows expressions representing operations of 32K FFT mode bit shuffling and a 32K FFT mode random interleaving-sequence generator according to another embodiment of the present invention. (a) illustrates an expression representing an operation of the 32K FFT mode bit shuffling and (b) illustrates an expression representing an operation of the 32K FFT random interleaving-sequence generator. The upper portion of (a) shows an operation of the 32K FFT mode bit shuffling and the lower portion of (a) shows an embodiment of the 32K FFT mode bit shuffling for 14 bits. As illustrated in (a), the 32K FFT mode bit shuffling may mix bits of registers of a PN generator during calculation of a memory-index. An expression illustrated in an upper portion of (b) shows the initial value setting and primitive polynomial of a randomizer. In this case, the primitive polynomial may be a 14th primitive polynomial and the initial value may be changed to arbitrary values. That is, the expressions illustrated in the upper portion show binary word sequences or binary bits used to define the main-PRBS generator which can generate the main-PRBS sequence. An expression illustrated in a lower portion of (b) shows procedures of calculating and outputting the interleaving address for a different interleaving sequence for an output signal of the spreader and the randomizer. As illustrated in the expression, one random symbol-offset (or a symbol offset or cyclic shifting value) is used to calculate the different interleaving sequence, and the cyclic shifting value may be applied to each OFDM symbol pair in the same way. FIG.104is a flowchart illustrating a method for transmitting broadcast signals according to an embodiment of the present invention. The apparatus for transmitting broadcast signals according to an embodiment of the present invention can encode service data (S104000). As described above, service data is transmitted through a data pipe, which is a logical channel in the physical layer that carries service data or related metadata, which may carry one or multiple service(s) or service component(s). Data carried on a data pipe can be referred to as the DP data or the service data. The detailed process of step S104000is as described inFIG.1orFIG.5-6,FIG.22. The apparatus for transmitting broadcast signals according to an embodiment of the present invention may map the encoded service data into a plurality of OFDM symbols to build at least one signal frame (S104010). The detailed process of this step is as described inFIG.7,FIG.10-21. Then, the apparatus for transmitting broadcast signals according to an embodiment of the present invention may use a different interleaving-seed for every OFDM symbol pair comprised of two sequential OFDM symbols. As described above, the basic function of the cell mapper7010is to map data cells for each of the DPs, PLS data, if any, into arrays of active OFDM cells corresponding to each of the OFDM symbols within a signal frame. Then, the frequency interleaver7020may operate on a single OFDM symbol basis and provide frequency diversity by randomly interleaving the cells received from the cell mapper7010. The purpose of the frequency interleaver7020in the present invention, which operates on a single OFDM symbol, is to provide frequency diversity by randomly interleaving data cells received from the cell mapper7010.
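Purely as a structural illustration of the ordering of the transmission steps just described, the sketch below strings the stages together with stand-in callables; the function names are not part of the original disclosure, and only the order of operations (encode service data, map into a signal frame, frequency-interleave with a different seed per OFDM symbol pair, OFDM-modulate, transmit) is taken from the flowchart ofFIG.104.

# Structural sketch of the transmission method of FIG.104 (stand-in stage names).
def transmit_broadcast_signal(service_data, encode, map_to_frame,
                              frequency_interleave, ofdm_modulate, transmit):
    encoded = encode(service_data)                 # S104000: FEC-encode DP data
    frame = map_to_frame(encoded)                  # S104010: build signal frame(s)
    interleaved = [
        frequency_interleave(symbol_pair, pair_index)   # different seed per pair
        for pair_index, symbol_pair in enumerate(frame)
    ]
    waveform = ofdm_modulate(interleaved)          # S104030: OFDM modulation
    return transmit(waveform)                      # S104040: emit broadcast signal

# Tiny demo with identity stand-ins:
result = transmit_broadcast_signal(
    ["cell0", "cell1"], encode=list, map_to_frame=lambda d: [d],
    frequency_interleave=lambda pair, i: pair[::-1],
    ofdm_modulate=lambda syms: syms, transmit=lambda w: w)
print(result)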
In order to get maximum interleaving gain in a single signal frame (or frame), a different interleaving-seed is used for every OFDM symbol pair comprised of two sequential OFDM symbols. The detailed process of the frequency interleaving is as described inFIG.30to103. Subsequently, the apparatus for transmitting broadcast signals according to an embodiment of the present invention may modulate the frequency interleaved data by an OFDM scheme (S104030). The detailed process of this step is as described inFIG.1or8. The apparatus for transmitting broadcast signals according to an embodiment of the present invention can transmit the broadcast signals including the modulated data (S104040). The detailed process of this step is as described inFIG.1or8. FIG.105is a flowchart illustrating a method for receiving broadcast signals according to an embodiment of the present invention. The flowchart shown inFIG.105corresponds to a reverse process of the broadcast signal transmission method according to an embodiment of the present invention, described with reference toFIG.104. The apparatus for receiving broadcast signals according to an embodiment of the present invention can receive broadcast signals (S105000). The apparatus for receiving broadcast signals according to an embodiment of the present invention can demodulate the received broadcast signals using an OFDM (Orthogonal Frequency Division Multiplexing) scheme (S105010). Details are as described inFIG.9. The apparatus for receiving broadcast signals according to an embodiment of the present invention may frequency de-interleave the demodulated broadcast signals (S105020). In this case, the apparatus for receiving broadcast signals according to an embodiment of the present invention can perform frequency de-interleaving, which corresponds to a reverse process of the frequency interleaving described above. The detailed process of the frequency interleaving is as described inFIG.30to103. Subsequently, the apparatus for receiving broadcast signals according to an embodiment of the present invention may de-map service data from at least one signal frame in the frequency de-interleaved broadcast signals (S105030). Details are as described inFIG.9. Subsequently, the apparatus for receiving broadcast signals according to an embodiment of the present invention can decode the demapped service data (S105040). Details are as described inFIG.9. As described above, service data is transmitted through a data pipe, which is a logical channel in the physical layer that carries service data or related metadata, which may carry one or multiple service(s) or service component(s). Data carried on a data pipe can be referred to as the DP data or the service data. It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention covers the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents. | 244,667 |
11863204 | DETAILED DESCRIPTION OF THE INVENTION The present invention generally relates to communication systems and integrated circuit (IC) devices. More particularly, the present invention relates to improved methods and devices for energy-efficient decoders and their implementations in communication systems. The following description is presented to enable one of ordinary skill in the art to make and use the invention and to incorporate it in the context of particular applications. Various modifications, as well as a variety of uses in different applications will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to a wide range of embodiments. Thus, the present invention is not intended to be limited to the embodiments presented, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. In the following detailed description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without necessarily being limited to these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention. The reader's attention is directed to all papers and documents which are filed concurrently with this specification and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference. All the features disclosed in this specification, (including any accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features. Furthermore, any element in a claim that does not explicitly state “means for” performing a specified function, or “step for” performing a specific function, is not to be interpreted as a “means” or “step” clause as specified in 35 U.S.C. Section 112, Paragraph 6. In particular, the use of “step of” or “act of” in the Claims herein is not intended to invoke the provisions of 35 U.S.C. 112, Paragraph 6. Please note, if used, the labels left, right, front, back, top, bottom, forward, reverse, clockwise and counter clockwise have been used for convenience purposes only and are not intended to imply any particular fixed direction. Instead, they are used to reflect relative locations and/or directions between various portions of an object. According to various examples, the present invention provides methods and structures for energy-efficient decoders and related forward error correction (FEC) implementations. In an example, an apparatus is proposed to lower power consumption in iterative decoder schemes. This apparatus uses a technique that is applied to soft-decision decoders based on low-density parity-check (LDPC) codes but can also be used with any other error correction code (ECC), such as Turbo Codes, Polar Codes, BCH/RS Codes, Braided Codes, and the like. The apparatus comprises a plurality of decoders that work in a specific order and conditioned to the result of the previous decoders. 
In its simplest approach, the apparatus involves a low power consumption decoder with a low word error rate (WER) configured at the operation point, which generally cannot achieve the target performance, followed by a more complex (i.e., high-performance) decoder that is able to achieve the expected performance. In this example, the received data is first processed by a low power decoder; if this decoder cannot correct the errors in the received data, the data is then processed by a high-performance decoder at the expense of more power; otherwise, this second decoder remains in a sleep state. Because the low-power decoder corrects most of the codewords, only a very small set is decoded by the high-performance decoder. Note that the use or concatenation of several decoders as proposed here does not imply the use of “concatenated codes”; rather, the idea is that several decoder algorithms operate over the same code. On the contrary, the classical concatenated codes scheme makes use of decoders that operate over different codewords and that interchange information between them in a scheme that is usually known as Turbo Codes. For the present invention, the idea of producing decoders with very low power consumption is accomplished by taking advantage of the fact that a high percentage (>90%) of the received data does not require a high-performance decoder to eliminate the errors of the data. In addition, examples of the present invention take advantage of the fact that a low-power decoder can be implemented using suboptimal algorithms with low switching activity, which is the main contributor to the power dissipation. FIGS.1A to1Care simplified block diagrams illustrating different topologies of decoder devices involving this gross classification between two types of decoders (i.e., low-power and high-performance).FIG.1Ashows a device101with a low-power decoder module110followed by a high-performance decoder module120. As discussed previously, the low-power decoder module110eliminates the errors of most of the received codewords (>90%) but it cannot achieve the target performance (e.g., WER<1e-15). The portion of the received data that cannot be processed adequately by the low-power decoder module110is then processed by the high-performance decoder module120to reach the target performance. Alternatively,FIG.1Bshows a device102that is in a reversed configuration compared to device101, where the high-performance decoder module120is followed by the low-power decoder module110. In this case, the high-performance decoder module120processes a portion of received data to reduce the number of errors into a range in which the low-power decoder module110can process the remaining received data. The implementations shown inFIGS.1A and1Bcan be expanded to include more than two decoder modules connected in series that use different decoding algorithms and have varying levels of performance and power consumption. FIG.1Cshows device103with a different configuration in which both the low-power decoder module110and the high-performance decoder module120are configured with a classifier module130. As shown, the classifier130receives the incoming data signal and controls a first switch131and a second switch132. The first switch131controls the input path to the low-power decoder110and the high-performance decoder120while the second switch132controls the output path from these decoders110,120.
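The control flow of the cascade ofFIG.1Acan be sketched as follows; the decoder callables and the parity-check test below are stand-ins, not the actual decoder implementations, and the sketch only shows that every received word is first handled by the low-power decoder and the high-performance decoder is exercised only on the small fraction that fails.

# Illustrative sketch of the low-power-first cascade (FIG.1A).
def decode_cascade(received_word, low_power_decode, high_perf_decode, parity_ok):
    candidate = low_power_decode(received_word)
    if parity_ok(candidate):          # all parity checks satisfied: done, cheaply
        return candidate, "low-power"
    candidate = high_perf_decode(received_word)   # rare fallback, higher power
    return candidate, "high-performance"

# Toy demo: a "decoder" that changes nothing, and a parity test that accepts words
# whose bit-sum is even, just to exercise the control flow.
word = [1, 0, 1, 1]
print(decode_cascade(word,
                     low_power_decode=lambda w: w,
                     high_perf_decode=lambda w: [0] * len(w),
                     parity_ok=lambda w: sum(w) % 2 == 0))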
In this configuration, the classifier130can direct the incoming data signal (or portions of the incoming data) through the low-power decoder110or the high-performance decoder120depending on positions of the switches131,132. In a specific example, the classifier130evaluates the incoming data signal to determine a plurality of portions of the received codewords and directs a portion of the codewords to the low-power decoder110and another portion of the codewords to the high-performance decoder120. The classifier can determine which portion of the codewords goes to which decoder module based on pre-FEC bit error rate (BER) metrics, mutual information metrics, or the like. This implementation can be expanded to include more than two decoder modules using different decoding algorithms and having varying levels of performance and power consumption. The topologies shown inFIGS.1A to1Care only particular examples of the present invention, and those of ordinary skill in the art will recognize other variations, modifications, and alternatives. Further details regarding the implementation of these decoder topologies are discussed below, including a generalization of the power reduction concept involving a plurality of different algorithms or decoders. In the context of the decoder implementations, the parameter that determines the rate of decoding for each decoder is the word error rate (WER).FIG.2is a simplified logarithmic scaled graph of WER versus signal-to-noise ratio (SNR) of a decoder device according to an example of the present invention. This graph200shows that as the SNR increases, the WER decreases. If the WER of the first decoder (WERfirst) at the operation point is 0.01, then only 1% of the incoming codewords are processed by the second decoder. Thus, the total power consumption is represented by Ptotal = Pfirst + WERfirst × Psecond, where Pfirst is the power consumption of the first decoder and Psecond is the power consumption of the second decoder, assuming that all received data is processed in both decoders. In order to provide a way to specify when a codeword is successfully decoded, a satisfied parity check equation is used. If the parity check equation is not sufficiently robust, a cyclic redundancy check (CRC) can be added to provide more robustness. An example implementation of the high-performance decoder can include details in U.S. Pat. No. 10,103,751, titled “Non-concatenated FEC Codes for Ultra-high Speed Optical Transport Networks”, which is incorporated by reference. In an example, the high-performance decoder can be a soft decision decoder, such as a soft-input soft-output (SISO) decoder, or a soft-input hard-output (SIHO) decoder, or the like. Certain details of an example implementation of the high-performance decoder are also discussed below in reference toFIGS.9to11. In an example, the low power decoder (in the case of LDPC) can be based on a soft bit-flipping algorithm. This algorithm provides low power consumption since the messages passed in the graph are hard bits and soft information is only stored in the variable nodes. In an example, the low-power decoder can be a hard decision decoder, such as a hard-input hard-output (HIHO) decoder, or a SIHO decoder, or the like. In a specific example, the low-power decoder can be implemented as a modified version of the high-performance decoder where the resolution of the messages has been reduced to one bit.
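The total-power expression given above can be illustrated numerically; the power figures below are made up for the example, while the formula itself is the one stated in the text.

# Numeric illustration of P_total = P_first + WER_first * P_second.
# Assumed (made-up) figures: the low-power decoder consumes 1 W, the
# high-performance decoder would consume 10 W on every word, and the first
# stage fails on 1% of words (WER_first = 0.01), so the average consumption
# is dominated by the first stage.
def total_power(p_first, p_second, wer_first):
    return p_first + wer_first * p_second

print(total_power(p_first=1.0, p_second=10.0, wer_first=0.01))  # -> 1.1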
The error floor frequently present in this kind of decoder is not an issue in this invention because in the concatenated scheme the other decoder (i.e., the high-performance decoder) eliminates any undesirable error floor problem. FIG.3is a simplified graph representing a power consumption profile over SNR of a decoder device according to an example of the present invention. If both decoders support the maximum throughput, the scheme can operate in the same range of SNR as that of the high-performance decoder alone without any degradation, but with the advantage that as the SNR increases the power decreases relative to the WER. As shown in graph300, this behavior presents a profile of power consumption that drops abruptly with the increase of the SNR once the threshold of the code is reached (marked as the point where the first decoder starts to correct). Another option is to implement a reduced throughput version for the second decoder in order to reduce complexity of the overall scheme, but in that example the performance of the system is mainly determined by the performance of the first decoder. This is because the second decoder can only process a reduced fraction ρ of the received codewords (e.g., ρ=0.1); therefore, this scheme only works if the WER of the first decoder is lower than ρ. As discussed for device102ofFIG.1B, the decoders can be combined in the reverse order, i.e., the high-performance decoder followed by the low-power decoder. This scheme is more suitable for turbo product codes, in which, for example, the first stage can use a soft-decision decoder and the last stage can use a hard-decision decoder. In this scheme, a high-performance decoder brings the performance into a range where a second hard decision decoder can operate and improve the performance. In this way, the power of the last stage is reduced by employing a less power-consuming hard-decision decoder and the performance remains almost the same as if all the processing had been done with the soft-decision decoder. The present invention expands on such techniques by providing methods and devices using a plurality of combinations of different decoder algorithms, each corresponding to a particular performance and power profile, to obtain an energy-efficient overall system. Depending on the types of combined decoders, the scheme might have a different topology. The most powerful codes to date are based on iterative soft decision decoding. These codes are commonly known as modern codes. The concept of modern codes refers to codes based on iterative decision decoding, particularly turbo product codes (TPC) and low-density parity-check (LDPC) codes. But these types of codes can be considered as a part of the same family of codes on graphs called generalized LDPC (GLDPC). An LDPC code C is a linear block code defined by a sparse (m×n) parity check matrix H, where n represents the number of bits in the block and m denotes the number of parity checks, i.e., C = {c ∈ F_2^n : Hc = 0}. The matrix is considered “sparse” because the number of 1s is small compared to the number of 0s. Matrix H can be graphically represented using a Tanner graph (TG).FIG.4Ashows matrix401, which represents parity check matrix H, andFIG.4Bshows the associated TG402for matrix401. As shown, TG402is a bipartite graph composed of two types of nodes: the variable bit nodes v_i (representing the columns of H) and the check nodes c_j (representing the rows of H). A connection between nodes v_i and c_j exists if Hj,i=1.
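The column/row reading of the Tanner graph can be made concrete with the small sketch below; the 3×8 matrix is an arbitrary example loosely in the spirit ofFIG.4A/4B, not the actual matrix401, and an edge between check node c_j and variable node v_i exists exactly where H[j][i] = 1.

# Illustrative sketch: deriving Tanner-graph connections from a toy parity check matrix.
H = [
    [1, 1, 0, 1, 0, 0, 1, 0],
    [0, 1, 1, 0, 1, 0, 0, 1],
    [1, 0, 0, 1, 1, 1, 0, 0],
]

checks_of_bit = {i: [j for j in range(len(H)) if H[j][i]] for i in range(len(H[0]))}   # C(v_i)
bits_of_check = {j: [i for i in range(len(H[0])) if H[j][i]] for j in range(len(H))}   # V(c_j)

print(checks_of_bit[0])   # check nodes connected to v_0
print(bits_of_check[2])   # bit nodes connected to c_2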
Note that there are no connections between two check nodes or between two bit nodes. Typically, LDPC codes are iteratively decoded using a simplified version of the sum product algorithm (SPA) such as the Min-Sum Algorithm (MSA), the Scaled MSA (SMSA), and the Offset MSA (OMSA). Those of ordinary skill in the art will recognize the application of the present invention using other variations, modifications, and alternatives to these decoding algorithms. In an example, the present invention uses the SMSA, which provides a good tradeoff between performance and complexity. Let bi and xi be the i-th coded bit and the corresponding channel output, respectively. The input to the SPA decoder is the prior log-likelihood ratio (LLR) defined by L^a_i = ln(Pr{b_i=0|x_i} / Pr{b_i=1|x_i}). The SPA runs over the factor graph interchanging soft information between bit and check nodes. Each iteration consists of two steps. In the first step all the bit nodes send information to the check nodes. In the second step all the check nodes send information to the bit nodes. After a maximum number of iterations Imax is reached or when all the parity check equations are satisfied, the a posteriori LLR (L^o_i) is computed. FIG.5is a TG representation of a bit-to-check message (a.k.a. variable-to-check message) operation of a decoder device according to an example of the present invention. As shown, TG500shows eight variable bit nodes (denoted v1to v8) and three check nodes (denoted c1to c3). As the decoder receives the prior LLR inputs (denoted L^a_1 to L^a_8), the variable bit nodes send information to the check nodes. This bit-to-check operation can be represented as follows:

L^e_{v_i→c_j} = L^a_i + Σ_{c_k ∈ C(v_i)\c_j} L^e_{c_k→v_i}

where C(v_i) = {c_j : H_{j,i} ≠ 0}. FIG.6is a TG representation of a check-to-bit message operation of a decoder device according to an example of the present invention. In the same format asFIG.5, TG600shows eight variable bit nodes (denoted v1to v8) and three check nodes (denoted c1to c3). Following the bit-to-check operation, the check nodes send information back to the variable bit nodes. This check-to-bit operation can be represented as follows:

L^e_{c_j→v_i} = Π_{v_k ∈ V(c_j)\v_i} sign(L^e_{v_k→c_j}) · M_{c_j→v_i}

M_{c_j→v_i} = α · min_{v_k ∈ V(c_j)\v_i} |L^e_{v_k→c_j}|

where V(c_j) = {v_i : H_{j,i} ≠ 0} and α ≈ 0.75. In this example, the check-to-bit message calculation corresponds to the SMSA, but the same concept also applies to TPCs, except that the calculation of the message in this case may involve algorithms such as the Chase-Pyndiah decoding algorithm. Of course, there can be other variations, modifications, and alternatives. FIG.7is a TG representation of a computation of the a posteriori LLR for a decoder device according to an example of the present invention. In the same format asFIGS.5and6, TG700shows eight variable bit nodes (denoted v1to v8) and three check nodes (denoted c1to c3). As discussed previously, after N iterations, the a posteriori LLR is calculated. This operation can be represented as follows:

L^o_i = L^a_i + Σ_{c_k ∈ C(v_i)} L^e_{c_k→v_i}

FIGS.5to7show an example of a method of operating a decoder using an SMSA. Those of ordinary skill in the art will recognize variations, modifications, and alternatives involving other versions of the SPA, or other related algorithms. Expanding upon the methods and devices discussed previously, the present invention provides for an iterative decoding algorithm that uses a plurality of different algorithms or types of decoders to minimize power consumption.
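Before turning to that generalization, the SMSA message-passing updates described forFIGS.5to7can be sketched in software as follows. This is an illustrative, dense-matrix model only (it assumes every check has degree at least two) and is not the decoder architecture described later.

```python
import numpy as np

ALPHA = 0.75  # scaling factor of the Scaled Min-Sum Algorithm

def smsa_iteration(H, L_a, L_c2v):
    """One SMSA iteration over a dense parity check matrix H.
    H      : (m, n) array of 0/1 entries
    L_a    : prior LLRs, shape (n,)
    L_c2v  : check-to-variable messages from the previous iteration, shape (m, n)
    Returns (a posteriori LLRs L_o, updated check-to-variable messages)."""
    m, n = H.shape
    # Variable-to-check: L_{v_i->c_j} = L_a_i + sum over C(v_i)\{c_j} of L_{c_k->v_i}
    totals = L_a + (H * L_c2v).sum(axis=0)          # L_a_i plus all incoming check messages
    L_v2c = np.where(H == 1, totals - L_c2v, 0.0)   # exclude the destination check itself
    # Check-to-variable: sign product and scaled minimum, both excluding v_i
    new_c2v = np.zeros_like(L_c2v, dtype=float)
    for j in range(m):
        idx = np.flatnonzero(H[j])
        msgs = L_v2c[j, idx]
        for k, i in enumerate(idx):
            others = np.delete(msgs, k)             # assumes check degree >= 2
            sign = np.prod(np.sign(others)) or 1.0
            new_c2v[j, i] = sign * ALPHA * np.min(np.abs(others))
    # A posteriori LLR: L_o_i = L_a_i + sum over C(v_i) of L_{c_k->v_i}
    L_o = L_a + (H * new_c2v).sum(axis=0)
    return L_o, new_c2v

# Tiny made-up example: 2 checks, 4 bits, one iteration from all-zero messages.
H = np.array([[1, 1, 0, 1], [0, 1, 1, 1]])
L_a = np.array([1.2, -0.4, 0.9, -2.0])
L_o, msgs = smsa_iteration(H, L_a, np.zeros_like(H, dtype=float))
print(L_o)
```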
An example of such a method and device architecture is discussed below. In an example, the iterative decoding process can be decomposed into the successive application of a set of algorithms 𝒜={A1, A2, . . . , AS} in which each algorithm can be used independently in each iteration. This system can be considered as a finite state machine (FSM) in which each state corresponds to an algorithm. In an example, each state can also correspond to a decoder module configured to implement a specific decoding algorithm in the set. The state machine is fully connected, i.e., any state is reachable from any other state in one step. In a specific example, the set 𝒜 can include algorithms sorted by level of complexity and performance. As discussed previously, such algorithms can include variations of MSA, OMSA, SMSA, soft bit-flipping, and the like. The variations of these algorithms can be generated by varying the resolution of the messages or by using other like processes. FIG.8is a simplified FSM diagram illustrating a method of optimizing use of decoder algorithms in a decoder device according to an example of the present invention. In this example, FSM800includes five states (denoted810to850), each representing a decoder algorithm in the set or a decoder module configured to perform the decoding algorithm in the set. As discussed, the operation arrows show that each state is fully connected. In other examples, the decoder device can be configured to implement a plurality of decoder algorithms or a plurality of decoder modules, each configured to implement such decoder algorithms. There can be several conditions to transition from one state to another. For example, a transition condition can occur when a certain algorithm provides no further improvement with further iterations. Because time is limited, only a fixed total number of steps is allowed. With this consideration, the maximum number of steps for each algorithm must be determined to obtain a global optimum in terms of power and performance. Of course, the best performance can be reached by always using the best performing algorithm, but this approach would also be costly in terms of power. Instead, examples of the present invention constrain the best performance to a given power (or, equivalently, minimize power subject to a given performance). In other words, the present invention provides for a method of optimization and device implementation to maximize the decoder performance subject to a given maximum power constraint. In an example, the flow of information between steps allows L^0_i = L^a_i + α·L^e_i, where 0≤α≤1. When α=0, this is an indication of a restart of the system with the a priori information, i.e., L^0_i = L^a_i. The number of algorithms and the type of interchanging information can be variable depending on the code involved. The number of steps or iterations in general also depends on the decoder and the type of code. In the following example, the present invention provides a criterion for power optimization based on a transition probability (stochastic) matrix. If the probability of moving from i to j in one time step or iteration at the n-th iteration is Pr(j|i) = P_{i,j}[n], then the stochastic matrix P[n] is given by using P_{i,j}[n] as the i-th row and j-th column element, as follows:

P[n] =
[ P_{1,1}[n]  P_{1,2}[n]  ⋯  P_{1,j}[n]  ⋯  P_{1,S}[n] ]
[ P_{2,1}[n]  P_{2,2}[n]  ⋯  P_{2,j}[n]  ⋯  P_{2,S}[n] ]
[     ⋮           ⋮       ⋱       ⋮      ⋱       ⋮     ]
[ P_{i,1}[n]  P_{i,2}[n]  ⋯  P_{i,j}[n]  ⋯  P_{i,S}[n] ]
[     ⋮           ⋮       ⋱       ⋮      ⋱       ⋮     ]
[ P_{S,1}[n]  P_{S,2}[n]  ⋯  P_{S,j}[n]  ⋯  P_{S,S}[n] ]

where S is the number of available algorithms in the set used for the iterative decoding process.
From this matrix we propose to calculate the average power for the entire system as:

p_av = Σ_{ℓ=1}^{Imax} p_it[ℓ] · (Π_{n=1}^{ℓ} P[n]) · s

where s is a vector that represents the initial state of the stochastic state machine, i.e., s=[1 0 . . . 0]^T, and p_it[ℓ] represents the power consumed in each state as a function of the iteration ℓ. This vector also includes the power of the idle state, the state in which the decoder does nothing because it has already reached the desired target, but the maximum number of allowed iterations (Imax) has not been reached. Each state consumes a specific amount of power per iteration, so from the inner product between the probability of each state at each iteration (obtained by applying P[n] to the state vector s) and the vector with the power per each state, the average power p_av for the whole system can be obtained. Note the term (Π_{n=1}^{ℓ} P[n])·s in the equation of p_av represents the probabilities of the state vector in the intermediate steps or iterations. Thus, p_av is the cost function to optimize given the desired performance and the maximum number of iterations Imax. In an example, the values of P[n] and p_it[ℓ] can be obtained by simulation. Of course, there can be other variations, modifications, and alternatives. According to an example, the present invention provides a method and device for an energy-efficient decoder configuration. The decoder device can include a plurality of decoder modules configured as a fully-connected FSM. Each of the plurality of decoder modules can be associated with a state of the FSM and be associated with a decoding algorithm from a predetermined set of decoding algorithms. Each state of the FSM can have a plurality of transition conditions. The plurality of decoder modules can be configured to receive an input data signal having a plurality of FEC codewords, and to process the plurality of FEC codewords at an initial state of the FSM configured to perform a first decoding iteration according to the associated decoding algorithm of the initial state. The plurality of decoder modules can also be configured to iteratively provide the plurality of FEC codewords to subsequent transition states of the FSM according to the plurality of transition conditions of the initial state and the plurality of transition conditions of each of the subsequent transition states, and to iteratively process the plurality of FEC codewords at each of the subsequent transition states according to the associated decoding algorithm of each of the subsequent transition states. In a specific example, the plurality of transition conditions of each state of the FSM is based on different internal metrics of the decoder module associated with that state of the FSM. These metrics can be based on the number of unsatisfied parity check equations, the number of flipped bits of a decoder module associated with a previous state of the FSM, or the like and combinations thereof. The conditions based on such metrics can be determined by certain threshold values, certain ranges, or combinations thereof. In a specific example, the plurality of transition conditions of the states of the FSM can be configured to maximize the chances of successfully decoding the plurality of FEC codewords under restrictions of a maximum number of iterations (i.e., steps between states) and a maximum power dissipation. Such optimization can use factors such as the time available to decode and the speed of transmission. The maximization can be done with discrete optimization algorithms, such as a branch and bound algorithm, or the like.
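A minimal sketch of this cost function is given below. It follows the written form p_av = Σ p_it[ℓ]·(Π P[n])·s, applying the matrix product directly to the initial state vector; the two-state matrices and power numbers are hypothetical and only illustrate the bookkeeping.

```python
import numpy as np

def average_power(P_list, p_it, s0):
    """p_av = sum over iterations l of p_it[l] . (P[l] ... P[1]) . s0
    P_list : list of (S x S) transition probability matrices, one per iteration
             (columns assumed to sum to 1 so that P @ state gives the next state distribution)
    p_it   : list of length-S power vectors, power of each state at that iteration
             (including the idle state)
    s0     : length-S initial state vector, e.g. [1, 0, ..., 0]"""
    p_av = 0.0
    prod = np.eye(len(s0))
    for P, p in zip(P_list, p_it):
        prod = P @ prod                 # running product of the transition matrices
        state_probs = prod @ s0         # probability of each state at this iteration
        p_av += np.dot(p, state_probs)
    return p_av

# Hypothetical 2-state example (state 0 = low-power algorithm, state 1 = idle), one iteration.
P = [np.array([[0.9, 0.0],
               [0.1, 1.0]])]            # columns sum to 1
p_it = [np.array([1.0, 0.01])]          # power per state at iteration 1
s = np.array([1.0, 0.0])
print(average_power(P, p_it, s))        # 0.901
```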
In a specific example, the predetermined set of decoding algorithms can be an ordered set of algorithms that is ordered by level of complexity and performance. This set can include variations of algorithms previously discussed, such as OMSA, SMSA, soft bit-flipping algorithms, and the like. The variations of these algorithms can be generated by varying the message resolution or by other similar methods. In a specific example, the plurality of decoder modules can be configured to process the plurality of FEC codewords using a transition probability stochastic matrix to minimize a cost function based on a predetermined maximum number of iterations and a predetermined target performance. Further, the plurality of decoders can be configured to iteratively process the plurality of FEC codewords such that while a decoder module associated with a state of the FSM is processing the plurality of FEC codewords, the rest of the plurality of decoder modules associated with the rest of the states of the FSM remain in a sleep-state. According to an example, the present invention provides a decoder device having a plurality of decoder modules coupled in series. The decoder device is configured to receive an input data signal having a plurality of FEC codewords. The plurality of decoder modules can include i decoder modules, where i is an integer greater than one. These decoder modules can be configured with different WERs by using different decoder architectures and different decoding algorithms. For example, a first decoder module can be configured to process all incoming codewords in the input data signal. A second decoder module can then be configured to process all of the codewords that the first decoder is not capable of processing. Then, a third decoder module can be configured to process all of the codewords that the first and second decoder are not capable of processing. The input data signal can be processed in succession by further decoder modules up to an i-th decoder module, which can be configured to process all of the codewords that the previous decoder modules were not capable of correcting. In this case, the WER of each subsequent decoder module can be less than the previous decoder module (i.e., first WER>second WER>third WER> . . . >i-th WER). This example can be considered an extension of the implementation shown inFIG.1A. Alternatively, the WER of each subsequent decoder module can be greater than the previous decoder module (i.e., first WER<second WER<third WER< . . . <i-th WER). This example can be considered an extension of the implementation shown inFIG.1B. In this case, each prior decoder module brings the performance into a range that the subsequent decoder module can operate and improve the performance. In this way, the power of each subsequent decoder module can be reduced compared to processing all of the FEC codewords using the highest performance decoder module. In an example, the decoder device can also include a codeword classifier module, as shown previously inFIG.1C. The classifier module can process the input data signal to determine a plurality of portions within the plurality of FEC codewords. In this case, the classifier module can be configured to direct certain portions of the FEC codewords to different decoder modules. 
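One way to picture the classifier's role is the following sketch, which assigns each received codeword to one of several decoder modules from a per-codeword channel-quality metric (for instance an estimated pre-FEC BER); the metric values, thresholds, and function name are assumptions for illustration only. The prose example that follows describes the same partitioning in words.

```python
def classify_codewords(codeword_metrics, thresholds):
    """Route each received codeword to one of i decoder modules based on a
    per-codeword metric; thresholds is an ascending list of metric boundaries."""
    portions = [[] for _ in range(len(thresholds) + 1)]
    for idx, metric in enumerate(codeword_metrics):
        module = sum(metric > t for t in thresholds)   # worse metric -> stronger decoder
        portions[module].append(idx)
    return portions

# Hypothetical pre-FEC BER estimates for six codewords and two thresholds,
# splitting the traffic across three decoder modules of increasing capability.
print(classify_codewords([1e-4, 3e-3, 2e-2, 5e-4, 9e-3, 4e-2], [1e-3, 1e-2]))
# [[0, 3], [1, 4], [2, 5]]
```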
For example, the first decoder module can be configured to process a first portion of the plurality of FEC codewords, the second decoder module can be configured to process a second portion of the plurality of FEC codewords, and the third decoder module can be configured to process a third portion of the plurality of FEC codewords. Each subsequent decoder module can be configured to process a subsequent portion of the plurality of FEC codewords, up to the i-th decoder module, which would be configured to process the i-th portion of the plurality of FEC codewords. In various examples, the classifier module can be configured to implement the FSM or the probability stochastic matrix discussed previously or other optimization algorithms. Of course, there can be variations, modifications, and alternatives. FIG.9is a simplified block diagram of a decoder device according to an example of the present invention. As shown, decoder900can include a variable-node processing unit (VNPU)910and a check-node processing unit (CNPU)920. The VNPU910and/or the CNPU920may each comprise a plurality of parallel processing units (e.g., q processing units). The VNPU910can be configured to compute the variable-to-check (i.e., bit-to-check) message, as discussed previously forFIG.5. The CNPU920can be configured to compute the check-to-variable (i.e., check-to-bit) message, as discussed previously forFIG.6. This configuration allows for an efficient parallel decoding process. More specific details of an example CNPU and decoder architecture are provided in reference toFIGS.10and11, respectively. FIG.10is a simplified block diagram of a CNPU for processing two codewords at the same time according to an example of the present invention. As shown, CNPU1000includes a minimum computation unit1010, a sign product computation unit1020, a first message memory1030, a second message memory1040, an output computation unit1050, and a sign first-in first-out (FIFO) unit1060. The minimum computation unit1010and the sign product computation unit1020are both coupled to the first message memory1030. The first message memory1030is coupled to the second message memory1040, which is coupled to the output computation unit1050. The sign FIFO unit1060is also coupled to the output computation unit1050. These units are configured together to compute the check-to-variable message, as discussed forFIG.6. In a specific example, each of the minimum computation unit1010, the sign product computation unit1020, and the sign FIFO unit1060takes the variable-to-check message L^e_{v_k→c_j} from the VNPU as an input. The minimum computation unit1010computes the minimum value min_{v_k ∈ V(c_j)\v_i} |L^e_{v_k→c_j}| and the sign product computation unit1020computes the sign value Π_{v_k ∈ V(c_j)\v_i} sign(L^e_{v_k→c_j}). The first and second message memories1030,1040, which are pipelined, store the results of these equations to be used by the output computation unit1050. The sign FIFO unit1060stores the signs of the input variable-to-check messages, which the output computation unit1050combines with the values stored in the message memories1030,1040to compute L^e_{c_j→v_i}. With this configuration, the minimum computation unit1010and the sign product computation unit1020can operate on one codeword while the output computation unit1050operates on another codeword because of the two message memories1030,1040. Those of ordinary skill in the art will recognize other variations, modifications, and alternatives.
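The per-check computation carried out by the minimum and sign product units can be sketched in software as a single pass that tracks the smallest and second-smallest input magnitudes and the overall sign product, then excludes each input's own contribution when forming its output message. This is only an illustrative model of the arithmetic, not the pipelined, double-buffered hardware ofFIG.10.

```python
def check_node_messages(v2c_msgs, alpha=0.75):
    """Compute the scaled min-sum check-to-variable messages for one check node
    in one pass: track min, second min, position of the min, and the sign product,
    then exclude each input when producing its own output."""
    min1 = min2 = float("inf")
    min_pos = -1
    sign_prod = 1
    for pos, m in enumerate(v2c_msgs):
        sign_prod *= 1 if m >= 0 else -1
        mag = abs(m)
        if mag < min1:
            min2, min1, min_pos = min1, mag, pos
        elif mag < min2:
            min2 = mag
    out = []
    for pos, m in enumerate(v2c_msgs):
        own_sign = 1 if m >= 0 else -1
        mag = min2 if pos == min_pos else min1          # exclude this input's magnitude
        out.append(sign_prod * own_sign * alpha * mag)  # exclude this input's sign
    return out

print(check_node_messages([+2.0, -0.5, +1.5, -3.0]))
# [0.375, -1.125, 0.375, -0.375]
```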
FIG.11is a simplified block diagram of a decoder device according to an example of the present invention. As shown, device1100includes multiplexers1110and1170, permutation blocks1120and1140, a plurality of CNPUs1130, a plurality of VNPUs1150, a FIFO unit1180, and a control unit1190. The first multiplexer1110is coupled to the first permutation block1120, which is coupled to the plurality of CNPUs1130. The CNPUs1130are coupled to the second (inverse) permutation block1140, which is coupled to the plurality of VNPUs1150. The plurality of VNPUs1150are coupled in a first feedback loop1162to the first multiplexer1110. The second multiplexer1170is coupled to the FIFO unit1180, which is coupled to the plurality of VNPUs1150and coupled in second feedback loop1164back to the second multiplexer1170. Both multiplexers1110and1170receive the prior LLR values as inputs, and through the computations directed by the control unit1190this decoding process can iteratively process multiple codewords in parallel. In an example, the control unit1190generates control signals used by the other blocks of decoder1100. In particular, the control unit1190controls the select lines of the multiplexers1110,1170and the permutation blocks1120,1140. The first multiplexer1110and the first permutation block1120are configured to select the appropriate inputs to the CNPUs1130, while the second (inverse) permutation block1140is configured to receive the outputs of the CNPUs1130and select the appropriate inputs to the VNPUs1150. Further, the control unit1190also turns on and off post-processing algorithms implemented by the CNPUs1130or the VNPUs1150and the computations and memories in the CNPUs1130(as described forFIG.10). The second multiplexer1170provides LLR values to the FIFO unit1180, which outputs these values for computations by the VNPUs1150that also results in the computations of a posteriori LLR values, as discussed forFIG.7. The feedback paths1162,1164provide intermediate values to the beginning of this pipelined configuration to perform additional iterations of this iterative decoding process. Of course, there can be variations, modifications, and alternatives. While the above is a full description of the specific embodiments, various modifications, alternative constructions and equivalents may be used. Therefore, the above description and illustrations should not be taken as limiting the scope of the present invention which is defined by the appended claims. | 33,638 |
11863205 | DETAILED DESCRIPTION FIG.1is a block diagram of an example of a sigma-delta analog-to-digital converter (ADC) circuit102. The sigma-delta ADC circuit102includes a flash ADC circuit104, a feedback digital-to-analog converter (DAC) circuit106, and a loop filter circuit108. The sigma-delta ADC102converts an input signal (U(s)) into a quantized output signal (V(s)) that is a continuous stream of binary numbers output at a rate determined by the sampling clock frequency. The DAC circuit106is driven by the serial output data stream to generate a feedback signal. The output of the DAC circuit106is subtracted from the input signal using a summing element110. The loop filter circuit108includes one or more integrators. FIG.2is a circuit diagram of a loop filter circuit208. The loop filter circuit208includes at least one integrator stage. The integrator stage includes an operational amplifier212and a capacitor C connected from the output of the amplifier212to an input of the amplifier212. Returning toFIG.1, the loop filter circuit108integrates the output of summing element110, and the output of the loop filter circuit108is applied to the comparators of the flash ADC circuit104. The loop filter circuit108reduces quantization noise due to the quantization by the flash ADC circuit104. The sigma-delta ADC circuit102may be a discrete time sigma-delta modulator (DTSDM) or a continuous time sigma-delta modulator (CTSDM). In a DTSDM, the input signal is sampled before the summing element110. In a CTSDM, the input signal is sampled after the loop filter circuit108. For many applications (e.g., an audio codec), the input signal for a sigma-delta modulator has a small amplitude with occasional changes to a large amplitude. The amplifier or amplifiers of the loop filter circuit108are biased with a higher bias current in anticipation of large amplitudes in the input signal to avoid distortion when the large amplitudes occur. This approach can be wasteful for a power-constrained system. An improvement would be to keep the bias current of the amplifiers low when the amplitude of the input signal is small and change to using a high bias current when the amplitude of the input signal occasionally becomes large. This adaptive biasing based on the amplitude of the input signal could reduce the static power consumed by sigma-delta modulators. FIG.3is an illustration of waveforms showing the adaptive biasing. The top waveform316is the input signal (U(s)). When the amplitude of the input signal is small, the requirements for slew rate and total harmonic distortion (THD) of the amplifiers can be relaxed. The DC bias current or currents of the amplifiers can be reduced to conserve power. When the input signal has a large amplitude, the DC bias current should be increased to high power mode to improve large signal performance. In the example ofFIG.3, the amplitude of the input signal becomes large enough at318to trigger a high power mode in the amplifier biasing. The lower waveform320shows activation of the high power mode in response to the increase in amplitude. In the example ofFIG.3, the high power mode is activated at322when the amplitude of the input signal exceeds the thresholds +Vth and −Vth. The speed of the transition from low power to high power should be quick to avoid low performance degradation. At324, the input signal amplitude decreases to less than the +Vth and −Vth thresholds. At326, the circuit transitions from high power mode to low power mode. 
The change from high power mode to low power mode does not happen right away. Large amplitudes in the input signal may occur in bursts. The transition to the low power mode is slow so that the high power mode is still active if another large signal amplitude follows shortly after the preceding one. An approach to adaptive biasing should provide a transition between low power mode and high power mode with a negligible impact on the stability of the sigma-delta modulators. The bias of the amplifiers should be tunable to a high power mode bias and to a low power mode bias. The approach should have a signal level detector with a fast transition to high power mode and delay in the transition back to low power mode. FIG.4is a circuit diagram of an example of a flash ADC circuit104. The flash ADC circuit may be used in the sigma-delta ADC circuit102ofFIG.1. The flash ADC circuit104receives an input signal (U) and compares the sampled input voltage to weighted reference voltages. In the example ofFIG.4, the weighted reference voltages are produced using a resistive divider circuit430. The output of a comparator is an active level (e.g., “high” or a “1”) when the input signal is greater than the weighted reference voltage. Encoding logic circuitry432generates a B-bit digital value at the output (V), with B being from 3-5 bits. For a B-bit flash ADC circuit, there are 2B−1 comparators434, and the outputs of the 2B−1 comparators434are encoded into the B-bit value. The output signal (V) of the flash ADC circuit104can be expressed as V=U+NTF*e, where U is the input signal, NTF is a noise transfer function and e is quantization noise.FIG.5is a graph of voltage versus samples (n) showing the waveforms for the input signal (U) and the output signal (V). The smooth waveform is the input signal (U). The output signal (V) toggles around the input signal. If the input signal (U) is large, the quantization noise is small due to the multiple levels of the flash ADC circuit104, and this means that the output of the flash ADC104is a good estimation of the input signal. Thus, the flash ADC circuit104can be used as a signal level detector to determine when to change the bias current for the loop filter circuit108. For example, the outputs (T<0>, T<1> . . . T<2B−1>) of the comparators434of the flash ADC circuit104can be used to gauge the level of the input signal. As an overview,FIG.6is a flow diagram of an example of a method600of controlling operation of a sigma-delta ADC, such as the sigma-delta ADC102ofFIG.1. At605, an input signal is received at an input of the sigma-delta ADC102. The input signal may be sampled before the summing element110or after the loop filter circuit108. At610, a digital value is determined for the input signal using the flash ADC circuit104. At615, the biasing of at least one amplifier of the loop filter circuit108is adjusted by the bias control circuit114according to outputs of the comparators of the flash ADC circuit104. FIG.7is a circuit schematic of portions of an example of the bias control circuit114ofFIG.1. The bias control circuit114includes a signal level detector circuit740and a bias circuit742. The signal level detector circuit740monitors the outputs of the comparators of the flash ADC circuit104to determine the level of the input signal. Because the comparators434of the flash ADC circuit104compare the sampled input signal to weighted reference voltages, the outputs of the comparators434form a weighted output code.
For example, if the flash ADC circuit104is a 4-bit flash ADC (i.e., B=4), the weighted code is the output state of the 16 comparators T<15>, T<14> . . . T<0>. The weighted code includes a “1” in bit positions where the input signal is greater than the weighted reference voltage. The signal level detector circuit740enables a change to the bias current of one or more loop filter amplifiers according to the weighted output code. In the example ofFIG.7, the bias control circuit114includes a signal level detector circuit740. In the example ofFIG.7, the signal level detector circuit740monitors the outputs of M of the most significant bits (MSBs) of the weighted code using OR gate744and monitors the M least significant bits (LSBs) of the weighted code using NAND gate746, where M is an integer greater than one. In the example ofFIG.7, M=3 and comparator outputs T<15>, T<14>, T<13> and T<2>, T<1>, T<0> are monitored. Based on the logic of the signal level detector circuit740, the signal level detector circuit740enables high power mode when any of the M MSBs is a “1” or any of the M LSBs is a “0” by setting latch circuit748. The bias circuit742includes a static bias circuit stage and a dynamic bias circuit stage. In both the low power mode and the high power mode, the bias circuit742provides a static bias current to the loop filter amplifier using the static current source of the static bias circuit stage. When high power mode is enabled, the dynamic bias circuit stage provides a dynamic bias current to the loop filter amplifier using the dynamic current source. The dynamic bias current is added to the static bias current to increase the biasing of the amplifier in the high power mode. Only one dynamic current source is shown for simplicity of the circuit schematic. Other examples can include a dynamic bias circuit stage having multiple dynamic current sources that can be enabled individually or in combination to provide a dynamic bias current that is selectable according to different levels of the input signal to implement multiple high power modes. Returning toFIG.3, it can be seen at322that the activation time to the higher bias current is fast and the high power mode is enabled as soon as the signal level detector circuit740detects that the input signal exceeds the detection threshold. It can also be seen at324that the activation time to the lower bias current is slower and the low power mode is enabled several sample times after the level of the input signal decreases below the threshold. In the example ofFIG.7, based on the logic of the signal level detector circuit740, the signal level detector circuit740deactivates the high power mode enable when the M MSBs are “0s” and the M LSBs are “1s”. The deactivated signal propagates through shift register750. When the flip-flop circuits of the shift register750are all low, the latch circuit748is reset and the bias current of the loop filter amplifier is reduced to the static bias current. The deactivated signal propagates through the shift register750according to the sample clock. The flip-flop circuits will all be low to deactivate the high power mode if the level of the input signal remains below the detection threshold for a time long enough for the low level to propagate to all the flip-flop circuits. If the input signal level becomes greater than the detection threshold before the low level can propagate through the shift register, the high power mode remains active.
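A behavioral sketch of this detector, latch, and shift-register arrangement is given below; the shift-register depth and the default M value are assumed parameters, not values from the disclosure.

```python
from collections import deque

class SignalLevelDetector:
    """Abstract behavioral model of the FIG.7 logic: M MSB/LSB taps, a fast-set
    latch, and a shift-register delay before returning to low power mode."""
    def __init__(self, m_taps=3, delay_stages=8):
        self.m = m_taps
        self.pipe = deque([False] * delay_stages, maxlen=delay_stages)
        self.high_power = False

    def step(self, thermometer_code):
        """thermometer_code: comparator outputs T<2^B-1> ... T<0>, MSB first."""
        msbs = thermometer_code[:self.m]
        lsbs = thermometer_code[-self.m:]
        detect = any(msbs) or not all(lsbs)   # large signal: any MSB high or any LSB low
        if detect:
            self.high_power = True            # fast set, like latch circuit 748
        self.pipe.append(detect)
        if not any(self.pipe):                # low level has reached every stage
            self.high_power = False           # slow reset, like shift register 750
        return self.high_power

det = SignalLevelDetector()
code_small = [0, 0, 0] + [1] * 10 + [1, 1, 1]   # mid-scale input: MSBs low, LSBs high
code_large = [1, 1, 0] + [1] * 10 + [1, 1, 1]   # large input: an MSB fires
print(det.step(code_large), det.step(code_small))  # True True (high power held by the delay)
```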
The input signal needs to remain below the threshold for the number of sample clocks determined by the shift register before the bias control circuit114returns to the low power mode and the lower bias current. FIG.8is a simulation result of the bias control circuit114logic circuitry ofFIG.7. The top waveform860is the output of the flash ADC circuit104ofFIG.1orFIG.4. Waveform864is the enable for low power mode and is high for enabling the low power mode. The amplitude of the waveform860is decreasing. Because of the quantization noise in the output of the flash ADC circuit104, the enable signal toggles when the output of the flash ADC circuit104is near a detection threshold. Waveform866is the output of the latch circuit748inFIG.7and is high for high power mode. The waveform866shows that the bias circuit742remains in high power mode despite the toggling of the waveform864. Waveform862is the output of the NOR gate754inFIG.7and is the low power mode enable signal. Low power mode is enabled when the signal is high. Waveform862shows the hysteresis time before the bias control circuit114transitions to the low power mode. Waveform866shows that the bias control circuit114quickly transitions to the high power mode when the input signal becomes greater than the detection threshold as shown in waveform860. This hysteresis time in the transition to low power mode improves the stability of the sigma-delta ADC102. FIG.9is a circuit diagram of an example of a loop filter amplifier912. The amplifier has a Monticelli amplifier topology with a differential input and differential output, but the loop filter amplifier912may have other topologies. The loop filter amplifier912has an adaptive bias circuit network and includes a static bias circuit stage and a dynamic bias circuit stage. The bias voltage VB3stays constant, and the static bias circuit stage provides a static bias current that stays constant. Bias voltage VB1changes according to the high power mode signal from the signal level detector circuit740, and the dynamic bias circuit stage provides a dynamic bias current according to the high power mode signal. The dynamic bias circuit stage includes a slew rate circuit956. The slew rate circuit956includes a resistor-capacitor (RC) filter to slow the slew rate of the dynamic bias current. Slowing the slew rate can reduce glitches in the output common mode of the loop filter amplifier912. FIG.10is a simulation result of the loop filter amplifier912ofFIG.9. The simulation shows the advantages of using a static bias circuit stage and a dynamic bias circuit stage instead of using only a dynamic bias circuit stage. Waveform1070shows the power mode of the biasing of the amplifier with high representing high power mode, and a transition from high to low representing a transition from high power mode to low power mode. Waveform1072is the output common mode of the loop filter amplifier912during transitions between the high power mode and low power mode with only a dynamic bias circuit stage and no static bias circuit stage. Waveform1074is the output common mode of the loop filter amplifier912during transitions between the high power mode and low power mode with both a dynamic bias circuit stage and a static bias circuit stage. A comparison of waveform1072and waveform1074shows that glitches in the output common mode are reduced by using both the static bias circuit stage and a dynamic bias circuit stage.
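As a simple illustration of what the slew rate circuit956does, a first-order RC response applied to the dynamic bias current step can be sketched as follows; the current, time-constant, and step-time values are assumptions chosen only to show the shape of the ramp.

```python
import math

def rc_filtered_step(i_dynamic, tau, t_step, duration, n=10):
    """First-order RC response of the dynamic bias current to a high-power-mode
    enable step at t_step; tau is an assumed RC time constant."""
    samples = []
    for k in range(n + 1):
        t = duration * k / n
        i = 0.0 if t < t_step else i_dynamic * (1 - math.exp(-(t - t_step) / tau))
        samples.append((t, i))
    return samples

# Hypothetical values: 4 mA dynamic current, 2 us RC constant, step at t = 1 us.
for t, i in rc_filtered_step(i_dynamic=4e-3, tau=2e-6, t_step=1e-6, duration=10e-6):
    print(f"t = {t*1e6:5.1f} us   i_dyn = {i*1e3:5.3f} mA")
```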
The several devices and methods described herein improve the dynamic range of a sigma-delta ADC and reduce distortion while reducing the amount of static circuit power used to achieve the improved dynamic range and reduced distortion. Using the techniques described herein, the transition between low power mode and high power mode has a negligible impact on the stability of the sigma-delta modulators. Additional Description and Aspects A first Aspect (Aspect 1) includes subject matter (such as an electronic circuit) comprising a sigma-delta analog-to-digital converter (ADC) circuit configured to convert an analog input signal to a digital value. The sigma-delta ADC circuit including a loop filter circuit including at least one loop filter amplifier, a flash ADC circuit including multiple comparators, and a bias control circuit configured to change a biasing of the at least one loop filter amplifier according to outputs of the multiple comparators of the flash ADC circuit. In Aspect 2, the subject matter of Aspect one optionally includes multiple comparators compare an input voltage to weighted reference voltages and the outputs of the comparators form a weighted output code, and a signal level detector circuit configured to enable a change to the bias current of the at least one loop filter amplifier according to the weighted output code. In Aspect 3, the subject matter of Aspect 2, optionally includes a bias control circuit configured to change the biasing of the at least one loop filter amplifier from a lower bias current level to a higher bias current level using a high bias activation time, and change the biasing of the at least one loop filter amplifier from the higher bias current level to the lower bias current level using a low bias activation time, wherein the low bias activation is longer than the high bias activation time. In Aspect 4, the subject matter of one or both of Aspects 2 and 3 optionally includes include at least two times M (2*M) comparators, and the weighted output code includes M most significant bits (MSBs) and M least significant bits (LSBs), where M is an integer greater than one; and a bias control circuit to increase the bias current of the at least one loop filter amplifier when any one of the M MSBs is an active level or any one of the M LSBs is an inactive level. In Aspect 5, the subject matter of Aspect 4 optionally includes bias control circuit is configured to decrease the bias current of the at least one loop filter amplifier when the M MSBs are the inactive level and the M LSBs are the active level. In Aspect 6, the subject matter of one or any combination of Aspects 1-5 optionally includes at least one loop filter amplifier includes a bias circuit. The bias circuit includes a static bias circuit stage that provides a static bias current for the at least one loop filter amplifier, and a dynamic bias circuit stage that provides a dynamic bias current for the at least one loop filter amplifier, wherein the bias control circuit is configured to increase or decrease the dynamic bias current according to the output of the multiple comparators of the flash ADC circuit. In Aspect 7, the subject matter of Aspect 6 optionally includes a dynamic bias circuit stage that includes a slew rate circuit configured to slow the slew rate of the dynamic bias current. 
In Aspect 8, the subject matter of one or any combination of Aspects 1-7 optionally includes at least one loop filter amplifier that includes a dynamic bias circuit stage configured to provide a selectable bias current from among multiple bias currents, and a bias control circuit is configured to select the bias current according to the outputs of the multiple comparators of the flash ADC circuit. In Aspect 9, the subject matter of one or any combination of Aspects 1-8 optionally includes a sigma-delta ADC circuit is a continuous time sigma-delta modulator. In Aspect 10, the subject matter of one or any combination of Aspects 1-8 optionally includes a sigma-delta ADC circuit is a discrete time sigma-delta modulator. Aspect 11 includes subject matter (such as a method of controlling operation of a sigma-delta analog-to-digital converter (ADC)), or can optionally be combined with one or any combination of Aspects 1-10 to include such subject matter, comprising receiving an input signal at an input of the sigma-delta ADC, wherein the delta-sigma ADC includes a flash ADC circuit and a loop filter circuit that includes at least one loop filter amplifier, determining a digital value for the input signal using the flash ADC circuit, and adjusting biasing of the at least one loop filter amplifier according to outputs of multiple comparators of the flash ADC circuit. In Aspect 12, the subject matter of Aspect 11 optionally includes comparing the input signal to weighted reference voltages using the flash ADC circuit, forming a weighted output code according to the comparison, and enabling a change to the bias current of the at least one loop filter amplifier according to the weighted output code. In Aspect 13, the subject matter of Aspect 12 optionally includes changing the biasing of the at least one loop filter amplifier from a lower bias current level to a higher bias current level using a high bias activation time, and changing the biasing of the at least one loop filter amplifier from the higher bias current level to the lower bias current level using a low bias activation time, wherein the low bias activation is longer than the high bias activation time. In Aspect 14, the subject matter of one or any combination of Aspect 11-13 optionally includes maintaining a static bias current for the at least one loop filter amplifier regardless of the output of the multiple comparators of the flash ADC circuit, and increasing or decreasing a dynamic bias current for the at least one loop filter amplifier according to the output of the multiple comparators of the flash ADC circuit. In Aspect 15, the subject matter of Aspect 14 optionally includes slowing a slew rate of the dynamic bias current. In Aspect 16, the subject matter of one or both of Aspects 14 and 15 optionally includes selecting the bias current according to the outputs of the multiple comparators of the flash ADC circuit. Aspect 17 includes subject matter (such as an integrated circuit) or can optionally be combined with one or any combination of Aspects 1-16 to include such subject matter, comprising a sigma-delta analog-to-digital converter (ADC) circuit configured to convert an analog input signal to a digital value. The sigma-delta ADC includes a loop filter circuit including at least one loop filter amplifier, a flash ADC circuit having an output, a bias circuit and a bias control circuit. 
The bias circuit includes a static bias circuit stage that provides a static bias current for the at least one loop filter amplifier, and a dynamic bias circuit stage that provides a dynamic bias current for the at least one loop filter amplifier. The bias control circuit is configured to change the dynamic bias current according to the flash ADC circuit output. In Aspect 18, the subject matter of Aspect 17 optionally includes a bias control circuit is configured to change the biasing of the at least one loop filter amplifier from a lower bias current level to a higher bias current level using a high bias activation time, and change the biasing of the at least one loop filter amplifier from the higher bias current level to the lower bias current level using a low bias activation time, wherein the low bias activation is longer than the high bias activation time. In Aspect 19, the subject matter of one or any combination of Aspects 17 and 18 optionally includes a slew rate circuit configured to slow the slew rate of the dynamic bias current. In Aspect 20, the subject matter of one or any combination of Aspects 17-19 optionally includes at least one loop amplifier that includes a differential input, differential output amplifier circuit. These non-limiting Aspects can be combined in any permutation or combination. The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as “examples.” All publications, patents, and patent documents referred to in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference. In the event of inconsistent usages between this document and those documents so incorporated by reference, the usage in the incorporated reference(s) should be considered supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls. In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim are still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects. Method examples described herein can be machine or computer-implemented at least in part. The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to comply with 37 C.F.R. 
§ 1.72(b), to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. | 25,046 |
11863206 | DETAILED DESCRIPTION While the present teachings are described in conjunction with various embodiments and examples, it is not intended that the present teachings be limited to such embodiments. On the contrary, the present teachings encompass various alternatives and equivalents, as will be appreciated by those of skill in the art. It should be noted that, for many practical applications of integrated photonics, especially for optical phased arrays, a large number of phase shifters must be densely packed on the chip. When multiple phase shifters are used, the phase shifters must be spaced far enough apart to prevent thermal crosstalk, where one heater will change the phase of light travelling through neighboring phase shifters because the lateral spread of heat warms those waveguides as well. If the phase shifters are arrayed in a straightforward one-dimensional array (array unit vector perpendicular to the light propagation direction), they must be separated by over about 15 μm to ensure less than 10% crosstalk. Conventional configurations are also very awkwardly shaped for large arrays, as a 1024-phase-shifter array would occupy a rectangle of approximately 500 μm×16 mm. With reference toFIG.1, an optical phased array1may comprise a light source structure5, and a plurality of phase shifters10M,N, which may be arranged into a plurality of (M) columns and a plurality of (N) rows, i.e. forming a 2-dimensional (M×N) array of phase shifters10M,N. The light source structure5may comprise: 1) a single light source configured to emit a single beam and a tree of optical waveguides, which splits the single beam into a plurality of sub-beams, each sub-beam transmitted by one of a plurality of routing waveguides8optically coupled to the light source structure5; 2) a plurality of light sources, each light source optically coupled to one of the plurality of routing waveguides8; or 3) a plurality of light sources, each light source optically coupled to a plurality of routing waveguides8via a waveguide tree. Adjacent columns of phase shifters10M,Nmay be in a staggered configuration, e.g. adjacent phase shifters10M,Nin alternating columns may be vertically offset by a predetermined gap g, e.g. by at least a width of one of the phase shifters10M,N, so that adjacent phase shifters10M,Nare not directly adjacent each other, and so that input waveguides14extending into each phase shifter10M,Nand output waveguides16extending out of each phase shifter10M,Nmay also be the predetermined gap g apart, and therefore do not physically overlap or cause any optical crosstalk therebetween. Routing waveguides20extend between the other phase shifters10M,N. The plurality of columns C1-Mof phase shifters10M,Nand the plurality of rows R1-Nof phase shifters10M,Nare in a staggered configuration with odd numbered columns, e.g. C1, C3and C5, of phase shifters10M,Noffset, by at least a length of one of the phase shifters10M,N, from even numbered columns, e.g. C2, C4and C6, of phase shifters10M,N, with escape waveguides15from the even numbered columns of phase shifters10M,Nextending between the phase shifters10M,Nin the odd numbered columns of phase shifters10M,N.
an upper cladding layer26aand a lower cladding layer26b, which may be comprised of a dielectric material, such as silicon dioxide. A heater22may be positioned on the upper cladding layer26a. The heater22may be any suitable device or material configured to generate heat, e.g. titanium nitride, nichrome, heavily doped silicon, silicide, titanium, and tungsten. In some embodiments, the heater22comprises a resistor, such as a metal or semiconducting wire that heats up when current is passed therethrough. There may be an optical waveguide30formed in a device or a waveguide layer positioned between the upper and lower cladding layers26aand26b, directly below the heater22. As depicted inFIG.2A, the optical waveguide30extends parallel to the substrate24, with the orientation of the optical waveguide30parallel to the heater22, shown in the top view ofFIG.2B. Accordingly, heat from the heater22spreads downward through the upper cladding layer26aand into the optical waveguide30. Heat also spreads laterally, both in the upper and lower cladding layers26aand26b, and the underlying substrate24. The distribution of heat at the optical waveguide layer falls off over several microns as the distance from the heater22and the optical waveguide30increases. The heater22may be positioned on top of or within the upper cladding26A. The heater22may be comprised of, for example, a metal, a metal alloy, e.g. nichrome, a conductive metal nitride, or a silicide. Alternatively, the heaters22may comprise doping in and/or around the optical waveguide30itself, whereby passing current through the optical waveguide layer and/or the optical waveguide30causes heating of the optical waveguide30. Other types of phase shifters10M,Nare within the scope of the invention as hereinafter described, and including those disclosed in U.S. patent application Ser. No. 16/826,051 filed Mar. 20, 2020 in the name of the Applicant. To independently drive the plurality of columns C1-Mof phase shifters10M,Nand the plurality of rows R1-Nof phase shifters10M,Nsimultaneously, the conventional way is to apply M×N independent DAC channels. However, this inevitably leads to high driver channel count as the number of phase shifters scales, which complicates the wiring layout and controls, and quickly reaches the practical limits of interface pin count. With reference toFIG.3, by taking advantage of a relatively slow thermal time constant of the heaters22and by configuring a fast switching row-column driving scheme, all the phase shifters10M,Nmay be operated simultaneously with a reduced number of digital to analog converter drivers DAC1-DACNand interface pin count. As shown inFIG.3, each heater22may comprise a diode33connected to a thermal resistor34in series forming a diode heater22, and a first contact, e.g. the anodes, of the diode heaters22of each row R1-RNof phase shifters10M,Nmay all be electrically connected to a respective common DAC channel (DAC1-DACN) via suitable electrical wire traces or tracks351to35N extending down each row, and a second contact, e.g. the cathode, of the diode heaters22of each column C1-CMof phase shifters10M,Nmay all be electrically connected to a common ground-bus361to36Mcomprising suitable electrical wire traces or tracks extending down each column C1-CMof phase shifters10M,N. The diodes23may comprise silicon PN diodes, silicon PIN diodes, Schottky diodes, germanium diodes or any other suitable diode. The forward voltage and reverse breakdown voltage of the diodes23affect system performance and efficiency. 
The diodes23may be configured to include an absolute reverse breakdown voltage larger than the maximum DAC drive voltage, whereby the diodes23are configured to block reverse current flow through the heaters22to other heaters22in other columns C1-CMof phase shifters10M,Nwhere it is not intended as part of the drive algorithm. Each diode23may be configured such that an anode thereof is connected towards the DAC1-DACNand a cathode is connected towards a respective one of the ground-buses361to36M(assuming positive DAC voltages). The ground-buses361to36Mof different columns of phase shifters10M,Nmay be connected to an analog multiplexer38, for example each of the ground-buses361to36Mis connected to a respective switch (SW1-SWM), which enables a controller processor40, executing instructions saved on non-transitory memory, to select only one column C1-CMof phase shifters10M,Nto connect to ground45at the same time, thereby connecting the circuit. The digital to analog converters DAC1-DACN, and the switches SW1-SWMmay not be located on the same photonic circuit chip as the phase shifters10M,Nand may be connected to the photonic circuit via wire bonds or bump bonds. For a photonics process that is capable of producing the digital to analog converters DAC1-DACN, and/or the switches SW1-SWMon the same die as the photonic circuit including the phase shifters10M,N, some or all of the digital to analog converters DAC1-DACN, and the switches SW1-SWMmay be fabricated and positioned on the same die. The switches SW1-SWMin the analog multiplexer38may comprise metal oxide semiconductor field-effect transistors (MOSFETs), bipolar junction transistors (BJTs), junction field effect transistors (JFETs), or other transistors configured to form a low-resistance path to the common ground45. Particularly, it is preferable to have a resistance in each switch SW1-SWMmuch less than, e.g. typically less than one tenth of, the resistance in the thermal phase shifters10M,Nto minimize cross talk and maximize accuracy of the set phase shift. N-type field effect transistors are often preferred because they provide a low-resistance path to ground, i.e. a low on-resistance, and have very low built-in voltage across the switch SW1-SWM, i.e. the drain to source voltage, thereby allowing the ground-buses361-36Mto remain at the lowest possible voltage relative to the DAC drive voltages. The built-in voltage of each switch SW1-SWMand each ground bus361-36Mis typically less than 1 Volt. It may be beneficial to configure the switches SW1-SWMusing more than one transistor per switch SW1-SWM, for example a transmission gate, or adding additional control transistors to decode a signal from the controller processor40or ensure that only one switch SW1-SWMis closed at a time, i.e. break before make switching. As shown inFIGS.4A and4B, when one of the columns, e.g. C1, of phase shifters10M,Nis selected, e.g. SW1is connected to ground45, by the controller processor40comprised of suitable hardware and software, executing instructions stored on non-transitory memory, the digital to analog converters DAC1-DACNon each row R1-RNof phase shifters, e.g.101,1-101,Nwill feed current into the diode heaters22in that column and row combination (M, N). Current does not flow in other columns of phase shifters10M,N, e.g. columns C2-CM, because the switches, e.g. switches SW2-SWM, are opened by the controller processor40and there is no path to ground45.
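A simple behavioral sketch of this column-selected, diode-gated addressing is given below; the diode turn-on voltage, the row drive voltages, and the function name are assumptions used only to show which heaters conduct when a single column switch is closed.

```python
def conducting_heaters(dac_row_voltages, selected_column, v_turn_on=0.7):
    """Return the (row, column) diode heaters that carry current when one
    column ground-bus switch is closed.  Heaters in every other column see
    either a floating bus or a reverse-biased diode, so they are omitted by
    construction; v_turn_on is an assumed diode drop, not a disclosed value."""
    return [(row, selected_column)
            for row, v in enumerate(dac_row_voltages, start=1)
            if v > v_turn_on]

# Hypothetical 4-row example with column 2 selected.
print(conducting_heaters([3.3, 0.0, 1.2, 2.5], selected_column=2))
# [(1, 2), (3, 2), (4, 2)]
```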
Similarly, current flow through other unintended diode heaters22may be prevented by the reverse bias blocking behavior of the diodes. To feed current to the next column of phase shifters10M,N, the switch, e.g. switch SW2, on the next column C2of phase shifters10M,Nis selected by the controller processor40and the switch, e.g. SW1, on the previous column of phase shifters10M,N, e.g. column C1, is deselected, and the digital to analog converters DAC1-DACNon each row R1-RNof phase shifters, e.g. phase shifters102,1-102,N, are updated to a new set of DAC values that corresponds to the new column of phase shifters10M,N, e.g. column C2, and row, e.g. rows R1-RN, combination. After all columns C1-CMof phase shifters10M,Nare sequentially switched on, while the other columns of phase shifters10M,Nare switched off, the controller processor40cycles back and switches to the first column, e.g. C1, of phase shifters101,Nand repeats the process indefinitely. With reference toFIG.4A, one full cycle is called a pulse cycle. FIG.4Aalso marks the RNCMthat corresponds to when current is fed into that phase shifter10M,Nin the timing diagram. For example, the noted R2C1is when current is injected into the heater22for row 2 column 1 of phase shifter101,2; therefore, this heater22only sees one pulse of current injection for every pulse cycle. As a result, the total pulse cycle time Tpulsefor the array of phase shifters101,1to10M,Nis the number of columns M×the switch dwell time tdwellfor each switch SW1-SWM, or Tpulse=tdwell*M. Note thatFIG.4Aonly shows an exemplary embodiment in which the digital to analog converters DAC1-DACNare supplying constant voltages (or currents) within the switch dwell time tdwell. In practice, the digital to analog converters DAC1-DACNmay supply a time-varying voltage (or current) waveform within the switch dwell time tdwellto each heater22. The thermal time constant of the heater22effectively averages or filters out fast changes in the output of the digital to analog converters DAC1-DACN. In one arrangement, a time-varying voltage or current may be pulse width modulated, whereby each digital to analog converter DAC1-DACNturns on to a high voltage or current for some amount of time and then turns off to a low voltage or current, and the total energy delivered to the heater22is controlled by the duration for which each digital to analog converter DAC1-DACNis turned on. To ensure a constant temperature, the pulse cycle time Tpulsemay be much shorter than the thermal time constant of the heater22; therefore, the heater temperature will rise to a constant value, with very slight ripples. The number of columns M will limit the switching speed. To minimize the ripple, the pulse cycle time Tpulseis ideally less than or equal to about 1/100th of the thermal time constant of the heater22. For example, if the thermal time constant is 100 ms, then the pulse cycle time Tpulsemay be less than or equal to 1 ms, and then the switch dwell time tdwellmay be 100 ns for 10 columns (M=10). Therefore, a switching time of 1/10 of the switch dwell time tdwell, i.e. 10 ns, is still very manageable for common discrete transistor or integrated-circuit switches. Faster switching would allow a larger number of columns of phase shifters10M,Nfor a similar pulse cycle time Tpulse.
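The dwell-time budget implied by the ripple guideline above can be sketched as follows; the helper name and the default ripple factor are assumptions, and the quoted values come from the example in the text.

```python
def max_dwell_time(thermal_tau_s, n_columns, ripple_factor=100):
    """Largest switch dwell time for which the pulse cycle T_pulse = t_dwell * M
    stays at or below thermal_tau / ripple_factor (about 1/100th of the heater
    thermal time constant, per the guideline above)."""
    t_pulse_max = thermal_tau_s / ripple_factor
    return t_pulse_max / n_columns

# Example from the text: 100 ms thermal time constant, M = 10 columns.
print(max_dwell_time(100e-3, 10))   # 0.0001 s, i.e. up to 100 us per column;
# the 100 ns dwell quoted in the text gives T_pulse = 1 us, well inside this limit.
```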
Therefore, the switches SW1-SWMand the ground-buses361to36Mneed to handle higher currents as more rows of phase shifters10M,Nare added. For example, considering M=10 and N=10, if each phase shifter10M,Nconsumes 6 mW DC power for a 2π phase shift (maximum phase shifter set value for OPA), then M=10 means a momentary 10× power in the pulse, which is 60 mW. If a 3.3V drive, e.g. power source, is used, then each digital to analog converter DAC1-DACNsupplies about 18 mA, and the maximum transient current flowing through the ground-bus361to36Mand each switch SW1-SWMis 18 mA×N (rows)=180 mA. Therefore, it is usually desirable to use a higher voltage to minimize the current. For the same example, using a 10V drive lowers the maximum transient current flowing through each ground bus361to36Mto 60 mA. However, in this example, a 10×10 (100 phase shifters10M,N) OPA only requires 10 DAC channels and 20 pins to interface. The non-transitory memory stores the values (voltage or current) for the digital to analog converters DAC1-DACNassigned to each phase shifter10M,Nin each corresponding row of phase shifters10M,N. The value for each DACn(m,p) may be a constant or a time-varying waveform. With reference toFIG.4B, a method of operating the optical phased array driver may be described by the following steps:
i) Switch the first ground bus (m=1) to ground45, e.g. close the first switch SW1, disconnecting all other switches, e.g. open switches SW2to SWM.
ii) Update the DAC channels, e.g. the voltage or current values, from the non-transitory memory, in all N rows of phase shifters10M,Nto the corresponding values for the m=1 column of phase shifters101,Nand optionally the pthsteering direction (DACn(1, p)). Each row R1to RNof phase shifters10M,Nmay be set to a different voltage, and each column C1to CMof phase shifters10M,Nmay require a different set of row voltages. The phase shifters10M,Nmay be configured such that the light exiting the device would form a flat phase front pointed toward the pthsteering direction. However, because of fabrication imperfections on the chip and crosstalk between phase shifters10M,N, the phase shifter voltage/current values needed to create this output phase typically appear random and may need to be stored in a look-up table in the non-transitory memory or even computed dynamically by the controller processor40with some form of feedback. Analog control of each heater22by the controller processor40may be necessary because an arbitrary phase shift may be needed for any-angle beam steering in an OPA. This is complicated further because of mismatch and crosstalk between the phase shifter channels and the optical waveguides connecting them.
iii) Switch to the next ground bus (m=m+1), e.g. close the second switch SW2, to ground45, with all other switches open.
iv) Update the DAC channels, e.g. the voltage or current, in all N rows of phase shifters10M,Nto the corresponding values, from the non-transitory memory, for the m+1 column and optionally the pthsteering direction (DACn(m, p)).
v) Repeat steps iii) and iv) until each column of phase shifters10M,Nhas been switched on sequentially by switches SW1-SWM, while the other columns of phase shifters10M,Nare turned off, e.g. open switches SW1-SWM.
vi) Steps i) to v) may be repeated while the steering direction p is updated during this indefinite loop to steer the beam into new directions. 
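The drive algorithm of steps i) to vi) can be summarized in the following minimal sketch (plain Python); the hardware hooks select_column and set_dac, the dac_table layout, and the timing values are illustrative assumptions rather than part of the disclosed driver.

```python
import time

M, N = 10, 10        # columns (ground buses / switches) and rows (DAC channels)
T_DWELL = 100e-9     # switch dwell time per column, e.g. 100 ns for M = 10

def drive_opa(dac_table, steering_p, set_dac, select_column, cycles=None):
    """Scan the columns indefinitely (or for `cycles` pulse cycles).

    dac_table[p][m][n] holds the stored DAC value for steering direction p,
    column m, row n (the DACn(m, p) values kept in non-transitory memory).
    select_column(m) is assumed to close switch SW_m and open all others
    (break-before-make); set_dac(n, value) updates the DAC on row n.
    """
    cycle = 0
    while cycles is None or cycle < cycles:
        for m in range(M):                      # steps i) and iii): select one ground bus
            select_column(m)
            for n in range(N):                  # steps ii) and iv): update all N DAC channels
                set_dac(n, dac_table[steering_p][m][n])
            time.sleep(T_DWELL)                 # hold for the dwell time (real hardware would
                                                # use a timer, not sleep; shown only to mark it)
        cycle += 1                              # one pulse cycle T_pulse = M * t_dwell completed
        # step vi): steering_p may be updated here to steer the beam to a new direction
```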
The loop of this method may be executed via the controller processor40, so that the time to execute one cycle of the loop through steps i. to v. is faster than the thermal time constant of the heaters22. This way, although not all heaters22may be simultaneously driven, their temperature will not fluctuate significantly, and thus the phase shifters10M,Nwill have relatively constant phase shift. The faster steps i. through v. are executed and the slower the thermal time constant, the smaller the ripple in phase shift. With reference toFIGS.5,6A,6B,7A and7B, each phase shifter10M,Nmay include an optical waveguide50comprised of a plurality of optical waveguide sections51, which may be straight and parallel to each other, routed adjacent to the heater22in a serpentine fashion, connected by optical waveguide bends52at each end thereof for directing light back through the subsequent one of the plurality of waveguide sections51. Similar to the embodiment shown inFIGS.2A and2B, the phase shifter10M,Ncomprises a substrate54, e.g. silicon, adjacent to a lower cladding layer55, e.g. silicon dioxide, an optical waveguide layer56, including the plurality of optical waveguide sections51, and an upper cladding layer57above the waveguide layer56. The heater22may be integrated into a strip of waveguide material in the optical waveguide layer56, although other heater arrangements are possible, such as the heaters22to the side of the waveguide50in the same waveguide material layer56, or heaters22made of a metal or ceramic material embedded in the upper cladding layer57. In the embodiments shown inFIGS.6A and6B, there are at least six waveguide sections51included in the optical waveguide50that extend parallel to the heater22. A function of the diodes23is to prevent current from each digital to analog converter DAC1-DACNfrom flowing from the selected column of phase shifters10M,Ninto the other non-selected columns of phase shifters10M,Nvia the electric traces or tracks26in each row R1-RN. Therefore, the reverse-bias breakdown voltage of the diodes23should be higher than the maximum drive voltage in any instance for all the digital to analog converter DAC1-DACNchannels. Connecting on-chip heaters22with an external pn-diode23will lower power efficiency caused by the native built-in potentials for the diodes23, which is typically around 0.7 V for a silicon pn device. When forward biasing the heater22, the pn-diode23in series will consume a constant dc power equal to itotal×Vturn-on, which generates heat, where itotalis the current flowing through both the diode23and the heater22and Vturn-onis the turn-on voltage of the pn-diode23. Moreover, there is also the series resistance associated with the pn-diode23that also consumes power and generates heat. In configurations where the diode23is physically separated from the phase shifter10M,N, this power dissipated in the diode23is lost to heat and does not cause optical effects. Silicon is referenced throughout the disclosure, but other materials, such as other optical waveguide materials are also within the scope of the invention. The power efficiency may be improved by integrating the pn-diode23close to each phase shifter10M,Nas part of the heater (diode heater)22, meaning that the heat otherwise wasted now also contributes to heating the optical waveguide sections51. Accordingly, the heater22may comprise an on-chip heater with an integrated pn-diode23. The heater22may comprise two long heating sections of heavily-doped waveguide material, e.g. 
silicon, with opposite polarities (p and n). A pn-diode23may be sandwiched in the center along the shorter edges of the two heating sections, where the p-doped section may be connected to the anode of the pn-diode23, and the n-side heating section is connected to the cathode of the pn-diode23. An exemplary diode heater22and an equivalent circuit are shown inFIG.8, in which the diode heater22may comprise three main parts: 1) a p-doped anode41, e.g. silicon with a doping level of 5e16 1/cm3to 5e18 1/cm3, 2) a pn-diode23, e.g. silicon, and 3) an n-doped cathode42, e.g. silicon with a doping level of 5e16 1/cm3to 5e18 1/cm3. The pn-diode23may be sandwiched between a longer heavily p-doped section43, e.g. of silicon, including a doping material with a higher concentration of p-doping than the p-portion of the pn-diode23, and a longer heavily n-doped section44including an n-doping material with a higher concentration of n-doping than the n-portion of the pn-diode23. The heavily p-doped section43, e.g. P+ silicon, may be connected to the anode41of the pn-diode23, and the heavily n-doped section44, e.g. N+ silicon, may be connected to the cathode42of the pn-diode23. The heavily p-doped section43or the heavily n-doped section44may also include a layer of silicide formed on top to further reduce their resistivities. The silicide formation is a standard process in silicon photonics foundries that is typically used in forming ohmic contacts between silicon and metals. The lengths, widths, and sheet resistivities of the heavily p-doped section43and the heavily n-doped section44dominate the overall resistance of the diode heater22, since the series resistance of the pn-diode23is typically a much smaller value. The reverse breakdown voltage of the pn-diode23may be adjusted by changing the length of the intrinsic region Li. The larger the intrinsic region Li, the larger the breakdown voltage of the pn-diode23. However, a longer intrinsic region Licomes at the price of increased series resistance, which could cause non-uniform heating mostly in the center where the pn-diode23is located. This non-uniform heating may reduce the thermo-optic efficiency. The lengths of both the p-doped portion Lpand the n-doped portion Lnin the pn-diode23will also change the turn-on characteristics and series resistance of the pn-diode23. An anode contact47and a cathode contact48may be placed on the far opposite ends of the diode heater22connecting to the heavily p-doped section43and the heavily n-doped section44, respectively, to minimize heat sinking, which would otherwise reduce the efficiency of the heater22. The interfaces between the anode contact47and the heavily p-doped section43, and between the cathode contact48and the heavily n-doped section44, may each have a silicide layer to ensure ohmic contact. Both the anode and cathode contacts47and48may be formed on the very edge of the heater22for electrical access. The width Wheaterof the heater22may be between 0.2 μm and 10 μm. The lengths of the heavily-doped silicon sections Lp+and Ln+are ideally between 10 μm and 1000 μm. The lengths of the p-doping portion Lpand the n-doping portion Lnin the pn-diode23may be between 0 and 2 μm. The length of the intrinsic region Liin the pn-diode23is ideally between 20 nm and 2 μm. In some embodiments, the intrinsic region Liof the diode23may be omitted, and the p and n doping portions may touch directly. 
Ideally, the pn junction is placed in close enough proximity to the optical waveguide sections51of the phase shifter10M,Nso that power dissipated on the pn-junction heats the waveguide sections51and causes a phase shift in light transmitted therein. The heater22may be placed right next to an array of waveguide sections51, whereas each waveguide section51may be either a single waveguide or a ridge waveguide. The gaps (on both sides) between the heater22and the waveguide sections51may be between 0.4 μm to 2 μm. The optical phase shifter10M,N, as shown inFIGS.6A to7B, may be achieved through the relatively high thermo-optic coefficient in the optical waveguide material, e.g. silicon, which may be about 10 times more than the cladding layers55and57, e.g. silicon nitride, via which the refractive index of the optical waveguide material, e.g. silicon, will change according to the temperature. Therefore, by placing the diode heater22very close to the waveguide sections51, e.g. adjacent in the same waveguide layer56, as forward bias is applied and current flows through the diode heater22, the local temperature around the diode heater22, including the optical waveguide sections51, will increase, resulting in a change in the refractive index in the waveguide material. The light passing through the heated waveguide sections51then experiences an extra phase shift. Since the waveguide sections51and the diode heater22may be integral with and fabricated on the same waveguide, e.g. silicon, layer56, there may also be a slab layer, e.g. silicon (FIG.7A) connecting the diode heater22and the optical waveguide sections51that improves the thermal conduction. However, the gap between the diode heater22and the waveguide sections51and their dimensions can be carefully chosen to: 1) avoid excessive loss, and 2) reduce optical coupling between the optical waveguide sections51and the diode heater22. The heater22may also be used to heat up an alternative serpentine phase shifter10M,N, such as the one disclosed in U.S. patent application Ser. No. 16/826,051, filed Mar. 20, 2020 in the name of the Applicant, which is incorporated herein by reference. This allows heating up multiple adjacent waveguide sections51directly or indirectly adjacent to or nearby the heater22at the same time. The embodiments of thermal phase shifters10M,Nmay be arranged in a serpentine fashion, thereby increasing the total length of waveguide being heated by a singular heater22. By routing the light in this manner, such that it makes several passes under or near the same heater22, it is possible to salvage some of the heat that is otherwise wasted. This results in an increase in phase shift, associated with the increase in the heated length of waveguide, without increasing the length or the power consumption of the heater22. However, there are constraints associated with placing additional optical waveguide sections under or near the heater22, e.g. in a serpentine arrangement. Typically, the optical waveguide sections51must be spaced several microns apart to eliminate optical leakage between adjacent optical waveguide sections51. This typically-required spacing of several microns means that the optical waveguide sections51farther away from the center of the heater22have significantly less temperature change than any waveguide sections51proximate to the center of the heater22, limiting the number of passes under or adjacent to the heater22and the ultimate efficiency gain of the technique. 
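As a rough back-of-the-envelope relation (not taken from the disclosure, and ignoring the non-uniform temperature profile just noted), the accumulated thermo-optic phase shift illustrates why the serpentine routing multiplies efficiency: for P passes of heated length $L_h$ each and a temperature rise $\Delta T$ in the waveguide material,

$$\Delta\phi \approx \frac{2\pi}{\lambda}\,\frac{dn}{dT}\,\Delta T\,\bigl(P\,L_h\bigr),$$

so, to first order, each additional pass under the same heater adds phase shift without adding heater power, up to the limit set by the reduced temperature rise of the outermost passes.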
Although thermal phase shifter configurations using a serpentine arrangement of waveguides have been proposed before, they do not address the constraints that limit the efficiency gain of the technique. For instance, some systems have proposed waveguides that are arranged in a serpentine fashion in order to increase efficiency and minimize power consumption. However, such waveguides all use the same cross sections, e.g. they are of the same width, which limits the number of passes under the heater. With reference toFIGS.9A and9B, the phase shifter10M,Nmay include a waveguide304comprised of optical waveguide sections320,322,324,326, and328, which may be straight and parallel to each other and are routed under or proximate a heater or heating element312in a serpentine fashion, with each of the optical waveguide sections320,322,324,326, and328including different widths, or at least with adjacent optical waveguide sections320,322,324,326, and328, or optical waveguide sections spaced within twice the pitch of each other, including different widths, so that the waveguide sections320,322,324,326, and328have weak coupling with each other, and therefore may be placed closer together under the heating element312. Similar to the embodiment shown inFIGS.6B and7B, the phase shifter10M,NofFIGS.9A and9Bcomprises a substrate314, e.g. silicon, adjacent to a cladding layer, e.g. silicon dioxide, which may be comprised of a lower cladding layer315and an upper cladding layer316below and above, respectively, the optical waveguide sections320,322,324,326, and328. A heating element312may be mounted on top of or positioned in the upper cladding layer316, although other heater arrangements are possible. In the embodiment shown inFIGS.9A and9B, there are five waveguide sections320,322,324,326, and328that run underneath the heating element312. Each of the waveguide sections320,322,324,326, and328may include a different propagation constant (ni), e.g. a different width (wi), a different thickness, a different doping concentration, or a different material refractive index of all or part of the waveguide section, e.g. of the waveguide sections320,322,324,326, and328themselves or of the cladding surrounding or beside the waveguide section, e.g. the upper cladding layer316or the lower cladding layer315, whereby adjacent parallel waveguide sections each comprise a different propagation constant, to exhibit and/or increase a wavevector mismatch between immediately adjacent straight parallel waveguide sections and thereby decrease coupling therebetween. FIG.10illustrates a simple serpentine routing scheme that connects five straight waveguide sections512,514,516,518, and520. This routing scheme requires (N−1) bends for N passes through the heated section, with each bend section including a radius of curvature (or bend radius), e.g. half the waveguide pitch. However, routing the waveguide sections512,514,516,518, and520in such a tightly-packed serpentine structure requiring bend radii of half the waveguide pitch (below 400 nm) may cause problems, since silicon channel waveguides can typically only tolerate bend radii as small as 1 μm-2 μm without significant optical loss over the bend. FIG.11illustrates a spiral-type routing scheme for connecting the five waveguide sections532,534,536,538, and540, with some bends, e.g. the first and last outer bends, having a larger radius, e.g. 1.5× the waveguide pitch or greater than 1 μm, and some bends, e.g. the second and third inner bends, having a smaller radius, e.g. 0.5× the waveguide pitch or ⅓ of the larger radius. 
In comparison to the routing scheme shown inFIG.10, the spiral-like routing presented inFIG.11increases the radius of some bends, but still requires a minimum radius of half the waveguide pitch. This is the arrangement for 5 passes/4 bends. More generally, for N bends, the first and last bends may have the largest radius, the second and second last bends may have the next smaller radius, the third and the third last bends may have the next smaller radius, and so on, such that bend [i] and bend [N−i+1] have the same radii. Another way to conceptualize this is to seeFIG.11as a waveguide that has been "twisted" onto itself about the center heater. FIGS.12and13illustrate a phase shifter10M,Nincluding a waveguide574, which includes a plurality of straight parallel waveguide sections, e.g. five waveguide sections552,554,556,558, and560, with a bend section that enables a waveguide pitch in the active heater region far below the minimum bend radius, e.g. less than 800 nm, enabling the five waveguide sections552,554,556,558, and560to be placed tightly together. The bend sections may comprise a first bend traversing at least 180° followed by one or more second bends including portions bending in an opposite direction to the first bend. For example, each bend section may comprise a larger-radius, e.g. 180°, circular bend564, e.g. bend radius greater than 1 μm, combined with an S-curve565to restore the narrow waveguide pitch, e.g. below 800 nm. The S-curve may comprise a concave portion extending from the 180° bend, and a convex portion extending between the concave portion and the next waveguide section. The bend sections may be nested, e.g. non-adjacent to each other due to the lengths of the waveguide sections552,554,556,558, and560being different, whereby portions of each of the 180° bends564may be disposed in a nested configuration, e.g. partially parallel, with portions of each of the adjacent S-curves565, e.g. the concave portion, so the total width of the phase shifter550is not much larger than twice the bend radius, e.g. 2 μm, which is important for lowering the total chip area consumed by the phase shifter550. The bends564may include circular or semicircular portions,FIG.12, or elliptical portions,FIG.13. In other words, the larger-radius 180° bends are used to route the long, straight parallel sections of the waveguide sections552,554,556,558, and560as closely together as possible. The combination of large-radius 180° bends564and S-curves565in this arrangement, when further combined with varying waveguide widths, enables the waveguide sections552,554,556,558, and560to be placed closer together than previously allowed for.FIG.13illustrates the bend plus S-curve routing scheme shown inFIG.12in the context of a chip, and a portion of the serpentine waveguide574running under a heater572within a chip576. Another embodiment of the phase shifter550includes bends564comprising a local bend radius that changes gradually and smoothly, i.e. adiabatically, along the propagation length of the bend564. This may be done in such a way that the minimum local bend radius is never less than a predetermined value rmin. A typical value for rminis 2 μm, so as to minimize radiative bend loss in the waveguide574. 
Using this smoothly changing technique, the bend564may be extended in a concave bend over an angle (180+x) degrees, then continued in a convex bend over an angle (−x) degrees, such that the waveguide exiting the bend564is parallel to the one entering the bend564but offset by the waveguide to waveguide distance underneath the heater572. The transition from concave to convex bend particularly depends on the smooth change of local bend radius to minimize optical loss. The local bend radius R as a function of propagation length L can follow a number of forms, e.g. linear (R∝L) or hyperbolic tangent (R∝tanh(L)). Additionally, the section of waveguide before or after the large-radius bend is typically tapered in width from the width of the preceding waveguide to the width of the following waveguide, such that waveguide width is held constant within the bend564. Conventional phase shifters rely strictly on very small bend radii to pack waveguides densely under a heater, and since the waveguides all use the same cross sections there is a limit on how tightly the waveguides can be packed together. In other words, those waveguides suffer from the problem of large minimum bend radius and large minimum waveguide-to-waveguide spacing. In comparison, embodiments of the present disclosure, such as thermal phase shifters550, do not have these problems because they allow for even tighter packing of the waveguides, e.g. less than 800 nm, preferably less than 700 nm, waveguide-to-waveguide pitch versus perhaps 2 μm with most methods, without requiring tight bends. Since the serpentine phase shifters10M,Nmake multiple passes through the heated zone of the chip but the escape waveguides8make only a single pass, the escape waveguides8effectively receive less thermal crosstalk than they otherwise would, given thermal decay alone. Thus, the escape waveguides8may be placed closer to other serpentine phase shifters10M,Nthan the required distance between two serpentine phase shifters10M,N. In other words, the spacing between a serpentine phase shifter10M,Nand an escape waveguide8may be less than spacing between two serpentine phase shifters10M,Nbecause the escape waveguides8are less sensitive to temperature change and thermal crosstalk. The allowable spacing decreases by the multiplication in efficiency given by making multiple tight passes (up to nearly 5× for a five-pass phase shifter10M,N, for example). Routing some number of escape waveguides8between each phase shifter10M,Ntherefore does not increase the area of the array. Accordingly, each escape waveguide8may be routed between neighboring phase shifters10M,Ndisposed in front of and behind each row of phase shifters10M,N, whereby the escape waveguides8are disposed closer to one another than to the neighboring phase shifters10M,N. The foregoing description of one or more embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto. | 37,795 |
11863207 | DETAILED DESCRIPTION In the following description of the various embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration various embodiments in which aspects described herein may be practiced. It is to be understood that other embodiments may be utilized and structural and functional modifications may be made without departing from the scope of the described aspects and embodiments. Aspects described herein are capable of other embodiments and of being practiced or being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. Rather, the phrases and terms used herein are to be given their broadest interpretation and meaning. The use of “including” and “comprising” and variations thereof is meant to encompass the items listed thereafter and equivalents thereof as well as additional items and equivalents thereof. The use of the terms “connected,” “coupled,” and similar terms, is meant to include both direct and indirect mounting, connecting, coupling, positioning and engaging. FIG.1illustrates one example of a network architecture and data processing device that may be used to implement one or more illustrative aspects described herein. Various network nodes103,105,107, and109may be interconnected via a wide area network (WAN)101, such as the Internet. Other networks may also or alternatively be used, including private intranets, corporate networks, LANs, wireless networks, personal networks (PAN), and the like. A network101is for illustration purposes and may be replaced with fewer or additional computer networks. A local area network (LAN) may have one or more of any known LAN topology and may use one or more of a variety of different protocols, such as Ethernet. Devices103,105,107,109and other devices (not shown) may be connected to one or more of the networks via twisted pair wires, coaxial cable, fiber optics, radio waves or other communication media. The term “network” as used herein and depicted in the drawings refers not only to systems in which remote storage devices are coupled together via one or more communication paths, but also to stand-alone devices that may be coupled, from time to time, to such systems that have storage capability. Consequently, the term “network” includes not only a “physical network” but also a “content network,” which is comprised of the data—attributable to a single entity—which resides across all physical networks. The components may include a data server103(e.g., a spatial simulation runtime, SpatialOS Runtime by Improbable Worlds Ltd.), a web server105, and client computers107,109. The data server103provides overall access, control and administration of databases and control software for performing one or more illustrative aspects described herein. The data server103may be connected to the web server105through which users interact with and obtain data as requested. Alternatively, the data server103may act as a web server itself and be directly connected to the Internet. The data server103may be connected to the web server105through the network101(e.g., the Internet), via direct or indirect connection, or via some other network. Users may interact with the data server103using the remote computers107,109, e.g., using a web browser to connect to the data server103via one or more externally exposed web sites hosted by web server105. 
The client computers107,109may be used in concert with the data server103to access data stored therein, or may be used for other purposes. For example, from the client device107a user may access the web server105using an Internet browser, as is known in the art, or by executing a software application that communicates with the web server105and/or the data server103over a computer network (such as the Internet). Servers and applications may be combined on the same physical machines, and retain separate virtual or logical addresses, or may reside on separate physical machines.FIG.1illustrates just one example of a network architecture that may be used, and those of skill in the art will appreciate that the specific network architecture and data processing devices used may vary, and are secondary to the functionality that they provide, as further described herein. For example, services provided by the web server105and the data server103may be combined on a single server. Each component103,105,107,109may be any type of known computer, server, or data processing device. The data server103, e.g., may include a processor111controlling overall operation of the data server103. The data server103may further include RAM113, ROM115, network interface117, input/output interfaces119(e.g., keyboard, mouse, display, printer, etc.), and memory121. I/O119may include a variety of interface units and drives for reading, writing, displaying, and/or printing data or files. The memory121may further store operating system software123for controlling overall operation of the data processing device103, control logic125for instructing the data server103to perform aspects described herein, and other application software127providing secondary, support, and/or other functionality which may or may not be used in conjunction with other aspects described herein. The control logic may also be referred to herein as the data server software125. Functionality of the data server software may refer to operations or decisions made automatically based on rules coded into the control logic, made manually by a user providing input into the system, and/or a combination of automatic processing based on user input (e.g., queries, data updates, etc.). The memory121may also store data used in performance of one or more aspects described herein, including a first database129and a second database131. In some embodiments, the first database129may include the second database (e.g., as a separate table, report, etc.). That is, the information can be stored in a single database, or separated into different logical, virtual, or physical databases, depending on system design. The devices105,107,109may have similar or different architecture as described with respect to the device103. Those of skill in the art will appreciate that the functionality of the data processing device103(or the device105,107,109) as described herein may be spread across multiple data processing devices, for example, to distribute processing load across multiple computers, to segregate transactions based on geographic location, user access level, quality of service (QoS), etc. One or more aspects described herein may be embodied in computer-usable or readable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices as described herein. Generally, program modules include routines, programs, objects, components, data structures, etc. 
that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The modules may be written in a source code programming language that is subsequently compiled for execution, or may be written in a scripting language such as (but not limited to) HTML or XML. The computer executable instructions may be stored on a computer readable medium such as a hard disk, optical disk, removable storage media, solid state memory, RAM, etc. As will be appreciated by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein. FIG.2is a block diagram illustrating an example system for lossless data compression and decompression according to one or more illustrative aspects described herein. InFIG.2, a system200may comprise a server210(e.g., the data server103, the web server105, a game server, a spatially-optimized simulator, a spatial simulation runtime) and one or more clients220(e.g., the client computers107,109, mobile phones, tablets, game terminals, worker connections). The server210may compress the data before sending the data to the client220. For example, in order for multiplayer games to be played over the Internet, a server and a client may need to exchange game data over a network. Compression may be used to reduce the amount of network bandwidth needed, thereby resulting in a better user experience. The user experience is enhanced by 1) compressing data to fit within the maximum transmission unit (MTU) size of a data packet, thereby reducing data fragmentation, 2) by reducing the total number of packets needed to send data, thereby saving bandwidth and improving speed, and/or 3) providing a more stable game experience by reducing the likelihood of data loss (stated differently, it is more likely to lose a packet when more packets are sent than it is to lose a packet when fewer packets are sent), among other benefits as further discussed herein. The server210may be a computing device configured to compress (e.g., encode data in lossless manner that takes less bandwidth to transmit than the original data) data frames and send the compressed data frames to the client220. The data frames may comprise one or more network messages (e.g., worker protocol messages) to be sent to the client220. The one or more worker protocol messages may comprise information associated with updates of a view (e.g., a piece of state shared between the server210and the client220). For example, the view may be a game state shared between the server210and the client220. In a video game, a view may comprise information indicating the positions and/or states of one or more entities (e.g., players, cars, trees) and/or artificial intelligence (AI) characters in the game world, map information associated with the one or more entities and/or other components of the game, and/or leaderboard information. The client220may be a computing device configured to receive the compressed data frames and decompress the data frames. 
The client220may then extract the information from the decompressed data frames and apply the updates of the view to a current view on the client220(e.g., an agreed view between the server210and the client220at a given time point). The server210may comprise a frame compressor211and a network layer212. The frame compressor211may receive data from one or more computing devices (e.g., one or more game servers, one or more client devices, the client220) and compress the data so that the compressed data may be in a form that uses fewer bytes. For example, the frame compressor211may receive worker protocol messages from one or more worker connections (e.g., a server-worker instance, a client-worker instance, worker modules). The worker protocol messages may be buffered until the end of a tick and may be grouped or otherwise included into one or more data frames before sending to the client220. A data frame may be an effective unit of compression that collects a plurality of worker protocol messages, serving as a block of data as perceived by the frame compressor211and a data sampling system. The quantity of messages to be included in a data frame may be determined based on a time period or interval during which the messages are received by the frame compressor211. The server210may operate in a tick-based cycle. A tick may refer to an iteration of a main or primary process thread. The data frames may be constructed or generated by internally aggregating the worker protocol messages within the network layer212. In addition, the network layer212may send data packets (e.g., network packets) comprising the compressed data to the client220using one or more network protocols over a communication channel (e.g., the Internet) to synchronize the views of all participating workers. The network layer212may comprise a networking library or stack (e.g., an Asio library). The network library may be responsible for transmitting the data frames to the client220over the communication channel. Additional details of the network layer and the networking library will be described in connection withFIGS.4,5A, and5B. The client220may comprise a network layer221and a frame decompressor222. The network layer221may receive the data packets from the network layer212and send the compressed data (e.g., compressed frames) to the frame decompressor222. The frame decompressor222may receive the compressed data and decompress the compressed data. In order to decompress the compressed data, the frame decompressor222may need to know the compression algorithm that was used for compressing the data. Decompressing the compressed data thus reconstructs the original data (e.g., the worker protocol messages) that was received by the frame compressor211. In this case, the system200may be a lossless system because the original data has been perfectly reconstructed from the compressed data. The compressing and decompressing techniques described herein may relate to lossless compression that allows the original data to be perfectly reconstructed from the compressed data. While “lossy” algorithms could be used, the game states might become unsynchronized as a result, so lossless compression algorithms are preferred. FIG.3shows a block diagram illustrating an example system for dictionary-based data compression and decompression according to one or more illustrative aspects described herein. 
A compression and decompression system300may comprise the frame compressor211, the frame decompressor222, a dictionary training module310, a server dictionary negotiator320, and a client dictionary negotiator330. As described above, the frame compressor211may compress the data frames. For example, the frame compressor211may compress the data frames based on one or more compression algorithms. A software library (e.g., ZSTD and LZ4) may bundle together compression algorithms so that software developers may readily use them. The software library may support one or more dictionaries that describe the characteristics of the data. For example, a dictionary may be a piece of metadata that describes common patterns in a data source, external to the frame compressor211, the frame decompressor222, and the data being compressed/decompressed. The dictionary may allow the frame compressor211and the frame decompressor222to compress and decompress the data frames according to one or more specific rules. Depending on the compression algorithm in use, the contents of a dictionary and how the compression algorithm uses the dictionary may be different. For example, in arithmetic coding algorithms, the compression algorithm may be designed to better adapt to a stream of data. A dictionary may comprise a fake history of data to start the compression process. As another example, in Huffman coding, a dictionary might comprise one or more symbol trees. For example, "0000" may correspond to "alice," "001" may correspond to "bob," and "100" may correspond to "charlie." A dictionary may use one or a combination of the coding techniques. Dictionaries may be constructed by or through a training process. The construction of a dictionary may comprise providing representative samples of data to a training algorithm. The frame compressor211may achieve a higher compression ratio if the samples are more representative of actual data that is to be sent between devices. A real-time estimate of an expected compression ratio may be calculated to determine how much data to include in a frame, with a margin of error accounting for variance in the compression ratio. The dictionary training module310may ingest samples of uncompressed data frames and periodically produce a new dictionary. Dictionaries may be trained based on network traffic from previous sessions (e.g., previous visual sessions, previous deployments of a game). But training a dictionary based on network traffic from a current session (e.g., the current deployment of a game) and then sending a new dictionary to the client220may achieve a higher compression ratio. For example, when the data frames correspond to the usage patterns of game networking, training a dictionary during runtime based on fresh data samples may achieve a higher compression ratio than using stale data from a previous game session. The current session may comprise a duration of time during which an individual user (e.g., player) stays connected to a game server without being asked to connect or reconnect to the game server. The current session may end when the user is disconnected from the game server or the user is asked to join or rejoin a game. In general, using a dictionary may improve the compression ratio and the throughput of compressors. In order to decompress data compressed using a dictionary, the frame decompressor222may need to know if the frame compressor211used a dictionary for compressing the data frames, and to have a copy of the same dictionary in order to decompress the compressed data. 
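As a minimal sketch of the dictionary workflow described above, the following uses the python-zstandard bindings as a stand-in for the compression library; the sample contents, dictionary size, and compression level are illustrative assumptions, not values taken from the disclosure.

```python
import zstandard as zstd

# Representative samples of uncompressed data frames (illustrative placeholders).
samples = [("pos:%d,%d;state:idle" % (x, y)).encode() for x in range(200) for y in range(25)]

# Train a dictionary from the samples (a small 4 KiB dictionary for this toy data).
dictionary = zstd.train_dictionary(4 * 1024, samples)

# The compressor on the server side uses the dictionary...
compressor = zstd.ZstdCompressor(level=3, dict_data=dictionary)
frame = b"pos:17,4;state:idle"
compressed = compressor.compress(frame)

# ...and the decompressor on the client side must hold the same dictionary,
# otherwise decompression fails; the round trip is lossless.
decompressor = zstd.ZstdDecompressor(dict_data=dictionary)
assert decompressor.decompress(compressed) == frame
```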
Stated differently, both the compressor and the decompressor need to be using the same rulebook to translate data between compressed and uncompressed states. The dictionary training module310may provide the new dictionary to the server dictionary negotiator320. The server dictionary negotiator320may send a message to the frame compressor211. The message may signal that a new dictionary has been generated and can be used for compressing data frames. The server dictionary negotiator320may also send the new dictionary to the frame compressor211. After receiving the new dictionary, the server dictionary negotiator320may synchronize the new dictionary with the client (e.g., the client220). For example, the server dictionary negotiator320may transfer (e.g., send) the new dictionary to the client dictionary negotiator330. The client dictionary negotiator330may then provide the new dictionary to the frame decompressor222. The frame decompressor222may receive the new dictionary from the client dictionary negotiator330and send a message to the server. The message may comprise information indicating that the frame decompressor222has obtained the new dictionary. Additional details of dictionary negotiation will be described in connection withFIGS.9and10. FIG.4illustrates an example of a shared state among different worker connections according to one or more illustrative aspects described herein. For example,FIG.4illustrates a shared state among different worker connections running within an instance of a network library. Each worker connection may be associated with a client device (e.g., the client220). The shared state may comprise pools of data frames collected from one or more worker connections, dictionary training logics and algorithms, and/or a dictionary archive. Because data frames may be pooled from a plurality of worker connections, a dictionary may be trained based on the pooled samples from the plurality of worker connections. The dictionary training logics and algorithms may comprise training dictionaries based on the pooled samples and/or scheduling for dictionary retraining. The dictionary archive may comprise already constructed dictionaries for use by compression logic within a given worker connection. Network protocol implementations may expose an API to enable submitting data frames for training, and accepting a new dictionary to synchronize to a client (e.g., the client220). The size of the data frames may be determined based on the network protocols or stacks used for transmitting the data frames. For example, in the transmission control protocol (TCP) or an automatic repeat request protocol, a large data frame may achieve a better compression ratio. If an MTU cannot contain the entire compressed frame, the network packets may be fragmented. It may be easy to reconstruct the compressed frames from the fragmented network packets. A TCP implementation may choose to compress frames before fragmentation to bring the size of data packets beneath the MTU, because, with reliable delivery, reconstructing a compressed block sent as several packets is more tractable and manageable. However, in the user datagram protocol (UDP), compressing a frame and then fragmenting the frame for later reconstruction may amplify the effect of packet loss, because the decompressor may need each fragment of the compressed frame to decompress the compressed frame, and UDP does not guarantee packet delivery. 
Losing one fragment sent via UDP (or similar "best-effort" protocol) may result in losing all fragments and/or adding additional latency and complexity to the protocol for resending the lost fragment. As a result, the size of the frame may be reduced to ensure that the size of each compressed frame is smaller than the MTU, so that each network packet may be decompressed in its entirety. For example, frames may be compressed either after or alongside fragmentation, so that packet loss might not be amplified. FIG.5illustrates an example of a dictionary-based compression and decompression structure according to one or more illustrative aspects described herein. InFIG.5, the frame compressor211may comprise a sampler510and a compressor520. The sampler510may be configured to select and determine frames to be compressed. Additional details of the sampler510will be described in connection withFIG.6. The compressor520may retrieve a dictionary ID from a dictionary archive530. The dictionary archive530may comprise one or more dictionaries for compressing data frames. The dictionary ID may reference the raw bytes corresponding to the dictionary and/or the compression settings to use with the dictionary. For example, the dictionary may be indicated by a single byte, with a value 0 corresponding to "no compression in use." The dictionary ID may be unique per worker connection, and might not be associated with a global dictionary ID that identifies a global dictionary. If a client is connected for a period so long that the client has cycled through a whole byte's worth of dictionary IDs, the dictionary ID may start again from the beginning. The number of dictionaries currently in use by a single worker connection at any one time may be small enough to be represented by a single byte or in some cases a single bit. A global dictionary may be used by all clients and may be constantly updated by the server210. But clients may use different dictionaries so that the dictionaries are more particular to the data associated with each client. The frame compressor211may be a per-worker connection structure that takes data frames as an input and outputs a string of bytes which comprise the dictionary ID and the compressed data. The frame compressor211may call the sampler510, which may or may not record a data sample of each frame before performing compression. The frame compressor211may determine (e.g., select), based on a dictionary ID, a dictionary from the dictionary archive530for compressing the data frames. The dictionary used for compression may change between compressing different sets of frames. The frame decompressor222may comprise a frame decompressor540that receives the string of bytes output by the frame compressor211. The frame decompressor222may retrieve a dictionary from local dictionaries550based on the received dictionary ID and decompress the compressed data using the dictionary referenced by the dictionary ID. The local dictionaries550may store a plurality of dictionaries that are also included in the dictionary archive530. The dictionaries stored in the dictionary archive530may be constantly synchronized with the dictionaries stored in the local dictionaries550. FIG.6is a block diagram illustrating an example system for dictionary training according to one or more illustrative aspects described herein. 
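Before turning to the training system of FIG. 6, the per-connection output just described (a single dictionary ID byte, with 0 meaning no compression, followed by the compressed payload) might be framed as in the following sketch; the helper names and the exact byte layout are assumptions made for illustration.

```python
import zstandard as zstd

def encode_frame(payload: bytes, dict_id: int, dictionaries: dict) -> bytes:
    """Prefix the compressed payload with a one-byte dictionary ID (0 = uncompressed)."""
    if dict_id == 0 or dict_id not in dictionaries:
        return bytes([0]) + payload
    compressor = zstd.ZstdCompressor(dict_data=dictionaries[dict_id])
    return bytes([dict_id]) + compressor.compress(payload)

def decode_frame(data: bytes, dictionaries: dict) -> bytes:
    """Read the dictionary ID byte and decompress with the matching local dictionary."""
    dict_id, body = data[0], data[1:]
    if dict_id == 0:
        return body
    decompressor = zstd.ZstdDecompressor(dict_data=dictionaries[dict_id])
    return decompressor.decompress(body)
```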
InFIG.6, a sampling manager610may be a program module, executed by a frame compressor (e.g., the frame compressor211) or the server210, that manages and/or controls the sampler510and a sample archive620. The sampler510may be configured to probabilistically select frames to be sent to a client for inclusion in a sample group. The sampler510may submit samples to the sample archive620. The sample archive620may store the samples and provide the samples to the dictionary training module310. In some examples, when a frame is sampled, it may belong to a sample group, and data from these sample groups may be submitted to the sample archive620. The sample archive620may then provide the data received from the sampler510to the dictionary training module310. The number of samples that the sampler510submits to the sample archive620may depend on the dictionary training algorithms and/or libraries. For example, for ZSTD, the dictionary size may be set to be approximately 100× the average uncompressed frame size. An example range of a dictionary size may be between 100 kB and 500 kB. A size of 128 kB may be set as the default dictionary size. For LZ4, the dictionary size might not exceed 64 kB. In general, the volume of the training data may be at least approximately 20× the dictionary size to achieve reasonable compression results. Compression ratios may increase with more samples, but the gains may taper off substantially or completely when the volume of the training data is more than 100-200× the dictionary size. Clients may be allowed to configure their dictionary size. A recommended default dictionary size for ZSTD may be 128 kB and a recommended default dictionary size for LZ4 may be 64 kB. In addition, clients may be allowed to configure their sample size. For example, a recommended default minimum number of samples to train a dictionary may be 5,000, and a recommended default maximum number of samples to train a dictionary may be 20,000. The sampling manager610may configure and determine the sample fill rate of samplers based on (1) the target number of samples before retraining, (2) the number of connections contributing to a sample group, and/or (3) the rate of frames being sent to workers. For example, the sampling manager610may configure and determine the sample fill rate of samplers to produce roughly the maximum number of samples during the time period of a retraining interval. The sampling manager610may determine the retraining interval based on, for example, the minimum number of dictionary samples, the maximum number of dictionary samples, and/or the maximum dictionary size. In the case of producing too few samples, the sample group may retain old samples to meet the maximum number of samples. In the case of producing too many samples, the sample group may behave as a first in, first out (FIFO) queue (e.g., the sampler may discard or remove the oldest samples). The dictionary training module310may comprise a dictionary builder640and a low-priority scheduling module650. The dictionary builder640may change (e.g., update) the dictionary in use for a connection between a server (e.g., the server210) and one or more clients (e.g., the client220) in the same session (e.g., the same game session) without disconnecting the users (e.g., the players). Data samples selected from a session associated with one or more clients may be applied to the next session associated with the same one or more clients. A game may have many sessions running at the same time. 
For example, if Alice, Bob, and Charlie play in game session 1 together, and Danielle, Erika, and Freya play in game session 2 together, then dictionaries trained from the data in game session 1 may be applied to the players in game session 1, and the dictionaries trained from the data in game session 2 may be applied to the players in game session 2. This may be because the samples from session 1 may be more representative of future data in session 1, while samples from session 2 may be more representative of future data in session 2. An uptime (e.g., a game-uptime) may refer to how long a server is running without restarting, irrespective of how long users are connected. For example, massively multiplayer online role-playing games (MMORPGs), such as World of Warcraft and Runescape, may have game servers that have relatively long uptimes (e.g., restarting once per week). Round-based games such as Fortnite and League of Legends may have game servers that run only for the length of the game round (e.g., 30 minutes). The duration (e.g., length) of a session may refer to how long an individual user (e.g., player) stays connected to the server. The length of a session might not be longer than the game uptime and may depend on the type of the game. For example, a player may stay connected to a large MMORPG for several hours, so the duration of a session may be several hours. In contrast, a player may stay connected to the round-based games until the round is over (or until the player is eliminated), so the duration of a session may be approximately 30 minutes or less. The dictionary training module310may perform dictionary training while a session is running to produce better dictionaries. For example, the dictionary training module310may perform dictionary training on live game data while a game is running (e.g., during a live game session). In this way, the dictionary training module310may perform periodic retraining to make sure the dictionary contents are trained based on fresh samples (e.g., live samples). To ensure that the dictionary contents come from fresh samples (e.g., live samples) for performing periodic dictionary retraining, samples of packets may be recorded in real-time while the game is running, and the samples may be sent to dictionary training on a regular basis. A retraining scheduling module630may trigger new dictionary training for the dictionary training module310. For example, the retraining scheduling module630may determine and set one or more intervals for retraining the dictionaries. The timescale on which to retrain dictionaries may vary based on the types of the games, how frequently new players join the game, and/or which parts of the game the new players are interacting with. Additionally or alternatively, clients may determine and adjust the retraining intervals. In some examples, 10 minutes may be set as a default retraining interval. Intervals shorter than 5 minutes or longer than 30 minutes might not be optimal for retraining dictionaries. In some examples, the first dictionary might not be generated until the first sampling period (e.g., 10 minutes) is complete, which may mean a significant portion of the deployment length is run without a dictionary. To solve that problem, all the data packets (e.g., network packets) sent to the clients may be submitted as data samples so that a dictionary may be trained as soon as possible. 
Additionally or alternatively, clients may store dictionaries trained on similar deployments and use one or more of those dictionaries as an initial dictionary. The retraining scheduling module630may trigger dictionary training as soon as the data samples are ready. Clients may also download, upload, and/or configure their first dictionary and might not rely on the server to generate and synchronize the first dictionary. Training dictionaries based on one or more sets of samples may be computationally expensive relative to other networking tasks. In order to perform dictionary training while a game is running and not to starve out the game server of resources to simulate the game, the gameplay experience may be prioritized over the new dictionary data. The low-priority scheduling module650may schedule dictionary training asynchronously on a low priority thread, which may allow the OS-level scheduling to handle prioritization. In this way, for example, the quality of a visual simulation (e.g., a game) as perceived by a client might not be affected by deferring and/or amortizing the training stage until sufficient computational capacity is available. Additionally or alternatively, for game servers running on few cores, spare time between game networking ticks may be used for dictionary training. For example, if a game server (e.g., the server210) needs to tick at 1/60th of a second, but may sometimes only need 1/120th of a second to perform regular functions, the game server may have an additional 1/120th of “spare time” to perform dictionary training. For game servers running on many cores, one or more cores may be reserved to be used for asynchronously training dictionaries. After training the dictionaries, the dictionary training module310may compress the trained dictionaries and provide the compressed dictionaries to the dictionary archive530. The dictionary archive530may comprise raw dictionaries660and compressed dictionaries670. The raw dictionaries may be generated by decompressing the compressed dictionaries. The dictionary archive530may also comprise dictionary metadata associated with each dictionary. Further, to improve the compression ratio, it may be beneficial to pool samples among clients and to create a dictionary that draws from the data supplied by two or more clients. In this way, fewer dictionaries may need to be trained and kept in a memory. A data sample pooling scheme may group clients that have high data affinity (e.g., data co-occurrences associated with the clients) to share training samples. A degree of data affinity may be determined based on correlations and/or similarities of game properties associated with the clients. For example, the dictionary training module310may select samples and group samples into pools of related players based on game properties. In this way, dictionaries may be trained sooner (e.g., by reaching the minimum amount of required data faster) and higher compression ratios may be achieved (e.g., by the locality in a game world). Sample pools may be determined according to the groupings of clients. The grouping of clients may be determined by sending one or more queries on client descriptions (e.g., client game start time, client's character level and/or location). Clients that share the same or similar descriptions may be grouped. 
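A minimal sketch of grouping clients into sample pools by querying client descriptions, as just described; the description fields (location, character level) and the bucketing rule are assumptions chosen for illustration.

```python
from collections import defaultdict

def pool_key(description: dict) -> tuple:
    """Bucket a client description into a coarse pool key (illustrative rules)."""
    level_band = description["character_level"] // 10      # e.g. levels 0-9, 10-19, ...
    return (description["location"], level_band)

def group_clients(descriptions: dict) -> dict:
    """Map each sample pool key to the clients whose descriptions fall in it."""
    pools = defaultdict(list)
    for client_id, description in descriptions.items():
        pools[pool_key(description)].append(client_id)
    return pools

# Example: two clients in the forest at similar levels share a sample pool.
pools = group_clients({
    "alice":   {"location": "forest",   "character_level": 12},
    "bob":     {"location": "forest",   "character_level": 17},
    "charlie": {"location": "mountain", "character_level": 41},
})
```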
Additionally or alternatively, sample pools may be automatically determined by comparing how well a set of samples compress against previous (e.g., archived) dictionaries, and samples that better fit the previous dictionaries may be grouped. For example, when sampling one or more frames for a client and determining which sample pool to add them to, the frame compressor211may attempt to compress the frame with one or more archived dictionaries retrieved from the dictionary archive530. The sampling manager610may observe the compression ratio achieved with each of the archived dictionaries and rank the archived dictionaries by how well they compress the frame. The sampling manager610may select the archived dictionary that achieves the highest compression ratio. If the sample pool corresponding to the selected dictionary is oversubscribed (e.g., exceed a threshold number of clients), the sampling manager610may select the archived dictionary that achieves the second-best compression ratio. The client may then be added to the sample pool that fits the selected dictionary. Sample pools may be determined by sending one or more queries on the data contained within the data frames and samples of individual clients. The mapping between a client and a sample pool may be updated. For example, a client may leave one pool for another pool that has a higher level of data affinity. The mapping relations between the client and the pools may be updated to reflect the changes in data affinity. In addition, for session-based games that have relatively small player counts, it might not be advantageous to periodically retrain more than one compression dictionary for each type of player client. Different types of player client may require different types of network connection. For example, a first player client playing a fully-fledged version of a game on a personal computer (PC) may require more game data than a second player client playing a lower-fidelity version of the game on a mobile phone or a third player client playing on a console or an older PC. For games that involve a larger number of players, grouping together clients that have strong data affinity for each other to share dictionaries may yield higher compression ratios. Therefore, sample grouping may be used instead of tightly coupling player client type to dictionaries. In some examples, the sample pools may be determined based on both the player client type and the data affinity among the clients. FIG.7illustrates an example of sample pools for dictionary training according to one or more illustrative aspects described herein. InFIG.7, a simulated world700may comprise a city area710, a mountain area720, and a forest area730. In some simulations (e.g., games), the nature of either the changing world or the changes in player behaviors in the simulation may result in different types of data being sent from the server to clients. For example, in play-testing and development, most players may be in the forest area730. If we used samples of network data from this play-testing to train a dictionary, the dictionary may be well-suited to compress data from the forest area730, but may be less suited to data from other areas such as the city area710or the mountain area720. For example, players may move from the forest area730to the mountain area720. A new dictionary may be generated to improve the compression ratio because the patterns of data coming from a different part of the game may be sufficiently different. 
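The automatic pool selection described above, in which archived dictionaries are ranked by how well they compress a client's sampled frame and oversubscribed pools are skipped, might be sketched as follows. The compress_with callback stands in for the frame compressor (it could be backed by a dictionary-based compressor such as the one sketched earlier), and MAX_CLIENTS_PER_POOL is an assumed oversubscription threshold; all names are illustrative.

```python
# Hypothetical sketch of data-affinity pool selection by trial compression.
MAX_CLIENTS_PER_POOL = 64    # assumed oversubscription threshold

def choose_sample_pool(frame, archived_dicts, pools, compress_with):
    """Rank archived dictionaries by the compression ratio they achieve on this
    frame and return the id of the best-fitting pool that still has room.
    The caller then adds the client (and its samples) to pools[chosen_id]."""
    ranked = sorted(
        archived_dicts.items(),  # {dictionary_id: dictionary}
        key=lambda item: len(compress_with(frame, item[1])) / max(len(frame), 1),
    )  # smaller compressed/original ratio (better compression) first
    for dictionary_id, _ in ranked:
        pool = pools.setdefault(dictionary_id, [])
        if len(pool) < MAX_CLIENTS_PER_POOL:
            return dictionary_id
    # If every matching pool is full, fall back to the best-fitting one.
    return ranked[0][0] if ranked else None
```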
As another example, if the simulated world700is mutable, the players may build a new area (e.g., a lake area) or move to a new area in the simulated world700. The new area might not be featured anywhere in the original training samples, so the original dictionary might not be well suited to compress data from the new area of the simulated world700. Data samples may be selected and pooled together based on locations associated with players. For example, data associated with players who are in similar locations in the game world may be grouped into the same sample pools. In this way, the data samples may be specific to the players and may improve the compression ratio and accuracy. As shown inFIG.7, a sample pool A740may be determined based on the data associated with the players in the city area710, a sample pool B750may be determined based on the data associated with the players in the mountain area720, and/or a sample pool C760may be determined based on the data associated with the players in the forest area730. Additionally or alternatively, data samples may be grouped or pooled based on game data. For example, data samples may be grouped based on what team (e.g., faction, alliance, clan) the players belong to, what in-game activities (e.g., fighting, building, trading) the players are participating in, and/or game-specific indicators, such as a player's character level (e.g., in MMORPG the player's character level may indicate which area the character might go to next). The optimal size of the sample groups (e.g., number of players) may depend on the nature of the game. In general, grouping players together may be better than treating players as individuals in a group-of-one, or treating all players together in a group-of-all in terms of the amount of network bandwidth saved and the compression ratios. FIG.8is a block diagram illustrating an example system for dictionary negotiation according to one or more illustrative aspects described herein. dictionary negotiation may be needed to synchronize dictionaries between a server (e.g., the server210) and a client (e.g., the client220). The client might not receive data packets that contain data compressed with a dictionary unknown to the client (e.g., the client does not possess or have access to the dictionary). To solve this problem, the server might only send data packets that contain data compressed with a dictionary that the server knows that the client has the same dictionary. For example, the server dictionary negotiator320might not provide the frame compressor211with a dictionary unknown to the client. InFIG.8, the server dictionary negotiator320may receive a new dictionary from the dictionary archive530and may send the new dictionary to the client dictionary negotiator330via a negotiator channel820. The negotiator channel820may be different from a worker protocol channel810between the frame compressor211and the frame decompressor222because the worker protocol channel810may be lossy and/or unordered. The negotiator channel820may be a lower priority channel than the worker protocol channel810. The worker protocol channel810may be used for transmitting worker protocol messages between the framer compressor211and the frame decompressor222. The client dictionary negotiator330may add the new dictionary to the local dictionary550so that the frame decompressor222may retrieve the new dictionary from the local dictionary550and may decompress future frames (e.g., frames that compressed based on the new dictionary) using the new dictionary. 
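For illustration only, the client-side bookkeeping described here and in the following paragraphs (keeping a superseded dictionary for a grace period because in-flight packets may still reference it, and selecting the dictionary named by each packet) might be sketched as follows. The class name, the grace-period value and the data structures are assumptions.

```python
import time

# Hypothetical sketch of a client-side dictionary store: the newest dictionary
# is used for newly compressed frames, while superseded dictionaries are kept
# for a grace period because in-flight packets may still reference them.
GRACE_PERIOD_S = 5.0          # assumed transition window (a few seconds)

class LocalDictionaryStore:
    def __init__(self):
        self._dicts = {}      # dictionary_id -> (dictionary_bytes, retired_at or None)

    def add(self, dictionary_id, dictionary_bytes):
        """Mark previously active dictionaries as retired, then add the new one."""
        now = time.monotonic()
        for did, (data, retired_at) in self._dicts.items():
            if retired_at is None:
                self._dicts[did] = (data, now)
        self._dicts[dictionary_id] = (dictionary_bytes, None)

    def get(self, dictionary_id):
        """Return the dictionary referenced by a packet, or None if unknown/expired."""
        now = time.monotonic()
        # Drop dictionaries whose grace period has elapsed.
        self._dicts = {
            did: (data, retired_at)
            for did, (data, retired_at) in self._dicts.items()
            if retired_at is None or now - retired_at < GRACE_PERIOD_S
        }
        entry = self._dicts.get(dictionary_id)
        return entry[0] if entry else None
```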
The local dictionary550may store the old dictionary for at least a period of time based on a lifetime of packets being sent (e.g., time to live (TTL) of the packets) after a new dictionary is loaded because the client may still receive data compressed with an old dictionary. In some examples, dictionaries might not fit inside an MTU because the size of an uncompressed dictionary may be larger than 100 kB. Therefore, dictionaries may be compressed before being sent to the client. For example, dictionaries may be compressed using static compression methods such as ZSTD and Zlib. The server dictionary negotiator320may send the compressed dictionaries and the metadata describing the compression of the dictionaries to the client dictionary negotiator330. Because dictionaries may be shared between different clients, the server210may store a copy of the compressed dictionaries in the dictionary archive530. Because the amount of data a game server desires to send to a client in a given data packet may be less than the MTU, dictionaries may be fragmented and the fragments of dictionaries may fill spare space in the data packets. For example, if the size of a data packet is 1500 bytes, and a server intends to send 1000 bytes of game data to a client, 500 bytes of dictionary data may be added to the data packet and sent to the client. In this way, new dictionaries may be gradually transmitted without interfering with game data. Additionally or alternatively, the server may upload the dictionary and/or dictionary-related information to external storage, such as a content delivery network (CDN), and may send a key to the client to retrieve the dictionary from the external storage. This allows the client to retrieve the dictionary based on its own schedule. Additionally or alternatively, a low-priority channel as part of the networking layer (e.g., an additional TCP connection) may be established and used between the server and the client. The low-priority channel may be used to transmit dictionaries and/or dictionary-related information. The dictionary archive530may store each dictionary produced by the dictionary training module310. In addition, the dictionary archive530may store information (e.g., timestamps) indicating when each dictionary was created and/or trained, information indicating which sample group the dictionary belongs to, the raw dictionary itself, a dictionary ID associated with the raw dictionary, a description of the compression settings that may be used with this dictionary (e.g., the LZ4 or ZSTD), a compressed version of the dictionary to send via the negotiator channel820, and/or a description of how the dictionary was compressed (e.g., the ZSTD). The dictionary archive530may comprise one or more retention policies that specify how long the dictionaries and the related information may be stored. In addition, because a worker connection's frame compressor may only use one dictionary at a time, the retention policy may specify what dictionaries may be stored and/or when to remove the dictionaries from the dictionary archive530. For example, the retention policy may state that: "Retain all dictionaries in use by at least one worker connection. Retain the latest dictionary for each sample group. Eventually remove all other dictionaries." If a client tries to decompress a data packet using the wrong dictionary, the client may fail to decompress the packet or may obtain the wrong packet contents.
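For illustration only, the spare-space packing described above (a 1500-byte packet already carrying 1000 bytes of game data leaves room for roughly 500 bytes of dictionary data) might be sketched as follows. The framing-overhead value and function names are assumptions; a real implementation would also carry fragment identifiers and offsets so the client can reassemble the dictionary.

```python
MTU = 1500                 # example packet size from the description above
HEADER_OVERHEAD = 8        # assumed bytes reserved for fragment bookkeeping

def fill_spare_space(game_payload, dictionary_bytes, cursor):
    """Hypothetical packetizer: append a dictionary fragment into the spare
    space of a packet that already carries game_payload bytes.

    cursor is the offset of the next un-sent dictionary byte; returns the
    packet body and the updated cursor."""
    spare = MTU - len(game_payload) - HEADER_OVERHEAD
    if spare <= 0 or cursor >= len(dictionary_bytes):
        return game_payload, cursor            # no room, or dictionary fully sent
    fragment = dictionary_bytes[cursor:cursor + spare]
    packet = game_payload + fragment           # real code would add framing fields
    return packet, cursor + len(fragment)

# Example: 1000 bytes of game data in a 1500-byte packet leaves spare space for
# dictionary data, so a 100 kB dictionary trickles out over a few hundred
# packets without interfering with game traffic.
```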
Because data packets may be reordered over the network, the dictionary archive530may store the old dictionary for a short period of time (e.g., a few seconds) during the transition period between the old dictionary and the new dictionary. The server may indicate with every packet which dictionary (e.g., by associating an ID with a dictionary) is in use. When a new dictionary for a sample group is added to the dictionary archive530, the dictionary archive530may need to compress the new dictionary and then notify all the server dictionary negotiators for that sample group that a new dictionary is available. FIG.9depicts an example event sequence for dictionary negotiation according to one or more illustrative aspects described herein. The server dictionary negotiator320may accept indications (e.g., prompts) from the dictionary archive530that a new dictionary (or an update to the existing dictionary) is available. At step901, the server dictionary negotiator320may update the existing dictionary based on the new dictionary. The server dictionary negotiator320may send a message to a server negotiator channel910(e.g., the negotiator channel820) indicating that a new dictionary is ready to be sent to the client. At step903, the server negotiator channel910may send dictionary metadata to a client negotiator channel920(e.g., the negotiator channel820). The implementation of the negotiator channels may be specific to the connection protocol (e.g., TCP) in use. The dictionary metadata may comprise information associated with the new dictionary such as the date and time the new dictionary was created and trained and/or the compression settings associated with the new dictionary. At step905, the server negotiator channel910may send one or more fragments of the new dictionary to the client negotiator channel920. Each fragment may be associated with a dictionary ID that uniquely identifies the new dictionary. In some examples, the new dictionary may be compressed based on one or more rules and each fragment may comprise one or more segments of the compressed new dictionary. The server negotiator channel910may send the dictionary metadata and all dictionary fragments to the client negotiator channel920and complete the dictionary transfer process. At step907, the client negotiator channel920may send the dictionary metadata and all dictionary fragments to the client dictionary negotiator330. For example, the client negotiator channel920may send the dictionary fragments to the client dictionary negotiator330until all dictionary fragments have been received. As another example, the client negotiator channel920may immediately transfer each individual fragment, as it is received, to the client dictionary negotiator330. At step909, the client dictionary negotiator330may receive the dictionary metadata and all dictionary fragments, and decompress the new dictionary based on the dictionary metadata. At step911, the client dictionary negotiator330may load the decompressed new dictionary into the local dictionaries550. At step913, once the client dictionary negotiator330loads the decompressed new dictionary into the local dictionaries550, the client dictionary negotiator330may send a message to the client negotiator channel920indicating that the new dictionary has been loaded and is ready to be used for decompressing future data frames. At step915, the client negotiator channel920may send the received message from the client dictionary negotiator330to the server negotiator channel910.
At step917, the server negotiator channel910may send the received message from the client negotiator channel920to the server dictionary negotiator320. In this way, the server may be informed that the client is able to decompress data frames based on the new dictionary. FIG.10is a flow chart of an example method for dynamic dictionary-based compression according to one or more illustrative aspects described herein. Steps of the method may comprise performing training of a compression dictionary on live data (e.g., live game data) while a session is running. Steps of the method may also distinguish between the collection of data samples and the training of the dictionary. Further, steps of the method may group clients (e.g., the client220) that have high data affinity to share training samples. The description ofFIG.10includes examples of computing devices that may perform various steps. However, any or all of those steps (and/or other steps) may be performed by one or more other computing devices. One or more steps may be combined, sub-divided, omitted, or otherwise modified, and/or added to other steps. The order of the steps may be modified. At step1001, a computing device (e.g., the server210) may receive, during a currently running session, a plurality of messages. The plurality of messages may be associated with changes to one or more states. The plurality of messages may comprise worker protocol messages that indicate information (e.g., states maintained by the computing device) related to a visual simulated world (e.g., a game world). For example, one or more messages may comprise information related to the positions of players in a video game. The computing device may receive the plurality of messages from one or more servers and/or client devices (e.g., the client220). The currently running session may comprise a duration of time during which an individual user (e.g., player) stays connected to a game server without being asked to connect or reconnect to the game server. The currently running session may end when the user is disconnected from the game server or the user is asked to join or rejoin a game. At step1003, the computing device may determine, based on the plurality of messages, one or more frames. Each frame may comprise at least one of the plurality of messages. Each frame may be included in a data packet to be sent to one or more client devices. For example, the computing device may determine the one or more frames based on a time period within the currently running session. Each frame may collect all the messages received during the time period within the currently running session. The time period may be predetermined by the computing device and may be adjusted in real-time based on the nature of the game and the number of players that are currently involved in the game. At step1005, the computing device may determine, based on the one or more frames, data samples. The computing device may select, during the currently running session, data samples from the frames. For example, the computing device may group the determined one or more frames based on one or more common characteristics associated with the plurality of messages, and determine, based on the grouped frames, the data samples. The one or more common characteristics may comprise information indicating locations associated with clients (e.g., the players in a game). 
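A minimal sketch of the frame determination and sample grouping described in steps 1001 through 1005 is given below, assuming a simple in-memory representation of messages; the window length, the characteristic callback (e.g. a player's location) and the function names are illustrative assumptions.

```python
from collections import defaultdict

def build_frames(messages, window_s):
    """Group worker protocol messages into frames, one frame per time window.

    messages: iterable of (timestamp_s, payload_bytes) pairs received during
    the currently running session (a hypothetical representation)."""
    frames = defaultdict(list)
    for ts, payload in messages:
        frames[int(ts // window_s)].append(payload)
    # Concatenate each window's messages into a single frame payload.
    return [b"".join(parts) for _, parts in sorted(frames.items())]

def select_samples(frames, characteristic):
    """Group frames by a common characteristic (e.g. player location) and
    return the grouped samples for dictionary training."""
    samples = defaultdict(list)
    for frame in frames:
        samples[characteristic(frame)].append(frame)
    return samples
```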
The computing device may also group the determined one or more frames based on other indicators associated with the clients (e.g., player characters' levels, game session start time, whether the players belong to the same team). At step1007, the computing device may compress the one or more frames based on a compression dictionary. The computing device may store a plurality of dictionaries for compressing frames. Each dictionary may be used to compress a corresponding set of data samples (e.g., data samples collected from one region of the simulated world). In some examples, one or more client devices (e.g., clients belong to the same group) may share a dictionary for compressing frames associated with the respective client device. At step1009, the computing device may train, during the currently running session, the compression dictionary based on the determined data samples. Based on the training of the dictionary, the computing device may generate a new dictionary (e.g., update the existing or previous dictionary). The computing device may periodically train and/or retrain the compression dictionary based on the most recent data samples determined during the currently running session. Because the data samples are determined during the same session that the dictionary is trained, newly received messages during the session may be compressed using the trained and retrained dictionaries. At step1011, the computing device may determine, during the currently running session and based on receiving additional messages, one or more additional frames. The additional frames may comprise one or more additional messages. Each frame may be included in a data packet to be sent to one or more client devices. At step1013, the computing device may compress the one or more additional frames based on the trained compression dictionary (e.g., the new compression dictionary). The trained compression dictionary may be generated during the currently running session. The computing device may constantly use the newly generated or updated dictionary to compress the future frames and/or frames that are just received but not yet compressed. In some examples, the computing device may send the trained compression dictionary to the client devices. In response to receiving a message that the trained compression dictionary has been received by the client devices, the computing device may start compressing the one or more additional frames based on the trained compression dictionary. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as illustrative forms of implementing the claims. | 55,029 |
11863208 | The accompanying drawings illustrate various examples. The skilled person will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the drawings represent one example of the boundaries. It may be that in some examples, one element may be designed as multiple elements or that multiple elements may be designed as one element. Common reference numerals are used throughout the figures, where appropriate, to indicate similar features. DETAILED DESCRIPTION The following description is presented by way of example to enable a person skilled in the art to make and use the invention. The present invention is not limited to the embodiments described herein and various modifications to the disclosed embodiments will be apparent to those skilled in the art. Embodiments will now be described by way of example only. As described above, an array of weights for a neural network (e.g. a convolutional NN) can be very large and as these are stored in memory, rather than a local cache, a significant amount of system bandwidth is used at run time to read in the weight data (e.g. 50% of the system bandwidth in some examples). In order to reduce the amount of bandwidth that is used, the weights may be stored in a compressed form and then decompressed prior to use (and after having been read from memory). Described herein is an improved method of data compression that involves interleaving the compressed data in such a way that the decompression can be performed efficiently (e.g. a reduced area of hardware is needed to perform the decompression and the decompression process has reduced latency and power consumption). Whilst the methods of compression and decompression that are described herein may be used for an array of weights for a neural network (e.g. a convolutional NN), the methods may also be applied to other data. In various examples the methods may be applied to any multi-dimensional array of data, including, but not limited to, image data, voice data, etc.; however the methods are also applicable to 1D data. In various examples, the methods described herein may be used to compress the data (i.e. the results) output by a layer within a neural network (e.g. a layer within a convolutional NN). This then provides a saving in system bandwidth when the data is subsequently read in (as input data) to the next layer in the neural network. As described in detail below, the data compression method comprises encoding groups of data items (e.g. binary numbers) using an encoding method that generates header data for each group and, except where a group comprises only zeros, body data for each group. In the examples described herein, the header data for each group comprises a fixed number of bits (e.g. h bits for each group, where h is fixed), whereas the body data may differ in size for different groups (e.g. B bits of body data for each group, where B is variable) and in an extreme case there may be no body data for a group (e.g. B=0). If each input data item comprises n bits and each group comprises N numbers, the input data for a group (i.e. the uncompressed data) comprises n*N bits whereas the compressed data for the group comprises h+B bits, and if the compression is successful, (h+B)<(n*N). Like any other data compression method, there may be a few cases where no compression is possible (in which case (h+B) may be larger than (n*N)). In some examples these cases may be identified and the data items stored in their original format.
Alternatively, as compression is still achieved (on average) when looking across many groups (e.g. across weights for a layer within a convolutional NN or other type of NN), the lack of compression in rare isolated groups may be accommodated. The body data for a group comprises the same number of bits, b bits, of body data for each number in the group (where b≤n and B=b*N), but the number of body bits may differ between groups and as detailed above, in an extreme case there may be no body bits for a particular group (e.g. b=B=0). The data compression method described herein further comprises packing the compressed body data for a group into a body data field in a data block. In this example, the body data field for a group comprises interleaved bits from the body data for each data item in the group. For example, the body data field comprises the least significant bit (LSB) of body data for each data item of the group, followed by the next least significant bit of body data for each data item of the group, etc., until all body bits have been packed into the body data field. In various examples the data block comprises body data fields for each of a plurality of groups and the header bits may be included in a separate data block (that comprises header bits from each of the groups in the plurality of groups). Alternatively a data block may comprise the body data field and the header bits for the same group (e.g. alternating headers and body data fields or a group of headers followed by a group of corresponding body data fields). Whilst counter-intuitive, by interleaving the body bits as described herein when packing the compressed data for a group into a data block, the decompression is made more efficient and less complex. In particular, the body bits that are required for decompressing any weight (b bits, where b varies between groups) are in the same place in the compressed data block for any value of b. For example, if bit0of word 0 is at position x, bit1of word 0 is at position x+N, bit2of word 0 is at position x+2N, etc., or more generally, bit J of word K is found at bit position x+(J*N)+K where K∈[0, N−1] and x is the starting position of the body data field (i.e. the position of bit0of word 0). These fixed positions within the data block reduce the amount of hardware that is required to perform the decompression (e.g. a multiplexer that would otherwise be required is no longer needed and as a result the decompression hardware is smaller and consumes less power). FIG.1Ais a flow diagram of an improved data compression method andFIG.1Bis a flow diagram of the corresponding data decompression method. As described above, these methods may be used for arrays of weights for a neural network (e.g. a convolutional NN) or for any other data, including other arrays of multi-dimensional data. Where the data comprises an array of weights for a neural network (e.g. a convolutional NN), the data that is compressed may be the entirety of the weight (e.g. where it is in fixed point format) or, where the weight is in a floating point format with all weights having the same exponent, the data that is compressed may be the n-bit mantissas of the weights. Any reference to a 'weight' in the following description may refer either to the entire weight or to the mantissas of the weights where the weights have common exponents. As shown inFIG.1A, the data compression method comprises receiving the input data (block102), e.g.
the weights for a convolutional NN, image data or other multi-dimensional data (e.g. as shown graphically inFIG.9). The input data may be received (in block102) in chunks of data or as an input stream of data items. As described below, the data compression method operates on groups of data items from the input data. In some examples, the input data may be pre-processed (block103) and this is described in more detail below with reference toFIGS.8A-8C. This pre-processing, where implemented, operates on a plurality of data items, e.g. on all the input data or on a subset of the input data (e.g. a chunk or group of input data, where a chunk of input data may be the same as, or different from, a group of data items). The input data (i.e. either pre-processed input data or the original input data) is then encoded (block104) using a method that generates, for a group of input data items (e.g. binary numbers), a header for the group and in most cases body data for each of the data items, although in extreme cases where all the data items are zero, there will be no body data. The encoding operates on groups of data items, although it will be appreciated that in various examples, multiple groups of data items may be encoded in parallel. As detailed above, the header has a fixed size for all groups (e.g. h bits for each group, where h is fixed) and the body data for each of the data items is the same for all data items within a group (e.g. b bits of body data per data item, where 0≤b≤n) but may differ between groups (e.g. b is fixed within a group but is not fixed between groups). The header comprises an indication of the size of the body portion for each data item (e.g. the value of b) and in some examples the header may indicate that the size of the body portion is zero (b=0) in which case there are no body portions for the data items in the group and hence no body data field for the group. Having encoded a group of data items (in block104), the body bits for the group are packed into a body data field in a data block by interleaving the body bits for each of the data items in the group (block106, as described below with reference toFIG.2). The resulting body data block, which may comprise body data fields from multiple groups, is then stored or otherwise output (block108). The header bits may be packed into a separate data block (e.g. into a header field for the group) and the resulting header data block, which may comprise header fields from multiple groups, is then stored or otherwise output (block109). When storing the data blocks (in blocks108and109), they may be byte aligned rather than necessarily being stored adjacent to the immediately previously stored data block. This simplifies addressing given that the size of a data block varies between groups of data items. In various examples the bit depth of the data items, n (which is fixed), is between 4 and 16 and in various examples n=8 or n=16. In various examples the number of data items in a group, N, is 4 or 8. In examples where N=8, this may result in less complex hardware than where N=4. This is because the multiplexing logic becomes less complex as N increases because the number of possible starting positions of a new body data field within a body data block is reduced.
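As an illustration of the interleaved packing of block106, the following sketch (function and variable names are illustrative) packs the b-bit body portions of one group LSB-first, so that, as noted above, bit J of word K lands at position J*N+K of the body data field.

```python
# A minimal sketch of packing one group's body bits into a body data field:
# the LSBs of all N body portions first, then the next bit of each, so bit J
# of word K lands at position J*N + K. Names are illustrative only.
def interleave_group(body_portions, b):
    """body_portions: list of N integers, each already truncated to b bits."""
    bits = []
    for j in range(b):                      # bit index within each body portion
        for word in body_portions:          # word index K within the group
            bits.append((word >> j) & 1)    # LSB-first packing
    return bits                             # b*N bits of the body data field

# Example: with N=4 and b=3, interleave_group([0b101, 0b011, 0b000, 0b110], 3)
# yields [1,1,0,0, 0,1,0,1, 1,0,0,1], i.e. bit0 of words 0..3, then bit1, then bit2.
```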
For example, if N=4, each group is guaranteed to start on every 4th bit and cannot start on any of the bit positions between, whereas if N=8, each group can only start every 8th bit and this removes layers from a multiplexing tree because the tree does not need the ability to select any of the intervening values. In contrast, by using smaller values of N (i.e. fewer data items in a group), the amount of compression that may be achieved may be increased (i.e. the body portion size may be reduced); however, the number of encoded groups of data items, and hence header fields, is increased and this may outweigh any benefits achieved as a consequence of the smaller body data field. Therefore there is a trade-off to consider when deciding what value to use for N. FIG.2is a schematic diagram that shows an example of the encoding and interleaving operations (blocks104and106) fromFIG.1Afor N=8 (i.e. N data items per group). As shown inFIG.2, a group of data items202from the input data comprises 8 data items (N=8)204. The encoding operation (in block104) generates, from the 8 data items, one header206(comprising h-bits) for the group and if b>0, one body portion208for each data item in the group (as noted above, if b=0 then there are no body portions for the data items in the group). The header206comprises an indication of the size of the body portion208for each data item (e.g. the indication could represent the value of b, or in some examples the indication could represent the value of B where the value of b can easily be determined from B, as b=B/N). Each body portion208comprises b-bits and in the example shown inFIG.2, b=8. If there are body portions (i.e. b>0), bits from the body data210for each of the data items are then interleaved (in block106). As shown inFIG.2, the interleaving forms a body data field212by first adding one bit from each body portion208, then adding a next bit from each body portion etc. In the example shown, the least significant bit (LSBs) of body data for each data item is first added (bits A0, B0, C0, D0, E0, F0, G0, H0) followed by the next least significant bit of body data for each data item of the group (bits A1, B1, C1, D1, E1, F1, G1, H1), etc., until all body bits have been packed into the body data field. As shown inFIG.2, the last bit in the body data field is the most significant bit (MSB) of the body data for the last data item in the group (bit H7). In other examples, the MSBs (bits A7, B7, C7, D7, E7, F7, G7, H7) may be packed first, followed by the next most significant bit, etc. until all the bits of body data210have been packed into the body data field212. Having generated the body data field212by interleaving (in block106), the body data field is packed into a body data block214and the header206is packed into a header data block216. By storing the headers and body data fields in different data blocks216,214, the decompression operation is made less complex. Within the header data block, the location of the start of each header is fixed (because the header size is the same for all groups) and so the headers can be read easily to determine the offsets for the starting positions of each body data field212within the body data block214. In various examples, instead of storing the header and body data fields in separate data blocks, a data block may comprise K headers (i.e. the headers206for K groups) followed by the corresponding K body data fields. By selecting K such that K*h has the same bit alignment properties as the body data fields, e.g. 
K*h=0 mod N, the alignment of bits within the resultant data block also reduces the complexity of the decompression operation. Whilst the description above refers to interleaving bits starting with the LSB of body data for each data item in the group, in other examples, the interleaving may start with the MSB of body data for each data item in the group. The selection of whether to start with the LSB or MSB depends on the encoding scheme that is used (in block104). Where an encoding scheme as described below with reference toFIG.4is used, starting with the LSB is most appropriate. However, if the encoding scheme is a lossy compression scheme that removes one or more LSBs, then interleaving starting with the MSB of body data may be used instead. Similarly, if the decoding scheme uses online arithmetic (instead of binary multipliers and adders, as is the case in the examples described below), interleaving starting with the MSB may be used because online arithmetic performs calculations starting with the MSB. The interleaving that is performed on the body bits (in block106) reduces the complexity of the decompression operation and this can be described with reference toFIGS.1B and3. FIG.1Bis a flow diagram of the data decompression method that corresponds to the data compression method shown inFIG.1Aand described above. As shown inFIG.1B, the method comprises receiving encoded data, where, as described above, the original items are encoded by representing groups of data items with a header and none, one or more body data fields. In various examples, receiving this data may comprise receiving blocks of header data216(block110) and blocks of body data214(block111). These may, for example, be received as two parallel streams and may be read and buffered independently (blocks112and114). Alternatively the header data and body data may be received together (e.g. where the data is stored in the same data block, as described above). As part of the decode operation (block116), the header for a group is processed and this indicates the amount of body data that is required to decode the group of data items (i.e. the size of the body data field for the group, B=b*N). The corresponding amount of data can then be extracted from the buffer (of body data) and as a consequence of the fixed bit positions (due to the interleaving operation, as described above), the bits can be shuffled using a fixed scheme (irrespective of the value of b) to re-create the original body portions for each data item in the group without requiring any multiplexer logic. Having recreated the body portions, the body portions are decoded. The fixed relationship between the bit positions in the body data field and the corresponding bit positions in the original data items, irrespective of the number of body bits for each data item in the group (i.e. irrespective of the value of b, which may vary between groups) is shown graphically inFIG.3for two different sizes of body data field302(b=5),304(b=3), and in this example, to reduce the complexity of the diagram, the number of data items in a group is four (N=4). As shown inFIG.3, irrespective of the size of the body data field (and hence the number of bits in each body portion, b), the first N (i.e.4) bits in the body data field302,304comprise a single bit for each of the body portions, denoted A′-D′. In the example shown, the first N bits comprise the LSBs for each of the body portions. 
The next N bits in the body data field302,304comprise the next bit for each of the body portions, again irrespective of the size of the body data field. Consequently, by reading the concatenated sections310-312each comprising N bits in turn and building up the body portions one bit per section, until a point is reached where there are no further bits in the body data field302,304(as extracted from the buffer) and at that stage, all the bits of the body portions A′-D′ have been identified and deinterleaved. Once the decoded data block has been generated (in block116), there is an optional post-processing operation (block117) which is an inverse of the optional pre-processing operation (block103) in the compression method (as shown inFIG.1A). The decoded data, either in its original or post-processed form, is then output (block118). In various examples, the decoded data may be used immediately. In various examples, the decoded data may not be buffered or stored in a cache because of its large size and instead the decompression may be performed each time the data items are used. However, in some other examples at least some of the decoded data may be stored, e.g. in a cache. The encoding operation (in block104) may use any suitable encoding scheme that generates a fixed size header206for a group of data items and a body portion for each data item in the group, where the size of the body portion is the same for all data items within a group but may be different for other groups of data items. An example of such an encoding scheme is shown inFIG.4. FIG.4is a flow diagram of an example encoding method that operates on groups of data items and the method may be described with reference to the example shown inFIG.5. The encoding method receives a group of data items502(block402), for example, 8 data items504(N=8), denoted A-H as shown inFIG.5. In the example shown inFIG.5, each of the data items504comprises 8 bits (n=8). The optimum size of body portion (i.e. the optimum value of b, bopt) is then identified by identifying the most significant leading one across the group of data items (block404). The most significant leading one may, for example, be identified by determining the bit position of the leading one in each data item (where the bit positions may be identified by the bit index 0-7, as shown inFIG.5) and comparing these values to identify the highest bit index. The optimum size of body portion is one more than the highest bit index, in examples where the LSB has a bit index of zero (as shown inFIG.5). Alternatively, the bit position of the most significant leading one across the group of data items may be identified in any other way (in block404). In the example shown inFIG.5, data item A has the leading one in bit position 4, as do data items E and H. All other data items in the group have their leading ones in lower bit positions (i.e. less significant bit positions). Consequently, in the example shown inFIG.5, the optimum body portion size is 5 bits (bopt=5). If all of the data items only comprise zeros, then the optimum body portion size is also zero (bopt=0). Having identified an optimum body portion size (in block404) and in examples where all body portion sizes from zero to n, i.e. 
n+1 sizes, can be encoded within the h-bits of the header, the method may continue by generating a header comprising a bit sequence that encodes the identified optimum body portion size (block408) and truncating each data item to create a corresponding body portion by removing none, one or more leading zeros from the data item, until the body portion has the optimum body portion size (i.e. n-boptleading zeros are removed such that the resulting body portion comprises boptbits). If the optimum body portion size is zero (bopt=0) and then n leading zeros are removed and there are no remaining body bits. In various examples, a look up table may be used to identify the bit sequence used for a particular body portion size or the body portion size may be included in the header as a binary number. In various examples, however, the size of the header (i.e. the number of bits, h, in the header) may be insufficient to identify all of the possible body portion sizes, i.e. n+1 sizes. In particular, this may occur where the bit depth of the input data (i.e. the value of n) is a power of two. Referring to the example inFIG.5, there are nine possible body portion sizes since n=8 (i.e. body portion sizes of 0, 1, 2, 3, 4, 5, 6, 7, 8) and if the header only comprises three bits (h=3), then only eight body portion sizes can be represented in the header bits (using binary values 0-7) and hence there are only eight valid body portion sizes. In other examples, more than one body portion size may be considered invalid in order to reduce the overall header size. For example, if n=9 and h=3 then two body portion sizes may be considered invalid. In examples where one or more body portion sizes are not valid, having determined the optimum body portion size (in block404), the method checks whether the optimum body portion size, bopt, is valid (block406). If the optimum body portion size is valid (Yes' in block406), then the method continues by encoding that valid optimum body portion size into the header (in block408) and truncating each data item to create a corresponding body portion by removing none, one or more leading zeros from the data item, until the body portion has the optimum body portion size (i.e. n-boptleading zeros are removed such that the resulting body portion comprises boptbits). If, however, the optimum body portion size is not valid (No′ in block406), the next largest valid body portion size, bvalid, is selected (block407). The method then continues by encoding that valid body portion size into the header (in block408) instead of the optimum body portion size and truncating each data item to create a corresponding body portion by removing none, one or more leading zeros from the data item, until the body portion has the valid body portion size (i.e. n-bvalidleading zeros are removed such that the resulting body portion comprises bvalidbits). Again, a look up table may be used to identify the bit sequence used for the selected valid body portion size (which may be equal to the optimum body portion size, bopt, or the next larger valid body portion size, bvalid), and two example look up tables are shown inFIGS.6A and6B. In the example look up table shown inFIG.6A, the body portion size which is omitted, and hence is not considered valid, is three and in the example look up table shown inFIG.6B, the body portion size which is omitted, and hence is not considered valid, is five. The body portion size that is considered invalid may be chosen for omission based on analysis of the input data (e.g. 
on many or all groups of data items) and this is shown as an optional initial analysis step inFIG.4(block401). The analysis (in block401) determines, based on a plurality of groups of data items, which optimum body portion size is least common and then allocates header values to body portion sizes omitting that least common size. In various examples, where the input data comprises weights for a neural network (e.g. a convolutional NN), the omitted body portion size may be identified separately for each layer of the NN based on analysis of the weights which are most commonly used for that layer (in block401) and in particular based on the least common position of a leading one across all the weights (or across all the n-bit mantissas of the weights) for the particular layer of the NN. In this way, different layers may have different body portion sizes that are considered invalid. For example, one layer of a NN may have three as the invalid body portion size (as inFIG.6A) and another layer of the same NN may have five as the invalid body portion size (as inFIG.6B). As noted above, in various examples there may be more than one invalid body portion size and these multiple invalid body portion sizes may be selected based on the least common positions of leading ones across the input data (i.e. the least common optimum body sizes for the groups of data items), e.g. across all the weights (or mantissas of the weights) for a particular layer of a NN, with the analysis being performed independently for each layer of the NN. Referring again to the example group of data items502shown inFIG.5, as detailed above the optimum body portion size is 5-bits (bopt=5). If the look up table ofFIG.6Ais used, this is a valid body portion size (Yes' in block406) and the bit sequence ‘100’ is included within the header (in block408). Additionally, each of the data items are truncated by removing three leading zeros (in block410) to form the corresponding body portion. If, however, the look up table ofFIG.6Bis used, a body portion size of 5-bits is not valid (No′ in block406) and hence the next largest valid body portion size would be used instead, i.e. a body portion size of six. In this example, the bit sequence ‘101’ is included within the header (in block408) and each of the data items are truncated by removing two leading zeros (in block410) to form the corresponding body portions. This means that there is a leading zero in all of the body portions as a consequence of not using the optimum body portion size. In the first of these examples, where the table ofFIG.6Ais used, the resultant encoded data block comprises 3+(8*5)=43 bits, i.e. 3 header bits and 40 body bits (5 for each body portion). In contrast, where the table ofFIG.6Bis used, the resultant encoded data block comprises 3+(8*6)=51 bits, i.e. 3 header bits and 48 body bits (6 for each body portion). Assuming that many data blocks are encoded and that the optimum body portion size is rarely invalid, the additional N bits (which are all leading zeros) included in the body portion on those rare occasions (when the optimum body portion size is invalid), will still result in a smaller overall amount of encoded data than an alternative solution of increasing every header by one bit so that all optimum body portion sizes can be validly encoded within the header. 
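A minimal sketch of the FIG.4 encoding for one group is given below, using a FIG.6A-style look-up table for n=8 and h=3 in which body portion size 3 is the omitted (invalid) size; the table contents, function names and example values are illustrative assumptions.

```python
# Minimal sketch of the FIG. 4 encoding for one group (names illustrative).
# VALID_SIZES mirrors a FIG. 6A-style look-up table for n=8, h=3: body size 3
# is omitted, so the nine possible sizes fit into eight 3-bit header codes.
VALID_SIZES = [0, 1, 2, 4, 5, 6, 7, 8]          # header code = index into this list

def encode_group(items, n=8):
    """items: group of N unsigned n-bit integers. Returns (header_code, b, bodies)."""
    b_opt = max(item.bit_length() for item in items)   # most significant leading one
    # Fall back to the next largest valid size if b_opt is not in the table.
    b = next(size for size in VALID_SIZES if size >= b_opt)
    header_code = VALID_SIZES.index(b)
    bodies = [item & ((1 << b) - 1) for item in items]  # drop the n-b leading zeros
    return header_code, b, bodies

# Example (cf. FIG. 5): a group whose largest value is 0b00010110 has its
# leading one at bit position 4, so b_opt = 5; 5 is valid under FIG. 6A
# (header code 4, i.e. '100'), whereas under FIG. 6B it would be invalid and
# b = 6 (header code '101') would be used instead.
```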
In examples where the encoding method ofFIG.4is used, the corresponding decoding method that may be used in the data decompression method ofFIG.1B(in block116), including the fixed-pattern de-interleaving may be as shown inFIG.7A. As shown inFIG.7A, the method comprises processing the header to determine the size of the body data field, b*N (block702) and reading the corresponding amount of data from the body data buffer (block704). Using the fixed relationship (as described above), the data items can then be generated by starting with a set of data Items comprising only zeros (i.e. n zeros in each data item) and overwriting zeroes in each data item (starting with the LSB) with appropriate bits from the body data field (block706) and once all the bits that have been read from the buffer (in block704) have been used to overwrite zeros (in block706), the resultant decoded data items are output (block708). The decoding method ofFIG.7Ais shown graphically inFIG.7Bwhich is a variation onFIG.3(described above) and again the fixed relationship between the bit positions in the body data field and the corresponding bit positions in the original data items, irrespective of the number of body bits for each data item in the group (i.e. irrespective of the value of b, which may vary between groups) is shown for two different sizes of body data field302(b=5),304(b=3). In this example, to reduce the complexity of the diagram, the number of data items in a group is four (N=4) and the number of bits in each decoded data item is six (n=6). As shown inFIG.7B, irrespective of the size of the body data field (and hence the number of bits in each body portion, b), the first N (i.e. 4) bits in the body data field302,304comprise a single bit for each of the data items. In the example shown, the first N bits comprise the LSBs for each of the data items and these are used to overwrite the zeros that initially occupy those LSBs. The next N bits in the body data field302,304comprise the next bit for each of the data items, again irrespective of the size of the body data field, and these are used to overwrite the next bit in each of the data items. Consequently, by reading the concatenated sections310-312each comprising N bits in turn and overwriting zeros, one zero (for each data item) per N-bit section of the data read from the buffer, until all the bits that have been read have been used, the data items are recreated (i.e. both deinterleaved and decoded). As shown inFIG.7B, by pre-populating each bit in each data item with a zero (given that the number of bits, n, in each data item is fixed) and then replacing these zeros with values from the body data, there is no need to first recreate the body portions and then, in a separate operation, pad each body portion with the requisite number of leading zeros (as determined based on the header for the group). An alternative to the method ofFIG.7Ais shown inFIG.7Cand can be described with reference to the example shown inFIG.7D(for b=3, N=4, n=6). As shown inFIG.7C, the header data is processed to determine the size of the body data field, b*N (block702) and this is then used both to read the corresponding amount of data from the body data buffer (block704) and to generate a body data mask710(block705). The body data mask710may comprise ones in positions where valid data can be stored and zeros in all other bit positions and an example is shown inFIG.7D. 
The body bits are extracted using the fixed relationship (as described above), and the data items can be generated using the body data mask710and the body bits (block707). In various examples, AND gates may be used to combine the body bits and the body mask bits. As before, the resultant decoded data items are output (block708). The encoding scheme described above with reference toFIGS.4,5,6A and6Bresults in high levels of compression where the data items are small in value (e.g. close to zero) and where the binary representations of each of the data items are similar in terms of the position of the leading ones. For example, if a group of data items was: 01000000, 00000101, 00000100, 00000010, then although a body portion size of 7 bits could be used, the resulting amount of compression is much less than if the group of data items was: 00000111, 00000101, 00000100, 00000010, when a body portion size of 3 bits could be used. In various examples, such as where the data items are weights for a NN (or the mantissas of those weights), the distribution of the data items may be centred around (or close to) zero, as shown inFIG.8A. This distribution may, for example, be Gaussian or Laplacian. In such examples, if the data items are represented using two's complement, then the binary strings representing the negative values all have an MSB which is a one and so the encoding method described above cannot remove any leading zeros (e.g. bopt=n) and there is no compression. To improve compression or enable compression (e.g. in the case of two's complement representation), the data items are pre-processed (in block103ofFIG.1A). The pre-processing operation comprises folding and interleaving the data items so that they are all positive and the distribution is a curve of almost continuously decreasing probability, as shown inFIG.8B. The folding and interleaving operation can be written mathematically as follows:
symbol = (-2*coeff)-1, if coeff<0
symbol = 2*coeff, otherwise
where 'coeff' is the original input data item and 'symbol' is the pre-processed data item. Assuming the coefficients are represented in two's complement format, a multiplication by two can be implemented by left shifting by one bit position, and so this may be implemented in hardware (or software) as:
symbol := isignbitset ? (((NOT coeff)<<1) OR 1) : coeff<<1;
or
symbol := isignbitset ? NOT (coeff<<1) : coeff<<1;
where NOT and OR are bitwise operators and <<1 indicates left shifting by one bit position. This essentially converts the data item to a sign magnitude-like format but places the sign bit as LSB. It will therefore be appreciated that if the data items are originally in sign magnitude format, the pre-processing operation may be modified such that it comprises moving the sign bit from the MSB to the LSB. In various examples, such as for Android NN formatted weights, the data items may not be centred around (or close to) zero and an example distribution is shown inFIG.8C. In such examples, the offset is subtracted from the input data items as part of the pre-processing operation (in block103), prior to interleaving and folding, such that a distribution similar to that shown inFIG.8Bis still achieved.
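A minimal sketch of the fold-and-interleave pre-processing is given below, showing both the arithmetic form and the shift/NOT form set out above; the function names are illustrative and the two's complement width n is passed as a parameter.

```python
# Minimal sketch of the fold-and-interleave pre-processing (block 103) for
# signed inputs, following the formulas above; names are illustrative.
def fold(coeff):
    """Map a signed value to a non-negative symbol: ..., -2, -1, 0, 1, 2, ...
    becomes ..., 3, 1, 0, 2, 4, ... so small magnitudes stay small."""
    return (-2 * coeff) - 1 if coeff < 0 else 2 * coeff

def fold_shift(coeff, n):
    """Equivalent bitwise form for an n-bit two's complement coefficient."""
    mask = (1 << n) - 1
    sign_set = (coeff >> (n - 1)) & 1
    shifted = (coeff << 1) & mask
    return (~shifted) & mask if sign_set else shifted

# Example: for n=8, fold(-3) == 5 and fold_shift(0b11111101, 8) == 5, i.e. the
# sign bit ends up as the LSB of a sign-magnitude-like symbol.
```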
An example of the folding and interleaving operation can be written as follows:
offset_coeff = mod(coeff-offset, 2^n)
symbol = (-2*offset_coeff)-1, if offset_coeff<0
symbol = 2*offset_coeff, otherwise
or
symbol := isignbitset ? (((NOT offset_coeff)<<1) OR 1) : offset_coeff<<1;
where the function 'mod' computes the remainder of the division when the division is rounded towards negative infinity and in various examples may be equivalent to discarding the MSB. As noted above, multiplying by two can be implemented by left shifting by one bit position. Wherever a pre-processing operation is applied to the original input data items (in block103ofFIG.1A), the inverse operation is applied to the decoded data items before they are output (in block117ofFIG.1B). This post-processing operation (in block117) may therefore comprise an unfolding and de-interleaving operation and optionally the addition of an offset (e.g. so as to reset the distribution from that shown inFIG.8Bto the original distribution, e.g. as shown inFIG.8A or8C). Referring to the two examples given above, where the distribution is centred around zero and so no offset is involved:
coeff = -0.5*(symbol+1), if odd(symbol)
coeff = 0.5*symbol, otherwise
And where the distribution is offset:
offset_coeff = -0.5*(symbol+1), if odd(symbol)
offset_coeff = 0.5*symbol, otherwise
coeff = mod(offset_coeff+offset, 2^n)
where in both cases, the halving operation may be performed by right shifting by one bit position. The offset may be communicated separately from the encoded data (e.g. in a command stream) or in any other way (e.g. via a register interface). In the examples described above, the header has a fixed size for all groups (e.g. h bits for each group, where h is fixed). The size of the header data, as a fraction of the overall compressed data size, is small (e.g. half a bit per input data value) and so compression of the header bits is not used. In a variation of the examples described above, however, variable length encoding (e.g. Huffman encoding) may be used for the header data. This may, for example, be used where the header data is biased. The data compression that may be achieved using the methods described herein may be further improved by increasing the number of data items that comprise only zeros. In examples where the data items are NN weights, this may be achieved by using pruning when training the NN. The data compression that may be achieved using the methods described herein may be further improved by grouping correlated data items, i.e. data items that are likely to have a similar value and hence a similar position of the leading one. For example, by grouping N data items with a leading one in bit positions 6 and 7 and grouping N data items with a leading one in bit positions 3 and 4, more compression can be achieved than if both groups of data items comprise a mixture of data items with leading ones in bit positions 3, 4, 6 and 7. This may be achieved, for example, by changing the way that the multi-dimensional input data, shown graphically inFIG.9, is divided into groups. In examples where the data items are NN weights, this may be achieved by grouping weights that relate to the same plane (e.g. weights for different x, y values but the same z value,902) instead of grouping weights that relate to different planes (e.g. weights for the same x,y value but different z values,904). The data compression and data decompression methods described herein may be implemented in software, in hardware or in a combination of software and hardware.
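The corresponding unfold post-processing (block117), including the offset variant, might be sketched as follows; the function names are illustrative and the halving is performed by a right shift as noted above.

```python
# Minimal sketch of the inverse (unfold) post-processing in block 117,
# including the offset variant; names are illustrative.
def unfold(symbol):
    """Invert fold(): odd symbols map back to negative values."""
    return -((symbol + 1) >> 1) if symbol & 1 else symbol >> 1

def unfold_with_offset(symbol, offset, n):
    """Invert the offset variant: unfold, then add the offset modulo 2**n."""
    return (unfold(symbol) + offset) % (1 << n)

# Example: unfold(5) == -3 and, with offset=10 and n=8,
# unfold_with_offset(5, 10, 8) == 7, reversing the pre-processing above.
```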
In various examples, the data compression method described herein may be implemented in software when mapping a NN to particular hardware (or a particular hardware type) and this is often a one-time operation. In contrast, the data decompression method described herein may be implemented at run time and may be performed many times, e.g. whenever the data items are used. FIG.10shows a schematic diagram of a data compression apparatus1000arranged to implement the method ofFIG.1A. As shown inFIG.10, the data compression apparatus1000comprises an input1001, an encoding module1002, an interleaving module1004, a memory interface1006and an output1007. The apparatus1000may additionally comprise a pre-processing module1008. The encoding module1002is arranged to perform the encoding of the data items (as in block104) using any suitable encoding method that generates a fixed size header for a group of data items and a body portion for each data item in a group (unless the body portion size, b, is zero). In various examples the encoding module1002is arranged to implement the method ofFIG.4. The interleaving module1004is arranged to interleave the body bits into the body data field (as in block106), e.g. as described above with reference toFIG.2. The memory interface1006is arranged to output the encoded data, via the output1007for storage in memory (as in block108). Where provided, the pre-processing module1008is arranged to fold and interleave the input data items, and optionally to subtract an offset from them (in block103), as described above with reference toFIGS.8A-8C. In various examples, the encoding module1002, interleaving module1004and, where provided, the pre-processing module1008, may be implemented in software. FIG.11shows a schematic diagram of a data decompression apparatus1100arranged to implement the method ofFIG.1B. The data decompression apparatus1100comprises a plurality of inputs1101-1103and a plurality of read modules1104-1106each arranged to read a different type of data: header data (in the header read module1105via input1102), body data (in the body read module1106via input1103) and in examples where a bias is used in the NN, a bias (in the bias read module1104via input1101). These read modules1104-1106each comprise a buffer (e.g. one or more FIFOs) and requesters (which may be implemented as linear counters) that request data from memory (not shown inFIG.11) in linear order as long as there is space for the requested data in the corresponding buffer. A prioritization scheme may be implemented to arbitrate between the requesters such that, for example, priority is given to bias requests and then to header requests with body data requests being the lowest priority. This prioritization is in inverse order of the quantity of data required and as a result the biases will never stall (as they will always have enough data available) and the FIFOs in both the bias read module1104and header read module1105can be narrower. As described above (with reference toFIG.1B), the decoding module1108reads data from the buffers in the header and body read modules1105,1106, performs the fixed pattern bit shuffle (as described above with reference toFIG.3or7) and generates the decoded data items which are then either output from the decompression apparatus (via output1112), or where post-processing is used, are first post-processed in the post-processing module1110before being output (via output1111).
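By way of illustration only, the prioritization between the requesters described above might be modelled in software along the following lines; this is a sketch of the arbitration rule only (bias, then header, then body), and the class and parameter names are assumptions rather than the hardware's actual interface:

    from collections import deque

    class ReadModule:
        # Software stand-in for a read module: a FIFO plus a linear-address requester.
        def __init__(self, name, fifo_capacity):
            self.name, self.capacity = name, fifo_capacity
            self.fifo, self.next_address = deque(), 0

        def wants_data(self):
            # A request may only be issued while there is space in the buffer.
            return len(self.fifo) < self.capacity

        def issue_request(self):
            addr, self.next_address = self.next_address, self.next_address + 1
            return (self.name, addr)

    def arbitrate(modules):
        # Grant at most one request per cycle, in fixed priority order
        # (modules listed highest priority first: bias, then header, then body).
        for m in modules:
            if m.wants_data():
                return m.issue_request()
        return None

    if __name__ == "__main__":
        bias = ReadModule("bias", fifo_capacity=2)
        header = ReadModule("header", fifo_capacity=4)
        body = ReadModule("body", fifo_capacity=16)
        print(arbitrate([bias, header, body]))   # ('bias', 0)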
FIG.12shows a computer system in which the methods of data compression or decompression described herein may be implemented. The computer system comprises a CPU1202, a GPU1204, a memory1206and other devices1214, such as a display1216, speakers1218and a camera1220. The components of the computer system can communicate with each other via a communications bus1220. The system further comprises a neural network accelerator1224arranged to implement a method of data compression and/or decompression as described herein. Whilst this neural network accelerator1224is shown as a separate hardware unit inFIG.12, in other examples it may be part of the GPU1204and/or may be part of the same SoC (system on chip) as the CPU1202. The apparatus ofFIGS.10-12are shown as comprising a number of functional blocks. This is schematic only and is not intended to define a strict division between different logic elements of such entities. Each functional block may be provided in any suitable manner. It is to be understood that intermediate values described herein as being formed by a module need not be physically generated by the module at any point and may merely represent logical values which conveniently describe the processing performed by the apparatus between its input and output. The data compression and data decompression apparatus described herein may be embodied in hardware on an integrated circuit. The data compression and data decompression apparatus described herein may be configured to perform any of the methods described herein. Generally, any of the functions, methods, techniques or components described above can be implemented in software, firmware, hardware (e.g., fixed logic circuitry), or any combination thereof. The terms “module,” “functionality,” “component”, “element”, “unit”, “block” and “logic” may be used herein to generally represent software, firmware, hardware, or any combination thereof. In the case of a software implementation, the module, functionality, component, element, unit, block or logic represents program code that performs the specified tasks when executed on a processor. The algorithms and methods described herein could be performed by one or more processors executing code that causes the processor(s) to perform the algorithms/methods. Examples of a computer-readable storage medium include a random-access memory (RAM), read-only memory (ROM), an optical disc, flash memory, hard disk memory, and other memory devices that may use magnetic, optical, and other techniques to store instructions or other data and that can be accessed by a machine. The terms computer program code and computer readable instructions as used herein refer to any kind of executable code for processors, including code expressed in a machine language, an interpreted language or a scripting language. Executable code includes binary code, machine code, bytecode, code defining an integrated circuit (such as a hardware description language or netlist), and code expressed in a programming language code such as C, Java or OpenCL. Executable code may be, for example, any kind of software, firmware, script, module or library which, when suitably executed, processed, interpreted, compiled, executed at a virtual machine or other software environment, cause a processor of the computer system at which the executable code is supported to perform the tasks specified by the code. 
A processor, computer, or computer system may be any kind of device, machine or dedicated circuit, or collection or portion thereof, with processing capability such that it can execute instructions. A processor may be any kind of general purpose or dedicated processor, such as a CPU, GPU, System-on-chip, state machine, media processor, an application-specific integrated circuit (ASIC), a programmable logic array, a field-programmable gate array (FPGA), physics processing units (PPUs), radio processing units (RPUs), digital signal processors (DSPs), general purpose processors (e.g. a general purpose GPU), microprocessors, any processing unit which is designed to accelerate tasks outside of a CPU, etc. A computer or computer system may comprise one or more processors. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the term ‘computer’ includes set top boxes, media players, digital radios, PCs, servers, mobile telephones, personal digital assistants and many other devices. It is also intended to encompass software which defines a configuration of hardware as described herein, such as HDL (hardware description language) software, as is used for designing integrated circuits, or for configuring programmable chips, to carry out desired functions. That is, there may be provided a computer readable storage medium having encoded thereon computer readable program code in the form of an integrated circuit definition dataset that when processed (i.e. run) in an integrated circuit manufacturing system configures the system to manufacture a data compression or decompression apparatus configured to perform any of the methods described herein, or to manufacture a computing device comprising any apparatus described herein. An integrated circuit definition dataset may be, for example, an integrated circuit description. Therefore, there may be provided a method of manufacturing, at an integrated circuit manufacturing system, a data compression or decompression apparatus as described herein. Furthermore, there may be provided an integrated circuit definition dataset that, when processed in an integrated circuit manufacturing system, causes the method of manufacturing a data compression or decompression apparatus to be performed. An integrated circuit definition dataset may be in the form of computer code, for example as a netlist, code for configuring a programmable chip, as a hardware description language defining an integrated circuit at any level, including as register transfer level (RTL) code, as high-level circuit representations such as Verilog or VHDL, and as low-level circuit representations such as OASIS® and GDSII. Higher level representations which logically define an integrated circuit (such as RTL) may be processed at a computer system configured for generating a manufacturing definition of an integrated circuit in the context of a software environment comprising definitions of circuit elements and rules for combining those elements in order to generate the manufacturing definition of an integrated circuit so defined by the representation. As is typically the case with software executing at a computer system so as to define a machine, one or more intermediate user steps (e.g. providing commands, variables etc.) 
may be required in order for a computer system configured for generating a manufacturing definition of an integrated circuit to execute code defining an integrated circuit so as to generate the manufacturing definition of that integrated circuit. An example of processing an integrated circuit definition dataset at an integrated circuit manufacturing system so as to configure the system to manufacture a data compression or decompression apparatus will now be described with respect toFIG.13. FIG.13shows an example of an integrated circuit (IC) manufacturing system1302which is configured to manufacture a data compression or decompression apparatus as described in any of the examples herein. In particular, the IC manufacturing system1302comprises a layout processing system1304and an integrated circuit generation system1306. The IC manufacturing system1302is configured to receive an IC definition dataset (e.g. defining a data compression or decompression apparatus as described in any of the examples herein), process the IC definition dataset, and generate an IC according to the IC definition dataset (e.g. which embodies a data compression or decompression apparatus as described in any of the examples herein). The processing of the IC definition dataset configures the IC manufacturing system1302to manufacture an integrated circuit embodying a data compression or decompression apparatus as described in any of the examples herein. The layout processing system1304is configured to receive and process the IC definition dataset to determine a circuit layout. Methods of determining a circuit layout from an IC definition dataset are known in the art, and for example may involve synthesising RTL code to determine a gate level representation of a circuit to be generated, e.g. in terms of logical components (e.g. NAND, NOR, AND, OR, MUX and FLIP-FLOP components). A circuit layout can be determined from the gate level representation of the circuit by determining positional information for the logical components. This may be done automatically or with user involvement in order to optimise the circuit layout. When the layout processing system1304has determined the circuit layout it may output a circuit layout definition to the IC generation system1306. A circuit layout definition may be, for example, a circuit layout description. The IC generation system1306generates an IC according to the circuit layout definition, as is known in the art. For example, the IC generation system1306may implement a semiconductor device fabrication process to generate the IC, which may involve a multiple-step sequence of photo lithographic and chemical processing steps during which electronic circuits are gradually created on a wafer made of semiconducting material. The circuit layout definition may be in the form of a mask which can be used in a lithographic process for generating an IC according to the circuit definition. Alternatively, the circuit layout definition provided to the IC generation system1306may be in the form of computer-readable code which the IC generation system1306can use to form a suitable mask for use in generating an IC. The different processes performed by the IC manufacturing system1302may be implemented all in one location, e.g. by one party. Alternatively, the IC manufacturing system1302may be a distributed system such that some of the processes may be performed at different locations, and may be performed by different parties.
For example, some of the stages of: (i) synthesising RTL code representing the IC definition dataset to form a gate level representation of a circuit to be generated, (ii) generating a circuit layout based on the gate level representation, (iii) forming a mask in accordance with the circuit layout, and (iv) fabricating an integrated circuit using the mask, may be performed in different locations and/or by different parties. In other examples, processing of the integrated circuit definition dataset at an integrated circuit manufacturing system may configure the system to manufacture a data compression or decompression apparatus without the IC definition dataset being processed so as to determine a circuit layout. For instance, an integrated circuit definition dataset may define the configuration of a reconfigurable processor, such as an FPGA, and the processing of that dataset may configure an IC manufacturing system to generate a reconfigurable processor having that defined configuration (e.g. by loading configuration data to the FPGA). In some embodiments, an integrated circuit manufacturing definition dataset, when processed in an integrated circuit manufacturing system, may cause an integrated circuit manufacturing system to generate a device as described herein. For example, the configuration of an integrated circuit manufacturing system in the manner described above with respect toFIG.13by an integrated circuit manufacturing definition dataset may cause a device as described herein to be manufactured. In some examples, an integrated circuit definition dataset could include software which runs on hardware defined at the dataset or in combination with hardware defined at the dataset. In the example shown inFIG.13, the IC generation system may further be configured by an integrated circuit definition dataset to, on manufacturing an integrated circuit, load firmware onto that integrated circuit in accordance with program code defined at the integrated circuit definition dataset or otherwise provide program code with the integrated circuit for use with the integrated circuit. A first aspect provides a method of data compression comprising: receiving a plurality of data items; encoding groups of data items by generating, for each of the groups, header data comprising h-bits and a plurality of body portions each comprising b-bits and each of the body portions corresponding to a data item in the group, wherein b is fixed within a group and wherein the header data for a group comprises an indication of b for the body portions of that group; generating, for each of the groups where b>0, a body data field for the group by interleaving bits from the body portions corresponding to data items in the group; and storing one or more encoded data blocks comprising the header data and the body data fields. In some examples, h is fixed for all groups and b is not fixed between groups. b may be an integer greater than or equal to zero and h is an integer greater than zero. Said storing one or more encoded data blocks may comprise: storing a body data block comprising body data fields for a plurality of groups; and storing a header data block comprising header data for the plurality of groups. 
Said interleaving bits from the body portions corresponding to data items in the group may comprise: (a) inserting a first bit from each of the body portions into the body data field; (b) inserting a next bit from each of the body portions into the body data field; and (c) repeating (b) until all bits from each of the body portions have been inserted into the body data field. Said inserting a first bit from each of the body portions into the body data field may comprise inserting a least significant bit from each of the body portions into the body data field and wherein inserting a next bit from each of the body portions into the body data field may comprise inserting a next least significant bit from each of the body portions into the body data field. Said encoding groups of data items may comprise, for each of the groups: receiving the group of data items; identifying a body portion size, b, by locating a bit position of a most significant leading one across all the data items in the group; generating the header data comprising a bit sequence encoding the body portion size; and generating a body portion comprising b-bits for each of the data items in the group by removing none, one or more leading zeros from each data item. Said identifying a body portion size may further comprise: checking if the identified body portion size is a valid body portion size; and in response to determining that the identified body portion size is not a valid body portion size, updating the body portion size to a next largest valid body portion size. The method may further comprise, prior to encoding groups of data items: analysing a plurality of groups of data items to generate a set of valid body portion sizes. Said analysing a plurality of groups of data items to generate a set of valid body portion sizes may comprise: analysing the data items in the plurality of groups of data items to identify a body portion size for each of the plurality of groups; identifying one or more least common body portion sizes for the plurality of groups of data items; and generating the set of valid body portion sizes by removing from a set of all possible body portion sizes, those body portion sizes corresponding to the identified one or more least common body portion sizes. The set of valid body portion sizes may comprise 2^h different valid body portion sizes. The data items may comprise weights for a neural network. Said analysing a plurality of groups of data items to generate a set of valid body portion sizes may comprise, for each layer in the neural network: analysing all weights for the layer to generate a set of valid body portion sizes for that layer. The data items may have a distribution centred substantially on zero and the method may further comprise, prior to encoding a group of data items, pre-processing the data items in the group by converting all data items having a negative value to positive values and interleaving the converted data items with data items having a positive value. The data items may have a distribution centred on a non-zero value and the method may further comprise, prior to encoding a group of data items, pre-processing the data items in the group by shifting all data items such that the shifted distribution is centred substantially on zero and then converting all shifted data items having a negative value to positive values and interleaving the converted shifted data items with shifted data items having a positive value.
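Purely by way of illustration of the compression method of the first aspect, the encoding and interleaving might be sketched in software as follows; the group size N=8, the header size h=3, the bit ordering and all names are assumptions for the example, and the sketch presumes that every required body portion size can be encoded directly in h bits (see the discussion of valid body portion sizes above):

    # Hypothetical sketch of the first aspect (assumed names and parameters):
    # an h-bit header per group encoding the body portion size b, followed by a
    # body data field of interleaved body portions.
    def compress(data_items, N=8, h=3):
        header_bits, body_bits = [], []
        for i in range(0, len(data_items), N):
            group = data_items[i:i + N]
            b = max((item.bit_length() for item in group), default=0)
            header_bits += [(b >> k) & 1 for k in range(h)]        # h-bit header
            for k in range(b):                                     # interleave bit k
                body_bits += [(item >> k) & 1 for item in group]   # of every item
        return header_bits, body_bits                              # two data blocks

    if __name__ == "__main__":
        headers, body = compress([22, 5, 4, 2, 19, 1, 10, 17])
        print(len(headers), len(body))   # 3 header bits and 5*8 = 40 body bits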
A second aspect provides a method of data decompression comprising: receiving one or more blocks of data, the one or more blocks of data encoding one or more groups of data items; reading header data into a first buffer; reading body data into a second buffer; and for each of the encoded groups of data items: reading header data for the group from the first buffer, wherein the header data for a group of data items comprises a h-bit indication of a body portion size, b, for the group of data items, wherein b is fixed within a group; determining the body portion size, b, for the group of data items from the header data; reading a body data field from the second buffer based on the determined body portion size, the body data field comprising interleaved body portions, with one body portion for each of the data items in the group; decoding the body data field to generate the decoded data items, the decoding comprising de-interleaving the body portions, wherein each of the decoded data items comprises n bits, where n≥b; and outputting the decoded data items. In some examples, h is fixed for all groups and b is not fixed between groups. b may be an integer greater than or equal to zero and h may be an integer greater than zero. The body data field may comprise a plurality of concatenated sections, each of the sections comprising one bit from each body portion, and wherein decoding the body data field may comprise: starting with an initial set of data items comprising only zeros, one for each data item in the group, reading sections of the body data field and for each section of the body data field, overwriting one of the zeros for each of the data items with a bit value from the section of the body data field to generate the decoded data items; or generating a body data mask comprising ones in bit positions corresponding to the determined body portion size, reading sections of the body data field and for each section of the body data field, combining one of the bits in the body data mask for each of the data items with a bit value from the section of body data field. The first section in the body data field may comprise a least significant bit from each of the body portions, the subsequent section may comprise a next least significant bit from each of the body portions and a last section in the body data field may comprise a most significant bit from each of the body portions. The method may further comprise, prior to outputting the decoded data items, post-processing the decoded data items in the group to convert one or more of the data items from positive values to negative values. The post-processing may further comprise applying an offset to each of the data items. The data items may comprise weights for a neural network. 
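By way of example only, the mask-based decoding option described above might be sketched as follows, assuming the LSB-first interleaving of the earlier examples; N=4 and n=8 are illustrative values and the names are assumptions rather than part of the described apparatus:

    # Hypothetical sketch of the mask-based decode: a fixed shuffle places bit k
    # of item i at field position k*N + i, and an AND with the body data mask
    # zeroes any bit position at or above the body portion size b.
    def decode_group_with_mask(body_field_bits, b, N=4, n=8):
        mask = (1 << b) - 1                    # ones in the b least significant bits
        items = [0] * N
        for k in range(n):                     # fixed number of sections
            for i in range(N):
                pos = k * N + i
                bit = body_field_bits[pos] if pos < len(body_field_bits) else 0
                items[i] |= (bit & ((mask >> k) & 1)) << k
        return items

    if __name__ == "__main__":
        # body data field for the items [5, 3, 0, 6] with b=3, LSB-first interleaving
        field = [1, 1, 0, 0,  0, 1, 0, 1,  1, 0, 0, 1]
        print(decode_group_with_mask(field, b=3))   # [5, 3, 0, 6]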
A third aspect provides a data compression apparatus comprising: an input for receiving a plurality of data items; an encoding module configured to encode groups of data items by generating, for each of the groups, header data comprising h-bits and a plurality of body portions each comprising b-bits and each of the body portions corresponding to a data item in the group, wherein b is fixed within a group and wherein the header data for a group comprises an indication of b for the body portions of that group; an interleaving module configured to generate a body data field for each of the groups by interleaving bits from the body portions corresponding to data items in the group; and a memory interface configured to output, for storage, one or more encoded data blocks comprising the header data and the body data field. A fourth aspect provides a data decompression apparatus comprising: one or more inputs for receiving one or more blocks of data, the one or more blocks of data encoding one or more groups of data items; a header read module configured to read header data into a first buffer; a body read module configured to read body data into a second buffer; and a decoding module configured, for each of the encoded groups of data items, to: read header data for the group from the first buffer, wherein the header data for a group of data items comprises a h-bit indication of a body portion size, b, for the group of data items, wherein b is fixed within a group; determine the body portion size, b, for the group of data items from the header data; read a body data field from the second buffer based on the determined body portion size, the body data field comprising interleaved body portions, with one body portion for each of the data items in the group; decode the body data field, comprising de-interleaving the body portions (704), to generate the decoded data items, wherein each of the decoded data items comprises n bits, where n≥b; and output the decoded data items. A fifth aspect provides a compression apparatus comprising: an input configured to receive weights to be used in a neural network; a compression module configured to compress the weights; and a memory interface configured to output the compressed weights for storage. A sixth aspect provides a hardware implementation of a neural network, the hardware implementation comprising decompression apparatus comprising: an input configured to receive compressed weights to be used in the neural network; and a decompression module configured to decompress the compressed weights; wherein the hardware implementation is configured to use the decompressed weights in the neural network. A seventh aspect provides a method of compressing weights to be used in a neural network. An eighth aspect provides a method of decompressing weights to be used in a neural network. A ninth aspect provides computer readable code configured to cause any of the methods described herein to be performed when the code is run. Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). 
Those skilled in the art will also realize that by utilizing conventional techniques known to those skilled in the art that all, or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like. The methods described herein may be performed by a computer configured with software in machine readable form stored on a tangible storage medium e.g. in the form of a computer program comprising computer readable program code for configuring a computer to perform the constituent portions of described methods or in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable storage medium. Examples of tangible (or non-transitory) storage media include disks, thumb drives, memory cards etc. and do not include propagated signals. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously. The hardware components described herein may be generated by a non-transitory computer readable storage medium having encoded thereon computer readable program code. Memories storing machine executable data for use in implementing disclosed aspects can be non-transitory media. Non-transitory media can be volatile or non-volatile. Examples of volatile non-transitory media include semiconductor-based memory, such as SRAM or DRAM. Examples of technologies that can be used to implement non-volatile memory include optical and magnetic memory technologies, flash memory, phase change memory, resistive RAM. A particular reference to “logic” refers to structure that performs a function or functions. An example of logic includes circuitry that is arranged to perform those function(s). For example, such circuitry may include transistors and/or other hardware elements available in a manufacturing process. Such transistors and/or other elements may be used to form circuitry or structures that implement and/or contain memory, such as registers, flip flops, or latches, logical operators, such as Boolean operations, mathematical operators, such as adders, multipliers, or shifters, and interconnect, by way of example. Such elements may be provided as custom circuits or standard cell libraries, macros, or at other levels of abstraction. Such elements may be interconnected in a specific arrangement. Logic may include circuitry that is fixed function and circuitry can be programmed to perform a function or functions; such programming may be provided from a firmware or software update or control mechanism. Logic identified to perform one function may also include logic that implements a constituent function or sub-process. In an example, hardware logic has circuitry that implements a fixed function operation, or operations, state machine or process. The implementation of concepts set forth in this application in devices, apparatus, modules, and/or systems (as well as in methods implemented herein) may give rise to performance improvements when compared with known implementations. The performance improvements may include one or more of increased computational performance, reduced latency, increased throughput, and/or reduced power consumption. During manufacture of such devices, apparatus, modules, and systems (e.g. 
in integrated circuits) performance improvements can be traded-off against the physical implementation, thereby improving the method of manufacture. For example, a performance improvement may be traded against layout area, thereby matching the performance of a known implementation but using less silicon. This may be done, for example, by reusing functional blocks in a serialised fashion or sharing functional blocks between elements of the devices, apparatus, modules and/or systems. Conversely, concepts set forth in this application that give rise to improvements in the physical implementation of the devices, apparatus, modules, and systems (such as reduced silicon area) may be traded for improved performance. This may be done, for example, by manufacturing multiple instances of a module within a predefined area budget. Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person. It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. Any reference to ‘an’ item refers to one or more of those items. The term ‘comprising’ is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and an apparatus may contain additional blocks or elements and a method may contain additional operations or elements. Furthermore, the blocks, elements and operations are themselves not impliedly closed. The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. The arrows between boxes in the figures show one example sequence of method steps but are not intended to exclude other sequences or the performance of multiple steps in parallel. Additionally, individual blocks may be deleted from any of the methods without departing from the spirit and scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought. Where elements of the figures are shown connected by arrows, it will be appreciated that these arrows show just one example flow of communications (including data and control messages) between elements. The flow between elements may be in either direction or in both directions. The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention. BACKGROUND Convolutional neural networks (NN) may comprise an input layer, an output layer and multiple hidden layers. For each layer in the NN an array of weights, or coefficients, (e.g. a multi-dimensional array of weights) is computed in advance (e.g.
as part of training stage) and stored in memory so that they can be used at run time, when they are applied to the input data (which may also be a multi-dimensional array of data). The arrays of weights may be defined as having a size of x*y*z, where x and y may be the same or different for different layers (e.g. dependent upon whether padding is used) and the depth of the array, z, is typically different for different layers. For the input layer, the depth of the array of weights may be small (e.g. a depth of two) but for other layers, particularly towards the end of the NN, the depth may be much larger (e.g. over 100 or over 1000 and depths of 4000+ in a later layer have been known). At run time, these weights are read from the memory. The embodiments described below are provided by way of example only and are not limiting of implementations which solve any or all of the disadvantages of known methods of handling data. SUMMARY This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Methods of data compression and decompression are described. These methods can be used to compress/decompress the weights used in a neural network. The compression method comprises encoding groups of data items by generating, for each group, header data comprising h-bits and a plurality of body portions each comprising b-bits and each body portion corresponding to a data item in the group. The value of h may be fixed for all groups and the value of b is fixed within a group, wherein the header data for a group comprises an indication of b for the body portions of that group. In various examples, b=0 and so there are no body portions. In examples where b is not equal to zero, a body data field is generated for each group by interleaving bits from the body portions corresponding to data items in the group. The resultant encoded data block, comprising the header data and, where present, the body data field can be written to memory. A first aspect provides a method of data compression comprising: receiving a plurality of data items; encoding groups of data items by generating, for each of the groups, header data comprising h-bits and a plurality of body portions each comprising b-bits and each of the body portions corresponding to a data item in the group, wherein b is fixed within a group and wherein the header data for a group comprises an indication of b for the body portions of that group; generating, for each of the groups where b>0, a body data field for the group by interleaving bits from the body portions corresponding to data items in the group; and storing one or more encoded data blocks comprising the header data and the body data fields. 
A second aspect provides a method of data decompression comprising: receiving one or more blocks of data, the one or more blocks of data encoding one or more groups of data items; reading header data into a first buffer; reading body data into a second buffer; and for each of the encoded groups of data items: reading header data for the group from the first buffer, wherein the header data for a group of data items comprises a h-bit indication of a body portion size, b, for the group of data items, wherein b is fixed within a group; determining the body portion size, b, for the group of data items from the header data; reading a body data field from the second buffer based on the determined body portion size, the body data field comprising interleaved body portions, with one body portion for each of the data items in the group; decoding the body data field to generate the decoded data items, the decoding comprising de-interleaving the body portions, wherein each of the decoded data items comprises n bits, where n≥b; and outputting the decoded data items. A third aspect provides a data compression apparatus comprising: an input for receiving a plurality of data items; an encoding module configured to encode groups of data items by generating, for each of the groups, header data comprising h-bits and a plurality of body portions each comprising b-bits and each of the body portions corresponding to a data item in the group, wherein b is fixed within a group and wherein the header data for a group comprises an indication of b for the body portions of that group; an interleaving module configured to generate a body data field for each of the groups by interleaving bits from the body portions corresponding to data items in the group; and a memory interface configured to output, for storage, one or more encoded data blocks comprising the header data and the body data field. A fourth aspect provides a data decompression apparatus comprising: one or more inputs for receiving one or more blocks of data, the one or more blocks of data encoding one or more groups of data items; a header read module configured to read header data into a first buffer; a body read module configured to read body data into a second buffer; and a decoding module configured, for each of the encoded groups of data items, to: read header data for the group from the first buffer, wherein the header data for a group of data items comprises a h-bit indication of a body portion size, b, for the group of data items, wherein b is fixed within a group; determine the body portion size, b, for the group of data items from the header data; read a body data field from the second buffer based on the determined body portion size, the body data field comprising interleaved body portions, with one body portion for each of the data items in the group; decode the body data field, comprising de-interleaving the body portions (704), to generate the decoded data items, wherein each of the decoded data items comprises n bits, where n≥b; and output the decoded data items. A fifth aspect provides a compression apparatus comprising: an input configured to receive weights to be used in a neural network; a compression module configured to compress the weights; and a memory interface configured to output the compressed weights for storage. 
A sixth aspect provides a hardware implementation of a neural network, the hardware implementation comprising decompression apparatus comprising: an input configured to receive compressed weights to be used in the neural network; and a decompression module configured to decompress the compressed weights; wherein the hardware implementation is configured to use the decompressed weights in the neural network. A seventh aspect provides a method of compressing weights to be used in a neural network. An eighth aspect provides a method of decompressing weights to be used in a neural network. A ninth aspect provides computer readable code configured to cause any of the methods described herein to be performed when the code is run. The data compression or data decompression apparatus as described herein may be embodied in hardware on an integrated circuit. There may be provided a method of manufacturing, at an integrated circuit manufacturing system, a data compression or data decompression apparatus as described herein. There may be provided an integrated circuit definition dataset that, when processed in an integrated circuit manufacturing system, configures the system to manufacture a data compression or data decompression apparatus as described herein. There may be provided a non-transitory computer readable storage medium having stored thereon a computer readable description of an integrated circuit that, when processed, causes a layout processing system to generate a circuit layout description used in an integrated circuit manufacturing system to manufacture a data compression or data decompression apparatus as described herein. There may be provided an integrated circuit manufacturing system comprising: a non-transitory computer readable storage medium having stored thereon a computer readable integrated circuit description that describes the data compression or data decompression apparatus as described herein; a layout processing system configured to process the integrated circuit description so as to generate a circuit layout description of an integrated circuit embodying the data compression or data decompression apparatus as described herein; and an integrated circuit generation system configured to manufacture the data compression or data decompression apparatus as described herein according to the circuit layout description. There may be provided computer program code for performing any of the methods described herein. There may be provided non-transitory computer readable storage medium having stored thereon computer readable instructions that, when executed at a computer system, cause the computer system to perform any of the methods described herein. The above features may be combined as appropriate, as would be apparent to a skilled person, and may be combined with any of the aspects of the examples described herein. 
BRIEF DESCRIPTION OF THE DRAWINGS Examples will now be described in detail with reference to the accompanying drawings in which: FIG.1Ais a flow diagram of an improved data compression method; FIG.1Bis a flow diagram of a corresponding data decompression method; FIG.2is a schematic diagram that shows an example of the encoding and interleaving operations from the method ofFIG.1A; FIG.3is a schematic diagram showing two examples of the de-interleaving operation from the method ofFIG.1B; FIG.4is a flow diagram of an example encoding method that may be used in the method ofFIG.1A; FIG.5is a schematic diagram of a group of data items that may be encoded using the method ofFIG.4; FIGS.6A and6Bshow two example look up tables for body portion sizes; FIG.7Ais a flow diagram of a first example decoding method that may be used in the method ofFIG.1B; FIG.7Bis a schematic diagram showing two examples of the de-interleaving operation from the method ofFIG.7A; FIG.7Cis a flow diagram of a second example decoding method that may be used in the method ofFIG.1B; FIG.7Dis a schematic diagram showing an example of the de-interleaving operation from the method ofFIG.7C; FIG.8Ais a graph of a first example distribution of data items; FIG.8Bis graph of an example distribution of pre-processed data items; FIG.8Cis a graph of a second example distribution of data items; FIG.9is a schematic diagram of a multi-dimensional array of data; FIG.10is a schematic diagram of a data compression apparatus arranged to implement the method ofFIG.1A; FIG.11is a schematic diagram of a data decompression apparatus arranged to implement the method ofFIG.1B; FIG.12shows a computer system in which a graphics processing system is implemented; and FIG.13shows an integrated circuit manufacturing system for generating an integrated circuit embodying a graphics processing system. The accompanying drawings illustrate various examples. The skilled person will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the drawings represent one example of the boundaries. It may be that in some examples, one element may be designed as multiple elements or that multiple elements may be designed as one element. Common reference numerals are used throughout the figures, where appropriate, to indicate similar features. DETAILED DESCRIPTION The following description is presented by way of example to enable a person skilled in the art to make and use the invention. The present invention is not limited to the embodiments described herein and various modifications to the disclosed embodiments will be apparent to those skilled in the art. Embodiments will now be described by way of example only. As described above, an array of weights for a neural network (e.g. a convolutional NN) can be very large and as these are stored in memory, rather than a local cache, a significant amount of system bandwidth is used at run time to read in the weight data (e.g. 50% of the system bandwidth in some examples). In order to reduce the amount of bandwidth that is used, the weights may be stored in a compressed form and then decompressed prior to use (and after having been read from memory). Described herein is an improved method of data compression that involves interleaving the compressed data in such a way that the decompression can be performed efficiently (e.g. a reduced area of hardware is needed to perform the decompression and the decompression process has reduced latency and power consumption). 
Whilst the methods of compression and decompression that are described herein may be used for an array of weights for a neural network (e.g. a convolutional NN), the methods may also be applied to other data. In various examples the methods may be applied to any multi-dimensional array of data, including, but not limited to, image data, voice data, etc.; however the methods are also applicable to 1D data. In various examples, the methods described herein may be used to compress the data (i.e. the results) output by a layer within a neural network (e.g. a layer within a convolutional NN). This then provides a saving in system bandwidth when the data is subsequently read in (as input data) to the next layer in the neural network. As described in detail below, the data compression method comprises encoding groups of data items (e.g. binary numbers) using an encoding method that generates header data for each group and, except where a group comprises only zeros, body data for each group. In the examples described herein, the header data for each group comprises a fixed number of bits (e.g. h bits for each group, where h is fixed), whereas the body data may differ in size for different groups (e.g. B bits of body data for each group, where B is variable) and in an extreme case there may be no body data for a group (e.g. B=0). If each input data item comprises n bits and each group comprises N numbers, the input data for a group (i.e. the uncompressed data) comprises n*N bits whereas the compressed data for the group comprises h+B bits, and if the compression is successful (h+B)<(n*N). Like any other data compression method, there may be a few cases where no compression is possible (in which case (h+B) may be larger than (n*N)). In some examples these cases may be identified and the data items stored in their original format. Alternatively, as compression is still achieved (on average) when looking across many groups (e.g. across weights for a layer within a convolutional NN or other type of NN), the lack of compression in rare isolated groups may be accommodated. The body data for a group comprises the same number of bits, b bits, of body data for each number in the group (where b≤n and B=b*N), but the number of body bits may differ between groups and as detailed above, in an extreme case there may be no body bits for a particular group (e.g. b=B=0). The data compression method described herein further comprises packing the compressed body data for a group into a body data field in a data block. In this example, the body data field for a group comprises interleaved bits from the body data for each data item in the group. For example, the body data field comprises the least significant bit (LSB) of body data for each data item of the group, followed by the next least significant bit of body data for each data item of the group, etc., until all body bits have been packed into the body data field. In various examples the data block comprises body data fields for each of a plurality of groups and the header bits may be included in a separate data block (that comprises header bits from each of the groups in the plurality of groups). Alternatively a data block may comprise the body data field and the header bits for the same group (e.g. alternating headers and body data fields or a group of headers followed by a group of corresponding body data fields).
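As a quick worked check of the sizes discussed above, using the illustrative values n=8 bits per data item, N=8 items per group and h=3 header bits (the helper name and these values are assumptions for the example only):

    def compressed_bits(b, N=8, h=3):
        return h + b * N                 # header plus B = b*N interleaved body bits

    if __name__ == "__main__":
        uncompressed = 8 * 8             # n*N = 64 bits per group
        print(uncompressed, compressed_bits(5), compressed_bits(0))   # 64 43 3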
Whilst counter-intuitive, by interleaving the body bits as described herein when packing the compressed data for a group into a data block, the decompression is made more efficient and less complex. In particular, the body bits that are required for decompressing any weight (b bits, where b varies between groups) are in the same place in the compressed data block for any value of b. For example, if bit0of word 0 is at a position x, bit1of word 0 is at position x+N, bit2of word 0 is at position x+2N, etc. or more generally, bit J of word K is found at bit position x+(J*N)+K where K∈[0, N−1] and x is the starting position of the body data field (i.e. the position of bit0of word 0). These fixed positions within the data block reduce the amount of hardware that is required to perform the decompression (e.g. a multiplexer that would otherwise be required is no longer needed and as a result the decompression hardware is smaller and consumes less power). FIG.1Ais a flow diagram of an improved data compression method andFIG.1Bis a flow diagram of the corresponding data decompression method. As described above, these methods may be used for arrays of weights for a neural network (e.g. a convolutional NN) or for any other data, including other arrays of multi-dimensional data. Where the data comprises an array of weights for a neural network (e.g. a convolutional NN), the data that is compressed may be the entirety of the weight (e.g. where it is in fixed point format) or, where the weight is in a floating point format with all weights having the same exponent, the data that is compressed may be the n-bit mantissas of the weights. Any reference to a ‘weight’ in the following description may refer either to the entire weight or to the mantissas of the weights where the weights have common exponents. As shown inFIG.1A, the data compression method comprises receiving the input data (block102), e.g. the weights for a convolution NN, image data or other multi-dimensional data (e.g. as shown graphically inFIG.9). The input data may be received (in block102) in chunks of data or as an input stream of data items. As described below, the data compression method operates on groups of data items from the input data. In some examples, the input data may be pre-processed (block103) and this is described in more detail below with reference toFIGS.8A-8C. This pre-processing, where implemented, operates on a plurality of data items, e.g. on all the input data or on a subset of the input data (e.g. a chunk or group of input data, where a chunk of input data may be the same as, or different from, a group of data items). The input data (i.e. either pre-processed input data or the original input data) is then encoded (block104) using a method that generates, for a group of input data items (e.g. binary numbers), a header for the group and in most cases body data for each of the data items, although in extreme cases where all the data items are zero, there will be no body data. The encoding operates on groups of data items, although it will be appreciated that in various examples, multiple groups of data items may be encoded in parallel. As detailed above, the header has a fixed size for all groups (e.g. h bits for each group, where h is fixed) and the body data for each of the data items is the same for all data items within a group (e.g. b bits of body data per data item, where b≤n) but may differ between groups (e.g. b is fixed within a group but is not fixed between groups).
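The fixed-position property given by the expression x+(J*N)+K above can be checked with a few lines of code; the helper name is illustrative only:

    def body_bit_position(J, K, x, N=8):
        # Bit J of word K sits at x + (J * N) + K, independent of the body portion
        # size b, where x is the start of the group's body data field.
        return x + (J * N) + K

    if __name__ == "__main__":
        # Bits 0..3 of word 0 for a group whose body data field starts at x = 0:
        print([body_bit_position(J, K=0, x=0) for J in range(4)])   # [0, 8, 16, 24]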
The header comprises an indication of the size of the body portion for each data item (e.g. the value of b) and in some examples the header may indicate that the size of the body portion is zero (b=0) in which case there are no body portions for the data items in the group and hence no body data field for the group. Having encoded a group of data items (in block104), the body bits for the group are packed into a body data field in a data block by interleaving the body bits for each of the data items in the group (block106, as described below with reference toFIG.2). The resulting body data block, which may comprise body data fields from multiple groups is then stored or otherwise output (block108). The header bits may be packed into a separate data block (e.g. into a header field for the group) and the resulting header data block, which may comprise header fields from multiple groups, is then stored or otherwise output (block109). When storing the data blocks (in blocks108and109), they may be byte aligned rather than necessarily being stored adjacent to the immediately previously stored data block. This simplifies addressing given that the size of a data block varies between groups of data items. In various examples the bit depth of the data items, n (which is fixed) is between 4 and 16 and in various examples n=8 or n=16. In various examples the number of data items in a group, N, is 4 or 8. In examples where N=8, this may result in less complex hardware than where N=4. This is because the multiplexing logic becomes less complex as N increases because the possible starting positions of a new body data field within a body data block reduces. For example, if N=4, each group is guaranteed to start on every 4th bit and cannot start on any of the bit positions between, whereas if N=8, each group can only start every 8th bit and this removes layers from a multiplexing tree because the tree does not need the ability to select any of the intervening values. In contrast, by using smaller values of N (i.e. fewer data items in a group), the amount of compression that may be achieved may be increased (i.e. the body portion size may be reduced); however, the number of encoded groups of data items, and hence header fields, is increased and this may outweigh any benefits achieved as a consequence of the smaller body data field. Therefore there is a trade-off to consider when deciding what value to use for N. FIG.2is a schematic diagram that shows an example of the encoding and interleaving operations (blocks104and106) fromFIG.1Afor N=8 (i.e. N data items per group). As shown inFIG.2, a group of data items202from the input data comprises 8 data items (N=8)204. The encoding operation (in block104) generates, from the 8 data items, one header206(comprising h-bits) for the group and if b>0, one body portion208for each data item in the group (as noted above, if b=0 then there are no body portions for the data items in the group). The header206comprises an indication of the size of the body portion208for each data item (e.g. the indication could represent the value of b, or in some examples the indication could represent the value of B where the value of b can easily be determined from B, as b=B/N). Each body portion208comprises b-bits and in the example shown inFIG.2, b=8. If there are body portions (i.e. b>0), bits from the body data210for each of the data items are then interleaved (in block106). 
As shown inFIG.2, the interleaving forms a body data field212by first adding one bit from each body portion208, then adding a next bit from each body portion etc. In the example shown, the least significant bit (LSBs) of body data for each data item is first added (bits A0, B0, C0, D0, E0, F0, G0, H0) followed by the next least significant bit of body data for each data item of the group (bits A1, B1, C1, D1, E1, F1, G1, H1), etc., until all body bits have been packed into the body data field. As shown inFIG.2, the last bit in the body data field is the most significant bit (MSB) of the body data for the last data item in the group (bit H7). In other examples, the MSBs (bits A7, B7, C7, D7, E7, F7, G7, H7) may be packed first, followed by the next most significant bit, etc. until all the bits of body data210have been packed into the body data field212. Having generated the body data field212by interleaving (in block106), the body data field is packed into a body data block214and the header206is packed into a header data block216. By storing the headers and body data fields in different data blocks216,214, the decompression operation is made less complex. Within the header data block, the location of the start of each header is fixed (because the header size is the same for all groups) and so the headers can be read easily to determine the offsets for the starting positions of each body data field212within the body data block214. In various examples, instead of storing the header and body data fields in separate data blocks, a data block may comprise K headers (i.e. the headers206for K groups) followed by the corresponding K body data fields. By selecting K such that K*h has the same bit alignment properties as the body data fields, e.g. K*h=0 mod N, the alignment of bits within the resultant data block also reduces the complexity of the decompression operation. Whilst the description above refers to interleaving bits starting with the LSB of body data for each data item in the group, in other examples, the interleaving may start with the MSB of body data for each data item in the group. The selection of whether to start with the LSB or MSB depends on the encoding scheme that is used (in block104). Where an encoding scheme as described below with reference toFIG.4is used, starting with the LSB is most appropriate. However, if the encoding scheme is a lossy compression scheme that removes one or more LSBs, then interleaving starting with the MSB of body data may be used instead. Similarly, if the decoding scheme uses online arithmetic (instead of binary multipliers and adders, as is the case in the examples described below), interleaving starting with the MSB may be used because online arithmetic performs calculations starting with the MSB. The interleaving that is performed on the body bits (in block106) reduces the complexity of the decompression operation and this can be described with reference toFIGS.1B and3. FIG.1Bis a flow diagram of the data decompression method that corresponds to the data compression method shown inFIG.1Aand described above. As shown inFIG.1B, the method comprises receiving encoded data, where, as described above, the original items are encoded by representing groups of data items with a header and none, one or more body data fields. In various examples, receiving this data may comprise receiving blocks of header data216(block110) and blocks of body data214(block111). 
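By way of example only, the interleaving ofFIG.2(LSB first) might be sketched as follows, here using N=4 items per group; the function and variable names are illustrative assumptions:

    # Hypothetical sketch of the interleaving: one N-bit section per bit position,
    # least significant bits of every body portion first.
    def interleave_body_portions(body_portions, b):
        field = []
        for k in range(b):                     # one section per bit position
            field += [(portion >> k) & 1 for portion in body_portions]
        return field

    if __name__ == "__main__":
        portions = [0b10110, 0b00101, 0b00100, 0b00010]   # b = 5, N = 4
        print(interleave_body_portions(portions, b=5))
        # [0, 1, 0, 0,  1, 0, 0, 1,  1, 1, 1, 0,  0, 0, 0, 0,  1, 0, 0, 0]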
The header data blocks and body data blocks may, for example, be received as two parallel streams and may be read and buffered independently (blocks112and114). Alternatively the header data and body data may be received together (e.g. where the data is stored in the same data block, as described above). As part of the decode operation (block116), the header for a group is processed and this indicates the amount of body data that is required to decode the group of data items (i.e. the size of the body data field for the group, B=b*N). The corresponding amount of data can then be extracted from the buffer (of body data) and as a consequence of the fixed bit positions (due to the interleaving operation, as described above), the bits can be shuffled using a fixed scheme (irrespective of the value of b) to re-create the original body portions for each data item in the group without requiring any multiplexer logic. Having recreated the body portions, the body portions are decoded. The fixed relationship between the bit positions in the body data field and the corresponding bit positions in the original data items, irrespective of the number of body bits for each data item in the group (i.e. irrespective of the value of b, which may vary between groups) is shown graphically inFIG.3for two different sizes of body data field302(b=5),304(b=3), and in this example, to reduce the complexity of the diagram, the number of data items in a group is four (N=4). As shown inFIG.3, irrespective of the size of the body data field (and hence the number of bits in each body portion, b), the first N (i.e. 4) bits in the body data field302,304comprise a single bit for each of the body portions, denoted A′-D′. In the example shown, the first N bits comprise the LSBs for each of the body portions. The next N bits in the body data field302,304comprise the next bit for each of the body portions, again irrespective of the size of the body data field. Consequently, the body portions are rebuilt by reading the concatenated sections310-312, each comprising N bits, in turn and adding one bit to each body portion per section, until a point is reached where there are no further bits in the body data field302,304(as extracted from the buffer); at that stage, all the bits of the body portions A′-D′ have been identified and deinterleaved. Once the decoded data block has been generated (in block116), there is an optional post-processing operation (block117) which is an inverse of the optional pre-processing operation (block103) in the compression method (as shown inFIG.1A). The decoded data, either in its original or post-processed form, is then output (block118). In various examples, the decoded data may be used immediately. In various examples, the decoded data may not be buffered or stored in a cache because of its large size and instead the decompression may be performed each time the data items are used. However, in some other examples at least some of the decoded data may be stored, e.g. in a cache. The encoding operation (in block104) may use any suitable encoding scheme that generates a fixed size header206for a group of data items and a body portion for each data item in the group, where the size of the body portion is the same for all data items within a group but may be different for other groups of data items. An example of such an encoding scheme is shown inFIG.4. FIG.4is a flow diagram of an example encoding method that operates on groups of data items and the method may be described with reference to the example shown inFIG.5.
The encoding method receives a group of data items502(block402), for example, 8 data items504(N=8), denoted A-H as shown inFIG.5. In the example shown inFIG.5, each of the data items504comprises 8 bits (n=8). The optimum size of body portion (i.e. the optimum value of b, bopt) is then identified by identifying the most significant leading one across the group of data items (block404). The most significant leading one may, for example, be identified by determining the bit position of the leading one in each data item (where the bit positions may be identified by the bit index 0-7, as shown inFIG.5) and comparing these values to identify the highest bit index. The optimum size of body portion is one more than the highest bit index, in examples where the LSB has a bit index of zero (as shown inFIG.5). Alternatively, the bit position of the most significant leading one across the group of data items may be identified in any other way (in block404). In the example shown inFIG.5, data item A has the leading one in bit position 4, as do data items E and H. All other data items in the group have their leading ones in lower bit positions (i.e. less significant bit positions). Consequently, in the example shown inFIG.5, the optimum body portion size is 5 bits (bopt=5). If all of the data items only comprise zeros, then the optimum body portion size is also zero (bopt=0). Having identified an optimum body portion size (in block404) and in examples where all body portion sizes from zero to n, i.e. n+1 sizes, can be encoded within the h-bits of the header, the method may continue by generating a header comprising a bit sequence that encodes the identified optimum body portion size (block408) and truncating each data item to create a corresponding body portion by removing none, one or more leading zeros from the data item, until the body portion has the optimum body portion size (i.e. n-bopt leading zeros are removed such that the resulting body portion comprises bopt bits). If the optimum body portion size is zero (bopt=0), then n leading zeros are removed and there are no remaining body bits. In various examples, a look up table may be used to identify the bit sequence used for a particular body portion size or the body portion size may be included in the header as a binary number. In various examples, however, the size of the header (i.e. the number of bits, h, in the header) may be insufficient to identify all of the possible body portion sizes, i.e. n+1 sizes. In particular, this may occur where the bit depth of the input data (i.e. the value of n) is a power of two. Referring to the example inFIG.5, there are nine possible body portion sizes since n=8 (i.e. body portion sizes of 0, 1, 2, 3, 4, 5, 6, 7, 8) and if the header only comprises three bits (h=3), then only eight body portion sizes can be represented in the header bits (using binary values 0-7) and hence there are only eight valid body portion sizes. In other examples, more than one body portion size may be considered invalid in order to reduce the overall header size. For example, if n=9 and h=3 then two body portion sizes may be considered invalid. In examples where one or more body portion sizes are not valid, having determined the optimum body portion size (in block404), the method checks whether the optimum body portion size, bopt, is valid (block406).
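A minimal C sketch of blocks402-404is given below, assuming n=8 and N=8 as inFIG.5; the item values in main are assumptions chosen so that the leading ones of items A, E and H sit in bit position 4, giving bopt=5 as in the example above.

    /* Minimal sketch of blocks 402-404: the optimum body portion size b_opt
     * is one more than the highest leading-one bit index across the group
     * (0 if every item is zero).  n = 8 and N = 8 are assumed. */
    #include <stdint.h>
    #include <stdio.h>

    #define N 8

    static unsigned optimum_body_size(const uint8_t item[N])
    {
        unsigned b_opt = 0;
        for (unsigned k = 0; k < N; k++) {
            unsigned width = 0;                       /* bits needed for item k */
            for (uint8_t v = item[k]; v != 0; v >>= 1)
                width++;
            if (width > b_opt)
                b_opt = width;
        }
        return b_opt;                                 /* in the range 0..n */
    }

    int main(void)
    {
        /* assumed items A-H with the leading ones of A, E and H in bit position 4 */
        const uint8_t item[N] = {0x10, 0x05, 0x04, 0x02, 0x1F, 0x00, 0x0A, 0x13};
        printf("b_opt = %u\n", optimum_body_size(item));  /* prints 5 */
        return 0;
    }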
If the optimum body portion size is valid (‘Yes’ in block406), then the method continues by encoding that valid optimum body portion size into the header (in block408) and truncating each data item to create a corresponding body portion by removing none, one or more leading zeros from the data item, until the body portion has the optimum body portion size (i.e. n-bopt leading zeros are removed such that the resulting body portion comprises bopt bits). If, however, the optimum body portion size is not valid (‘No’ in block406), the next largest valid body portion size, bvalid, is selected (block407). The method then continues by encoding that valid body portion size into the header (in block408) instead of the optimum body portion size and truncating each data item to create a corresponding body portion by removing none, one or more leading zeros from the data item, until the body portion has the valid body portion size (i.e. n−bvalid leading zeros are removed such that the resulting body portion comprises bvalid bits). Again, a look up table may be used to identify the bit sequence used for the selected valid body portion size (which may be equal to the optimum body portion size, bopt, or the next larger valid body portion size, bvalid), and two example look up tables are shown inFIGS.6A and6B. In the example look up table shown inFIG.6A, the body portion size which is omitted, and hence is not considered valid, is three and in the example look up table shown inFIG.6B, the body portion size which is omitted, and hence is not considered valid, is five. The body portion size that is considered invalid may be chosen for omission based on analysis of the input data (e.g. on many or all groups of data items) and this is shown as an optional initial analysis step inFIG.4(block401). The analysis (in block401) determines, based on a plurality of groups of data items, which optimum body portion size is least common and then allocates header values to body portion sizes omitting that least common size. In various examples, where the input data comprises weights for a neural network (e.g. a convolutional NN), the omitted body portion size may be identified separately for each layer of the NN based on analysis of the weights which are most commonly used for that layer (in block401) and in particular based on the least common position of a leading one across all the weights (or across all the n-bit mantissas of the weights) for the particular layer of the NN. In this way, different layers may have different body portion sizes that are considered invalid. For example, one layer of a NN may have three as the invalid body portion size (as inFIG.6A) and another layer of the same NN may have five as the invalid body portion size (as inFIG.6B). As noted above, in various examples there may be more than one invalid body portion size and these multiple invalid body portion sizes may be selected based on the least common positions of leading ones across the input data (i.e. the least common optimum body sizes for the groups of data items), e.g. across all the weights (or mantissas of the weights) for a particular layer of a NN, with the analysis being performed independently for each layer of the NN. Referring again to the example group of data items502shown inFIG.5, as detailed above the optimum body portion size is 5-bits (bopt=5). If the look up table ofFIG.6Ais used, this is a valid body portion size (‘Yes’ in block406) and the bit sequence ‘100’ is included within the header (in block408).
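The check of block406and the fall back to the next largest valid size (block407) can be sketched as follows for a 3-bit header; the table used here is only an illustration of aFIG.6A-style table with size three omitted (it happens to be consistent with the ‘100’ and ‘101’ bit sequences mentioned in the text), and the real mappings are those ofFIGS.6A and6B.

    /* Illustrative sketch of blocks 406-408 with h = 3 and a FIG. 6A-style
     * table in which body portion size 3 is the omitted (invalid) size.
     * The mapping of sizes to header codes is an assumption for the sketch. */
    #include <stdio.h>

    /* valid body portion sizes in ascending order (size 3 omitted) */
    static const unsigned valid_size[8] = {0, 1, 2, 4, 5, 6, 7, 8};

    /* round b_opt up to the next largest valid size and return the 3-bit
     * header code (the index into the table) */
    static unsigned encode_size(unsigned b_opt, unsigned *b_used)
    {
        for (unsigned code = 0; code < 8; code++) {
            if (valid_size[code] >= b_opt) {
                *b_used = valid_size[code];
                return code;
            }
        }
        *b_used = 8;
        return 7;
    }

    int main(void)
    {
        unsigned b_used, code = encode_size(3, &b_used);  /* invalid b_opt */
        printf("b_opt=3 -> header code %u, body size %u\n", code, b_used);
        return 0;
    }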
Additionally, each of the data items is truncated by removing three leading zeros (in block410) to form the corresponding body portion. If, however, the look up table ofFIG.6Bis used, a body portion size of 5-bits is not valid (‘No’ in block406) and hence the next largest valid body portion size would be used instead, i.e. a body portion size of six. In this example, the bit sequence ‘101’ is included within the header (in block408) and each of the data items is truncated by removing two leading zeros (in block410) to form the corresponding body portions. This means that there is a leading zero in all of the body portions as a consequence of not using the optimum body portion size. In the first of these examples, where the table ofFIG.6Ais used, the resultant encoded data block comprises 3+(8*5)=43 bits, i.e. 3 header bits and 40 body bits (5 for each body portion). In contrast, where the table ofFIG.6Bis used, the resultant encoded data block comprises 3+(8*6)=51 bits, i.e. 3 header bits and 48 body bits (6 for each body portion). Assuming that many data blocks are encoded and that the optimum body portion size is rarely invalid, the additional N bits (which are all leading zeros) included in the body portions on those rare occasions (when the optimum body portion size is invalid), will still result in a smaller overall amount of encoded data than an alternative solution of increasing every header by one bit so that all optimum body portion sizes can be validly encoded within the header. In examples where the encoding method ofFIG.4is used, the corresponding decoding method that may be used in the data decompression method ofFIG.1B(in block116), including the fixed-pattern de-interleaving, may be as shown inFIG.7A. As shown inFIG.7A, the method comprises processing the header to determine the size of the body data field, b*N (block702) and reading the corresponding amount of data from the body data buffer (block704). Using the fixed relationship (as described above), the data items can then be generated by starting with a set of data items comprising only zeros (i.e. n zeros in each data item) and overwriting zeros in each data item (starting with the LSB) with appropriate bits from the body data field (block706) and once all the bits that have been read from the buffer (in block704) have been used to overwrite zeros (in block706), the resultant decoded data items are output (block708). The decoding method ofFIG.7Ais shown graphically inFIG.7Bwhich is a variation onFIG.3(described above) and again the fixed relationship between the bit positions in the body data field and the corresponding bit positions in the original data items, irrespective of the number of body bits for each data item in the group (i.e. irrespective of the value of b, which may vary between groups) is shown for two different sizes of body data field302(b=5),304(b=3). In this example, to reduce the complexity of the diagram, the number of data items in a group is four (N=4) and the number of bits in each decoded data item is six (n=6). As shown inFIG.7B, irrespective of the size of the body data field (and hence the number of bits in each body portion, b), the first N (i.e. 4) bits in the body data field302,304comprise a single bit for each of the data items. In the example shown, the first N bits comprise the LSBs for each of the data items and these are used to overwrite the zeros that initially occupy those LSBs.
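A minimal C sketch of theFIG.7Adecoding is given below, assuming N=4 and n=6 as inFIG.7B; the body data field value used in main is arbitrary. Each data item starts as n zeros and its b least significant bits are overwritten with bits taken from the body data field using the fixed pattern described above.

    /* Minimal sketch of the FIG. 7A decoding: start from all-zero data items
     * and overwrite the b least significant bits of each item with bits read
     * from the body data field.  N = 4 and n = 6 are assumed. */
    #include <stdint.h>
    #include <stdio.h>

    #define N 4

    static void decode_group(uint64_t field, unsigned b, uint8_t item[N])
    {
        for (unsigned k = 0; k < N; k++)
            item[k] = 0;                               /* n zeros per item      */
        for (unsigned i = 0; i < b; i++)               /* one N-bit section     */
            for (unsigned k = 0; k < N; k++)           /* per body bit position */
                item[k] |= (uint8_t)((field >> (i * N + k)) & 1u) << i;
    }

    int main(void)
    {
        uint8_t item[N];
        decode_group(0x0ABCu, 3, item);                /* b = 3, 12 body bits   */
        for (unsigned k = 0; k < N; k++)
            printf("item %u = 0x%02X\n", k, item[k]);
        return 0;
    }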
The next N bits in the body data field302,304comprise the next bit for each of the data items, again irrespective of the size of the body data field, and these are used to overwrite the next bit in each of the data items. Consequently, by reading the concatenated sections310-312each comprising N bits in turn and overwriting zeros, one zero (for each data item) per N-bit section of the data read from the buffer, until all the bits that have been read have been used, the data items are recreated (i.e. both deinterleaved and decoded). As shown inFIG.7B, by pre-populating each bit in each data item with a zero (given that the number of bits, n, in each data item is fixed) and then replacing these zeros with values from the body data, there is no need to first recreate the body portions and then, in a separate operation, pad each body portion with the requisite number of leading zeros (as determined based on the header for the group). An alternative to the method ofFIG.7Ais shown inFIG.7Cand can be described with reference to the example shown inFIG.7D(for b=3, N=4, n=6). As shown inFIG.7C, the header data is processed to determine the size of the body data field, b*N (block702) and this is then used both to read the corresponding amount of data from the body data buffer (block704) and to generate a body data mask710(block705). The body data mask710may comprise ones in positions where valid data can be stored and zeros in all other bit positions and an example is shown inFIG.7D. The body bits are extracted using the fixed relationship (as described above) and the data items can be generated using the body data mask710and the body bits (block707). In various examples, AND gates may be used to combine the body bits and the body mask bits. As before, the resultant decoded data items are output (block708). The encoding scheme described above with reference toFIGS.4,5,6A and6Bresults in high levels of compression where the data items are small in value (e.g. close to zero) and where the binary representations of each of the data items are similar in terms of the position of the leading ones. For example, if a group of data items was: 01000000, 00000101, 00000100, 00000010, then although a body portion size of 7 bits could be used, the resulting amount of compression is much less than if the group of data items was: 00000111, 00000101, 00000100, 00000010, when a body portion size of 3 bits could be used. In various examples, such as where the data items are weights for a NN (or the mantissas of those weights), the distribution of the data items may be centred around (or close to) zero, as shown inFIG.8A. This distribution may, for example, be Gaussian or Laplacian. In such examples, if the data items are represented using two's complement, then the binary strings representing the negative values all have an MSB which is a one and so the encoding method described above cannot remove any leading zeros (e.g. bopt=n) and there is no compression. To improve compression or enable compression (e.g. in the case of two's complement representation), the data items are pre-processed (in block103ofFIG.1A). The pre-processing operation comprises folding and interleaving the data items so that they are all positive and the distribution is a curve of almost continuously decreasing probability, as shown inFIG.8B.
The folding and interleaving operation can be written mathematically as follows: symbol = (-2*coeff)-1 when coeff<0, and symbol = 2*coeff otherwise, where ‘coeff’ is the original input data item and ‘symbol’ is the pre-processed data item. Assuming the coefficients are represented in two's complement format, a multiplication by two can be implemented by left shifting by one bit position, and so this may be implemented in hardware (or software) as: symbol := is sign bit set ? (((NOT coeff)<<1) OR 1) : coeff<<1; or symbol := is sign bit set ? NOT (coeff<<1) : coeff<<1; where NOT and OR are bitwise operators and <<1 indicates left shifting by one bit position. This essentially converts the data item to a sign magnitude-like format but places the sign bit as the LSB. It will therefore be appreciated that if the data items are originally in sign magnitude format, the pre-processing operation may be modified such that it comprises moving the sign bit from the MSB to the LSB. In various examples, such as for Android NN formatted weights, the data items may not be centred around (or close to) zero and an example distribution is shown inFIG.8C. In such examples, the offset is subtracted from the input data items as part of the pre-processing operation (in block103), prior to interleaving and folding, such that a distribution similar to that shown inFIG.8Bis still achieved. An example of the folding and interleaving operation can be written as follows: offset_coeff = mod(coeff-offset, 2^n), with symbol = (-2*offset_coeff)-1 when offset_coeff<0, and symbol = 2*offset_coeff otherwise, which may be implemented as: symbol := is sign bit set ? (((NOT offset_coeff)<<1) OR 1) : offset_coeff<<1; where the function ‘mod’ computes the remainder of the division when the division is rounded towards negative infinity and in various examples may be equivalent to discarding the MSB. As noted above, multiplying by two can be implemented by left shifting by one bit position. Wherever a pre-processing operation is applied to the original input data items (in block103ofFIG.1A), the inverse operation is applied to the decoded data items before they are output (in block117ofFIG.1B). This post-processing operation (in block117) may therefore comprise an unfolding and de-interleaving operation and optionally the addition of an offset (e.g. so as to reset the distribution from that shown inFIG.8Bto the original distribution, e.g. as shown inFIG.8A or8C). Referring to the two examples given above, where the distribution is centred around zero and so no offset is involved: coeff = -0.5*(symbol+1) when symbol is odd, and coeff = 0.5*symbol otherwise. And where the distribution is offset: offset_coeff = -0.5*(symbol+1) when symbol is odd, and offset_coeff = 0.5*symbol otherwise, with coeff = mod(offset_coeff+offset, 2^n), where in both cases the halving operation may be performed by right shifting by one bit position. The offset may be communicated separately to the encoded data (e.g. in a command stream) or in any other way (e.g. via a register interface). In the examples described above, the header has a fixed size for all groups (e.g. h bits for each group, where h is fixed). The size of the header data, as a fraction of the overall compressed data size, is small (e.g. half a bit per input data value) and so compression of the header bits is not used. In a variation of the examples described above, however, variable length encoding (e.g. Huffman encoding) may be used for the header data. This may, for example, be used where the header data is biased.
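Returning to the fold and unfold operations above, a minimal C sketch of the zero-centred (no offset) case for n=8 two's complement coefficients is given below; the offset variant would simply wrap the mod-2^n subtraction and addition around these calls.

    /* Minimal sketch of the fold-and-interleave pre-processing and its inverse
     * for n = 8 bit two's complement coefficients (zero-centred case only). */
    #include <stdint.h>
    #include <stdio.h>

    static uint8_t fold(int8_t coeff)
    {
        /* negative values map to odd symbols, non-negative to even symbols */
        return (coeff < 0) ? (uint8_t)((-2 * (int)coeff) - 1)
                           : (uint8_t)(2 * (int)coeff);
    }

    static int8_t unfold(uint8_t symbol)
    {
        /* inverse: coeff = -0.5*(symbol+1) if symbol is odd, 0.5*symbol otherwise */
        return (symbol & 1u) ? (int8_t)(-(int)((symbol + 1u) / 2u))
                             : (int8_t)(symbol / 2u);
    }

    int main(void)
    {
        for (int c = -3; c <= 3; c++)
            printf("coeff %+d -> symbol %u -> %+d\n",
                   c, fold((int8_t)c), unfold(fold((int8_t)c)));
        return 0;
    }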
The data compression that may be achieved using the methods described herein may be further improved by increasing the number of data items that comprise only zeros. In examples where the data items are NN weights, this may be achieved by using pruning when training the NN. The data compression that may be achieved using the methods described herein may be further improved by grouping correlated data items, i.e. data items that are likely to have a similar value and hence a similar position of the leading one. For example, by grouping N data items with a leading one in bit positions 6 and 7 and grouping N data items with a leading one in bit positions 3 and 4, more compression can be achieved than if both groups of data items comprise a mixture of data items with leading ones in bit positions 3, 4, 6 and 7. This may be achieved, for example, by changing the way that the multi-dimensional input data, shown graphically inFIG.9, is divided into groups. In examples where the data items are NN weights, this may be achieved by grouping weights that relate to the same plane (e.g. weights for different x, y values but the same z value,902) instead of grouping weights that relate to different planes (e.g. weights for the same x,y value but different z values,904). The data compression and decompression methods described herein may be implemented in software, in hardware or in a combination of software and hardware. In various examples, the data compression method described herein may be implemented in software when mapping a NN to particular hardware (or a particular hardware type) and this is often a one-time operation. In contrast, the data decompression method described herein may be implemented at run time and may be performed many times, e.g. whenever the data items are used. FIG.10shows a schematic diagram of a data compression apparatus1000arranged to implement the method ofFIG.1A. As shown inFIG.10, the data compression apparatus1000comprises an input1001, an encoding module1002, an interleaving module1004, a memory interface1006and an output1007. The apparatus1000may additionally comprise a pre-processing module1008. The encoding module1002is arranged to perform the encoding of the data items (as in block104) using any suitable encoding method that generates a fixed size header for a group of data items and a body portion for each data item in a group (unless the body portion size, b, is zero). In various examples the encoding module1002is arranged to implement the method ofFIG.4. The interleaving module1004is arranged to interleave the body bits into the body data field (as in block106), e.g. as described above with reference toFIG.2. The memory interface1006is arranged to output the encoded data, via the output1007for storage in memory (as in block108). Where provided, the pre-processing module1008is arranged to fold and interleave, and optionally subtract an offset, from the input data items (in block103), as described above with reference toFIGS.8A-8C. In various examples, the encoding module1002, interleaving module1004and, where provided the pre-processing module1008, may be implemented in software. FIG.11shows a schematic diagram of a data decompression apparatus1000arranged to implement the method ofFIG.1B. 
The data decompression apparatus1100comprises a plurality of inputs1101-1103and a plurality of read modules1104-1106each arranged to read a different type of data: header data (in the header read module1105via input1102), body data (in the body read module1106via input1103) and in examples where a bias is used in the NN, a bias (in the bias read module1104via input1101). These read modules1104-1106each comprise a buffer (e.g. one or more FIFOs) and requesters (which may be implemented as linear counters) that request data from memory (not shown inFIG.11) in linear order as long as there is space for the requested data in the corresponding buffer. A prioritization scheme may be implemented to arbitrate between the requesters such that, for example, priority is given to bias requests and then to header requests with body data requests being the lowest priority. This prioritization is in inverse order of the quantity of data required and as a result the biases will never stall (as they will never not have enough data) and the FIFOs in both the bias read module1104and header read module1105can be narrower. As described above (with reference toFIG.1B), the decoding module1108reads data from the buffers in the header and body read modules1105,1106, performs the fixed pattern bit shuffle (as described above with reference toFIG.3or7) and generates the decoded data items which are then either output from the decompression apparatus (via output1112), or where post-processing is used, are first post-processed in the post-processing module1110before being output (via output1111). FIG.12shows a computer system in which the methods of data compression or decompression described herein may be implemented. The computer system comprises a CPU1202, a GPU1204, a memory1206and other devices1214, such as a display1216, speakers1218and a camera1220. The components of the computer system can communicate with each other via a communications bus1220. The system further comprises a neural network accelerator1224arranged to implement a method of data compression and/or decompression as described herein. Whilst this neural network accelerator1224is shown as a separate hardware unit inFIG.12, in other examples it may be part of the GPU1204and/or may be part of the same SoC (system on chip) as the CPU1202. The apparatus ofFIGS.10-12are shown as comprising a number of functional blocks. This is schematic only and is not intended to define a strict division between different logic elements of such entities. Each functional block may be provided in any suitable manner. It is to be understood that intermediate values described herein as being formed by a module need not be physically generated by the module at any point and may merely represent logical values which conveniently describe the processing performed by the apparatus between its input and output. The data compression and data decompression apparatus described herein may be embodied in hardware on an integrated circuit. The data compression and data decompression apparatus described herein may be configured to perform any of the methods described herein. Generally, any of the functions, methods, techniques or components described above can be implemented in software, firmware, hardware (e.g., fixed logic circuitry), or any combination thereof. The terms “module,” “functionality,” “component”, “element”, “unit”, “block” and “logic” may be used herein to generally represent software, firmware, hardware, or any combination thereof. 
In the case of a software implementation, the module, functionality, component, element, unit, block or logic represents program code that performs the specified tasks when executed on a processor. The algorithms and methods described herein could be performed by one or more processors executing code that causes the processor(s) to perform the algorithms/methods. Examples of a computer-readable storage medium include a random-access memory (RAM), read-only memory (ROM), an optical disc, flash memory, hard disk memory, and other memory devices that may use magnetic, optical, and other techniques to store instructions or other data and that can be accessed by a machine. The terms computer program code and computer readable instructions as used herein refer to any kind of executable code for processors, including code expressed in a machine language, an interpreted language or a scripting language. Executable code includes binary code, machine code, bytecode, code defining an integrated circuit (such as a hardware description language or netlist), and code expressed in a programming language code such as C, Java or OpenCL. Executable code may be, for example, any kind of software, firmware, script, module or library which, when suitably executed, processed, interpreted, compiled, executed at a virtual machine or other software environment, cause a processor of the computer system at which the executable code is supported to perform the tasks specified by the code. A processor, computer, or computer system may be any kind of device, machine or dedicated circuit, or collection or portion thereof, with processing capability such that it can execute instructions. A processor may be any kind of general purpose or dedicated processor, such as a CPU, GPU, System-on-chip, state machine, media processor, an application-specific integrated circuit (ASIC), a programmable logic array, a field-programmable gate array (FPGA), physics processing units (PPUs), radio processing units (RPUs), digital signal processors (DSPs), general purpose processors (e.g. a general purpose GPU), microprocessors, any processing unit which is designed to accelerate tasks outside of a CPU, etc. A computer or computer system may comprise one or more processors. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the term ‘computer’ includes set top boxes, media players, digital radios, PCs, servers, mobile telephones, personal digital assistants and many other devices. It is also intended to encompass software which defines a configuration of hardware as described herein, such as HDL (hardware description language) software, as is used for designing integrated circuits, or for configuring programmable chips, to carry out desired functions. That is, there may be provided a computer readable storage medium having encoded thereon computer readable program code in the form of an integrated circuit definition dataset that when processed (i.e. run) in an integrated circuit manufacturing system configures the system to manufacture a data compression or decompression apparatus configured to perform any of the methods described herein, or to manufacture a computing device comprising any apparatus described herein. An integrated circuit definition dataset may be, for example, an integrated circuit description. 
Therefore, there may be provided a method of manufacturing, at an integrated circuit manufacturing system, a data compression or decompression apparatus as described herein. Furthermore, there may be provided an integrated circuit definition dataset that, when processed in an integrated circuit manufacturing system, causes the method of manufacturing a data compression or decompression apparatus to be performed. An integrated circuit definition dataset may be in the form of computer code, for example as a netlist, code for configuring a programmable chip, as a hardware description language defining an integrated circuit at any level, including as register transfer level (RTL) code, as high-level circuit representations such as Verilog or VHDL, and as low-level circuit representations such as OASIS® and GDSII. Higher level representations which logically define an integrated circuit (such as RTL) may be processed at a computer system configured for generating a manufacturing definition of an integrated circuit in the context of a software environment comprising definitions of circuit elements and rules for combining those elements in order to generate the manufacturing definition of an integrated circuit so defined by the representation. As is typically the case with software executing at a computer system so as to define a machine, one or more intermediate user steps (e.g. providing commands, variables etc.) may be required in order for a computer system configured for generating a manufacturing definition of an integrated circuit to execute code defining an integrated circuit so as to generate the manufacturing definition of that integrated circuit. An example of processing an integrated circuit definition dataset at an integrated circuit manufacturing system so as to configure the system to manufacture a data compression or decompression apparatus will now be described with respect toFIG.13. FIG.13shows an example of an integrated circuit (IC) manufacturing system1002which is configured to manufacture a data compression or decompression apparatus as described in any of the examples herein. In particular, the IC manufacturing system1302comprises a layout processing system1304and an integrated circuit generation system1306. The IC manufacturing system1302is configured to receive an IC definition dataset (e.g. defining a data compression or decompression apparatus as described in any of the examples herein), process the IC definition dataset, and generate an IC according to the IC definition dataset (e.g. which embodies a data compression or decompression apparatus as described in any of the examples herein). The processing of the IC definition dataset configures the IC manufacturing system1302to manufacture an integrated circuit embodying a data compression or decompression apparatus as described in any of the examples herein. The layout processing system1304is configured to receive and process the IC definition dataset to determine a circuit layout. Methods of determining a circuit layout from an IC definition dataset are known in the art, and for example may involve synthesising RTL code to determine a gate level representation of a circuit to be generated, e.g. in terms of logical components (e.g. NAND, NOR, AND, OR, MUX and FLIP-FLOP components). A circuit layout can be determined from the gate level representation of the circuit by determining positional information for the logical components. 
This may be done automatically or with user involvement in order to optimise the circuit layout. When the layout processing system1304has determined the circuit layout it may output a circuit layout definition to the IC generation system1306. A circuit layout definition may be, for example, a circuit layout description. The IC generation system1306generates an IC according to the circuit layout definition, as is known in the art. For example, the IC generation system1306may implement a semiconductor device fabrication process to generate the IC, which may involve a multiple-step sequence of photo lithographic and chemical processing steps during which electronic circuits are gradually created on a wafer made of semiconducting material. The circuit layout definition may be in the form of a mask which can be used in a lithographic process for generating an IC according to the circuit definition. Alternatively, the circuit layout definition provided to the IC generation system1306may be in the form of computer-readable code which the IC generation system1306can use to form a suitable mask for use in generating an IC. The different processes performed by the IC manufacturing system1302may be implemented all in one location, e.g. by one party. Alternatively, the IC manufacturing system1302may be a distributed system such that some of the processes may be performed at different locations, and may be performed by different parties. For example, some of the stages of: (i) synthesising RTL code representing the IC definition dataset to form a gate level representation of a circuit to be generated, (ii) generating a circuit layout based on the gate level representation, (iii) forming a mask in accordance with the circuit layout, and (iv) fabricating an integrated circuit using the mask, may be performed in different locations and/or by different parties. In other examples, processing of the integrated circuit definition dataset at an integrated circuit manufacturing system may configure the system to manufacture a data compression or decompression apparatus without the IC definition dataset being processed so as to determine a circuit layout. For instance, an integrated circuit definition dataset may define the configuration of a reconfigurable processor, such as an FPGA, and the processing of that dataset may configure an IC manufacturing system to generate a reconfigurable processor having that defined configuration (e.g. by loading configuration data to the FPGA). In some embodiments, an integrated circuit manufacturing definition dataset, when processed in an integrated circuit manufacturing system, may cause an integrated circuit manufacturing system to generate a device as described herein. For example, the configuration of an integrated circuit manufacturing system in the manner described above with respect toFIG.13by an integrated circuit manufacturing definition dataset may cause a device as described herein to be manufactured. In some examples, an integrated circuit definition dataset could include software which runs on hardware defined at the dataset or in combination with hardware defined at the dataset. 
In the example shown inFIG.13, the IC generation system may further be configured by an integrated circuit definition dataset to, on manufacturing an integrated circuit, load firmware onto that integrated circuit in accordance with program code defined at the integrated circuit definition dataset or otherwise provide program code with the integrated circuit for use with the integrated circuit. A first aspect provides a method of data compression comprising: receiving a plurality of data items; encoding groups of data items by generating, for each of the groups, header data comprising h-bits and a plurality of body portions each comprising b-bits and each of the body portions corresponding to a data item in the group, wherein b is fixed within a group and wherein the header data for a group comprises an indication of b for the body portions of that group; generating, for each of the groups where b>0, a body data field for the group by interleaving bits from the body portions corresponding to data items in the group; and storing one or more encoded data blocks comprising the header data and the body data fields. In some examples, h is fixed for all groups and b is not fixed between groups. b may be an integer greater than or equal to zero and h is an integer greater than zero. Said storing one or more encoded data blocks may comprise: storing a body data block comprising body data fields for a plurality of groups; and storing a header data block comprising header data for the plurality of groups. Said interleaving bits from the body portions corresponding to data items in the group may comprise: (a) inserting a first bit from each of the body portions into the body data field; (b) inserting a next bit from each of the body portions into the body data field; and (c) repeating (b) until all bits from each of the body portions have been inserted into the body data field. Said inserting a first bit from each of the body portions into the body data field may comprise inserting a least significant bit from each of the body portions into the body data field and wherein inserting a next bit from each of the body portions into the body data field may comprise inserting a next least significant bit from each of the body portions into the body data field. Said encoding groups of data items may comprise, for each of the groups: receiving the group of data items; identifying a body portion size, b, by locating a bit position of a most significant leading one across all the data items in the group; generating the header data comprising a bit sequence encoding the body portion size; and generating a body portion comprising b-bits for each of the data items in the group by removing none, one or more leading zeros from each data item. Said identifying a body portion size may further comprise: checking if the identified body portion size is a valid body portion size; and in response to determining that the identified body portion size is not a valid body portion size, updating the body portion size to a next largest valid body portion size. The method may further comprise, prior to encoding groups of data items: analysing a plurality of groups of data items to generate a set of valid body portion sizes. 
Said analysing a plurality of groups of data items to generate a set of valid body portion sizes may comprise: analysing the data items in the plurality of groups of data items to identify a body portion size for each of the plurality of groups; identifying one or more least common body portion sizes for the plurality of groups of data items; and generating the set of valid body portion sizes by removing, from a set of all possible body portion sizes, those body portion sizes corresponding to the identified one or more least common body portion sizes. The set of valid body portion sizes may comprise 2^h different valid body portion sizes. The data items may comprise weights for a neural network. Said analysing a plurality of groups of data items to generate a set of valid body portion sizes may comprise, for each layer in the neural network: analysing all weights for the layer to generate a set of valid body portion sizes for that layer. The data items may have a distribution centred substantially on zero and the method may further comprise, prior to encoding a group of data items, pre-processing the data items in the group by converting all data items having a negative value to positive values and interleaving the converted data items with data items having a positive value. The data items may have a distribution centred on a non-zero value and the method may further comprise, prior to encoding a group of data items, pre-processing the data items in the group by shifting all data items such that the shifted distribution is centred substantially on zero and then converting all shifted data items having a negative value to positive values and interleaving the converted shifted data items with shifted data items having a positive value. A second aspect provides a method of data decompression comprising: receiving one or more blocks of data, the one or more blocks of data encoding one or more groups of data items; reading header data into a first buffer; reading body data into a second buffer; and for each of the encoded groups of data items: reading header data for the group from the first buffer, wherein the header data for a group of data items comprises an h-bit indication of a body portion size, b, for the group of data items, wherein b is fixed within a group; determining the body portion size, b, for the group of data items from the header data; reading a body data field from the second buffer based on the determined body portion size, the body data field comprising interleaved body portions, with one body portion for each of the data items in the group; decoding the body data field to generate the decoded data items, the decoding comprising de-interleaving the body portions, wherein each of the decoded data items comprises n bits, where n≥b; and outputting the decoded data items. In some examples, h is fixed for all groups and b is not fixed between groups. b may be an integer greater than or equal to zero and h may be an integer greater than zero.
The body data field may comprise a plurality of concatenated sections, each of the sections comprising one bit from each body portion, and wherein decoding the body data field may comprise: starting with an initial set of data items comprising only zeros, one for each data item in the group, reading sections of the body data field and for each section of the body data field, overwriting one of the zeros for each of the data items with a bit value from the section of the body data field to generate the decoded data items; or generating a body data mask comprising ones in bit positions corresponding to the determined body portion size, reading sections of the body data field and for each section of the body data field, combining one of the bits in the body data mask for each of the data items with a bit value from the section of body data field. The first section in the body data field may comprise a least significant bit from each of the body portions, the subsequent section may comprise a next least significant bit from each of the body portions and a last section in the body data field may comprise a most significant bit from each of the body portions. The method may further comprise, prior to outputting the decoded data items, post-processing the decoded data items in the group to convert one or more of the data items from positive values to negative values. The post-processing may further comprise applying an offset to each of the data items. The data items may comprise weights for a neural network. A third aspect provides a data compression apparatus comprising: an input for receiving a plurality of data items; an encoding module configured to encode groups of data items by generating, for each of the groups, header data comprising h-bits and a plurality of body portions each comprising b-bits and each of the body portions corresponding to a data item in the group, wherein b is fixed within a group and wherein the header data for a group comprises an indication of b for the body portions of that group; an interleaving module configured to generate a body data field for each of the groups by interleaving bits from the body portions corresponding to data items in the group; and a memory interface configured to output, for storage, one or more encoded data blocks comprising the header data and the body data field. A fourth aspect provides a data decompression apparatus comprising: one or more inputs for receiving one or more blocks of data, the one or more blocks of data encoding one or more groups of data items; a header read module configured to read header data into a first buffer; a body read module configured to read body data into a second buffer; and a decoding module configured, for each of the encoded groups of data items, to: read header data for the group from the first buffer, wherein the header data for a group of data items comprises a h-bit indication of a body portion size, b, for the group of data items, wherein b is fixed within a group; determine the body portion size, b, for the group of data items from the header data; read a body data field from the second buffer based on the determined body portion size, the body data field comprising interleaved body portions, with one body portion for each of the data items in the group; decode the body data field, comprising de-interleaving the body portions (704), to generate the decoded data items, wherein each of the decoded data items comprises n bits, where n≥b; and output the decoded data items. 
A fifth aspect provides a compression apparatus comprising: an input configured to receive weights to be used in a neural network; a compression module configured to compress the weights; and a memory interface configured to output the compressed weights for storage. A sixth aspect provides a hardware implementation of a neural network, the hardware implementation comprising decompression apparatus comprising: an input configured to receive compressed weights to be used in the neural network; and a decompression module configured to decompress the compressed weights; wherein the hardware implementation is configured to use the decompressed weights in the neural network. A seventh aspect provides a method of compressing weights to be used in a neural network. An eighth aspect provides a method of decompressing weights to be used in a neural network. A ninth aspect provides computer readable code configured to cause any of the methods described herein to be performed when the code is run. Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that by utilizing conventional techniques known to those skilled in the art that all, or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like. The methods described herein may be performed by a computer configured with software in machine readable form stored on a tangible storage medium e.g. in the form of a computer program comprising computer readable program code for configuring a computer to perform the constituent portions of described methods or in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable storage medium. Examples of tangible (or non-transitory) storage media include disks, thumb drives, memory cards etc. and do not include propagated signals. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously. The hardware components described herein may be generated by a non-transitory computer readable storage medium having encoded thereon computer readable program code. Memories storing machine executable data for use in implementing disclosed aspects can be non-transitory media. Non-transitory media can be volatile or non-volatile. Examples of volatile non-transitory media include semiconductor-based memory, such as SRAM or DRAM. Examples of technologies that can be used to implement non-volatile memory include optical and magnetic memory technologies, flash memory, phase change memory, resistive RAM. A particular reference to “logic” refers to structure that performs a function or functions. An example of logic includes circuitry that is arranged to perform those function(s). 
For example, such circuitry may include transistors and/or other hardware elements available in a manufacturing process. Such transistors and/or other elements may be used to form circuitry or structures that implement and/or contain memory, such as registers, flip flops, or latches, logical operators, such as Boolean operations, mathematical operators, such as adders, multipliers, or shifters, and interconnect, by way of example. Such elements may be provided as custom circuits or standard cell libraries, macros, or at other levels of abstraction. Such elements may be interconnected in a specific arrangement. Logic may include circuitry that is fixed function and circuitry that can be programmed to perform a function or functions; such programming may be provided from a firmware or software update or control mechanism. Logic identified to perform one function may also include logic that implements a constituent function or sub-process. In an example, hardware logic has circuitry that implements a fixed function operation, or operations, state machine or process. The implementation of concepts set forth in this application in devices, apparatus, modules, and/or systems (as well as in methods implemented herein) may give rise to performance improvements when compared with known implementations. The performance improvements may include one or more of increased computational performance, reduced latency, increased throughput, and/or reduced power consumption. During manufacture of such devices, apparatus, modules, and systems (e.g. in integrated circuits) performance improvements can be traded-off against the physical implementation, thereby improving the method of manufacture. For example, a performance improvement may be traded against layout area, thereby matching the performance of a known implementation but using less silicon. This may be done, for example, by reusing functional blocks in a serialised fashion or sharing functional blocks between elements of the devices, apparatus, modules and/or systems. Conversely, concepts set forth in this application that give rise to improvements in the physical implementation of the devices, apparatus, modules, and systems (such as reduced silicon area) may be traded for improved performance. This may be done, for example, by manufacturing multiple instances of a module within a predefined area budget. Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person. It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. Any reference to ‘an’ item refers to one or more of those items. The term ‘comprising’ is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and an apparatus may contain additional blocks or elements and a method may contain additional operations or elements. Furthermore, the blocks, elements and operations are themselves not impliedly closed. The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. The arrows between boxes in the figures show one example sequence of method steps but are not intended to exclude other sequences or the performance of multiple steps in parallel.
Additionally, individual blocks may be deleted from any of the methods without departing from the spirit and scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought. Where elements of the figures are shown connected by arrows, it will be appreciated that these arrows show just one example flow of communications (including data and control messages) between elements. The flow between elements may be in either direction or in both directions. The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention. | 153,483 |
11863209 | DETAILED DESCRIPTION The invention aims at providing a technical solution and circuit device capable of equivalently providing the same/similar configurations for the operations/functions of the functional blocks/circuits comprised within the circuit device by using fewer configuration bits, a data compression/encoding operation, and a data comparison operation, thereby minimizing the circuit area of a non-volatile memory circuit such as an electronic fuse (E-Fuse) array while retaining or holding the same/similar configurations for the operations/functions of the functional blocks/circuits. In addition, the provided technical solution and circuit device can increase the total addressable memory space used for storing information in the E-Fuse array(s) and/or can reduce the silicon area taken up by the E-Fuse array(s), thus effectively reducing the assembly, test and manufacturing cycle time as well as the overall cost of manufacturing the circuit device such as a circuit chip. The electronic fuse (E-Fuse) is a component in modern integrated circuit design intended for storing configuration bits which collectively define the various capabilities of the end product. Each system capability is assigned a dedicated bit within the E-Fuse block. For example, some system capabilities may require multiple bits to represent more variations of the supported features. The more capabilities a particular system has, the more bits of E-Fuse are required to be implemented in silicon. As the E-Fuse array size increases, aside from the increased silicon area incurred, the E-Fuse burn and verification cycle time during chip manufacturing will also be negatively impacted. Therefore it is often a tradeoff between manufacturing cost and the marketing flexibility of configuring multiple SKUs when it comes to appropriately sizing the E-Fuse array during the chip design phase. To overcome the aforementioned tradeoff, the provided technical solution and circuit device can enable a large number of system capabilities to be individually configured post-fabrication while requiring only a relatively small E-Fuse array to be implemented on chip. Thus, each physical fuse bit in the provided technical solution and circuit device equivalently stores information of more than one capability bit. FIG.1is a block diagram of an integrated circuit100such as an integrated circuit chip or a system-on-chip device according to an embodiment of the invention. InFIG.1, the integrated circuit100comprises a non-volatile memory circuit105, a programmable memory circuit110, a decision circuit115, and a processing circuit120. The decision circuit115comprises a data compression circuit125, a comparator circuit130, and a fallback mechanism block135. The system of the integrated circuit100comprises a plurality of function blocks comprised within the processing circuit120, and multiple configuration bits are used to provide corresponding configuration settings for the plurality of function blocks, to enable or disable at least a portion of the functional blocks. For example, each configuration bit may be set to ‘0’ initially, and a configuration bit may be configured as ‘1’ in response to different requirements. The non-volatile memory circuit105is used for securely and permanently recording and protecting a key data content having Y bits.
The non-volatile memory circuit105for example may be implemented by using or may comprise one or more actual electronic fuse arrays, one time programmable read-only memory (OTP ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other technologies, as long as the key data content having Y bits indicated by the signal fuse_key[Y−1:0] inFIG.1can be permanently set by a chip manufacturer and electronically protected such that it cannot be erased or tampered with by unauthorized parties. The key data content fuse_key[Y−1:0] for example is a compressed or encoded data content generated from the X capability bits of the user configuration data content set by the chip manufacturer. The key data content fuse_key[Y−1:0] cannot be changed by the user once it is set. The programmable memory circuit110is used for storing a user configuration data content having X bits, wherein the number X is larger/greater than the number Y. The programmable memory circuit110may be implemented by using or may comprise a user programmable memory, a bank of flip-flops, a bank of latches, a register file, a static random access memory (SRAM), or a dynamic random access memory (DRAM), as long as it is programmable by the user and the programmed value is accessible by other logic blocks of the system. The programmed value, i.e. the X capability bits of the user configuration data content, is indicated by the signal user_config [X−1:0], and it is transmitted to two blocks, i.e. the data compression circuit125and the fallback mechanism135in the decision circuit115. The decision circuit115is coupled to the non-volatile memory circuit105and the programmable memory circuit110, and it is arranged for receiving the key data content having the Y bits (i.e. fuse_key [Y−1:0]) and the user configuration data content having the X bits (i.e. user_config [X−1:0]), converting the user configuration data content having the X bits (user_config [X−1:0]) into a user configuration key content having Y bits (i.e. user_key [Y−1:0]), comparing the user configuration key content having the Y bits (user_key [Y−1:0]) with the key data content having Y bits (fuse_key [Y−1:0]), selecting a fallback configuration data content having X bits (fallback_config [X−1:0]) as the output data when the user configuration key content (user_key [Y−1:0]) does not match the key data content (fuse_key [Y−1:0]), and selecting the user configuration data content having X bits (user_config [X−1:0]) as the output data when the user configuration key content (user_key [Y−1:0]) matches the key data content (fuse_key [Y−1:0]). That is, the decision circuit115is used to generate more resultant configuration bits for the function blocks according to the fewer preliminary configuration bits stored in the non-volatile memory circuit105and the information stored in the user programmable memory circuit110. Thus, the circuit area of the non-volatile memory circuit105can be minimized or reduced, and the storage space of the user programmable memory circuit110can be increased. In practice, the data compression circuit125for example is compression/encoder logic (or a circuit) which may use a variety of operations or algorithms such as Hamming Code, checksums, Cyclic Redundancy Check (CRC), hash algorithms and other data compression algorithms. The above-mentioned algorithms can process a large amount of data to generate or produce a smaller amount of data/key/signature.
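To illustrate the data path just described, the following minimal Python sketch models the behavior of the decision circuit115 offline: an X-bit user_config word is compressed to a Y-bit key, compared with the burned fuse_key, and either user_config or fallback_config is passed through, mirroring the roles of the comparator circuit130 and the multiplexer. The sketch is illustrative only; it uses a truncated CRC-32 as a stand-in for the compression/encoder logic (Hamming Code, checksums, CRC and hash algorithms are all named above as options), and the function names and bit widths are assumptions rather than the patent's implementation.

    import zlib

    def compress_to_key(user_config_bits, y_bits):
        # Stand-in for the data compression circuit 125: reduce the X-bit
        # user_config word to a Y-bit key (here via CRC-32, truncated to Y bits).
        as_bytes = int(user_config_bits, 2).to_bytes((len(user_config_bits) + 7) // 8, "big")
        return zlib.crc32(as_bytes) & ((1 << y_bits) - 1)

    def decide_config(user_config_bits, fuse_key, fallback_config_bits, y_bits):
        # Comparator circuit 130 + fallback mechanism 135 (the MUX):
        # pass user_config through only when its compressed key matches fuse_key.
        user_key = compress_to_key(user_config_bits, y_bits)
        return user_config_bits if user_key == fuse_key else fallback_config_bits

    # Example: X = 16 capability bits compressed to a Y = 6 bit key.
    user_config = "1011000011110001"
    fallback = "0000000000000000"
    fuse_key = compress_to_key(user_config, 6)  # value burned by the manufacturer
    print(decide_config(user_config, fuse_key, fallback, 6))          # match -> user_config
    print(decide_config("1111111111111111", fuse_key, fallback, 6))   # mismatch -> fallback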
For instance, the Hamming Code operation can be used to produce a 6-bit parity signature from 26 bits of input data and an 8-bit parity signature from 120 bits of input data, to effectively reduce the E-Fuse array size requirements by 77% and 93%, respectively; however, this is not intended to be a limitation. As shown inFIG.1, the data compression circuit125performs a data compression/encoding operation upon the X bits of user configuration data content stored in the programmable memory circuit110, indicated by user_config [X−1:0], to compress the user configuration data content having X bits and thereby generate the user configuration key content having Y bits (indicated by the signal user_key [Y−1:0]), where Y is smaller than X. Then, the comparator circuit130is arranged to receive the bits of signal user_key [Y−1:0] from the data compression circuit125, receive the bits of signal fuse_key [Y−1:0] from the non-volatile memory circuit105, compare the bits/value of signal user_key [Y−1:0] with the bits/value of signal fuse_key [Y−1:0] to generate a comparison signal, and report the information of such comparison signal to the fallback mechanism135, wherein the comparison signal indicates whether the information of signal user_key [Y−1:0] matches that of signal fuse_key [Y−1:0] or not. If the bits/value of signal user_key[Y−1:0] matches the bits/value of signal fuse_key[Y−1:0] (i.e. the user configuration matches the intended configuration set by the chip manufacturer for the integrated circuit100), then the content of the report signal, transmitted from the comparator circuit130to the multiplexer MUX, may be set as '1', and the multiplexer MUX, based on the bit '1' of the report signal, is arranged to select the signal user_config [X−1:0] as its output. Conversely, if the bits/value of signal user_key [Y−1:0] do not match the bits/value of signal fuse_key [Y−1:0] (i.e. the user configuration does not match the intended configuration set by the chip manufacturer), then the content of the report signal, transmitted from the comparator circuit130to the multiplexer MUX, may be set as '0', and the multiplexer MUX, based on the bit '0' of the report signal, is arranged to select a fallback configuration signal having X bits, indicated by the signal fallback_config [X−1:0], as its output. The fallback mechanism135as shown inFIG.1comprises the multiplexer MUX, which may be implemented by an array of multiplexers. In this embodiment, the fallback configuration signal having X bits indicated by the signal fallback_config[X−1:0] is for example an alternate configuration pattern signal set by the chip manufacturer that either prevents the system from functioning, partially disables or cripples system capabilities, or even enters a special operating mode to inform the user of the mismatch and possibly allow the user to troubleshoot and correct the problem. That is, when the fallback configuration data fallback_config[X−1:0] is received by the processing circuit120, the processing circuit120may perform at least one of denying the system from functioning, partially disabling or crippling system capabilities, entering a special operating mode to inform a user of the mismatch, and allowing the user to troubleshoot and correct a specific problem.
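The 26-bit-to-6-bit and 120-bit-to-8-bit figures quoted above are consistent with a single-error-correcting Hamming code extended by one overall parity bit (SECDED), for which p parity bits cover up to 2^p − p − 1 data bits. The short sketch below merely reproduces that arithmetic as a sanity check; it illustrates one possible encoding choice and is not a statement that the patent mandates SECDED.

    def secded_parity_bits(data_bits):
        # Smallest p with 2**p >= data_bits + p + 1 (single-error-correcting Hamming),
        # plus one overall parity bit for double-error detection (SECDED).
        p = 1
        while (1 << p) < data_bits + p + 1:
            p += 1
        return p + 1

    for x in (26, 120):
        y = secded_parity_bits(x)
        print(x, "->", y, "bits;", f"{100 * (x - y) / x:.0f}% fewer fuse bits")
        # 26 -> 6 bits; 77% fewer fuse bits
        # 120 -> 8 bits; 93% fewer fuse bits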
When the problem is corrected, the final configuration signal having X bits, indicated by the signal final_config[X−1:0], is generated at the output of the multiplexer MUX and provided to the processing circuit120, and the final configuration signal is selected from the signal user_config[X−1:0] if the bit of the report signal is equal to '1'. In this situation, the full capabilities defined/specified by the chip manufacturer become available for use by the functional blocks comprised by the processing circuit120. The processing circuit120is coupled to the decision circuit115, and it is used for receiving the output data (i.e. signal final_config[X−1:0]) of the decision circuit115and performing at least one corresponding capability operation according to the output data. For example, one or more functional blocks comprised by the processing circuit120are arranged to execute one or more corresponding functions and operations based on the configuration information indicated by the configuration bits carried by the signal final_config[X−1:0]. By doing so, the provided integrated circuit, employing the data compression and comparison operations with fewer configuration/physical bits stored in the E-Fuse array(s), can equivalently achieve the functional blocks' capabilities corresponding to more configuration/physical bits. Thus, the silicon area of the E-Fuse array(s) can be significantly reduced. Further, the numbers X and Y and the relation between X and Y can be determined by the type of data compression or encoding algorithm used by the system of the integrated circuit100; that is, they are specific to the data compression or encoding algorithm that is used. For example, the number X is designed to be larger/greater than the number Y so as to provide actual technical benefits. Further, it should be noted that in other embodiments the integrated circuit100as shown inFIG.1can be synthesized as a netlist circuit or implemented as an application specific integrated circuit (ASIC). In one embodiment, the integrated circuit100can be emulated by a programmable hardware circuit. Further, in one embodiment, the integrated circuit100can be implemented by using discrete hardware components. Further, in one embodiment, the integrated circuit100can be partially implemented by software components. To help readers more clearly understand the spirit of the invention,FIG.2is provided.FIG.2is a flowchart diagram of a method of the integrated circuit100as shown inFIG.1according to an embodiment of the invention.
The steps are described in the following:
Step S200: Start;
Step S205: Receive a key data content having Y bits stored in a non-volatile memory circuit which securely and permanently records and protects the key data content having Y bits;
Step S210: Receive a user configuration data content having X bits, greater than Y bits, from a programmable memory circuit;
Step S215: Compress or encode the user configuration data content having X bits to generate a user configuration key content having Y bits, which are fewer than X bits;
Step S220: Compare the user configuration key content having Y bits with the key data content having Y bits to determine whether the contents match; if the contents do not match, the flow proceeds to Step S225; otherwise, the flow proceeds to Step S230;
Step S225: Select a fallback configuration data content having X bits as the output data;
Step S230: Select the user configuration data content having X bits as the output data;
Step S235: Perform at least one corresponding capability operation according to the output data; and
Step S240: End.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims. | 14,003
11863210 | Like reference symbols in the various drawings indicate like elements. DETAILED DESCRIPTION Described are various digital predistortion implementations that include an envelope generator that can generate a slow or fast envelope signal (depending on available bandwidth), an actuator (predistortion) block that depends on both the signal itself and the envelope to thus use the envelope signal e (in addition to u) to generate the pre-distorted output v provided to the transmit chain. The system is able to generate an optimal shaping table for a given PA set-up and performance targets. The envelope generator implements an advanced function to convert an input signal (e.g., baseband input), u, into a safe (e.g., from a power consumption perspective) and efficient envelope signal, e. The actuator block is configured to also generate a resultant envelope signal, eA, that can be optimized to generate better linearization results with a bandwidth that is compatible with a power supply modulator of the linearization system. Thus, in some embodiments, an example method for digital predistortion, generally performed at a digital predistorter of a linearization system, is provided. The method includes receiving (by a digital predistorter) a first signal that depends on amplitude variations based on an input signal, u, with the variations of the first signal corresponding to time variations in non-linear characteristics of a transmit chain that includes a power amplifier. The method further includes receiving (by the digital predistorter) the input signal u, generating, by the digital predistorter, based at least in part on signals comprising the input signal u and the first signal, a digitally predistorted signal v to mitigate the non-linear behavior of the transmit chain, and providing the predistorted signal v to the transmit chain. In some embodiments, the first signal may be the output of an envelope tracker, in which case receiving the first signal may include monitoring by the digital predistorter a time-varying signal e generated by an envelope tracker (the envelope tracking module which may be a separate module from the digital predistorter) that received a copy of the input signal u. In some examples, the time-varying signal e is generated from the input signal u such that the time-varying signal e causes at least some non-linear behavior of the power amplifier. In such examples, generating the digitally predistorted signal v may include using the time-varying signal e to digitally predistort the input signal u such that the output of the transmit chain resulting from digitally predistorting the input signal u is substantially free of the at least some non-linear distortion caused by the time-varying signal e. In additional implementations described herein, an example method to control electrical operation of a transmit chain, generally performed by an envelope tracking module of a linearization system, is provided. The method includes receiving by an envelope tracking module an input signal u, with the input signal u further provided to a digital predistorter coupled to a transmit chain that includes a power amplifier. 
The method further includes determining, by the envelope tracking module, based on amplitude variations of the input signal u, a time-varying signal, e, with the amplitude variations of the time-varying signal e correspond to time variations in non-linear characteristics of the transmit chain, and outputting, by the envelope tracking module, the time-varying signal e. The digital predistorter is configured to receive another input signal that depends on the amplitude variations of the time-varying signal e, and to generate, based at least in part on signals comprising the input signal u and the other input signal, a digitally predistorted output, v, provided to the transmit chain, to mitigate non-linear behavior of the transmit chain. In some embodiments, the other input signal provided to the digital predistorter is the time varying signal e determined by the envelope tracking module. The digital predistorter is further configured, in such embodiments, to additionally produce a resultant control signal, eA, provided to a power supply modulator controlling the electrical operation of the transit chain. In further implementations described herein, an example method to control electrical operation of a transmit chain, generally performed at a power supply modulator of a linearization system, is provided. The method includes receiving, by a power supply modulator, one or more control signals, and regulating, based on the one or more control signals, power supply provided to a power amplifier of a transmit chain to underpower the power amplifier so that the transmit chain includes at least some non-linear behavior. The at least some non-linear behavior of the transmit chain, resulting from regulating the power supply based on the one or more control signals, is at least partly mitigated through digital predistortion performed by a digital predistorter on signals comprising an input signal, u, provided to the digital predistorter, and on another signal, provided to the digital predistorter, that depends on amplitude variations based on an input signal, u, wherein the variations of the other signal correspond to time variations in non-linear characteristics of the transmit chain. In some embodiments, the other signal provided to the digital predistorter includes a time-varying signal e generated by an envelope tracker that receives a copy of the input signal u. In some examples, the time-varying signal e is derived based on a set of constraints that includes a first constraint in which e[t]≥h(|u[t]|), where h(⋅) defines a relation between instantaneous power of the input signal u and a power supply of the transmit chain, a second constraint imposing maximal value and curvature bounds for the signal e, such that e[t]≤E0, and |2e[t]−e[t−1]−e[t+1]|≤E2, where E0 and E2 are values representative of operational characteristics of the power amplifier, and a third constraint requiring that values of e[t] be as small as possible, subject to the first and second constraints (it is to be noted that a similar procedure to derive e may be realized with respect to the above methods implemented for the digital predistorter and the envelope tracking module). Thus, with reference toFIG.1, a schematic diagram of an example linearization system100, comprising a power amplifier114whose operational power (and thus non-linear behavior) is controlled based on an envelope tracking signal e produced by an envelope generator (tracker)140, and based on the system's input signal, u, is shown. 
In some examples, the envelope tracker/generator140is configured to produce an output that takes into account non-linear behavior of various modules of the system100(and not only the non-linear behavior of the PA114), including the non-linear behavior of a power supply modulator150that controls the power provided to the power amplifier114. Thus, for example, the envelope tracker140may produce output based on the non-linear characteristics of the power supply modulator (and/or non-linear characteristics of other modules/units of the system100) with such output either provided directly to the power supply modulator (e.g., to mitigate, in some embodiments, the non-linear behavior of the power supply modulator150), or provided to the actuator120for further processing (as will be discussed in greater detail below). The linearization system100includes a transmitter chain110used to perform the transmission processing on signals produced by an actuator (implementing a digital predistorter, or DPD)120. Particularly, in the embodiments ofFIG.1, the transmit chain includes a transmitter and observation path circuitry that includes a digital-to-analog converter112(which may optionally be coupled to a frequency modulator/multiplier and/or a variable gain amplifier) that produces an analog signal that possibly has been shifted to an appropriate RF band. The predistorted analog signal (also referred to as predistorted output signal) is then provided to the power amplifier114whose power is modulated by a signal eB(which may be the actual supplied voltage to power the PA, or a signal representative of the voltage level to be applied to the PA) determined based on an envelope tracking signal eAproduced by the actuator120based on the input signal u. As will be discussed in greater detail below, in some embodiments, a modulation signal eA, produced by the actuator120, may take into account the behavior of the input signal u, the predistortion functionality of the actuator120, characteristics of the power supply modulator150(including any non-linear characteristics of the power supply modulator), and characteristics of the PA114(including the PA's non-linear profile) in order to produce the modulating signal, eB, to modulate the PA114in a more optimal way that depends not only on the signal u, but also on the desired predistortion and non-linear behaviors of the system100. The output signal, w, produced by the PA114, is a process (e.g., amplified) signal in which, ideally (depending on the effectiveness of the predistortion functionality implemented by the actuator120and the non-linear behavior of the PA regulated through the control signal eB), non-linear distortion of the output signal has been substantially removed. In some embodiments, the output signal, w, may be provided to a bandpass filter (not shown inFIG.1) to remove any unwanted harmonics or other signal noise, before being communicated to its destination (a local circuit or device, or a remote device) via a wired or wireless link. It is to be noted that to the extent that the envelope tracker/generator140may be configured to model the non-linear behavior of either the PA114and/or the power supply modulator150, the sampling rate for the signal eAmay be controlled (e.g., be increased) so that it can better track/monitor frequency expansion resulting from the non-linear behavior of power supply modulator and/or other modules of the system100. eAmay have a sampling rate of, for example, 10 msps, 100 msps, or any other appropriate sampling rate. 
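One way to picture the bandwidth relationship described above is to derive the slower control waveform from a full-rate envelope by low-pass filtering and decimation. The Python sketch below is purely illustrative: the 500 Msps and 10 Msps rates echo the example figures used in this description, while the test signal u, the resampling routine and the variable names are assumptions rather than the system's actual implementation.

    import numpy as np
    from scipy.signal import resample_poly

    fs_u, fs_mod = 500e6, 10e6          # illustrative rates: baseband vs. supply modulator
    t = np.arange(200_000) / fs_u
    u = np.exp(2j * np.pi * 1e6 * t) * (0.6 + 0.4 * np.sin(2 * np.pi * 40e3 * t))

    e_fast = np.abs(u)                  # full-rate envelope tracking |u|
    # Band-limit and decimate so the control waveform fits the modulator bandwidth.
    e_slow = resample_poly(e_fast, up=1, down=int(fs_u // fs_mod))
    print(len(e_fast), "->", len(e_slow), "samples")   # 200000 -> 4000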
As also shown inFIG.1, the transmitter chain110includes an observation path to measure the output signal produced by the PA114in order to perform the DPD adaptation process (discussed in greater detail below). In some embodiments, the observation path circuitry includes a frequency demodulator/multiplier (e.g., which may be part of a coupler116schematically depicted inFIG.1) whose output is coupled to an analog-to-digital converter (ADC)118to produce the digital samples used in the DPD adaptation process implemented by an estimator130. The estimator130is configured to, for example, compute coefficients that weigh basis functions selected to perform the predistortion operation on the input signal u. It is to be noted that while the signals u and eAcan have different sampling rates, the feedback signals y and eB(i.e., the signal modulating the power level of the PA) generally need to be provided at the same sampling rate for proper basis reconstruction. The estimator130derives the DPD coefficients based (at least indirectly) on the amplitude variations of the input signal u (provided to the actuator120and the envelope tracker140). Particularly, because the envelope tracker140produces a signal that corresponds to the amplitude variations of the input signal u, and that signal is used to control the non-linear behavior of the PA, which in turn impacts the output of the transmit chain, the coefficients derived by the estimator130are affected by the amplitude variation behavior tracked by the envelope tracker140. In addition to producing coefficients to weigh the predistortion components/functions, the estimator130may also be configured to derive coefficients that weigh functions applied to the envelope tracking signal e (in embodiments in which the signal e produced by the envelope tracker140is provided directly to the actuator120, as occurs in the system100ofFIG.1) to produce a resultant control signal eAthat controls/modulates an envelope tracking power supply modulator150to control or modulate power provided to operate the PA114of the system100. More particularly, as noted, the envelope tracker/generator140is configured to generate an envelope signal e which is appropriate for the given input baseband signal u and for the power amplifier implementation in order to regulate the operation (including the non-linear behavior) of the PA. As will be discussed in greater detail below, in some embodiments, a modulation signal eA, produced by the actuator120, may take into account the behavior of the input signal u, the predistortion functionality of the actuator120, characteristics of the power supply modulator150(including any non-linear characteristics of the power supply modulator), and characteristics of the PA114(including the PA's non-linear profile) in order to produce the modulating signal, eB, to modulate the PA114in a more optimal way that depends not only on the signal u, but also on the desired predistortion and non-linear behaviors of the system100. In some embodiments, and as depicted inFIG.1, instead of directly being supplied to the power modulator150, the tracked envelope signal e may be directed to the actuator120, which makes explicit use of the tracked envelope signal e to generate the predistorted signal v. In some embodiments, the signal u (or, alternatively, the intermediate signal v) may also be used to generate the new envelope signal eA, with the new envelope signal eAhaving a bandwidth that is compatible with the power supply modulator150. 
In some embodiments, the signal e may also be provided to the power supply modulator150(in addition to, or instead of, being provided to the actuator120). Thus, as depicted inFIG.2, showing an example of the input/output configuration for an example actuator200(which may be similar to the actuator120ofFIG.1), in some embodiments the actuator200(DPD engine) receives the input signal u (e.g., a digital signal with 500 Msps) and also receives the tracked envelope signal e generated based on the signal u (with a bandwidth commensurate with the bandwidth of the signal u, e.g., the signal e may also be a 500 Msps signal). The actuator200predistorts the signal u to generate the predistorted signal v based, in part, on e, with the predistorted signal v having a bandwidth commensurate with the bandwidth of the input signal u (e.g., v may also be a signal with 500 Msps). The actuator also generates the signal eA, optionally based, in part, on the signal u, with eAhaving a bandwidth that is compatible with the bandwidth of the power supply modulator150. In the example ofFIG.2, the new tracked envelope signal generated based on the signal e (and possibly based on the signal u) has a bandwidth corresponding to 10 Msps (as compared to the much larger 500 Msps of the predistorted signal v). It is noted that in some implementations the new signal eAneed not be based on the time-varying signal u, but instead may be, for example, a down-sampled version of the signal e, or may otherwise be a resultant signal processed (through some pre-determined filtering process) from the signal e. In embodiments where the tracked envelope signal is provided to the actuator120, the signal e should be large enough to avoid irreversible damage to the baseband signal u caused by clipping, but should also be as small as possible to maximize PA efficiency. Accordingly, in some embodiments, the envelope tracker140is implemented to generate a digital envelope signal e=e[t] (for the purposes of the present description, the notation t represents discrete digital samples or instances) that satisfies three (3) main conditions. A first condition is that the inequality e[t]≥h(|u[t]|) has to be satisfied, where the function h(⋅) defines the relation between the instantaneous power of the baseband signal and the power supply, preventing irreparable damage by clipping. This function corresponds to the shaping table used in conventional envelope trackers/generators generating envelope signals that directly regulate the power of the PA in a power amplifier system. An example of the function h(⋅) is given by: h(u) = Vmin + (Vmax − Vmin)·u/Umax, where Vmin is the minimal supply voltage for the PA, Vmax is the maximal supply voltage for the PA, and Umax is the maximal possible value of |u[n]|. This constraint can be used to control the voltage range handled by the envelope tracker. Other examples of functions h that establish relations between the instantaneous power of a baseband signal and the power supply may be used. A second condition or constraint that may be imposed on the signal e generated by the envelope tracker140is that e[t] needs to satisfy maximal value and curvature bounds, expressed by the following inequalities: e[t]≤E0, and |2e[t]−e[t−1]−e[t+1]|≤E2, where the constants E0, E2 depend on the particular PA used (e.g., E0 and E2 represent operational characteristics, attributes, and behavior of the particular PA114used in the transmit chain110ofFIG.1). The expression |2e[t]−e[t−1]−e[t+1]|≤E2 bounds the second derivative and may represent a smoothness constraint.
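The example shaping function h(⋅) given above can be transcribed directly; the sketch below does so in Python, with the caveat that the Vmin, Vmax and Umax values are made-up placeholders rather than values taken from this description.

    import numpy as np

    def h(u_mag, v_min=0.5, v_max=5.0, u_max=1.0):
        # Example shaping function from the text:
        #   h(u) = Vmin + (Vmax - Vmin) * u / Umax
        # maps |u[t]| to the minimum supply level that avoids clipping.
        return v_min + (v_max - v_min) * (u_mag / u_max)

    u = np.array([0.0, 0.25, 0.5, 1.0])
    print(h(u))            # approximately [0.5, 1.625, 2.75, 5.0]
    # The first envelope constraint is then e[t] >= h(|u[t]|) for every sample t.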
This constraint may be used to control the smoothness of the tracking envelope curve, and thus control the response speed of the envelope tracking to the signal being tracked (e.g., the constraint may be used to control how fast the signal response is; a smoother function, e, will correspond to a slower tracking signal). The pre-determined constant E2 can thus be selected (e.g., by a user) to control the speed of the response of the envelope tracker, and can be representative of the bandwidth characteristics of the PA114and/or the envelope tracking power supply modulator150. A third constraint that may be imposed on the signal e[t] is that, subject to the bounds established by the first and second constrains, the values of e[t] should be as small as possible. Thus, the envelope tracker/generator is configured to implement a substantially real-time signal processing procedure which conforms to the constraints (1)-(3) by, for example, iteratively updating a function g=g(t, α) such that selecting e[t]≥g(t, e[t−1]) is not in conflict with the specifications constraints e[t]≤E0 and e[t]≥2e[t−1]−e[t−2]−E2, and allows future selections of e[τ], with τ>t, to be possible in such a way that to satisfy conditions (1) and (2). The lower bound g(t, α) is specified as a piecewise linear function of its second argument, with break points spaced evenly on the interval [0, e0]. The signal e[t] determined, based on the input signal u[t] (prior to any predistortion processing applied to u) is provided to the actuator/DPD engine120of the system100depicted inFIG.1. As noted, the actuator120may generate two output signals: i) a predistorted signal v[t], which is derived based on the input signal u (which is a digital signal that can be represented as u[t]) and the time-varying envelope tracking signal e[t] (whose samples depend on the amplitude variations of the signal u[t]), and ii) the resultant envelope tracking signal eA[t] which may also be derived based on the input signal u[t] and the envelope tracking signal e[t] and therefore can factor into the modulating signal (controlling the PA) the behavior of the input signal u[t]. By basing both the resultant predistorted signal v[t] and the power supply modulation signal eA[t] on the input signal u[t] and the envelope tracking signal e[t] (with the tracking signal being derived based, in part, on the particular characteristics, as represented by the values E0 and E2, of the PA to be regulated), the PA can be carefully regulated to control its non-linear behavior in a way that would allow a more optimal processing of the predistorted signal. Put another way, higher efficiency (e.g., in terms of power consumption and bandwidth characteristics) may be achieved by deliberately setting the PA to operate in a non-linear mode (e.g., by under-powering the PA, to reduce power consumption) that is mitigated/countered through predistortion processing of the input signal u[t] that depends on the particular non-linear behavior point selected for the PA. Accordingly, in such embodiments, the actuator120is configured to perform the digital predistortion by using the time-varying control signal e in the digital predistortion of the input signal u such that the output of the transmit chain resulting from the input signal u and actuated by the control signal e is substantially free of the at least some non-linear distortion caused by the generated control signal. Different predistortion processing may be implemented by the actuator120to pre-invert the signal. 
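For readers who want to see the three conditions together, the following sketch computes, offline, the smallest envelope satisfying them by posing the problem as a small convex program. This is only an illustration of the constraint set; it is not the substantially real-time procedure based on the lower-bound function g(t, α) described above, and the E0, E2 and shaping-table values are invented.

    import numpy as np
    import cvxpy as cp

    def smallest_envelope(u, E0=5.0, E2=0.05, v_min=0.5, v_max=5.0, u_max=1.0):
        # Offline illustration of the three constraints on e[t]:
        #   (1) e[t] >= h(|u[t]|)                                (no clipping)
        #   (2) e[t] <= E0 and |2e[t]-e[t-1]-e[t+1]| <= E2       (level/curvature bounds)
        #   (3) e[t] as small as possible subject to (1) and (2)
        floor = v_min + (v_max - v_min) * np.abs(u) / u_max
        e = cp.Variable(len(u))
        cons = [e >= floor, e <= E0,
                cp.abs(2 * e[1:-1] - e[:-2] - e[2:]) <= E2]
        cp.Problem(cp.Minimize(cp.sum(e)), cons).solve()
        return e.value

    rng = np.random.default_rng(0)
    u = np.clip(np.abs(rng.normal(0.4, 0.2, 256)), 0, 1.0)
    e = smallest_envelope(u)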
In some example embodiments, the actuator120may derive the output v[t] according to the expression: v[t] = u[t] + Σ_{k=1..n} x_k·B_k(q_u[t], q_e[t]), with q_u[t] = [u[t+l−1], u[t+l−2], …, u[t+l−τ]]ᵀ and q_e[t] = [e[t+s(l−1)], e[t+s(l−2)], …, e[t+s(l−τ)]]ᵀ. In the above expression, B_k are the basis functions, q_u[t] and q_e[t] are the stacks of recent (around t) baseband and envelope input samples, respectively, s is a time scale separation factor, which is a positive integer reflecting the ratio of time constants of the PA's power modulator and the PA, and x_k are the complex scalar coefficients of the compensator, x ∈ ℂⁿ. The estimator130(i.e., adaptation unit) illustrated inFIG.1adjusts the coefficients x_k of the actuator by observing the measurement samples y[t] (observed from the output signal w(t) of the PA114) and may minimize the regularized mean squared error. This can be solved, for example, by any minimization technique such as a least-squares method, a stochastic gradient method, etc. The optimization of the adjustable (adaptive) coefficients may further be based, for example, on the intermediate signal v, the modulating control signal eB, and/or the control signal eA. Similarly, the output signal eA[t] may be derived using an optimization process (which may also be implemented using the estimator130) based on the input signal u, the control signal e, and the fed-back (observed) signals eBand y provided to the estimator130. For example, in situations where the envelope tracking power supply modulator150exhibits non-linear behavior (i.e., the relationship between the power supply modulator's output signal eBand its input signal, eA, is non-linear), the signal eAmay be produced through digital predistortion processing applied to the signals u and e that achieves some optimization criterion (e.g., to match e to eB). Alternatively, in some embodiments (e.g., when the power supply modulator exhibits substantially linear behavior), processing performed by the actuator120to produce the signal eAmay depend only on the signal e (e.g., eAmay be a downsampled version of e, with a bandwidth that is compatible with the bandwidth of the envelope tracking power supply modulator150) without needing to take the signal u (or some other signal) into account. Examples of procedures/techniques to derive coefficients (parameters) for weighing basis functions selected for predistortion operations (be it for predistortion operations on a baseband input signal such as u, or on an envelope tracking signal e to modulate a power supply powering a power amplifier) are described in U.S. patent application Ser. No. 16/004,594, entitled "Linearization System," the content of which is incorporated herein by reference in its entirety. Briefly, the output signal, v, which is provided as input to the transmit chain110, is generated, based on the input signal u (or, in the embodiments of the systems depicted inFIGS.1and2, the input signal may include a combination of the baseband signal u and the output, e, of an envelope tracker), to include an "inverse nonlinear distortion" (i.e., an inverse of the nonlinear distortion introduced by the transmit chain110), such that the nonlinear distortion introduced by the transmit chain110is substantially cancelled by the inverse nonlinear distortion. The output signal, w, is therefore substantially free of nonlinear distortion.
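A compact way to restate the actuator expression above is to build the two sample stacks q_u[t] and q_e[t] and sum the weighted basis terms. In the hedged sketch below, the parameters l, τ and s, the coefficient values and the particular basis functions are all placeholders chosen for illustration; only the structure v[t] = u[t] + Σ x_k·B_k(q_u[t], q_e[t]) follows the text.

    import numpy as np

    def stacks(u, e, t, l=1, tau=4, s=8):
        # q_u[t] = [u[t+l-1], ..., u[t+l-tau]],  q_e[t] = [e[t+s(l-1)], ..., e[t+s(l-tau)]]
        qu = np.array([u[t + l - j] for j in range(1, tau + 1)])
        qe = np.array([e[t + s * (l - j)] for j in range(1, tau + 1)])
        return qu, qe

    def actuator_output(u, e, x, basis, t):
        # v[t] = u[t] + sum_k x_k * B_k(q_u[t], q_e[t])
        qu, qe = stacks(u, e, t)
        return u[t] + sum(xk * Bk(qu, qe) for xk, Bk in zip(x, basis))

    # Illustrative basis set: one memory term and two envelope-dependent nonlinear terms.
    basis = [lambda qu, qe: qu[1],
             lambda qu, qe: qu[0] * np.abs(qu[0]) ** 2,
             lambda qu, qe: qu[0] * qe[0]]
    x = np.array([0.05, -0.02, 0.01], dtype=complex)

With l=1 the stacks only look backwards in time, so a call such as actuator_output(u, e, x, basis, t=100) simply requires t to be at least s·(τ−1) so that the earliest envelope sample exists.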
In some examples, a DPD (such as the actuator120) operates according to an inverse model of the nonlinear distortion of the transmit chain (e.g., the transmit chain110ofFIG.1) such that providing the input signal, u, to the DPD causes the DPD/actuator to generate the intermediate input signal, v, as follows: v = 2u + Σ_{i=1..n} x_i·f_i(u), where f_i(⋅) is the i-th basis function of n basis functions and x_i is the i-th parameter (e.g., the i-th weight) corresponding to the i-th basis function. Each basis function is a linear function (e.g., u(t−1)) or a non-linear function (e.g., |u(t)|²) of the input, u, which may include memory (e.g., u(t)*u(t−1)). To update the parameters, x, used by, for example, the actuator (DPD processor)120ofFIG.1, a predictor module (such as the estimator130ofFIG.1) processes the input signal to the transmit chain, i.e., the signal v, corresponding to the predistorted output of the actuator120, and a sensed version (e.g., a signal b) of the output signal w of the transmit chain (or some other output module that is downstream of the transmit chain) to generate an updated set of parameters, x′. The sensed signal b is observed via an observation receiver/coupler (such as the coupler116ofFIG.1) coupled to the output of the transmit chain. In some embodiments, a synchronizer (not shown) may be used to synchronize or correlate the signals used for the adaptation processes. In one example, the predictor module determines an updated set of parameters x′ that, in combination with the basis functions and the intermediate input signal, v, generate a predicted signal that is as close as possible to the sensed signal, b (e.g., in a least mean squared error sense). This can be restated as: P(v) = Σ_{i=1..n} x_i·f_i(v). The predictor, P, may be provided to the actuator120to update the actuator's coefficients. In some examples, for the predictor P described above, the adaptation processor130configures the actuator (digital predistorter)120to perform according to an approximate inverse of the predictor P as follows: DPD(u) = P⁻¹(u) ≈ 2u − Σ_{i=1..n} x_i·f_i(u). Alternatively, the DPD parameters may be set as a_i = −x_i. In another example, the predictor module may be configured to determine an updated set of coefficients x̂ that, in combination with the basis functions and the sensed signal, b, generate a predicted signal that is as close as possible (e.g., in a least mean squared error sense) to the intermediate predistorted signal, v. This can be restated as: P(b) = Σ_{i=1..n} x_i·f_i(b). That is, in such embodiments, P is an estimate of a (post) inverse of the nonlinearity of the transmit chain. In some examples, the adaptation processor130configures the actuator120according to the predictor P as follows: DPD(u) = P(b) = Σ_{i=1..n} x_i·f_i(b), or essentially a_i = x_i. In another example, updating of the DPD parameters/coefficients may be implemented to generate an updated set of parameters, x′, that, in combination with the basis functions, represent a difference between the model of the nonlinear input/output characteristic of the transmit chain and the current nonlinear input/output characteristic of the transmit chain. In one example, the predictor module determines parameters x that, in combination with the basis functions and the input signal, u, to the DPD (rather than using the intermediate signal v) generate a predicted signal, b̂, that is as close as possible to the sensed signal, b (e.g., in a least mean squared error sense), which can be restated as: P(u) = Σ_{i=1..n} x_i·f_i(u).
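The fit-then-invert relationship between the predictor P and the DPD can be demonstrated numerically. The sketch below fits P on observed input/output pairs by least squares and then applies the approximate inverse DPD(u) ≈ 2u − Σ x_i·f_i(u); the toy power amplifier model, the three basis functions and the signal statistics are all invented for illustration and are not taken from this description.

    import numpy as np

    def basis(z):
        # Illustrative basis: linear term, one memory term, one odd-order nonlinearity.
        return np.stack([z, np.roll(z, 1), z * np.abs(z) ** 2], axis=1)

    rng = np.random.default_rng(1)
    u = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) * 0.2
    pa = lambda z: z - 0.15 * z * np.abs(z) ** 2        # toy memoryless PA nonlinearity

    # Fit the predictor P(v) ~ b on observed data (here v = u, b = pa(u)).
    A, b = basis(u), pa(u)
    x = np.linalg.lstsq(A, b, rcond=None)[0]

    # Approximate inverse: DPD(u) ~ 2u - sum_i x_i f_i(u), then pass through the PA.
    v = 2 * u - basis(u) @ x
    err_before = np.mean(np.abs(pa(u) - u) ** 2)
    err_after = np.mean(np.abs(pa(v) - u) ** 2)
    print(err_before, ">", err_after)                    # residual distortion shrinks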
The parameters, x, in combination with the basis functions represent the difference between the model of the nonlinear input/output characteristics of the transmit chain and the actual nonlinear input/output characteristic of the transmit chain, because the effects of both the DPD and the transmit chain on the input signal are represented in the sensed signal b. An output of the predictor module, i.e., P, is provided to a DPD update module which processes the predictor P to update the digital predistorter. In some examples, the actuator combines an approximate inverse of the predictor with the existing DPD according to a_i′ ← a_i + x_i. This essentially approximates a cascade of the approximate inverse of the predictor, P⁻¹, with the previous DPD configuration to yield the new DPD configuration. In another example, the predictor module (estimator) determines a set of parameters x that, in combination with the basis functions and the sensed signal, b, generate a predicted signal, û, that is as close as possible to the input signal, u (e.g., in a least mean squared error sense), which can be restated as: P(b) = Σ_{i=1..n} x_i·f_i(b). In some implementations, the coefficients x for weighing the basis functions used by the digital predistorter implementation of the actuator120may be determined in batches using a least-squares process as follows: x = argmin ‖Ax − b‖₂² = argmin (xᴴAᴴAx − 2xᴴAᴴb + bᴴb), where b is a vector of sensed signal samples and A is a matrix where each column includes the samples of a basis function, f_i(u). The solution for x is therefore: x = (AᴴA)⁻¹Aᴴb. That is, in this formulation, the samples of the sensed signal and the basis functions are used once for the batch, and not used in subsequent determination of future coefficient values x. The reliability of the computed coefficients, x, can vary based on the desired accuracy (or other performance metric), and the computing resources available. In some embodiments, regularization may be used as a criterion for determining the coefficient values to bias the result away from coefficient values with large magnitudes. In some examples, robustness, reliability, and/or convergence of solutions of the coefficients, x, can be improved by incorporating a history of previous batches of the input as follows: x = argmin_x { xᴴ[(1−λ)Σ_{i=1..n} λ^(n−i) A_iᴴA_i]x − 2(1−λ)Σ_{i=1..n} λ^(n−i) xᴴA_iᴴb_i + Σ_{i=1..n} b_iᴴb_i + ρxᴴx }, where A_i and b_i correspond to the inputs and outputs for batch i = 1, …, n, and x depends on the samples from all batches 1 to n. The above equation is subject to x_L ≤ x ≤ x_U, 0 < λ < 1, and ρ > 0. In the above optimization problem, the large batch term (a Gramian), AᴴA, may be replaced with (1−λ)Σ_{i=1..n} λ^(n−i) A_iᴴA_i, which is a "memory Gramian." Use of the memory Gramian improves the convergence properties of the optimization process, safeguards against glitches in system behavior, and improves overall performance of the system. Another example approach to implement the determination of DPD parameters is described in U.S. Pat. No. 9,590,668, entitled "Digital Compensator," the content of which is hereby incorporated by reference in its entirety. Briefly, with reference toFIG.3, a block diagram of an adjustable pre-distorting power amplifier system300, which may be similar to, or include, the portion of the system100comprising the actuator120(implementing a DPD processor), the transmit chain110, and the estimator (predictor module)130of the system100ofFIG.1, is shown.
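The memory-Gramian formulation above lends itself to a simple recursive implementation in which each new batch updates two forgetting-factor-weighted accumulators and the coefficients are re-solved with a ridge term ρ. The sketch below shows only that structure; the bound constraints x_L ≤ x ≤ x_U are omitted for brevity, and the λ and ρ values and the class interface are illustrative assumptions.

    import numpy as np

    class MemoryGramianSolver:
        # Running accumulators for G = (1-lam) * sum_i lam**(n-i) * A_i^H A_i
        # and c = (1-lam) * sum_i lam**(n-i) * A_i^H b_i, solved with ridge term rho.
        def __init__(self, n_coef, lam=0.9, rho=1e-3):
            self.lam, self.rho = lam, rho
            self.G = np.zeros((n_coef, n_coef), dtype=complex)
            self.c = np.zeros(n_coef, dtype=complex)

        def update(self, A_i, b_i):
            self.G = self.lam * self.G + (1 - self.lam) * (A_i.conj().T @ A_i)
            self.c = self.lam * self.c + (1 - self.lam) * (A_i.conj().T @ b_i)
            # Regularized solve: (G + rho*I) x = c
            return np.linalg.solve(self.G + self.rho * np.eye(len(self.c)), self.c)

Each call to update() passes the basis matrix A_i (one column per basis function evaluated on batch i) and that batch's observation vector b_i, and returns coefficients reflecting all batches seen so far with exponential forgetting.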
In the example system300, a digital input signal x[m] at a baseband or intermediate frequency is passed through a Digital Pre-Distorter (DPD)310(which may be similar, in implementation or functionality, to the DPD processing implementation of the actuator130) to produce a “pre-distorted” input y[m], which is passed through a transmit chain340to produce a driving signal v(t) that drives an antenna350. The transmit chain may include a Digital-to-Analog Converter (DAC)342, an analog lowpass filter (LPF)344, and a modulator346(e.g., multiplication by a local oscillator) of the output of the LPF344. The output of the modulator is passed to a power amplifier (PA)348. The DPD310may be controlled using a controller to determine/compute DPD coefficients (shown as DPD coefficients Θ320) to adjust the DPD310using those determined DPD coefficients. In some embodiments, the DPD coefficients Θ320are determined using a database of coefficients330, and values that essentially characterize the operation “regime” (i.e., a class of physical conditions) of the transmit chain, and/or of other system components (including remote load components and load conditions). These values (e.g., quantitative or categorical digital variables) include environment variables332(e.g., temperature, transmitter power level, supply voltage, frequency band, load characteristics, etc.) and/or a part “signature”334, which represents substantially invariant characteristics, and which may be unique to the electronic parts of the transmit chain340. Determined system characteristic values or attributes may be provided to a coefficient estimator/interpolator336(e.g., via a feedback receive chain360). The determined characteristics and metrics may be used to estimate/derive appropriate DPD coefficients. For example, the DPD coefficient sets may be computed so as to achieve some desired associated distortion measures/metrics that characterize the effects of the preprocessing, including an error vector magnitude (EVM), an adjacent channel power ratio (ACPR), operating band unwanted emissions (OBUE) or other types of distortion measures/metrics. The coefficient interpolator336uses the various inputs it receives to access the coefficient database332and determine and output the corresponding DPD coefficients320. A variety of approaches may be implemented by the coefficient estimator/interpolator336, including selection and/or interpolation of coefficient values in the database according to the inputs, and/or applying a mathematical mapping of the input represented by values in the coefficient database. For example, the estimator/interpolator336may be configured to select, from a plurality of sets of DPD coefficients (in the database330), a DPD coefficient set associated with one or more pre-determined system characteristics or some metric derived therefrom. The DPD coefficients used to control/adjust the DPD310may be determined by selecting two or more sets of DPD coefficients from a plurality of sets of DPD coefficients (maintained in the database330) based on the system characteristics. An interpolated set of DPD coefficients may then be determined from the selected two or more sets of DPD coefficients. Turning back toFIG.1, the envelope tracking power supply modulator150modulates the power supplied to the PA114using the resultant modulating signal eA[t] that was derived, by the actuator120, based on the envelope tracking signal e[t] and optionally the input signal u. 
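The selection/interpolation role of the coefficient estimator/interpolator336 can be pictured as a lookup over stored operating points. The sketch below keys the database on a single environment variable (temperature) and linearly interpolates between the two nearest stored coefficient sets; the database contents, the choice of temperature as the only key and the function names are hypothetical, since the description allows selection, interpolation or other mathematical mappings over several variables.

    import numpy as np

    # Hypothetical coefficient database: DPD coefficient sets stored per temperature (deg C).
    coeff_db = {
        0.0:  np.array([1.00 + 0.00j, -0.120 + 0.010j, 0.030 - 0.002j]),
        40.0: np.array([1.00 + 0.00j, -0.135 + 0.012j, 0.034 - 0.003j]),
        80.0: np.array([1.00 + 0.00j, -0.150 + 0.015j, 0.039 - 0.004j]),
    }

    def interpolate_coeffs(temp_c, db):
        # Pick the two nearest stored operating points and linearly interpolate between them.
        temps = sorted(db)
        if temp_c <= temps[0]:
            return db[temps[0]]
        if temp_c >= temps[-1]:
            return db[temps[-1]]
        hi = next(t for t in temps if t >= temp_c)
        lo = temps[temps.index(hi) - 1]
        w = (temp_c - lo) / (hi - lo)
        return (1 - w) * db[lo] + w * db[hi]

    print(interpolate_coeffs(55.0, coeff_db))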
The PA114, whose power is modulated by eB(e.g., the modulator150, which may be implemented using shaping table or through some other realization, produces the output control signal, eB, responsive to the modulating input eA[t]), processes the predistorted intermediate signal v[t] to produce an output signal w(t) that is substantially free of predistortion that would otherwise have occurred had u[t] not been predistorted. As noted, by controlling the intermediate signal v and the modulating signal eA[t], the PA can be actuated to operate in a particular non-linear manner (or profile) that can more optimally process the predistorted intermediate signal v[t] (e.g., use less power). As depicted inFIG.1, the signal eA[t] produced by the actuator120has a bandwidth that is compatible with the bandwidth that can be handled by the envelope tracking power supply modulator150. Thus, if, as shown in the example ofFIG.1, the modulator150has a maximum bandwidth of 10 MHz, the actuator120may be configured to produce a signal eA[t] with 10 Msps. The system100illustrated inFIG.1may be at least part of a digital front end of a device or system (such as a network node, e.g., a WWAN base station or a WLAN access point, or of a mobile device). As such, the system100may be electrically coupled to communication modules implementing wired communications with remote devices (e.g., via network interfaces for wired network connections such as Ethernet connections), or wireless communications with such remote devices. In some embodiments, the system100may be used locally within a device or system to perform processing (predistortion processing or otherwise) on various signals produced locally in order to generate a resultant processed signal (whether or not that resultant signal is then communicated to a remote device). In some embodiments, at least some functionality of the linearization system100(e.g., generation of an envelope signal, performing predistortion on the signals u and/or e using adaptable coefficients derived through based on one or more of u, e, v, y, eAand/or eBdepicted inFIG.1) may be implemented using a controller (e.g., a processor-based controller) included with one or more of the modules of the system100(the actuator120, the estimator (adaptation module)130, or any of the other modules of the system100). The controller may be operatively coupled to the various modules or units of the system100, and be configured to generate an efficient envelope tracking signal e, compute predistorted sample values for v and eA(the outputs of the actuator120or the actuator200), update the actuator's coefficients, etc. The controller may include one or more microprocessors, microcontrollers, and/or digital signal processors that provide processing functionality, as well as other computation and control functionality. The controller may also include special purpose logic circuitry, e.g., an FPGA (field programmable gate array), an ASIC (application-specific integrated circuit), a DSP processor, a graphics processing unit (GPU), an accelerated processing unit (APU), an application processor, customized dedicated circuitry, etc., to implement, at least in part, the processes and functionality for the system100. The controller may also include memory for storing data and software instructions for executing programmed functionality within the device. 
Generally speaking, a computer accessible storage medium may include any non-transitory storage media accessible by a computer during use to provide instructions and/or data to the computer. For example, a computer accessible storage medium may include storage media such as magnetic or optical disks and semiconductor (solid-state) memories, DRAM, SRAM, etc. With reference toFIG.8, a diagram of another example linearization system800implemented according to an arrangement comprising an envelope tracking module (envelope generator)840that may be similar to the envelope generator140ofFIG.1, an actuator820that may be similar to the actuator120ofFIG.1, and a power supply modulator850that may be similar to the power supply modulator150ofFIG.1. In the embodiments ofFIG.8, the envelope generator840is configured to generate an output e, that is provided directly to the envelope tracking power supply modulator850to control the power provided to a PA814(of a transmit chain810, which may be implemented similarly to the transmit chain110ofFIG.1). In some embodiments, and assuming the signal e is an analog, continuous signal, the power supply modulator850may be implemented as a switched-mode power supply (e.g., multi-level buck converter), in which a decoder is required to convert continuous signal e into digital control lines. One common issue with switched-mode operation is the non-ideal transient step response. This necessitates an accurate power supply predictor and behavioral modeling. Under-sampling e can degrade the accuracy of modeling. Thus, in such embodiments, it may be preferable to over-sample e to model the behavior of the envelope tracking power supply modulator850to produce a signal sp, which would be a faithful representation of s. Control of the PA's power based on the control signal e results, in turn, in output behavior in which an output of the PA814is provided to an observation chain (via a coupler816) comprising an ADC818, to a digital predistorter that includes an estimator830and an actuator820predistorting an input signal u according to DPD parameters adaptively estimated by the estimator830. Because the envelope generator840is indirectly coupled to the estimator (via the power supply modulator850, and a supply filter852, the PA814, the coupler816, and the ADC818), the signal e outputted by the envelope generator840, affects, at least indirectly, the DPD behavior of the linearization system800, and the adaptation process implemented by the DPD of the system800. This allows the system800to adapt its predistortion behavior according to the behavior (including the non-linear behavior) of the various modules of the system800(i.e., according to the behavior of the transmit chain810, the power supply modulator850, and/or the supply filter852). As shown inFIG.8, in some implementations that linearization system may include the supply filter852, which may be configured to remove noise, and/or smooth the control or power signal fed into the PA (e.g., in order to prevent sudden power level changes that could damage the PA). The discontinuities in the envelope, as well as quantization and switching noise can cause deterioration of the output spectrum, and degrade the out-of-band noise performance in the system. The filter852, which may be an analog or digital filter, can be used to mitigate some of the degradation and noise caused by the various other modules/components of the linearization system. 
As noted, the envelope generator840may be similar, in its implementation/configuration, to the envelope generator140ofFIG.1. Thus, in some examples, the envelope generator840may include a receiver to receive the input signal u, with the input signal u also being provided to the digital predistorter (comprising the actuator820and the estimator830) coupled to the transmit chain810comprising the power amplifier814. The envelope generator840also include a controller (implemented as a processor-based device or as a non-processor circuit) to determine, based on amplitude variations of the input signal u, the time-varying signal, e, with the amplitude variations of the time-varying signal e corresponding to time variations in non-linear characteristics of the transmit chain. For example, in the embodiments ofFIG.8, the controller of the envelope generator may derive a control signal e that satisfies a set of constraints defined according to one or more parameters (such as the parameters E0 and E2 discussed in relation to the embodiments of the system100) that represent the behavior (e.g., non-linear behavior) of one or more of the modules/components of the system800. The signal derived by the controller of the envelope generator can then be outputted by an output section of the envelope generator840. In the embodiments ofFIG.8, the output signal e is provided, for example, to an input of the power supply modulator850(in contrast, in the embodiments depicted forFIG.1, the control signal e is provided to the actuator120of the system100). As noted, in the embodiments ofFIG.8, and similarly to the embodiments ofFIG.1, the digital predistorter of the system800is configured to receive another input signal that depends on the amplitude variations of the time-varying signal e, and to generate, based at least in part on signals comprising the input signal u and the other input signal, a digitally predistorted output, v, provided to the transmit chain810, to mitigate non-linear behavior of the transmit chain. In some examples, the other input signal may include an observed digital sample (illustrated as the signal y inFIG.8, with y being the digital output of the ADC818) of an output of the at least one power amplifier. This other signal, y, along with a copy of the output, v, of the actuator820, are provided to the estimator830, which performs an optimization process to compute DPD parameters/coefficients to weigh the basis functions implemented by the predistorter to predistort the input signal u (example optimization processes to compute DPD coefficients are discussed above). Variations (changes) to the time-varying signal e produced by the envelope generator840will result in variations/changes to the power provided to the PA814(e.g., to intentionally cause the PA to be underpowered, thus resulting in non-linear behavior that can be more efficiently mitigated through adaptation of the system800's predistorter), which in turn impacts the behavior/nature of the signal outputted by the PA814, and of the observed sample, y, that is provided to the estimator830. 
In some embodiments, performance of the linearization system may be improved (e.g., to speed up the response of the envelope signal, and to improve the responsiveness of the adaptation process to variations of the envelope signal) by using a predictor module (which may be implemented as a processor or non-processor circuit) to model and predict the behavior of the power supply modulator of the linearization system (and/or predict the behavior of other modules) Control signal representative of the predicted behavior of the power supply modulator (and/or of other modules of the linearization system) can then be provided directly to, for example, an estimator of the DPD unit in order to derive DPD parameters/coefficients. Predicted modeling of the behavior of various linearization system modules can thus expedite the adaptation process that would otherwise be more slowly performed if it were to rely only on observed downstream signals to derive DPD coefficients. Predictor modules, such as those described herein, can account, for example, for the presence of dynamic power supply switching. Accordingly, with reference toFIG.9, another example implementation of a linearization system900that includes a predictor module942electrically interposed between an output of an envelope generator940(which may be similar, in implementation and configuration, to the envelope generators140and840ofFIGS.1and8, respectively) and a digital predistorter (e.g., the predistorter's estimator930, which may be similar, in implementation and configuration, to the estimators130and830ofFIGS.1and8, respectively) is shown. The predictor module942is configured to model non-idealities that transform the envelope control signal e to the signal s (depicted inFIG.9as being the output of the supply filter952, which, similarly to the supply filter852ofFIG.8, may be an analog or digital filter configured to remove quantization and switching noise, and/or smooth the control or power signals provided to the PA914). An example of non-idealities impacting the signaling s resulting from e, and degrading the linearization performance (e.g., the speed of DPD adaptively if the DPD uses a signal depended on s to perform the adaptation/optimization process producing DPD coefficients) can include, for example, time misalignment between v (the output of the actuator920) and s (the output of the supply filter), non-linearities resulting from operational characteristics of active components of the various modules, etc. Modelling of non-idealities can be based on linear or non-linear transformations that map values of e to expected values of s (or values of other downstream signaling affected by the values of e). Output of the predictor module can then be provided to the estimator to derive, based on the output of the predicted values (in conjunctions with one or more of the base band input signal u and the sampled values y), DPD parameters to predistort the input signal u. Thus, the embodiments ofFIG.9realize an actuator (e.g., the actuator/predistortion block920) that depends at least on the input base band signal u, and a predicted power supply waveform sp, matching the appropriate degree of accuracy to s. Such implementations can generate optimal or near-optimal linearization results while maintaining high power efficiency by modeling the nonlinear PA and predistortion as two inputs and one output system. 
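A predictor module of the kind described above might, in the simplest case, model the supply path as a fixed time misalignment, a memoryless saturation and a smoothing filter. The sketch below is exactly that kind of minimal stand-in: the delay, gain, saturation level and one-pole filter are invented placeholders, not a model of any particular power supply modulator or supply filter.

    import numpy as np
    from scipy.signal import lfilter

    def predict_supply(e, delay=3, gain=1.0, v_sat=5.0, smooth=0.2):
        # Minimal stand-in for the predictor module: fixed time misalignment,
        # a soft saturation, and a one-pole smoothing filter approximating the
        # supply filter's step response.
        e_delayed = np.concatenate([np.full(delay, e[0]), e[:-delay]])
        e_sat = v_sat * np.tanh(gain * e_delayed / v_sat)
        return lfilter([smooth], [1.0, -(1.0 - smooth)], e_sat)

    e = np.linspace(0.5, 4.5, 100)
    s_p = predict_supply(e)   # predicted supply waveform handed to the estimator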
In some of the example embodiments ofFIG.9, the envelope generator (envelope tracking module) may thus be configured to provide the time-varying signal e to the predictor module (electrically interposed between the envelope tracking module and the digital predistorter) configured to compute the predicted signal, sp, representative of an estimated expected behavior of the power supply modulator (controlling electrical operation of the transmit chain) based on known characteristics of the power supply module and the determined time-varying signal e. In such embodiments, the other input signal of the digital predistorter may include the predicted signal, sp, computed by the predictor module. As further shown in the system arrangement ofFIG.9, the output of the envelope generator940may also be provided to the power supply modulator950(in addition to being provided to the predictor module942). However, in some embodiments, the power supply modulator may receive control input from the actuator (e.g., similarly to the arrangement depicted inFIG.1, where the power supply modulator150receives a signal eAproduced by the actuator120) in addition to, or instead of, receiving the signal e. In such embodiments, a control signal to control the power supply modulator may be generated using a transformation (optionally an adaptive transformation) that is based on the non-linear characteristics of the power supply modulator950and/or any other module, component, or section of the linearization system900. The power supply modulator generates a power signal (or a control signal to control power provided by a separate power supply unit; not shown inFIG.9), which in turn is provided to the supply filter952that performs noise removal and conditioning operations on the signal provided by the power supply modulator to produce the signal s (which may be the actual power signal, or may be a control signal controlling a power supply unit providing the electrical power to the PA914). As also shown, the base band signal u is processed by the actuator920, which predistorts the signal u to produce a predistorted signal v. In some embodiments, the processing of the signal u may include a decomposition of the signal into a basis function representation that is weighed by adaptive coefficients (derived based at least on the output of the transmit chain, provided to the estimator930via a coupler916and an ADC918, and/or the predicted signals produced by the predictor module942). In some embodiments, the use of the predicted signal sp allows the derivation of DPD coefficients that are configured for optimal or near-optimal operation with the power supply modulator950. For example, the power supply modulator may be controlled (via the signal e computed by the envelope generator940) to be intentionally operated in a non-linear manner that can be mitigated through appropriate computation of DPD coefficients, derived based, in part, on the predicted signal sp shown inFIG.9, that counter the non-linear operation of the transmit chain910. With reference next toFIG.4, a flowchart of an example procedure400for digital predistortion is shown.
The procedure400includes receiving410, by a digital predistorter (e.g., by a receiver section of a digital predistorter device such as the actuators120or200depicted inFIGS.1and2), a first signal that depends on amplitude variations based on an input signal, u, with the variations of the first signal corresponding to time variations in non-linear characteristics of a transmit chain (such as the transmit chain110depicted inFIG.1) that includes a power amplifier. In some embodiments, the first signal may correspond to a time-varying signal e generated by an envelope tracker (such as the tracker140ofFIG.1), in which case, receiving the first signal may include monitoring, by the digital predistorter, a time-varying signal e generated by an envelope tracker that received a copy of the input signal u. The signal e tracks the shape of the envelope of the signal u, and thus e depends on amplitude variations of the input signal u. In some examples, the procedure400may further include filtering the time-varying signal e. As noted, in some embodiments, the time-varying control signal e may be determined according to an optimization process that is based on characteristics of the transmit chain, with the resultant control signal being based on parameters that can be adjusted or selected based on desired behavior of the transmit chain and/or the envelope tracking signal. For example, the determined tracking signal e can be one whose response speed to variations of the input signal can be adjusted (so that the response can be varied from slow to fast). A fast-responding envelope tracking control signal to modulate the power supply modulator (controlling power to the transmit chain) may result in a more efficient modulator (because the power provided to the transmit chain will more closely follow variations to the input signal u, thus reducing power waste), but may require more of a computational effort to derive. In another example, the determined control signal e may be one that is more compatible with the bandwidth of the transmit chain. Thus, in some embodiments, determining the control signal e may include determining the time-varying control signal e satisfying a set of constraints, including: i) a first constraint in which e[t]≥h(|u[t]|), where h(⋅) defines a relation between instantaneous power of the signal u and a power supply of the transmit chain, ii) a second constraint imposing maximal value and curvature bounds for the signal e[t], such that e[t]≤E0, and |2e[t]−e[t−1]−e[t+1]|≤E2, where E0 and E2 are values representative of operational characteristics of the transmit chain, and iii) a third constraint requiring that values of e[t] be as small as possible, subject to the first and second constraints. The value E2 may, for example, be representative of a bandwidth of the transmit chain and/or a response speed of the transmit chain to variations in amplitude of the input signal u. With continued reference toFIG.4, the procedure400additionally includes receiving420, by the digital predistorter (e.g., by the receiver section of the digital predistorter device), the input signal u, and generating430, by the digital predistorter (e.g., by a controller circuit of the digital predistorter device), based at least in part on signals comprising the input signal u and the first signal, a digitally predistorted signal v to mitigate the non-linear behavior of the transmit chain.
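The set of constraints described above lends itself to a small convex program. The sketch below poses it as a linear program (minimize the sum of e[t] subject to the floor h(|u[t]|), the maximal value E0, and the curvature bound E2); the helper h, the parameter values, and the use of scipy's LP solver are assumptions made for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

def derive_envelope(u, h, E0, E2):
    """Derive e[t] satisfying: h(|u[t]|) <= e[t] <= E0,
    |2*e[t] - e[t-1] - e[t+1]| <= E2, with e[t] as small as possible."""
    T = len(u)
    floor = h(np.abs(u))                       # per-sample lower bound
    bounds = [(float(f), float(E0)) for f in floor]
    rows, b_ub = [], []
    for t in range(1, T - 1):                  # curvature constraints
        row = np.zeros(T)
        row[t - 1], row[t], row[t + 1] = -1.0, 2.0, -1.0
        rows.append(row)                       #  2e[t] - e[t-1] - e[t+1] <= E2
        b_ub.append(E2)
        rows.append(-row)                      # -(2e[t] - e[t-1] - e[t+1]) <= E2
        b_ub.append(E2)
    res = linprog(c=np.ones(T), A_ub=np.vstack(rows), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.x

# Example: track the amplitude of a toy complex baseband signal.
t = np.linspace(0, 1, 200)
u = (1 + 0.5 * np.sin(2 * np.pi * 5 * t)) * np.exp(1j * 2 * np.pi * 40 * t)
e = derive_envelope(u, h=lambda a: a, E0=2.0, E2=0.02)
```

A larger E2 permits a faster-responding (less smooth) envelope, while a smaller E2 forces a smoother, lower-bandwidth one, which is the trade-off illustrated by the different tracking responses discussed below.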
In examples in which the first signal received by the digital predistorter is the time-varying signal e, the time-varying signal e may be generated from the input signal u such that the time-varying signal e causes at least some non-linear behavior of the power amplifier. In such embodiments, generating the digitally predistorted signal v may include using the time-varying signal e to digitally predistort the input signal u such that the output of the transmit chain resulting from digitally predistorting the input signal u is substantially free of the at least some non-linear distortion caused by the time-varying signal e. The signal e may be generated so as to, for example, underpower the power amplifier to controllably cause non-linear behavior that can be mitigated through the predistortion operations of a digital predistorter (e.g., the actuator120ofFIG.1), with the predistortion operation being adapted in accordance with the input signal u and the control signal e (i.e., the predistortion operations are based on knowledge of the control signal e that is causing the non-linear behavior, thus potentially resulting in a more efficient joint operation of the envelope tracker, the power supply modulator, the transmit chain, and/or the digital predistorter). As noted, in some embodiments, the digital predistortion operations (by the actuator120) are based on the bandpass input signal u, and the envelope tracking control signal e, thus allowing the predistortion to take into account (at least implicitly) the power modulation of the transmit chain. Accordingly, performing the digital predistortion on the combined signal may include performing the digital predistortion on the combined signal comprising the input signal u and the time-varying control signal e to produce the digitally predistorted signal v according to:

v[t] = u[t] + Σ_{k=1..n} xk Bk(qu[t], qe[t])

with qu[t] = [u[t+l−1], u[t+l−2], . . . , u[t+l−τ]]^T and qe[t] = [e[t+s(l−1)], e[t+s(l−2)], . . . , e[t+s(l−τ)]]^T.

In such embodiments, Bk are basis functions, qu[t] and qe[t] are stacks of recent baseband and envelope input samples, respectively, s is a time scale separation factor representative of a ratio of time constants of the power amplifier and a modulator powering the power amplifier, and xk are computed coefficients to weigh the basis functions. In some examples, the procedure400may further include computing, according to an optimization process that is based, at least in part, on observed samples of the transmit chain, the computed coefficients xk to weigh the basis functions Bk. In some embodiments, the procedure400may also include generating a resultant envelope tracking signal, eA, through digital predistortion performed on the combined signal comprising the input signal u and the time-varying control signal e, to mitigate non-linear behavior of a power supply modulator that is producing output, based on the resultant envelope tracking signal, eA, to modulate the power provided to the power amplifier of the transmit chain. In such embodiments, eA has a lower bandwidth than the time-varying control signal e. In some embodiments, the signal eA (produced by the actuator120) may depend only on the signal e. For example, the signal eA may simply be a down-sampled signal required for compatibility with the power supply modulator.
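To make the predistortion expression above concrete, the sketch below evaluates v[t] from the stacked baseband and envelope samples; the particular basis functions, coefficient values, and stack parameters used in the usage example are illustrative assumptions, not the document's specific choices.

```python
import numpy as np

def predistort(u, e, x, B, l=4, tau=4, s=2):
    """Compute v[t] = u[t] + sum_k x_k * B_k(q_u[t], q_e[t]).

    q_u[t] stacks tau recent baseband samples and q_e[t] stacks tau envelope
    samples spaced by the time-scale separation factor s.  The basis functions
    B are passed in as callables; their exact form is left open here.
    """
    T = len(u)
    v = np.array(u, dtype=complex)
    for t in range(T):
        # Indices of the two stacks, clipped to stay in range at block edges.
        iu = np.clip(t + l - 1 - np.arange(tau), 0, T - 1)
        ie = np.clip(t + s * (l - 1 - np.arange(tau)), 0, len(e) - 1)
        qu, qe = u[iu], e[ie]
        v[t] += sum(xk * Bk(qu, qe) for xk, Bk in zip(x, B))
    return v

# Illustrative basis functions and coefficients (not the document's set).
B = [lambda qu, qe: qu[0] * np.abs(qu[0]) ** 2,
     lambda qu, qe: qu[1] * np.abs(qu[1]) ** 2,
     lambda qu, qe: qu[0] * qe[0]]
x = np.array([-0.05, -0.01, 0.02])

rng = np.random.default_rng(1)
u = rng.standard_normal(256) + 1j * rng.standard_normal(256)
e = np.abs(u)                # stand-in for the envelope control signal
v = predistort(u, e, x, B)
```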
In such embodiments, the procedure400may thus also include generating a resultant envelope tracking signal, eA, as a function of the time-varying control signal e, with eAhaving a lower bandwidth than the time-varying control signal e, and with eA, provided to a power supply modulator producing output, based on the resultant envelope tracking signal, eA, to modulate the power provided to the power amplifier of the transmit chain. In some examples, generating the resultant envelope tracking signal eAmay include down-sampling the time-varying control signal e to generate a resultant down-sampled envelope tracking signal, eA. Turning back toFIG.4, as shown, the procedure400further includes providing440(e.g., by an output section of a digital predistorter device) the predistorted signal v to the transmit chain (whose non-linear behavior depends, at least in part, on amplitude variations of the signal u, as represented by the signal e or eA). In some examples, the procedure400may further include computing samples of the digitally predistorted signal, v, provided to the transmit chain, as a non-linear function of samples of the input signal u and the first signal. In some examples, receiving by the digital predistorter the first signal may include receiving an observed digital sample, y, of an output of the power amplifier controlled by a power supply modulator controlling electrical operation of the transmit chain according to a time-varying signal e generated by an envelope tracker that receives a copy of the input signal u. In some embodiments, receiving by the digital predistorter the first signal may include receiving a predicted signal, sp, computed by a predictor module electrically interposed between an envelope tracker and the digital predistorter, the predicted signal being representative of an estimated expected behavior of a power supply modulator, controlling electrical operation of the transmit chain, based on known characteristics of the power supply module and a time-varying signal, e, determined by the envelope tracker. With reference now toFIG.5, a flowchart of an example procedure500to control electrical operation of a transmit chain (via the generation of envelope tracking control signal) is provided. The procedure500is generally performed by an envelope tracking module (envelope generator, such as the envelope generator140ofFIG.1). As shown, the procedure500includes receiving510by the envelope tracking module (e.g., by a receiver/receiver section of an envelope tracking module) an input signal u. The input signal u is further provided to a digital predistorter (e.g., the actuator120) coupled to the transmit chain (such as the transmit chain110), with the transmit chain comprising a power amplifier (e.g., the PA114ofFIG.1). The procedure500further includes determining520, by the envelope tracking module (e.g., by a controller of the envelope tracking module), based on amplitude variations of the input signal u, a time-varying signal, e, with the amplitude variations of the time-varying signal e corresponding to time variations in non-linear characteristics of the transmit chain. The procedure500further includes outputting530, by the envelope tracking module (e.g., by an output section of the envelope tracking module), the time-varying signal e. 
The digital predistorter is configured to receive another input signal that depends on the amplitude variations of the time-varying signal e, and to generate, based at least in part on signals comprising the input signal u and the other input signal, a digitally predistorted output, v, provided to the transmit chain, to mitigate non-linear behavior of the transmit chain. In some examples, determining the time-varying signal, e, may include determining the time-varying signal e to cause a power supply modulator controlling the electrical operation of the transmit chain to underpower the power amplifier so as to cause the transmit chain to operate in a non-linear mode. Thus, in some embodiments ofFIG.5, the transmit chain may intentionally be controlled to operate in a non-linear mode (in this example, through underpowering the transmit chain, although in other embodiments, the transmit chain may be placed in non-linear mode by other means, such as overpowering the transmit chain) since the non-linear effects may be efficiently corrected through the digital predistorter of the system, while the underpowering can conserve power and prolong the life of the components of the linearization system. In the procedure500, determining the time-varying signal, e, may include deriving the time-varying signal, e, according to one or more constraints representative of characteristics of the transmit chain. Deriving the time-varying signal e may include deriving the time-varying signal e satisfying a set of constraints that includes a first constraint in which e[t]≥h(|u[t]|), where h(⋅) defines a relation between instantaneous power of the input signal u and a power supply of the transmit chain, a second constraint imposing maximal value and curvature bounds for the signal e, such that e[t]≤E0, and |2e[t]−e[t−1]−e[t+1]|≤E2, where E0 and E2 are values representative of operational characteristics of the power amplifier, and a third constraint requiring that values of e[t] be as small as possible, subject to the first and second constraints. In some examples, the parameter E2 may be representative of one or more of, for example, a bandwidth of the transmit chain, or a response speed of the transmit chain to variations in amplitude of the input signal u. Thus, by selecting/varying the parameter E2, the response speed of the envelope tracking signal, and/or its bandwidth, can be controlled. For example, consider the graphs provided inFIGS.6A-C, which show different envelope tracking responses, each of which may correspond to different values of E2 in the determination of the envelope tracking control signal e (the appropriate values of E2 may be determined experimentally or analytically). InFIG.6A, the graph600shows a slow envelope tracking signal602that can track the low frequency behavior of a signal604, but cannot track the fast changing behavior of the signal604. In this example, the curve602(corresponding to the envelope tracking signal) is fairly smooth, and has a relatively small bandwidth.FIG.6Bincludes a graph610showing a mid-range envelope tracking signal612, which can track the general shape of the spikes in a signal614, but still has some noticeable deviations between the shape of the tracking envelope612and the signal614. In this example, the envelope can follow some of the higher frequency components of the signal614, and accordingly is not as smooth as the curve602ofFIG.6A.
Lastly,FIG.6Cincludes a graph620with a fast-response envelope622, which can more closely follow the variations in a signal624than the envelope signals602and612could. In the example ofFIG.6C, the envelope signal622has a relatively large bandwidth, but the signal is less smooth than either of the envelope signals602and612ofFIGS.6A and6B, respectively. As noted above, in some embodiments, the time-varying signal e may also be provided to the digital predistorter. Thus, the procedure500may further include providing, by the envelope tracking module, the time-varying signal e to the digital predistorter. In such embodiments, the other input signal of the digital predistorter may include the time-varying signal e. Providing the time-varying signal may include providing the time-varying signal e to the digital predistorter to produce a resultant control signal, eA, provided to a power supply modulator controlling the electrical operation of the transmit chain. The digital predistorter configured to produce the resultant control signal eAmay be configured to compute, based, at least in part, on observed samples of the transmit chain, coefficients to weigh basis functions applied to samples of the input signal u and the time-varying signal e to generate the resultant control signal eA. In the embodiments ofFIG.5in which the predistorter is provided with the input signal u and the time-varying signal e, the digital predistorter configured to generate the digitally predistorted output, v, may be configured to generate the digitally predistorted output, v, based on the input signal u and the time-varying signal, e, according to:

v[t] = u[t] + Σ_{k=1..n} xk Bk(qu[t], qe[t])

with qu[t] = [u[t+l−1], u[t+l−2], . . . , u[t+l−τ]]^T and qe[t] = [e[t+s(l−1)], e[t+s(l−2)], . . . , e[t+s(l−τ)]]^T,

with Bk being basis functions, qu[t] and qe[t] being stacks of recent baseband and envelope input samples, respectively, s being a time scale separation factor representative of a ratio of time constants of the power amplifier and the power supply modulator powering the power amplifier, and xk being computed coefficients to weigh the basis functions. The procedure500may also include computing, according to an optimization process based, at least in part, on observed samples of the transmit chain, the computed coefficients xk to weigh the basis functions Bk. In some examples, the procedure500may further include providing the time-varying signal e to a power supply modulator controlling electrical operation of the transmit chain, with the other input signal of the digital predistorter including an observed digital sample, y, of an output of the power amplifier. Such embodiments are illustrated, for example, inFIG.8. In some embodiments, the procedure500may further include providing the time-varying signal e to a predictor module electrically interposed between the envelope tracking module and the digital predistorter, the predictor module configured to compute a predicted signal, sp, representative of an estimated expected behavior of a power supply modulator, controlling electrical operation of the transmit chain, based on known characteristics of the power supply module and the determined time-varying signal e, wherein the other input signal of the digital predistorter includes the predicted signal, sp, computed by the predictor module. Such embodiments are illustrated, for example, inFIG.9.
Turning next toFIG.7, a flowchart of another example procedure700, generally performed at a power supply modulator (such as the power supply modulator150ofFIG.1) of a linearization system (such as the system100ofFIG.1) is shown. The procedure700includes receiving710, by a power supply modulator (e.g., by a receiver section of a power supply modulator), one or more control signals. The procedure700further includes regulating720(e.g., by a regulator/controller circuit of the power supply modulator), based on the one or more control signals, power supply provided to a power amplifier of a transmit chain to underpower the power amplifier so that the transmit chain includes at least some non-linear behavior. The at least some non-linear behavior of the transmit chain, resulting from regulating the power supply based on the one or more control signals, is at least partly mitigated through digital predistortion performed by a digital predistorter (e.g., the actuator120) on signals comprising an input signal, u, provided to the digital predistorter, and on another signal, provided to the digital predistorter, that depends on amplitude variations based on the input signal, u. The variations of the other signal correspond to time variations in non-linear characteristics of the transmit chain. In some embodiments, the other signal provided to the digital predistorter includes a time-varying signal e generated by an envelope tracker that receives a copy of the input signal u. As described herein, the time-varying control signal e may be derived based on a set of constraints, including a first constraint in which e[t]≥h(|u[t]|), where h(⋅) defines a relation between instantaneous power of the input signal u and a power supply of the transmit chain, a second constraint imposing maximal value and curvature bounds for the signal e, such that e[t]≤E0, and |2e[t]−e[t−1]−e[t+1]|≤E2, where E0 and E2 are values representative of operational characteristics of the power amplifier, and a third constraint requiring that values of e[t] be as small as possible, subject to the first and second constraints. As noted, E2 may be representative of one or more of, for example, a bandwidth of the transmit chain, and/or a response speed of the transmit chain to variations in amplitude of the input signal u. In some embodiments, receiving the one or more control signals may include receiving a time-varying control signal, eA, derived based, at least in part, on the time-varying signal e, with eAhaving a lower bandwidth than the time-varying signal e. Receiving the time-varying signal eAmay include receiving the time-varying signal, eA, generated through digital predistortion performed on multiple signals comprising the input signal u and the time-varying signal e, to mitigate non-linear behavior of the power supply modulator producing output based on the resultant time-varying signal, eA. In some examples, receiving the time-varying signal eAmay include receiving the time-varying control signal, eA, generated as a bandwidth lowering function of the time-varying signal e. The bandwidth lowering function may include a down-sampling function applied to the time-varying signal e. 
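As one simple realization of such a bandwidth lowering function, eA can be produced by anti-alias filtering and decimating the signal e; the decimation factor below is an arbitrary illustrative value, and nothing in the description ties the implementation to this particular filter.

```python
from scipy.signal import decimate

def generate_ea(e, factor=8):
    """Produce a lower-bandwidth control signal eA by low-pass (anti-alias)
    filtering and down-sampling the envelope control signal e."""
    return decimate(e, factor, ftype="fir", zero_phase=True)
```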
The digital predistorter may be configured to generate a digitally predistorted output signal, v, from the signals comprising the input signal u and the time-varying control signal e, according to:

v[t] = u[t] + Σ_{k=1..n} xk Bk(qu[t], qe[t])

with qu[t] = [u[t+l−1], u[t+l−2], . . . , u[t+l−τ]]^T and qe[t] = [e[t+s(l−1)], e[t+s(l−2)], . . . , e[t+s(l−τ)]]^T,

with Bk being basis functions, qu[t] and qe[t] being stacks of recent baseband and envelope input samples, respectively, s being a time scale separation factor representative of a ratio of time constants of the power amplifier and the power supply modulator powering the power amplifier, and xk being computed coefficients to weigh the basis functions. The coefficients xk may be computed according to an optimization process based, at least in part, on observed samples of the transmit chain. In some examples, the coefficients xk are computed according to the optimization process and further based on an output of the power supply modulator, the output being one of, for example, a voltage provided to the power amplifier, and/or a control signal to cause a corresponding voltage to be provided to the power amplifier. In some examples, the other signal provided to the digital predistorter may include an observed digital sample, y, of an output of the power amplifier, the power amplifier being controlled by the power supply modulator according to a time-varying signal e generated by an envelope tracker that receives a copy of the input signal u. In some embodiments, the other signal provided to the digital predistorter may include a predicted signal, sp, computed by a predictor module electrically interposed between an envelope tracker and the digital predistorter, the predicted signal being representative of an estimated expected behavior of the power supply modulator based on known characteristics of the power supply module and a time-varying signal, e, determined by the envelope tracker. The approaches described above may be used in conjunction with the techniques described in PCT Application PCT/US2019/031714, filed on May 10, 2019, titled "Digital Compensation for a Non-Linear System," which is incorporated herein by reference. For instance, the techniques described in that application may be used to implement the actuator (referred to as the pre-distorter in the incorporated application), and to adapt its parameters, and in particular to form the actuator to be responsive to an envelope signal or other signal related to power control of a power amplifier. The above implementations, as illustrated inFIGS.1-9, are applicable to a wide range of technologies that include RF technologies (including WWAN technologies, such as cellular technologies, and WLAN technologies), satellite communication technologies, cable modem technologies, wired network technologies, optical communication technologies, and all other RF and non-RF communication technologies. The implementations described herein encompass all techniques and embodiments that pertain to use of digital predistortion in various different communication systems. In some implementations, a computer accessible non-transitory storage medium includes a database (also referred to as a "design structure" or "integrated circuit definition dataset") representative of a system including some or all of the components of the linearization and envelope tracking implementations described herein.
Generally speaking, a computer accessible storage medium may include any non-transitory storage media accessible by a computer during use to provide instructions and/or data to the computer. For example, a computer accessible storage medium may include storage media such as magnetic or optical disks and semiconductor memories. Generally, the database representative of the system may be a database or other data structure which can be read by a program and used, directly or indirectly, to fabricate the hardware comprising the system. For example, the database may be a behavioral-level description or register-transfer level (RTL) description of the hardware functionality in a high-level design language (HDL) such as Verilog or VHDL. The description may be read by a synthesis tool which may synthesize the description to produce a netlist comprising a list of gates from a synthesis library. The netlist comprises a set of gates which also represents the functionality of the hardware comprising the system. The netlist may then be placed and routed to produce a data set describing geometric shapes to be applied to masks. The masks may then be used in various semiconductor fabrication steps to produce a semiconductor circuit or circuits corresponding to the system. In other examples, the database may itself be the netlist (with or without the synthesis library) or the data set. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly or conventionally understood. As used herein, the articles "a" and "an" refer to one or to more than one (i.e., to at least one) of the grammatical object of the article. By way of example, "an element" means one element or more than one element. "About" and/or "approximately" as used herein when referring to a measurable value such as an amount, a temporal duration, and the like, encompasses variations of ±20% or ±10%, ±5%, or ±0.1% from the specified value, as such variations are appropriate in the context of the systems, devices, circuits, methods, and other implementations described herein. "Substantially" as used herein when referring to a measurable value such as an amount, a temporal duration, a physical attribute (such as frequency), and the like, also encompasses variations of ±20% or ±10%, ±5%, or ±0.1% from the specified value, as such variations are appropriate in the context of the systems, devices, circuits, methods, and other implementations described herein. As used herein, including in the claims, "or" as used in a list of items prefaced by "at least one of" or "one or more of" indicates a disjunctive list such that, for example, a list of "at least one of A, B, or C" means A or B or C or AB or AC or BC or ABC (i.e., A and B and C), or combinations with more than one feature (e.g., AA, AAB, ABBC, etc.). Also, as used herein, unless otherwise stated, a statement that a function or operation is "based on" an item or condition means that the function or operation is based on the stated item or condition and may be based on one or more items and/or conditions in addition to the stated item or condition. Although particular embodiments have been disclosed herein in detail, this has been done by way of example for purposes of illustration only, and is not intended to limit the scope of the invention, which is defined by the scope of the appended claims. Features of the disclosed embodiments can be combined, rearranged, etc., within the scope of the invention to produce more embodiments.
Some other aspects, advantages, and modifications are considered to be within the scope of the claims provided below. The claims presented are representative of at least some of the embodiments and features disclosed herein. Other unclaimed embodiments and features are also contemplated.
11863211 | DESCRIPTION OF EMBODIMENTS First Embodiment Hereinafter, a first embodiment of the present invention will be described with reference to the drawings. FIG.9is a schematic diagram illustrating a configuration of a bus type optical access network1according to the first embodiment of the present invention. As illustrated inFIG.9, the bus type optical access network1includes an OLT10serving as a station-side apparatus and a plurality of ONUs20serving as subscriber-side apparatuses. The OLT10is connected to the plurality of ONU20sby a communication path including an optical fiber15that is wired in a bus network topology. As illustrated inFIG.9, an optical amplification apparatus serving a function to amplify an optical signal is provided between the OLT10and the communication path. In the bus type optical access network1, the communication path is separated into a communication path through which a wavelength band of an uplink signal passes and a communication path through which a wavelength band of a downlink signal passes, and the separated communication paths are again coupled by two WDM optical couplers33. As illustrated inFIG.9, in a downlink communication, the optical signal (downlink signal) transmitted from the OLT10enters the optical amplification apparatus. In the optical amplification apparatus, the downlink signal propagates through the communication path through which the downlink signal passes of the communication paths separated by the WDM optical coupler33. A concentrated optical amplifier41is installed in the communication path through which the downlink signal passes. The concentrated optical amplifier41amplifies the downlink signal. The amplified downlink signal propagates through the communication path and is received by each ONU20. On the other hand, as illustrated inFIG.9, in an uplink communication, the optical signal (uplink signal) transmitted from each ONU20passes through the communication path, and then, enters the optical amplification apparatus. In the optical amplification apparatus, the uplink signal propagates through the communication path through which the uplink signal passes of the communication paths separated by the WDM optical coupler33. The uplink signal passes through the optical amplification apparatus and is received by the OLT10. As illustrated inFIG.9, the optical amplification apparatus is provided with an excitation light output unit50. The excitation light output unit50makes excitation light for amplifying the uplink signal incident on the communication path. By doing so, once the uplink signal enters a region where an intensity of the excitation light is high in the communication path, the uplink signal is gradually amplified by the effect of the distributed Raman amplification. The bus type optical access network1according to the present embodiment reduces a drop loss experienced by the excitation light at a drop point31, and maximizes the Raman gain. In the present embodiment, the WDM optical coupler is used as the drop point31. However, the WDM optical coupler assumed here can branch one input signal into a plurality of output ports, and change a branching ratio depending on a wavelength. FIG.10is a schematic diagram illustrating a configuration of the drop point31(WDM optical coupler) of the bus type optical access network1according to the first embodiment of the present invention. In the present embodiment, as an example, the drop point31is a WDM optical coupler formed by melt drawing (for example, see NPL 2). 
The drop point31(WDM optical coupler) is coupled by fusing portions of two optical fibers put into contact with each other. The drop point31propagates a portion of an optical signal propagating through one optical fiber to the other optical fiber. This distributes the optical signal input to an input port to a plurality of output ports. For example, in a case that an optical signal is input to a port1and a port2, the optical signal is output from a port3and a port4. For example, in a case that an optical signal is input to the port3and the port4, the optical signal is output from the port1and the port2. For example, a downlink signal is input to the port1, and output from the port3and the port4. For example, an uplink signal is input to the port3and the port4, and output from the port1and the port2. If the drop point is an equal branch optical splitter, 50[%] of the optical signal input from the port1is output to each of the port3and the port4. In contrast, in the drop point31(WDM optical coupler formed by melt drawing) according to the present embodiment, heat is applied to the optical fiber to stretch a coupling portion, and thereby, a branching ratio of the optical signal to each output port is controlled. FIG.11is a diagram illustrating an example of a transmission characteristic of the drop point31of the bus type optical access network1according to the first embodiment of the present invention. InFIG.11, a solid line waveform illustrates a ratio of transmission from the port1to the port3, or from the port3to the port1. InFIG.11, a dashed waveform illustrates a ratio of transmission from the port1to the port4, or from the port4to the port1. Note that the port2is not used. In the present embodiment, as an example, the branching ratio of the optical signal at the drop point31is (trunk fiber direction):(branch fiber direction)=80:20. Note that the trunk fiber direction is a direction from the port1to the port3, and a direction from the port3to the port1. The branch fiber direction is a direction from the port1to the port4, and from the port4to the port1. On the other hand, the excitation light does not transmit in the branch fiber direction at the drop point31and is configured to entirely (100[%]) transmit in the trunk fiber direction. This keeps the light intensity of the excitation light high on reaching the far area, and thus, the Raman gain can be maximized. Note that, in the present embodiment, the configuration in which the WDM optical coupler formed by melting drawing is used as the drop point31is described, as an example, but the present invention is not limited thereto. For example, even in a case that a PLC is used or a Mach-Zehnder waveguide is used, it is possible to realize the drop point that changes the branching ratio depending on the wavelength. Second Embodiment Hereinafter, a second embodiment of the present invention will be described with reference to the drawings. In the present embodiment, the optical signal is a WDM signal including a plurality of wavelengths. FIG.12is a diagram illustrating an example of a change in a wavelength arrangement of an optical signal and a transmittance in a WDM optical coupler. In the case that the optical signal is a WDM signal including a plurality of wavelengths, the wavelength varies depending on the signal. As a result, the transmittance also changes depending on the signal, and therefore, variations in the transmission distance will occur. 
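The benefit of making the drop point wavelength-selective can be seen from a simple link-budget comparison: with the 80:20 split the optical signal loses roughly 1 dB at every drop point, whereas the excitation light, passing at (ideally) 100[%], is attenuated only by the fiber spans. The sketch below runs this comparison; the per-span loss, launch powers, and number of drop points are hypothetical values chosen for illustration, not figures from the description.

```python
import numpy as np

def level_after_drops(p_in_dbm, n_drops, per_drop_loss_db, span_loss_db):
    """Power level [dBm] on the trunk fiber after n_drops drop points,
    each preceded by one fiber span with span_loss_db of loss."""
    return p_in_dbm - n_drops * (span_loss_db + per_drop_loss_db)

signal_drop_loss = -10 * np.log10(0.8)   # 80:20 coupler: ~0.97 dB toward the trunk
pump_drop_loss = 0.0                     # ideal wavelength-selective pass-through
span_loss = 0.4                          # assumed per-span fiber loss [dB]

for n in (4, 8, 12):
    sig = level_after_drops(0.0, n, signal_drop_loss, span_loss)
    pump = level_after_drops(20.0, n, pump_drop_loss, span_loss)
    print(f"{n} drops: signal {sig:.1f} dBm, excitation light {pump:.1f} dBm")
```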
As a method for solving this, it is conceivable to use a WDM optical coupler in which the transmission ratio with respect to each port changes periodically at the same period as a wavelength interval of a WDM signal. FIG.13is a diagram illustrating how the transmittance with respect to each port changes at the same period as the wavelength interval of the WDM signal. In this case, the transmittances of all the signals can be kept constant. The wavelength of the excitation light is set so that the transmittance is 100[%] as described above. Generally, the wavelength of the excitation light is set in accordance with a wavelength of an optical signal to be amplified. For example, in a case that a standard single mode fiber is used, light at a frequency higher than the optical signal by about 13 [THz] is used as the excitation light. This results in a wavelength difference of 100 [nm] in a 1550 [nm] band. Thus, in a case that excitation light of 1500 [nm] is used, an amplification gain is generated near 1600 [nm]. FIG.14is a diagram illustrating how an amplification gain is generated. InFIG.14, an amplification band indicates a region where the maximum amplification gain is obtained. In a case that the amplification band is sufficiently larger than the band of the WDM signal to be amplified by the distributed Raman amplification, there are a plurality of wavelengths of excitation lights in which the WDM signal to be amplified is in the amplification band and the transmission of the trunk fiber is 100[%], and the wavelength of the excitation light may be set to anywhere among these plural wavelengths. FIG.15is a diagram illustrating a case that a wavelength of excitation light is changed. In this way, even in a case that the wavelength of the excitation light is changed, the wavelength band of the WDM signal can be covered by the amplification band. It is conceivable to perform amplification using a plurality of excitation lights for improving the Raman gain. FIG.16is a diagram illustrating how amplification is performed using a plurality of excitation lights. In this case, light having a wavelength at which the transmittance in the WDM optical coupler with respect to the trunk fiber is high is used as the excitation light. For example, the Raman gain can be maximized by setting the wavelength of each excitation light beam so that the transmittance in the WDM optical coupler with respect to the trunk fiber is 100[%]. Third Embodiment Hereinafter, a third embodiment of the present invention will be described with reference to the drawings. FIG.17is a schematic diagram illustrating a configuration of a bus type optical access network2according to the third embodiment of the present invention. The bus type optical access network2according to the present embodiment is an optical access network in which the concentrated optical amplifier41and the distributed Raman amplification technology are used. As illustrated inFIG.17, the bus type optical access network2includes the drop points31. In the present embodiment, each drop point31is designed so that the transmittance with respect to the trunk fiber is 100[%] in the wavelength band of the excitation light. In the present embodiment, the drop point31is a WDM optical coupler. However, in the strict sense of the word, the characteristics of an apparatus of the drop point (e.g., a WDM optical coupler) vary from individual to individual. 
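The ~100 [nm] figure quoted above follows from converting the roughly 13 [THz] Raman frequency shift into a wavelength offset with Δλ ≈ λ²·Δf/c; a quick numerical check:

```python
c = 299_792_458.0         # speed of light [m/s]
lam = 1550e-9             # signal wavelength [m]
df = 13e12                # Raman frequency shift [Hz]

dlam = lam ** 2 * df / c  # approximate wavelength offset [m]
print(dlam * 1e9)         # ~104 nm, consistent with the ~100 nm stated above
```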
Therefore, for maximizing the Raman gain, the wavelength of the excitation light needs to be optimized in conformity with the apparatus of the drop point to be used (e.g., the WDM optical coupler). For maximizing the Raman gain, it is important to keep a section where the intensity of the excitation light is high longer. Hereinafter, the change in the excitation light intensity with respect to the transmission distance is illustrated in a case that the transmittance in each drop point31(WDM optical coupler) with respect to the trunk fiber is slightly less than 100%. FIG.18is a diagram illustrating the change in the excitation light intensity with respect to the transmission distance in a case that the drop point31with a relatively high drop loss is present at a position close to the OLT10.FIG.19is a diagram illustrating the change in the excitation light intensity with respect to the transmission distance in a case that the drop point31with a relatively high drop loss is present at a position close to the ONU20. As illustrated inFIGS.18and19, a sum of the transmission losses to which the excitation light is subjected in each communication path is the same between both cases. However, in the case that the drop point31with a relatively high drop loss is present closer to the ONU20side (subscriber side) (FIG.19), the resulting gain is larger. This is because, in the case that the drop point31with a relatively high drop loss is present closer to the ONU20side (subscriber side) (FIG.19), the communication can be performed in the longer section in a state where the intensity of the excitation light is high. Therefore, in determining the wavelength band of the excitation light, it is important to consider the position of each drop point31for the determination rather than simply determining the wavelength band with a low drop loss at the drop point31. As a method for determining the wavelength of excitation light to maximize the Raman gain, for example, a method of sweeping a wavelength of excitation light while monitoring the Raman gain is contemplated. FIG.20is a schematic diagram illustrating a configuration of a bus type optical access network3in which a wavelength is swept to determine a wavelength of excitation light. As illustrated inFIG.20, the bus type optical access network3is an optical access network in which the concentrated optical amplifier41and the distributed Raman amplification technology are used. As illustrated inFIG.20, the bus type optical access network3includes the drop points31. In the present embodiment, each drop point31is a WDM optical coupler. An optical amplification apparatus provided to the bus type optical access network3includes a gain monitoring unit, an excitation light output unit50, and an analysis unit55. The gain monitoring unit includes a monitored light output unit51, a circulator52, an intensity monitor unit54, and the analysis unit55. As illustrated in FIG.20, the bus type optical access network3includes a mirror unit. The mirror unit includes a mirror53. The mirror53reflects monitored light described below. The gain monitoring unit monitors the Raman gain. The monitored light output unit51causes light in the wavelength band in which the Raman gain is obtained to be incident as monitored light on the communication path. The monitored light propagates in the trunk fiber, and thereafter, is reflected by the mirror53of the mirror unit disposed at the end of the trunk fiber. 
The reflected monitored light propagates again in the trunk fiber, and thereafter, is received by the intensity monitor unit54of the optical amplification apparatus. At this time, the wavelength of the excitation light output from the excitation light output unit50is swept so that the gain obtained by the monitored light entering the intensity monitor unit54changes. In order to grasp the wavelength of the excitation light at which the gain is maximized, the method of sweeping the wavelength of the excitation light is effective, as described above, for example. In order to maximize the gain, the wavelength of the excitation light is swept to detect the wavelength of the excitation light at which an intensity of the monitored light detected by the intensity monitor unit54is maximum. The analysis unit55detects an optimal wavelength of the excitation light with reference to the intensity of the monitored light detected by the intensity monitor unit54that changes as the wavelength of the excitation light is swept. The analysis unit55controls the wavelength of the excitation light output from the excitation light output unit50in accordance with the detection result. Note that in a case that the transmittance of the monitored light with respect to the trunk fiber is low in the drop point31, the intensity of the monitored light entering the intensity monitor unit54is low. As a result, there is a concern that the measurement accuracy may be reduced. In this case, before setting the wavelength of the excitation light described above, it is necessary to sweep the wavelength of the monitored light to set the intensity of the monitored light entering the intensity monitor unit54to be sufficiently high. As described above, the bus type optical access network (optical communication system) according to each of the above-described embodiments is configured to include the OLT10serving as a station-side apparatus and the plurality of ONUs20serving as subscriber-side apparatuses. The OLT10is connected to each of the plurality of ONUs20by the optical fiber that is wired in the bus topology. The optical amplification apparatus (optical amplification unit) serving the function to amplify an optical signal is connected between the OLT10and the communication path. The optical amplification apparatus separates the uplink signal and the downlink signal into different communication paths by the WDM optical coupler33, and then couples them again. The concentrated optical amplifier41is installed in the communication path for the downlink signal. The concentrated optical amplifier41amplifies the downlink signal transmitted from the OLT10. On the other hand, the uplink signal transmitted from the ONU20propagates through the communication path, and thereafter, passes through the optical amplification apparatus and is received by the OLT10. Excitation light for amplifying the uplink signal is incident on the communication path from the optical amplification apparatus. Once the uplink signal enters the region where the intensity of the excitation light is high in the communication path, the uplink signal is gradually amplified by the effect of the distributed Raman amplification. Furthermore, the WDM optical coupler is used to reduce the drop loss experienced by the excitation light at the drop point31(drop unit) and to maximize the Raman gain. The WDM optical coupler branches one input signal into a plurality of output ports, and changes the branching ratio in accordance with the wavelength of the optical signal.
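The sweep performed by the analysis unit55can be sketched as the loop below, where set_pump_wavelength() and read_monitor_intensity() are hypothetical helper callables standing in for the excitation light output unit50and the intensity monitor unit54; the sweep range and step size are likewise illustrative assumptions.

```python
import numpy as np

def find_best_pump_wavelength(set_pump_wavelength, read_monitor_intensity,
                              wl_start_nm=1480.0, wl_stop_nm=1520.0, step_nm=0.5):
    """Sweep the excitation (pump) wavelength and return the wavelength at which
    the monitored light intensity, i.e. the Raman gain, is maximized."""
    wavelengths = np.arange(wl_start_nm, wl_stop_nm + step_nm, step_nm)
    readings = []
    for wl in wavelengths:
        set_pump_wavelength(wl)                      # excitation light output unit
        readings.append(read_monitor_intensity())    # intensity monitor unit
    best = float(wavelengths[int(np.argmax(readings))])
    set_pump_wavelength(best)                        # leave the pump at the optimum
    return best
```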
With such a configuration, the bus type optical access network according to each of the embodiments of the present invention can keep the intensity of the excitation light high even in the far area (i.e., the area farther away from the OLT10), and thus, can maximize the Raman gain. Therefore, according to the present invention, the transmission distance in the bus type optical access network can be increased. A part of the optical access network according to each of the embodiments described above may be implemented by a computer. In that case, the functions may be implemented by recording a program for implementing the functions in a computer-readable recording medium and causing a computer system to read and execute the program recorded in the recording medium. Note that the "computer system" referred to herein includes an OS and hardware of a peripheral device. The "computer-readable recording medium" means a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a recording device such as a hard disk incorporated in the computer system. Moreover, the "computer-readable recording medium" may include a recording medium that dynamically holds the program for a short period of time, such as a communication line in a case in which the program is transmitted via a network such as the Internet or a communication line such as a telephone line, or a recording medium that holds the program for a specific period of time, such as a volatile memory inside a computer system that serves as a server or a client in that case. Furthermore, the aforementioned program may implement some of the aforementioned functions, may implement the aforementioned functions in combination with a program already recorded in the computer system, or may be implemented using a programmable logic device such as a field programmable gate array (FPGA). Although the embodiment of the present invention has been described in detail with reference to the drawings, the specific configuration is not limited to the embodiment, and designs and the like within a range not departing from the gist of the present invention are also included.

REFERENCE SIGNS LIST
10 . . . OLT
15 . . . Optical fiber
16 . . . Trunk fiber
17 . . . Branch fiber
20 . . . ONU
30 . . . Optical splitter
31 . . . Drop point
32 . . . Unequal branch optical splitter
33 . . . WDM optical coupler
40 . . . Optical amplifier
41 . . . Concentrated optical amplifier
50 . . . Excitation light output unit
51 . . . Monitored light output unit
52 . . . Circulator
53 . . . Mirror
54 . . . Intensity monitor unit
55 . . . Analysis unit
11863212 | DETAILED DESCRIPTION Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings as appropriate. However, unnecessarily detailed description may be omitted. For example, detailed description of well-known matters and redundant description of substantially the same configuration may be omitted. This is to avoid unnecessary redundancy of the following description and to facilitate understanding of those skilled in the art. The accompanying drawings and the following description are provided for those skilled in the art to fully understand the present disclosure, and are not intended to limit the subject matter described in the claims. Embodiment 1 <Configuration of Vehicle> A configuration of a vehicle1according to Embodiment 1 will be described.FIG.1shows an example of the configuration of the vehicle1according to Embodiment 1. The vehicle1includes, for example, wheels2, a motor3, an inverter4, a battery5, an antenna6, a radio receiver7, a speaker8, and a signal processing device100. The motor3is driven by electric power supplied from the battery5so as to drive the wheels2of the vehicle1. The inverter4controls the electric power supplied from the battery5to the motor3so as to perform speed control and torque control of the motor3. The inverter4operates at a clock frequency of, for example, 1 kHz to 10 kHz. Since the inverter4operates at the predetermined clock frequency, the inverter4can radiate noise having the clock frequency and a frequency that is an integral multiple of the clock frequency in a frequency domain. That is, the inverter4can radiate noise having peaks of amplitude at regular frequency intervals in the frequency domain. Here, the peak refers to a local maximum value of voltage amplitude in a waveform of a frequency spectrum in which a horizontal axis represents a frequency and a vertical axis represents voltage amplitude of each frequency. The peak may be a discrete peak shown in a line spectrum. Hereinafter, a peak of amplitude of noise appearing at regular frequency intervals in the frequency domain is referred to as a noise peak. The inverter4is an example of a device that can radiate noise having a periodic spectrum pattern in the frequency domain. For example, the device capable of radiating noise having the periodic spectrum pattern in the frequency domain may also be a DC/DC converter or the like provided in the vehicle1. The antenna6receives a broadcast electromagnetic wave of an AM radio. However, the broadcast electromagnetic wave received by the antenna6is not limited to that of the AM radio, and may also be a broadcast electromagnetic wave of an FM radio. In addition, the broadcast electromagnetic wave received by the antenna6is not limited to an analog radio broadcast electromagnetic wave, and may also be a television broadcast electromagnetic wave or a digital radio broadcast electromagnetic wave. The antenna6can also receive the noise radiated from the inverter4when the broadcast electromagnetic wave is received. Hereinafter, a signal in which noise is superimposed on a broadcast electromagnetic wave, which is received by the antenna6, is referred to as a received signal. The signal processing device100removes the noise from the received signal, and outputs the signal from which the noise has been removed. Hereinafter, the signal output by the signal processing device100is referred to as an output signal. 
The radio receiver7demodulates the output signal input from the signal processing device100, and outputs a radio sound from the speaker8. As a result, a passenger of the vehicle1can listen to a clear radio sound in which unpleasant noise sounds caused by the noise radiated from the inverter4are reduced. Hereinafter, the signal processing device100will be described in detail. <Configuration of Signal Processing Device> Next, a configuration of the signal processing device100will be described.FIG.2is a block diagram showing an example of the configuration of the signal processing device100according to Embodiment 1.FIG.3shows an example of an input frequency domain signal according to Embodiment 1; The signal processing device100includes: an A/D converter101; a first converter102; a broadcast signal detector103; a noise frequency interval calculator104; a peak frequency detector105; a frequency shift amount determining unit106; an amplitude correction factor determining unit107; a phase correction factor determining unit108; a frequency shifter109; a corrector110; a second converter111; a delay device112; and a noise canceller113. The received signal is input to the A/D converter101from the antenna6. The received signal may include a broadcast electric signal of a channel which is tuned by the radio receiver7, a broadcast electric signal of a channel which is not tuned by the radio receiver7, and the noise radiated from the inverter4. Hereinafter, the broadcast electric signal of the tuned channel is referred to as a tuned channel broadcast signal, and the broadcast electric signal of the other channel is referred to as an other channel broadcast signal. The other channel is a channel other than the tuned channel. The A/D converter101converts the received signal into a digital signal. Hereinafter, a signal obtained by converting the received signal into the digital signal is referred to as an input time domain signal. The input time domain signal may also be expressed as inSig(t). Here, t represents time. The A/D converter101outputs inSig(t) to the first converter102and the delay device112. An RF circuit block (not shown) may be included between the antenna6and the A/D converter101. The RF circuit block may include, for example, at least one of a filter configured to pass an AM broadcast electric signal with low loss and attenuate an unnecessary signal such as an FM broadcast electromagnetic wave, a low noise amplifier (LNA) configured to improve reception sensitivity, and a mixer circuit configured to perform tuning. The first converter102converts a time domain signal into a frequency domain signal. The first converter102converts the input time domain signal inSig(t) input from the A/D converter101into a frequency domain signal by, for example, fast Fourier transform (FFT), as shown inFIG.3. Hereinafter, a signal obtained by converting the input time domain signal inSig(t) into the frequency domain signal is referred to as an input frequency domain signal. The input frequency domain signal may also be expressed as fftData(f). Here, f represents frequency. fftData(f) is a function representing a complex voltage at a certain frequency f. The first converter102outputs fftData(f) to the broadcast signal detector103, the noise frequency interval calculator104, the peak frequency detector105, and the frequency shifter109. 
The broadcast signal detector103detects a frequency range that includes the tuned channel broadcast signal and a frequency range that includes the other channel broadcast signal by using the fftData(f) input from the first converter102. Hereinafter, a frequency range including the frequency bandwidth occupied by the tuned broadcast station is referred to as a tuned channel frequency range, and a frequency range including the frequency bandwidth occupied by the other broadcast station is referred to as an other channel frequency range. The broadcast signal detector103generates a detection result of the tuned channel frequency range and the other channel frequency range as broadcast detection information. Hereinafter, the broadcast detection information may be expressed as detBcResult. Details of processing of the broadcast signal detector103will be described later. The noise frequency interval calculator104calculates a noise peak frequency interval of the fftData(f) input from the first converter102. That is, the noise frequency interval calculator104calculates a regular frequency interval between noise peaks of amplitude in the fftData(f). Hereinafter, the noise peak frequency interval calculated by the noise frequency interval calculator104, that is, the regular frequency interval between the noise peaks is referred to as a noise frequency interval. The noise frequency interval may also be expressed as fcyc [Hz]. The noise frequency interval calculator104outputs the calculated fcyc to the frequency shift amount determining unit106. Details of processing of the noise frequency interval calculator104will be described later. The peak frequency detector105detects a frequency of a noise peak having maximum voltage amplitude among a plurality of frequencies having peaks in a certain frequency range of the fftData(f) input from the first converter102. Hereinafter, the frequency detected by the peak frequency detector105is referred to as an extracted noise peak frequency. The extracted noise peak frequency may also be expressed as Xpeak [Hz]. The peak frequency detector105outputs the detected Xpeak to the amplitude correction factor determining unit107and the phase correction factor determining unit108. Details of processing of the peak frequency detector105will be described later. The frequency shift amount determining unit106determines a frequency shift amount of fftData(f) based on the broadcast detection information detBcResult input from the broadcast signal detector103and the noise frequency interval fcyc input from the noise frequency interval calculator104. Hereinafter, the frequency shift amount determined by the frequency shift amount determining unit106may be expressed as fShift [Hz]. The frequency shift amount determining unit106outputs the determined fShift to the frequency shifter109. Details of processing of the frequency shift amount determining unit106will be described later. The amplitude correction factor determining unit107determines, by using the Xpeak input from the peak frequency detector105, an amplitude correction factor that is a factor for correcting signal amplitude in an entire frequency range including noise peaks in the frequency domain. Hereinafter, the amplitude correction factor determined by the amplitude correction factor determining unit107may be expressed as GdB. GdB corresponds to an amplitude difference between two noise peaks adjacent to each other with fcyc interposed therebetween. 
Alternatively, GdB corresponds to a ratio of amplitude between two noise peaks adjacent to each other with fcyc interposed therebetween. The amplitude correction factor determining unit107outputs the determined GdB to the corrector110. Details of processing of the amplitude correction factor determining unit107will be described later. The phase correction factor determining unit108determines, by using the Xpeak input from the peak frequency detector105, a phase correction factor that is a factor for correcting signal phases in the entire frequency range including noise peaks in the frequency domain. Hereinafter, the phase correction factor determined by the phase correction factor determining unit108may be expressed as deltaD. deltaD corresponds to a phase difference between two noise peaks adjacent to each other with fcyc interposed therebetween. The phase correction factor determining unit108outputs the determined deltaD to the corrector110. Details of processing of the phase correction factor determining unit108will be described later. The frequency shifter109shifts a frequency of the fftData(f) input from the first converter102by the fShift input from the frequency shift amount determining unit106. Hereinafter, a signal obtained by shifting the frequency of the fftData(f) by the fShift by the frequency shifter109is referred to as a frequency-shifted frequency domain signal. The frequency-shifted frequency domain signal may also be expressed as fftDataShift(f). The frequency shifter109outputs fftDataShift(f) to the corrector110. Details of processing of the frequency shifter109will be described later. The corrector110corrects amplitude of a frequency spectrum in the entire frequency range of the fftDataShift(f) input from the frequency shifter109by using the GdB input from the amplitude correction factor determining unit107. However, the corrector110may also not perform the correction of the amplitude. In addition, the corrector110corrects a phase of the frequency spectrum in the entire frequency range of the fftDataShift(f) by using the deltaD input from the phase correction factor determining unit108. Hereinafter, a signal obtained by correcting the amplitude and the phase of the frequency spectrum in the entire frequency range of the fftDataShift(f) by the corrector110is referred to as a corrected frequency domain signal. The corrected frequency domain signal may also be expressed as postFftDataShift(f). The corrector110outputs the postFftDataShift(f) to the second converter111. Details of processing of the corrector110will be described later. The second converter111converts a frequency domain signal into a time domain signal. The second converter111converts the postFftDataShift(f) input from the corrector110into a time domain signal by, for example, inverse fast Fourier transform (IFFT). Hereinafter, a signal obtained by converting the postFftDataShift(f) into the time domain signal is referred to as a noise time domain signal. The noise time domain signal may also be expressed as noiseSig(t). The second converter111outputs noiseSig(t) to the noise canceller113. The delay device112delays the inSig(t) input from the A/D converter101by a predetermined time. The predetermined time may be determined based on a time from when the inSig(t) is input to the first converter102to when the noiseSig(t) corresponding to the inSig(t) is output from the second converter111. The delay device112outputs the delayed inSig(t) to the noise canceller113. 
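The delay alignment described above, together with the subtraction performed by the noise canceller described next, can be sketched as follows; the function name, the signature, and the way the latency is expressed in samples are assumptions of this sketch.

    import numpy as np

    def cancel_noise(in_sig: np.ndarray, noise_sig: np.ndarray, delay_samples: int) -> np.ndarray:
        """Sketch only: delay the input by the analysis latency and subtract the noise replica.

        delay_samples stands in for the time from when inSig(t) enters the first
        converter to when the corresponding noiseSig(t) leaves the second converter.
        """
        delayed = np.concatenate([np.zeros(delay_samples), in_sig])[: in_sig.size]
        return delayed - noise_sig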
The noise canceller113performs a signal synthesis by which the noiseSig(t) input from the second converter111is subtracted from the inSig(t) input from the delay device112. As a result, the noise indicated by the noiseSig(t) is removed from the entire frequency range including the tuned channel frequency range of the inSig(t). Thus an output signal is obtained by subtracting the noiseSig(t) from the inSig(t). Hereinafter, the output signal may be expressed as outSig(t). The noise canceller113outputs outSig(t) to the radio receiver7. The radio receiver7demodulates a signal in a tuned channel frequency range of the outSig(t). As a result, the radio receiver7can output, from the speaker8, the clear radio sound in which the unpleasant noise sounds caused by the noise radiated from the inverter4are reduced. <Outline of Processing of Signal Processing Device> Next, an outline of processing of the signal processing device100will be described.FIG.4is a flowchart showing the outline of the processing of the signal processing device100according to Embodiment 1. As S100, the antenna6outputs the received signal to the A/D converter101. As S200, the A/D converter101converts the received signal input from the antenna6into the digital input time domain signal inSig(t). At S300, the first converter102converts the input time domain signal inSig(t) into the input frequency domain signal fftData(f). Details of the process of S300will be described later. As S400, the broadcast signal detector103detects the tuned channel frequency range and the other channel frequency range by using the fftData(f), and generates the broadcast detection information detBcResult. Details of the process of S400will be described later. As S500, the noise frequency interval calculator104calculates the noise frequency interval fcyc in the fftData(f). Details of the process of S500will be described later. As S600, the peak frequency detector105detects the extracted noise peak frequency Xpeak in a certain frequency range of the fftData(f). Details of the process of S600will be described later. As S700, the frequency shift amount determining unit106determines the frequency shift amount fShift for the fftData(f). Details of the process of S700will be described later. As S800, the amplitude correction factor determining unit107determines the amplitude correction factor GdB. The process of S800will be described later. As S900, the phase correction factor determining unit108determines the phase correction factor deltaD. The process of S900will be described later. As S1000, the frequency shifter109shifts the frequency of the fftData(f) by the fShift determined in S700, and outputs the frequency-shifted frequency domain signal fftDataShift(f). The process of S1000will be described later. As S1100, the corrector110corrects the amplitude of the frequency spectrum in the entire frequency range of the fftDataShift(f) of S1000by using the amplitude correction factor GdB of S800, and corrects the phase of the frequency spectrum in the entire frequency range by using the phase correction factor deltaD of S900. Then, the corrector110outputs the corrected frequency domain signal postFftDataShift(f) that is the signal obtained by correcting the fftDataShift(f). The process of S1100will be described in detail later. As S1200, the second converter111converts the postFftDataShift(f) of S1100into the noise time domain signal noiseSig(t). 
As S1300, the noise canceller113performs the signal synthesis by which the noiseSig(t) of S1200is subtracted from the inSig(t) that is output from the A/D converter101and delayed by the delay device112, and thus outputs the output signal outSig(t). <Details of Processing of First Converter> Next, the processing of the first converter102will be described in detail. That is, the process of S300shown inFIG.4will be described in detail. The first converter102converts the input time domain signal inSig(t) input from the A/D converter101into the input frequency domain signal fftData(f). An example of a waveform of the input frequency domain signal fftData(f) is shown inFIG.3. Here, the fftData(f) shown inFIG.3is a frequency spectrum in which a center frequency of a tuned channel corresponds to 0 [Hz]. As used herein, the center frequency of the tuned channel may also be referred to as a tuned channel frequency. In addition, in the present embodiment, a total signal bandwidth of fftData(f) is expressed as BWtotal, a lower limit frequency of the fftData(f) is expressed as (−BWtotal/2), and an upper limit frequency of the fftData(f) is expressed as (BWtotal/2). In addition to a tuned channel broadcast signal, the fftData(f) shown inFIG.3includes the other channel broadcast signal, a noise signal having peaks of amplitude at regular frequency intervals, and another lower-amplitude noise signal. The noise signal having peaks of amplitude at regular frequency intervals forms a large number of noise peaks of amplitude at regular frequency intervals over a wide frequency range. The total signal bandwidth BWtotal is wider than a bandwidth occupied by one channel of broadcast. For example, in the case of AM radio broadcasting, a bandwidth occupied by one channel varies in a range of 9 to 30 [kHz], whereas the total signal bandwidth BWtotal of the fftData(f) is, for example, 650 [kHz]. It should be noted that the total signal bandwidth BWtotal may be larger or smaller than 650 [kHz]. <Details of Processing of Broadcast Signal Detector> Next, the processing of the broadcast signal detector103will be described in detail. That is, the process of S400shown inFIG.4will be described in detail.FIG.5is a flowchart showing an example of the processing of the broadcast signal detector103according to Embodiment 1.FIG.6shows an example in which a total signal bandwidth of the input frequency domain signal is divided into each of the channels of broadcast.FIG.7shows a process of calculating a power for each channel.FIG.8shows a process of rearranging the channels based on magnitude of the power.FIG.9shows an example of the broadcast detection information (detBcResult). As S401, as shown inFIG.6, the broadcast signal detector103divides a total signal bandwidth of the fftData(f) into frequency ranges for respective broadcast channels. Hereinafter, each divided frequency range for the channels is referred to as a channel range. For example, a bandwidth of one channel range may be 9 kHz in the case of analog AM radio broadcasting in Japan, Europe, or Asia, 10 kHz in the case of analog AM radio broadcasting in North America or South America, and 30 kHz in the case of AM digital radio (IBOC) in North America. As S402, the broadcast signal detector103calculates a power of each channel as shown inFIG.7. 
For example, the broadcast signal detector103sums up squares of amplitude of signals of respective frequencies belonging to one channel range in the fftData(f) and thus calculates a channel power corresponding to the channel range. As S403, the broadcast signal detector103calculates an average of power of a predetermined number of channels from a low power side among all the channels, and sets the average as a noise average power. Hereinafter, the noise average power may be expressed as aveNoiseChPowData. For example, as shown inFIG.8, the broadcast signal detector103rearranges the channels in an order from a channel having a small power to a channel having a large power, calculates an average of power of a predetermined number of lower power channels, and sets the average as aveNoiseChPowData. The predetermined number may be a half of a total number of channels. However, the predetermined number is not limited to a half of the total number of channels, and may be larger or smaller than the half. In addition, when a half of the total number of channels is a decimal, the broadcast signal detector103may increase, decrease, or round off a decimal part thereof such that the predetermined number becomes an integer. As S404, the broadcast signal detector103determines that a channel having power per channel that is equal to or higher than a predetermined multiple of the aveNoiseChPowData is a channel including the broadcast electric signal. When each process such as determination of the frequency shift amount, calculation of the amplitude correction factor, and calculation of the phase correction factor that will be described later is performed, there are cases where the calculation cannot be accurately performed if the channel including the broadcast electric signal having power larger than the predetermined multiple of the noise average power is used, and thus such a channel is identified as a channel including the broadcast electric signal. The predetermined multiple may be four times. However, the predetermined multiple is not limited to four times, and may be larger or smaller than four times. In addition, the broadcast electric signal included in the channel may be a tuned channel broadcast signal or an other channel broadcast signal. Therefore, a channel containing the tuned channel broadcast signal may be read as the tuned channel frequency range, and a channel containing the other channel broadcast signal may be read as the other channel frequency range. As S405, the broadcast signal detector103determines whether each frequency of the fftData(f) belongs to the channel including the broadcast electric signal. Then, as shown inFIG.9, the broadcast signal detector103generates the broadcast detection information detBcResult that includes a result of the determination. In the detBcResult shown inFIG.9, “1” is associated with a frequency that includes the broadcast electric signal, and “0” is associated with a frequency that does not include the broadcast electric signal. Therefore, the frequency that includes the broadcast electric signal and the frequency that does not include the broadcast electric signal can be distinguished from each other by referring to the detBcResult. In other words, it is possible to identify the frequency range of any broadcast by referring to the detBcResult. Through the above processing, the broadcast signal detector103can generate the broadcast detection information detBcResult. 
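A minimal Python sketch of this detection (S401 to S405) is shown below; the 9 kHz channel width and the fourfold threshold are the example values mentioned above, while the function name and the array layout are assumptions of the sketch.

    import numpy as np

    def detect_broadcast(fft_data: np.ndarray, freqs: np.ndarray,
                         ch_bw: float = 9e3, factor: float = 4.0) -> np.ndarray:
        """Sketch of S401-S405: returns a boolean detBcResult-like flag for each frequency bin."""
        # S401: divide the total signal bandwidth into channel ranges.
        ch_index = np.floor((freqs - freqs.min()) / ch_bw).astype(int)
        n_ch = int(ch_index.max()) + 1

        # S402: channel power = sum of squared amplitudes of the bins in each channel.
        power = np.abs(fft_data) ** 2
        ch_power = np.array([power[ch_index == c].sum() for c in range(n_ch)])

        # S403: noise average power from the lower-power half of the channels.
        half = max(1, n_ch // 2)
        ave_noise_ch_pow = np.sort(ch_power)[:half].mean()

        # S404: channels at or above a predetermined multiple of the noise average contain broadcast.
        bc_channels = ch_power >= factor * ave_noise_ch_pow

        # S405: map the per-channel decision back to every frequency bin.
        return bc_channels[ch_index]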
A broadcast electromagnetic wave from the other broadcast station, which is a broadcast station other than the tuned station, may also be detected by a method different from the above-described method of detecting a broadcast electromagnetic wave from a received signal waveform. For example, a reception frequency of a radio tuner may be scanned by using the radio tuner or an external radio tuner in advance, presence or absence of the broadcast electromagnetic wave may be determined for each station, and a frequency at which the broadcast electromagnetic wave is present may be stored. Alternatively, information on a broadcast station that can be received at a position of a user may be acquired through the Internet by using position information on a location where the radio tuner is used. The location where the radio tuner is used may be, for example, a location where an antenna is mounted. Information on a frequency of the broadcast electromagnetic wave from the other broadcast station obtained from these methods that do not use the received signal waveform may be used, or these methods and the method that uses the received signal waveform described above may be used in combination. <Details of Processing of Noise Frequency Interval Calculator> Next, the processing of the noise frequency interval calculator104will be described in detail. That is, the process of S500shown inFIG.4will be described in detail.FIG.10is a flowchart showing an example of the processing of the noise frequency interval calculator104according to Embodiment 1.FIG.11shows a method of calculating a correlation function.FIG.12shows a relationship between the correlation function and a first peak list. As S501, the noise frequency interval calculator104sets a segment ccSlit(f) within a frequency range of the fftData(f) as shown inFIG.11. The segment ccSlit(f) includes the fftData(f) in a frequency range in which the segment ccSlit(f) is set. For example, when the noise frequency interval is any frequency between 1 kHz to 10 kHz, a frequency range of the segment ccSlit(f) may be 100 kHz. However, the frequency range of the segment ccSlit(f) is not limited to 100 kHz, and may be larger or smaller than 100 kHz. The frequency range of the segment ccSlit(f) may also include the tuned channel frequency range. The frequency range of the segment ccSlit(f) may also include the other channel broadcast signal. This is because the tuned channel broadcast signal and the other channel broadcast signal do not generate any peak that has a larger amplitude than an amplitude of the noise having the periodic spectrum pattern in the frequency domain when a correlation function R(m) is calculated. The segment ccSlit(f) may also be set in a frequency range excluding the tuned channel frequency range of the fftData(f) so as to reduce a calculation load. This is because there is a broadcast electric signal in the tuned channel frequency range, and if the noise having the periodic spectrum pattern in the frequency domain is superimposed on the broadcast electric signal in the tuned channel frequency range, there is a high possibility that peak data of the correlation function R(m) that can be used to calculate the noise frequency interval cannot be calculated. However, on the contrary, when the tuned channel frequency range is excluded, the frequency range that can be used to calculate the correlation function R(m) becomes narrow, and there are cases where a plurality of peaks of correlation functions R(m) cannot be detected. 
In such cases, the segment ccSlit(f) may be set as wide as possible within a frequency range in which the fftData(f) can be used, including the tuned channel frequency range. As S502, the noise frequency interval calculator104sets a window ccData(f) in the segment ccSlit(f). The window ccData(f) includes the fftData(f) in a frequency range in which the window ccData(f) is set. At this time, the noise frequency interval calculator104refers to the broadcast detection information detBcResult and sets the window ccData(f) in a frequency range excluding the other channel frequency range. In addition, the window ccData(f) is set in a frequency range excluding the tuned channel frequency range. For example, when the noise frequency interval is any frequency between 1 kHz and 10 kHz, a frequency range of the window ccData may be 30 kHz. However, the frequency range of the window ccData is not limited to 30 kHz, and may be larger or smaller than 30 kHz. As S503, the noise frequency interval calculator104calculates a square integral SccData of amplitude of each frequency of the window ccData(f) by the following Formula 1. SccData = ∫_{fa}^{fb} |ccData(f)|^2 df (Formula 1) Here, as shown inFIG.11, fa and fb are values indicating a lower limit frequency and an upper limit frequency of the window ccData, respectively. |A(x)| represents an absolute value of a complex number A(x). Amplitude of the complex number A(x) can be calculated as an absolute value. As S504, the noise frequency interval calculator104sets 0 Hz as a window shift amount m, which is a minimum value of the window shift amount m. It should be noted that the minimum value of the window shift amount m may be larger or smaller than 0 Hz. As S505, the noise frequency interval calculator104calculates an integral R0(m) of a product of the amplitude of the segment ccSlit(f) of the predetermined frequency range and the amplitude of the window ccData by the following Formula 2. R0(m) = ∫_{fa}^{fb} |ccData(f)|·|ccSlit(f+m)| df (Formula 2) As S506, the noise frequency interval calculator104calculates a square integral S(m) of amplitude of each frequency in a segment ccSlit(f+m) of the same frequency range by the following Formula 3. S(m) = ∫_{fa}^{fb} |ccSlit(f+m)|^2 df (Formula 3) As S507, the noise frequency interval calculator104calculates the correlation function R(m) by the following Formula 4. That is, the R0(m) is divided by a square root of the product of the S(m) and the SccData so as to obtain the correlation function R(m). The correlation function R(m) is a correlation function on a frequency axis for amplitude of the fftData(f). R(m) = R0(m)/√(S(m)·SccData) (Formula 4) The reason why the conversion from the R0(m) to the R(m) is performed is as follows. That is, if the R0(m) is directly used, a calculation result depends not only on a waveform correlation but also on magnitude of the amplitude of the ccSlit(f). Therefore, the square integral S(m) of the amplitude of the ccSlit(f) and the square integral SccData of the amplitude of the ccData(f) are calculated, and the R0(m) is normalized by Formula 4. As S508, the noise frequency interval calculator104adds a unit frequency of the frequency axis of the input frequency domain signal fftData(f) to the window shift amount m. The unit frequency of the frequency axis of the input frequency domain signal fftData(f) may be expressed as unitF. The unitF is a very small value as compared with the total signal bandwidth BWtotal of the input frequency domain signal fftData(f), and is, for example, 10 [Hz].
However, the unitF may also be a value larger than 10 [Hz] or a value smaller than 10 [Hz]. As S509, the noise frequency interval calculator104determines whether the window shift amount m has reached an upper limit of the frequency range of the segment ccSlit(f). When it is determined that the window shift amount m has reached the upper limit of the frequency range of the segment ccSlit(f) (S509: YES), the noise frequency interval calculator104proceeds to the process of S510. When it is determined that the window shift amount m does not reach the upper limit of the frequency range of the segment ccSlit(f) (S509: NO), the noise frequency interval calculator104returns to the process of S505. By the processing of S505to S509, as shown inFIG.12, it is possible to obtain the correlation function R(m) that emphasizes the periodic spectrum pattern of the noise peak as compared with a spectrum of the original segment ccSlit(f). As S510, the noise frequency interval calculator104selects at least two window shift amounts m at which the correlation function R(m) has a local maximum value, as shown inFIG.12. The noise frequency interval calculator104includes the selected window shift amounts m in the first peak list information. Hereinafter, the first peak list information may be referred to as RpeakList. For example, the noise frequency interval calculator104selects at least two, preferably six or more window shift amounts m at which the correlation function R(m) has a local maximum value. The number of selected window shift amounts m may be changed according to the number of window shift amounts m at which the correlation function R(m) has a local maximum value. This is because, if the number of window shift amounts m at which the correlation function R(m) has a local maximum value is large while the number of selected window shift amounts m is small, there is a high possibility that an erroneous noise frequency interval fcyc is calculated. In addition, the noise frequency interval calculator104may select a window shift amount m at which R(m) is equal to or higher than a predetermined threshold value. As a result, a local maximum value that is not originated from periodic noise peaks can be excluded. The predetermined threshold value may be 0.5. However, the predetermined threshold value may be larger or smaller than 0.5. As S511, the noise frequency interval calculator104calculates a difference between two window shift amounts m adjacent to each other in the RpeakList. The noise frequency interval calculator104includes the calculated difference in difference list information. Hereinafter, the difference list information may be expressed as difRIndexList. As S512, the noise frequency interval calculator104sets a value that appears most frequently in the difRIndexList, that is, a mode value, as the noise frequency interval fcyc. For example, when 8.013 kHz is the mode value in the difRIndexList, the noise frequency interval calculator104sets fcyc=8.013 kHz. Through the above processing, the noise frequency interval calculator104can calculate the noise frequency interval fcyc. Although R0(m) is calculated by using the amplitude (that is, the absolute values) of the ccData(f) and the ccSlit(f) in Formula 2 in the above example, the same calculation may be performed by using a square (that is, a power) of the amplitude instead of the amplitude. 
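The interval estimation just described (S501 to S512) can be condensed into the following Python sketch; the simplified peak picking, the bin-based window shift, and the function signature are assumptions made only to keep the sketch short.

    import numpy as np

    def estimate_noise_interval(fft_data: np.ndarray, freqs: np.ndarray,
                                slit: tuple, win: tuple, thresh: float = 0.5) -> float:
        """Sketch of S501-S512; slit and win are (low, high) bounds of ccSlit(f) and ccData(f).

        Broadcast-occupied frequencies are assumed to have been excluded from win already.
        """
        amp = np.abs(fft_data)
        unit_f = freqs[1] - freqs[0]                      # plays the role of unitF
        slit_amp = amp[(freqs >= slit[0]) & (freqs < slit[1])]
        win_amp = amp[(freqs >= win[0]) & (freqs < win[1])]

        s_ccdata = np.sum(win_amp ** 2)                   # Formula 1
        n = win_amp.size
        r = []
        for m in range(slit_amp.size - n):                # S504-S509: slide the window in unitF steps
            seg = slit_amp[m:m + n]
            r0 = np.sum(win_amp * seg)                    # Formula 2
            s_m = np.sum(seg ** 2)                        # Formula 3
            r.append(r0 / np.sqrt(s_m * s_ccdata))        # Formula 4 (normalized correlation)
        r = np.array(r)

        # S510: window shift amounts where R(m) is a local maximum at or above the threshold.
        peaks = [m for m in range(1, r.size - 1)
                 if r[m] > r[m - 1] and r[m] > r[m + 1] and r[m] >= thresh]

        # S511-S512: the mode of the differences between adjacent peaks gives fcyc.
        diffs = np.diff(peaks)
        if diffs.size == 0:
            return float("nan")
        values, counts = np.unique(diffs, return_counts=True)
        return float(values[np.argmax(counts)] * unit_f)

The sketch uses the amplitude form of Formulas 1 to 4; the power-based variant noted above changes only the normalization.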
In this case, in Formula 4, the correlation function R(m) can be calculated by setting R(m)=R0(m)/(S(m) SccData) without processing the square root on a right side. By this processing, the noise frequency interval fcyc can still be calculated in the same manner as described above. <Details of Processing of Peak Frequency Detector> Next, the processing of the peak frequency detector105will be described in detail. That is, the process of S600shown inFIG.4will be described in detail.FIG.13is a flowchart showing an example of the processing of the peak frequency detector105according to Embodiment 1;FIG.14shows detection of a peak frequency. As S601, the peak frequency detector105extracts frequencies having particularly large amplitude from the segment ccSlit(f) where the correlation function R(m) is calculated. For example, the peak frequency detector105extracts a predetermined number of frequencies in descending order of amplitude from the segment ccSlit(f). As shown inFIG.14, the peak frequency detector105includes the extracted frequencies in second peak list information. Hereinafter, the second peak list information may be referred to as peakIndexList. As S602, the peak frequency detector105refers to the broadcast detection information detBcResult, and excludes each frequency that does not satisfy a condition B from the peakIndexList. Each frequency that satisfies the condition B is a frequency that is not included in the frequency range of any broadcast. Specifically, the peak frequency detector105leaves each frequency that is not included in the frequency range of any broadcast in the peakIndexList. A column of “condition B” shown inFIG.14shows a determination result of whether the condition B is satisfied provided by the peak frequency detector105at each frequency. The determination result is indicated by “o” when the condition B is satisfied, and is indicated by “×” when the condition B is not satisfied. In the example shown inFIG.14, it is determined that all frequencies satisfy the condition B. As S603, the peak frequency detector105excludes each frequency that does not satisfy a condition A from the peakIndexList. Each frequency that satisfies the condition A is a frequency that has higher amplitude than preceding and succeeding frequencies, that is, a frequency having amplitude that is a local maximum value. Specifically, the peak frequency detector105leaves, in the peakIndexList, each frequency having a higher amplitude than an amplitude of a frequency lower by the unit frequency unitF than itself and a higher amplitude than an amplitude of the frequency higher by the unit frequency unitF than itself, that is, a frequency having a local maximum amplitude. In a column of “condition A” shown inFIG.14, a determination result of whether the condition A is satisfied provided by the peak frequency detector105at each frequency is shown. The determination result is indicated by “o” when the condition A is satisfied, and is indicated by “×” when the condition A is not satisfied. As S604, the peak frequency detector105extracts each frequency that satisfies a condition C from the frequencies included in the peakIndexList. Each frequency satisfying the condition C is a frequency at which both amplitude of a frequency obtained by adding the noise frequency interval fcyc and amplitude of a frequency obtained by subtracting the noise frequency interval fcyc have local maximum values. 
Specifically, the peak frequency detector105extracts, from the peakIndexList, each frequency at which amplitude of a frequency shifted by the noise frequency interval fcyc also has a local maximum value. In other words, the peak frequency detector105selects one frequency to be processed from the peakIndexList, and extracts the selected frequency if the selected frequency has one local maximum value that is periodically generated on a spectrum for each noise frequency interval fcyc. By performing S604, even if a frequency of noise that does not appear at regular frequency intervals is detected by the peak frequency detector105, the detected frequency can be prevented from being extracted as the extracted noise peak frequency. In a column of “condition C” shown inFIG.14, a determination result of whether the condition C is satisfied provided by the peak frequency detector105at each frequency is shown. The determination result is indicated by “o” when the condition C is satisfied, and is indicated by “×” when the condition C is not satisfied. As S605, the peak frequency detector105sets a frequency that has largest amplitude among the frequencies extracted in S604as the extracted noise peak frequency Xpeak. For example, in the case ofFIG.14, among the frequencies satisfying all the conditions A, B, and C in the peakIndexList, a frequency having largest amplitude is 73.98987 kHz, and thus the peak frequency detector105sets Xpeak=73.98987 kHz. <Details of Processing of Frequency Shift Amount Determining Unit> Next, the processing of the frequency shift amount determining unit106will be described in detail. That is, the process of S700shown inFIG.4will be described in detail.FIG.15is a flowchart showing an example of the processing of the frequency shift amount determining unit106according to Embodiment 1.FIG.16shows a relationship between the tuned channel frequency range and the noise frequency interval.FIG.17shows a case where the tuned channel frequency range of the input frequency domain signal overlaps with the tuned channel frequency range of the frequency-shifted frequency domain signal.FIG.18shows a case where the tuned channel frequency range of the input frequency domain signal overlaps with the other channel frequency range of the frequency-shifted frequency domain signal.FIG.19shows a frequency range that overlaps with the tuned channel frequency range after a frequency shift. As S701, as shown inFIG.16, the frequency shift amount determining unit106determines an initial frequency shift amount in such a manner that the initial frequency shift amount is larger than the tuned channel frequency range and is an integral multiple of the noise frequency interval fcyc. Hereinafter, the tuned channel frequency range may be expressed as sigRange, and the initial frequency shift amount may be expressed as extFShift. For example, the frequency shift amount determining unit106calculates the extFShift by the following Formula 5. extFShift=(floor(sigRange/fcyc)+1)×fcyc (Formula 5) Here, floor(x) is a function that returns a value obtained by truncating a decimal part of x. A reason why the initial frequency shift amount extFShift is determined to be larger than the tuned channel frequency range sigRange and to be an integral multiple of the noise frequency interval fcyc in S701is as follows. 
That is, as shown inFIG.17, when the frequency shift amount is narrower than the tuned channel frequency range, the tuned channel frequency range of the output signal of the second converter111overlaps the tuned channel frequency range of the output signal of the delay device112when the noise canceller113performs the signal synthesis by which the output signal of the second converter111is subtracted from the output signal of the delay device112. It should be noted that the output signal of the second converter111corresponds to a signal obtained by converting the postFftDataShift(f) into a time domain signal. The tuned channel frequency range of the output signal of the delay device112corresponds to the tuned channel frequency range of the fftData(f). The tuned channel frequency range of the output signal of the second converter111corresponds to the tuned channel frequency range of the postFftDataShift(f). As a result, tuned channel broadcast signals interfere with each other, and thus new noise is included in a radio sound. The new noise is noise generated by interference of broadcast electromagnetic waves, and is noise different from the noise having peaks of amplitude at regular frequency intervals in the frequency domain, which is a target of the signal processing device of the present disclosure. Therefore, in a case where the tuned channel frequency range sigRange=20 kHz and the noise frequency interval fcyc=8.013 kHz as shown inFIG.16, for example, the frequency shift amount determining unit106may calculate the initial frequency shift amount extFShift as floor ((20/8.013)+1)×8.013=3×8.013=24.039 kHz by Formula 5. As S702, the frequency shift amount determining unit106determines whether the correlation function R(m) has a local maximum value after a frequency shift of m=extFShift. It should be noted that m is the frequency shift amount of the window ccData(f) in the calculation of the correlation function R(m), and hereinafter, the frequency shift amount may also be referred to as a frequency. That is, when a lowest frequency among peaks of the correlation function R(m) is m0 [kHz], it is determined whether the correlation function R(m) satisfies a condition of a peak even at a frequency of m0+extFShift [kHz]. Here, each peak of the correlation function R(m) may satisfy the same detection condition as the RpeakList. As described above, the detection condition of the RpeakList includes two conditions, one is that the correlation function R(m) has a local maximum value and the other is that the correlation function R(m) is equal to or higher than the predetermined threshold value. When a frequency range of the ccData(f) is set in such a manner that m0=0 [kHz], it may be determined whether the correlation function R(m) satisfies the condition of the peak at a frequency of m=extFShift [kHz]. The peak may also satisfy the same detection condition as the RpeakList. When the frequency range of the ccData(f) is selected in such a manner that the lower limit frequency of the ccData(f) and a lower limit frequency of the ccSlit(f) are the same frequency, the correlation function R(m) has a peak at m=0 [kHz]. In this case, since the frequency m0 that has the lowest frequency peak is 0 [kHz], it may be determined whether the correlation function R(m) satisfies the condition of the peak at the frequency of m=extFShift [kHz]. 
It should be noted that the peak may also satisfy the same detection condition as the RpeakList.FIG.12shows an example of a waveform of the correlation function R(m) in such a case. When it is determined that the correlation function R(m) does not have the local maximum value after the frequency shift of m=extFShift (S702: NO), as S703, the frequency shift amount determining unit106adds the noise frequency interval fcyc to the extFShift, and returns to the process of S702. When the frequency shift amount determining unit106determines that the correlation function R(m) has a local maximum value after the frequency shift of m=extF Shift (S702: YES), the process proceeds to S704. When the extFShift is an integral multiple of the fcyc, the determination in S702is YES in principle, and thus the frequency shift amount determining unit106may not perform the processes of S702and S703. However, it is not self-evident whether the correlation function R(m) has a local maximum value that is equal to or higher than the predetermined threshold value (for example, 0.5) at all frequencies that are integral multiples of the noise frequency interval fcyc. When the value of the correlation function R(m) is low at the selected frequency of the extFShift, an expected noise cancellation effect may not be obtained. Therefore, in order to improve reliability of the selected frequency of the extFShift, it is desirable to perform the processes of S702and S703. As S704, the frequency shift amount determining unit106calculates a frequency range B1that overlaps the tuned channel frequency range when the frequency shift is performed by the extFShift in the fftData(f). For example, when the tuned channel frequency range is (−sigRange/2 to +sigRange/2), the frequency shift amount determining unit106calculates ((−extFShift−sigRange/2) to (−extFShift+sigRange/2)) as the frequency range B1. As S705, the frequency shift amount determining unit106refers to the broadcast detection information detBcResult and determines whether the broadcast electric signal is included in the frequency range B1. When it is determined that the broadcast electric signal is included in the frequency range B1(S705: YES), as S706, the frequency shift amount determining unit106adds the fcyc to the extFShift, and returns to the process of S702. A reason why S706is performed when S705is YES is as follows. That is, as shown inFIG.18, if the frequency is directly shifted by the extFShift, the other channel broadcast signal included in the frequency range B1interferes with the tuned channel broadcast signal in the tuned channel frequency range. When the frequency shift amount determining unit106determines that the broadcast electric signal is not included in the frequency range B1(S705: NO), the process proceeds to S707. As S707, the frequency shift amount determining unit106determines the current extFShift as the frequency shift amount fShift. As a result, as shown inFIG.19, the frequency shift amount determining unit106can determine the frequency shift amount fShift in such a manner that the frequency range B1that includes neither the tuned channel broadcast signal nor the other channel broadcast signal overlaps the tuned channel frequency range. <Details of Processing of Amplitude Correction Factor Determining Unit> Next, the processing of the amplitude correction factor determining unit107will be described in detail. 
That is, the process of S800shown inFIG.4will be described in detail.FIG.20is a flowchart showing an example of the processing of the amplitude correction factor determining unit107according to Embodiment 1.FIG.21shows a method of calculating a peak frequency used for calculating an amplitude correction factor.FIG.22shows a method of calculating the amplitude correction factor. There are cases where a noise peak frequency of the frequency-shifted frequency domain signal fftDataShift(f) coincides with that of the input frequency domain signal fftData(f) before the frequency shift, while at least one of amplitude and phase thereof does not coincide with that of the input frequency domain signal fftData(f) before the frequency shift. Therefore, when the noise canceller113performs the signal synthesis by which the output signal of the second converter111is subtracted from the output signal of the delay device112, even if the signal synthesis is performed by directly inputting the fftDataShift(f) to the second converter111and subtracting the output signal output from the second converter111, a sufficient noise cancellation effect may not be obtained. Therefore, the amplitude and the phase of the frequency-shifted frequency domain signal fftDataShift(f) are preferably corrected so as to coincide with the amplitude and the phase of the input frequency domain signal fftData(f) before the frequency shift. Through verification, the inventors have found that amplitude and phase of a noise spectrum have a proportional relationship with a frequency. Therefore, in the present embodiment, an amplitude difference between the two noise peaks per one noise frequency interval fcyc is calculated as an amplitude correction factor, a phase difference between the two noise peaks per one noise frequency interval fcyc is calculated as a phase correction factor, and the amplitude and the phase of the fftDataShift(f) are corrected by utilizing the proportional relationship of the amplitude and the phase. As S801, the amplitude correction factor determining unit107calculates a peak frequency existing in the tuned channel frequency range by using the extracted noise peak frequency Xpeak as shown inFIG.21. Hereinafter, the peak frequency existing in the tuned channel frequency range may be expressed as GbaseIndex. For example, the amplitude correction factor determining unit107calculates the GbaseIndex by the following Formula 6. GbaseIndex=Xpeak−floor(Xpeak/fcyc)×fcyc (Formula 6) As S802, as shown inFIG.21, the amplitude correction factor determining unit107calculates a peak frequency that exists on a higher frequency side than the tuned channel frequency range outside the tuned channel frequency range. Hereinafter, the peak frequency that exists on the higher frequency side than the tuned channel frequency range outside the tuned channel frequency range may be expressed as G1baseIndex. As S803, the amplitude correction factor determining unit107refers to the broadcast detection information detBcResult and determines whether the G1baseIndex is included in the frequency range of any broadcast. When it is determined that the G1baseIndex is included in the frequency range of any broadcast (S803: YES), as S804, the amplitude correction factor determining unit107adds the fcyc to the G1baseIndex, and returns to the process of S803. When it is determined that the G1baseIndex is not included in the frequency range of any broadcast (S803: NO), the amplitude correction factor determining unit107proceeds to the process of S805. 
As S805, as shown inFIG.21, the amplitude correction factor determining unit107calculates a peak frequency that exists on a lower frequency side than the tuned channel frequency range outside the tuned channel frequency range. Hereinafter, the peak frequency that exists on the lower frequency side than the tuned channel frequency range outside the tuned channel frequency range may be expressed as G2baseIndex. As S806, the amplitude correction factor determining unit107refers to the broadcast detection information detBcResult and determines whether the G2baseIndex is included in the frequency range of any broadcast. When it is determined that the G2baseIndex is included in the frequency range of any broadcast (S806: YES), as S807, the amplitude correction factor determining unit107subtracts the fcyc from the G2baseIndex, and returns to the process of S805. When it is determined that the G2baseIndex is a not included in the frequency range of any broadcast (S806: NO), the amplitude correction factor determining unit107proceeds to the process of S808. As S808, as shown inFIG.22, the amplitude correction factor determining unit107calculates an amplitude (voltage) difference GdB between the two noise peaks per one noise frequency interval fcyc from amplitude (voltage) P1(dB unit system) of the noise peak in the G1baseIndex and amplitude (voltage) P2(dB unit system) of the noise peak in the G2baseIndex, and sets the GdB as the amplitude correction factor. Here, the amplitude P1may be 20×log10(noise peak amplitude V1(unit: volt)), and the amplitude P2may be 20×log10(noise peak amplitude V2(unit: volt)). For example, the amplitude correction factor determining unit107calculates the amplitude correction factor GdB by the following Formula 7. GdB=(P1−P2)/((G1baseIndex−G2baseIndex)/fcyc) (Formula 7) Through the above processing, the amplitude correction factor determining unit107can calculate the amplitude correction factor GdB. <Details of Processing of Phase Correction Factor Determining Unit> Next, the processing of the phase correction factor determining unit108will be described in detail. That is, the process of S900shown inFIG.4will be described in detail.FIG.23is a flowchart showing an example of the processing of the phase correction factor determining unit108according to Embodiment 1.FIG.24shows a method of calculating a peak frequency used for calculating a phase correction factor. As S901, for example, as shown inFIG.24, the phase correction factor determining unit108calculates a peak frequency present on a higher frequency side than the extracted noise peak frequency Xpeak by the following Formula 8. Hereinafter, the peak frequency existing on the higher frequency side than the extracted noise peak frequency Xpeak may be expressed as phaseCheckIndex. phaseCheckIndex=Xpeak+fcyc (Formula 8) As S902, the phase correction factor determining unit108refers to the broadcast detection information detBcResult and determines whether the phaseCheckIndex is included in the frequency range of any broadcast. When it is determined that the phaseCheckIndex is included in the frequency range of any broadcast (S902: YES), as S903, the phase correction factor determining unit108adds the fcyc to the phaseCheckIndex, and returns to the process of S902. When it is determined that the phaseCheckIndex is not included in the frequency range of any broadcast (S902: NO), the phase correction factor determining unit108proceeds to the process of S904. 
As S904, the phase correction factor determining unit108calculates a phase difference deltaD between the two noise peaks per one noise frequency interval fcyc based on a phase D1of a noise peak in the Xpeak and a phase D2of a noise peak in the phaseCheckIndex, and sets the deltaD as the phase correction factor. For example, the phase correction factor determining unit108calculates the phase correction factor deltaD by the following Formula 9. deltaD=(D1−D2)/((Xpeak−phaseCheckIndex)/fcyc) (Formula 9) Through the above processing, the phase correction factor determining unit108can calculate the phase correction factor deltaD. <Details of Processing of Frequency Shifter> Next, the processing of the frequency shifter109will be described in detail. That is, the process of S1000shown inFIG.4will be described in detail.FIG.25is a flowchart showing an example of the processing of the frequency shifter109according to Embodiment 1.FIG.26shows an example of the frequency-shifted frequency domain signal. As S1000, as shown inFIG.26, the frequency shifter109shifts the frequency of the fftData(f) by the frequency shift amount fShift to obtain the frequency-shifted frequency domain signal fftDataShift(f). The frequency shifter109may set voltage amplitude at the frequency freed by the frequency shift to 0 in the fftDataShift(f). For example, the frequency shifter109obtains the fftDataShift(f) by the following Formula 10. fftDataShift(f)=0, in the case of (−BWtotal/2 to −BWtotal/2+fShift); fftDataShift(f)=fftData(f+fShift), in the case of (−BWtotal/2+fShift to BWtotal/2) (Formula 10) It should be noted that the voltage amplitude at the frequency freed by the frequency shift may be set to a minute random value instead of 0. Specifically, for example, in the case ofFIG.26, random voltage amplitude in a range of −100 dBFS±10 dB may be used. The phase may also be random. As a result, it is possible to reduce signal distortion that may occur due to waveform discontinuity between a frequency range (−BWtotal/2 to −BWtotal/2+fShift) that is freed by the frequency shift, and a frequency range (−BWtotal/2+fShift to BWtotal/2) that is on a higher frequency side and has finite voltage amplitude from the beginning. <Details of Processing of Corrector> Next, the processing of the corrector110will be described in detail. That is, the process of S1100shown inFIG.4will be described in detail.FIG.27is a flowchart showing an example of the processing of the corrector110according to Embodiment 1.FIG.28shows amplitude correction and phase correction according to Embodiment 1. As S1101, the corrector110sets a frequency f to a lower limit frequency of the total signal bandwidth BWtotal of the input frequency domain signal. As S1102, the corrector110calculates amplitude and phase before correction for a signal voltage (complex number) of the frequency f in the frequency-shifted frequency domain signal fftDataShift(f), for example, by the following Formula 11. Hereinafter, dB unit system amplitude (unit: dBV) before correction may be expressed as preAmpldB, antilogarithm unit system amplitude (unit: V) may be expressed as preAmpl, and a phase before correction may be expressed as prePhaseRad (unit: radian). It should be noted that the fftDataShift(f) is a complex number function that has a voltage dimension like the fftData(f) and is represented by a unit system of V (volt).
Amplitude: preAmpldB=20×log10(preAmpl)=20×log10(abs(fftDataShift(f))); Phase: prePhaseRad=arctan(imaginary(fftDataShift(f))/real(fftDataShift(f))) (Formula 11) Here, abs(x) is a function that returns an absolute value of a complex number x. imaginary(x) is a function that returns a value b of an imaginary part of the complex number x=a+jb. real(x) is a function that returns a value a of a real part of the complex number x=a+jb. As S1103, the corrector110calculates a shifted frequency interval number indicating how many noise frequency intervals fcyc the frequency shift amount fShift corresponds to, for example, by the following Formula 12. Hereinafter, the shifted frequency interval number may be expressed as numCycle. numCycle=round(fShift/fcyc) (Formula 12) Here, round(x) is a function that returns a value obtained by rounding off x. As S1104, the corrector110performs phase correction according to the shifted frequency interval number numCycle on the phase prePhaseRad of the frequency-shifted frequency domain signal fftDataShift(f). For example, the corrector110calculates a corrected phase by the following Formula 13. Hereinafter, the corrected phase may be expressed as postPhaseRad (unit: radian). postPhaseRad=prePhaseRad+numCycle×deltaD (Formula 13) As S1105, the corrector110performs amplitude correction according to the shifted frequency interval number numCycle on the amplitude preAmpldB of the frequency-shifted frequency domain signal fftDataShift(f). For example, the corrector110calculates corrected amplitude by the following Formula 14. Hereinafter, the corrected amplitude may be expressed as postAmpldB (unit: dBV). postAmpldB=preAmpldB+numCycle×GdB (Formula 14) As S1106, the corrector110converts a unit system of the corrected amplitude into an antilogarithm unit system (that is, V (volt)). For example, the corrector110calculates an antilogarithm unit system value (unit: V) of the corrected amplitude by the following Formula 15. Hereinafter, the antilogarithm unit system value of the corrected amplitude may be expressed as postAmpl. postAmpl=10^(postAmpldB/20) (Formula 15) As S1107, the corrector110calculates a corrected signal voltage by the following Formula 16. Hereinafter, the corrected signal voltage corresponds to the postFftDataShift(f). postFftDataShift(f)=postAmpl×(cos(postPhaseRad)+j×sin(postPhaseRad)) (Formula 16) Here, j is an imaginary unit. As S1108, the corrector110determines whether the frequency f is an upper limit frequency of the total signal bandwidth BWtotal. When it is determined that the frequency f is not the upper limit frequency of the total signal bandwidth BWtotal (S1108: NO), as S1109, the corrector110increases the frequency f by the unit frequency unitF of the frequency domain signal fftDataShift(f), and returns to the process of S1102. When it is determined that the frequency f is the upper limit frequency of the total signal bandwidth BWtotal (S1108: YES), correction is completed for all frequency domains of the fftDataShift(f), and thus the corrector110outputs the corrected frequency domain signal as S1110. The corrected frequency domain signal corresponds to the postFftDataShift(f). Through the above processing, the corrector110can calculate the corrected frequency domain signal postFftDataShift(f). The corrected frequency domain signal postFftDataShift(f) output by the corrector110is converted into the noise time domain signal noiseSig(t) by the second converter111.
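Under the simplifying assumptions that the spectrum is held in an array ordered from the lowest to the highest frequency and that fShift and fcyc are expressed in FFT bins, the frequency shift, correction, and return to the time domain (S1000 through S1200) might be sketched in Python as follows; none of the names or numeric values below come from the disclosure.

    import numpy as np

    def synthesize_noise_replica(fft_data: np.ndarray, f_shift_bins: int,
                                 fcyc_bins: int, g_db: float, delta_d: float) -> np.ndarray:
        """Sketch of S1000-S1200: shift, correct amplitude/phase, and return a noiseSig(t)-like replica."""
        # Frequency shift (Formula 10): move the spectrum by f_shift_bins and zero the freed bins.
        shifted = np.zeros_like(fft_data)
        shifted[f_shift_bins:] = fft_data[:fft_data.size - f_shift_bins]

        # Shifted frequency interval number (Formula 12).
        num_cycle = int(round(f_shift_bins / fcyc_bins))

        # Amplitude and phase before correction (Formula 11); tiny offset avoids log10(0) on freed bins.
        pre_ampl_db = 20.0 * np.log10(np.abs(shifted) + 1e-30)
        pre_phase = np.angle(shifted)

        # Corrections proportional to the number of shifted intervals (Formulas 13-15).
        post_phase = pre_phase + num_cycle * delta_d
        post_ampl = 10.0 ** ((pre_ampl_db + num_cycle * g_db) / 20.0)

        # Rebuild the complex spectrum (Formula 16) and convert back to the time domain (second converter).
        post_fft = post_ampl * np.exp(1j * post_phase)
        return np.fft.ifft(post_fft)

    # The noise canceller would then subtract this replica from the delayed input, e.g.
    # out_sig = in_sig_delayed - np.real(noise_replica), with all arguments being assumed values.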
The noise canceller113subtracts the noise time domain signal noiseSig(t) from the input time domain signal inSig(t) output from the delay device112, and outputs the output signal outSig(t). outSig(t) is a time domain signal, andFIG.29shows a result of acquiring a frequency spectrum thereof. In addition, Table 1 shows an example of a result of comparing the input frequency domain signal fftData(f) corresponding to a frequency spectrum of the input time domain signal inSig(t) and the frequency spectrum of the outSig(t), as shown inFIG.3.

TABLE 1
                                      INPUT SIGNAL         OUTPUT SIGNAL
                                      inSig(t) (FIG. 3)    outSig(t) (FIG. 29)    DIFFERENCE
TUNED CHANNEL FREQUENCY AMPLITUDE     −30 dBFS             −30 dBFS               0 dB
NOISE PEAK AMPLITUDE (AT FREQUENCY    −50 dBFS             −68 dBFS               LOWERED BY 18 dB
NEAR TUNED CHANNEL FREQUENCY)

The amplitude of the broadcast electric signal at the tuned channel frequency is −30 dBFS for both the input signal and the output signal, and is substantially not attenuated by the signal processing of the present disclosure. Meanwhile, amplitude of a noise peak having a periodic spectrum pattern on a frequency axis is −50 dBFS in the input signal and −68 dBFS in the output signal at a frequency near the tuned channel frequency. Therefore, it is confirmed that the signal processing method of the present disclosure has an effect of attenuating noise signal amplitude without attenuating the selected frequency signal amplitude. Therefore, for example, when a tuned channel broadcast signal is demodulated, volume of noise included in a sound is lowered, and thus a sound of a broadcast electromagnetic wave can be easily heard. Embodiment 2 Next, Embodiment 2 will be described. In the description of Embodiment 2, the contents already described in Embodiment 1 may be omitted. <Configuration of Signal Processing Device> FIG.30is a block diagram showing an example of a configuration of the signal processing device100according to Embodiment 2. The signal processing device100includes: the A/D converter101; the first converter102; the broadcast signal detector103; the noise frequency interval calculator104; the peak frequency detector105; the frequency shift amount determining unit106; the amplitude correction factor determining unit107; the phase correction factor determining unit108; the frequency shifter109; the corrector110; the second converter111; the delay device112; the noise canceller113; a signal removing unit121; a signal extracting unit122; and a synthesizer123. The A/D converter101, the first converter102, the broadcast signal detector103, the noise frequency interval calculator104, the peak frequency detector105, the frequency shift amount determining unit106, the amplitude correction factor determining unit107, and the phase correction factor determining unit108are the same as those in Embodiment 1, and thus description thereof will be omitted. The signal removing unit121removes a signal in the tuned channel frequency range from the input frequency domain signal fftData(f) input from the first converter102. Hereinafter, a signal obtained by removing the signal in the tuned channel frequency range from the input frequency domain signal fftData(f) may be expressed as fftDataWoRec(f). The signal removing unit121outputs the fftDataWoRec(f) to the synthesizer123. Details of processing of the signal removing unit121will be described later. The signal extracting unit122calculates a frequency range B2that overlaps the tuned channel frequency range of fftData(f) after a frequency shift using the frequency shift amount fShift.
Then, a signal in the frequency range B2is extracted from the input frequency domain signal fftData(f) input from the first converter102. The signal extracting unit122outputs the signal in the frequency range B2to the frequency shifter109. Details of processing of the signal extracting unit122will be described later. The frequency shifter109shifts a frequency of the signal in the frequency range B2input from the signal extracting unit122by the frequency shift amount fShift input from the frequency shift amount determining unit106. Hereinafter, a signal obtained by shifting the frequency of the signal in the frequency range B2may be expressed as fftDataShiftOfRec(f). The frequency shifter109outputs the fftDataShiftOfRec(f) to the corrector110. Details of processing of the frequency shifter109will be described later. The corrector110corrects the amplitude and the phase of the fftDataShiftOfRec(f) in the same manner as in Embodiment 1. Hereinafter, a signal obtained by correcting the amplitude and the phase of the fftDataShiftOfRec(f) may be expressed as postFftDataShiftOfRec(f). The corrector110outputs the postFftDataShiftOfRec(f) to the synthesizer123. Details of processing of the corrector110will be described later. The synthesizer123performs signal synthesis by which the postFftDataShiftOfRec(f) input from the corrector110is added to the fftDataWoRec(f) input from the signal removing unit121. A signal obtained by adding the postFftDataShiftOfRec(f) to the fftDataWoRec(f) corresponds to the postFftDataShift(f) in Embodiment 1. The synthesizer123outputs the postFftDataShift(f) to the second converter111. <Outline of Processing of Signal Processing Device> FIG.31is a flowchart showing an outline of processing of the signal processing device100according to Embodiment 2.FIG.32shows the frequency range B2that falls within the tuned channel frequency range after the frequency shift. The signal processing device100performs the same processes as S100to S900shown inFIG.4. As S2001, the signal extracting unit122extracts the signal in the frequency range B2that overlaps the tuned channel frequency range after the frequency shift from the input frequency domain signal fftData(f). Details of the process of S2001will be described later. As S2002, the frequency shifter109shifts the frequency of the signal in the frequency range B2extracted in S2001by the frequency shift amount fShift and outputs the fftDataShiftOfRec(f). Details of the process of S2002will be described later. As S2100, the corrector110performs amplitude correction and phase correction on the fftDataShiftOfRec(f), and outputs the postFftDataShiftOfRec(f). Details of the process of S2100will be described later. As S2200, the synthesizer123performs a signal synthesis by which the postFftDataShiftOfRec(f) is added to the fftDataWoRec(f), and outputs the postFftDataShift(f). As S2300, the second converter111converts the postFftDataShift(f) into the noise time domain signal noiseSig(t). As S2400, the noise canceller113performs a signal synthesis by which the noise time domain signal noiseSig(t) is subtracted from the input time domain signal inSig(t), and outputs the output signal outSig(t). As a result, the signal processing device100can output the outSig(t) in which noise is cancelled in the tuned channel frequency range. <Details of Processing of Signal Extracting Unit and Frequency Shifter> Next, the processing of the signal extracting unit122and the frequency shifter109will be described in detail.
That is, the processes of S2001and S2002shown inFIG.31will be described in detail.FIG.33is a flowchart showing an example of the processing of the signal extracting unit122and the frequency shifter109. As S2001, when the frequency shift by the frequency shift amount fShift is performed, the signal extracting unit122extracts the signal in the frequency range B2that overlaps the tuned channel frequency range from the input frequency domain signal fftData(f). For example, when the tuned channel frequency range is (−sigRange/2 to +sigRange/2), the signal extracting unit122calculates the frequency range B2as ((−fShift−sigRange/2) to (−fShift+sigRange/2)). Then, the signal extracting unit122extracts the signal included in the frequency range B2from the fftData(f). Here, "extracting a signal" may mean that information on a frequency and a voltage (amplitude and phase) of a signal within a predetermined frequency range is left while information on a frequency and a voltage of a signal whose frequency is outside the predetermined frequency range is removed. As S2002, the frequency shifter109shifts the frequency of the signal included in the frequency range B2extracted in S2001by the frequency shift amount fShift, and outputs the fftDataShiftOfRec(f). A frequency range of the fftDataShiftOfRec(f) may be the same as the tuned channel frequency range, which is (−sigRange/2 to +sigRange/2). <Details of Processing of Corrector> Next, the processing of the corrector110will be described in detail. That is, the process of S2100shown inFIG.31will be described in detail.FIG.34is a flowchart showing an example of the processing of the corrector110according to Embodiment 2.FIG.35shows amplitude correction and phase correction according to Embodiment 2. As S2101to S2106, the corrector110performs a process in which the fftDataShift(f) in S1101to S1106shown inFIG.27is replaced with the fftDataShiftOfRec(f). As S2107, the corrector110calculates the corrected signal voltage postFftDataShiftOfRec(f) by the following Formula 17. postFftDataShiftOfRec(f)=postAmpl×(cos(postPhaseRad)+j×sin(postPhaseRad)) (Formula 17) Then, as S2108, the corrector110determines whether the frequency f is the upper limit frequency of the tuned channel frequency range. When the frequency f is not the upper limit frequency of the tuned channel frequency range (S2108: NO), as S2109, the corrector110increases the frequency f by a unit frequency width of the frequency domain signal fftDataShiftOfRec(f), and returns to the process of S2102. When the frequency f is the upper limit frequency of the tuned channel frequency range (S2108: YES), the corrector110outputs the postFftDataShiftOfRec(f), which is the corrected frequency domain signal, as S2110. Through the above processing, the corrector110can output the postFftDataShiftOfRec(f) that is a signal obtained by correcting the amplitude and the phase of the fftDataShiftOfRec(f). <Details of Processing of Signal Removing Unit and Synthesizer> Next, the processing of the signal removing unit121and the synthesizer123will be described in detail. That is, the process of S2200shown inFIG.31will be described in detail.FIG.36is a flowchart showing an example of the processing of the signal removing unit121and the synthesizer123according to Embodiment 2.FIG.37shows the signal synthesis of the synthesizer123according to Embodiment 2.
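The extraction and shift of S2001and S2002, followed by the tuned-channel-only correction of S2100described above, can be illustrated with a short Python sketch. This is a hypothetical, simplified illustration rather than the patent's implementation: the bin frequencies are assumed to be available as a numpy array freqs aligned with the bins of fft_data, and correct_shifted_spectrum refers to the Embodiment 1 correction sketch given earlier.

import numpy as np

def extract_shift_correct(fft_data, freqs, f_shift, sig_range, fcyc, delta_d, g_db):
    # S2001: keep only the bins in B2 = (-f_shift - sig_range/2) to (-f_shift + sig_range/2)
    in_b2 = (freqs >= -f_shift - sig_range / 2) & (freqs <= -f_shift + sig_range / 2)
    b2_bins = fft_data[in_b2]
    # S2002: after shifting by f_shift these bins occupy the tuned channel
    # frequency range (-sig_range/2 to +sig_range/2)
    shifted_freqs = freqs[in_b2] + f_shift
    # S2100: amplitude/phase correction restricted to the tuned channel range
    # (correct_shifted_spectrum is the Embodiment 1 sketch shown earlier)
    corrected = correct_shifted_spectrum(b2_bins, f_shift, fcyc, delta_d, g_db)
    return shifted_freqs, corrected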
As S2201, as shown inFIG.37, the signal removing unit121removes a signal voltage in the tuned channel frequency range from the input frequency domain signal fftData(f), and outputs the frequency domain signal fftDataWoRec(f). The signal voltage in the tuned channel frequency range (−sigRange/2 to +sigRange/2) of fftDataWoRec(f) is 0. As S2202, as shown inFIG.37, the synthesizer123performs the signal synthesis by which the postFftDataShiftOfRec(f) is added to the fftDataWoRec(f), and outputs the frequency domain signal postFftDataShift(f). Through the above processing, the synthesizer123can output the frequency domain signal postFftDataShift(f) that does not include any tuned channel broadcast signal. That is, signals in the tuned channel frequency range in the postFftDataShift(f) are mainly noise signals. Therefore, as in the case of Embodiment 1, the noise canceller113can remove noise in the tuned channel frequency range. According to Embodiment 2, since the processing of the frequency shift and the correction of amplitude and phase is limited to the tuned channel frequency range, the amount of computation can be reduced as compared with Embodiment 1. In addition, according to Embodiment 2, the amount by which noise voltages outside the tuned channel frequency range are reduced is usually larger than in Embodiment 1. In Embodiment 1, in order to shift the frequency of the input frequency domain signal fftData(f) as a whole, the signal voltage of a freed frequency in the low frequency band is set to 0, for example. Therefore, in Embodiment 1, noise in the low frequency band in which the signal voltage is set to 0 is not reduced in the signal synthesis of the noise canceller113. On the other hand, in Embodiment 2, the signal voltage of the frequency in the low frequency band is not 0. Therefore, in Embodiment 2, the noise in the low frequency band is also reduced in the signal synthesis of the noise canceller113. (Hardware Configuration) FIG.38is a block diagram showing an example of a hardware configuration of the signal processing device100according to the present disclosure. As shown inFIG.38, the signal processing device100may include a processor1001, a memory1002, a signal input interface (I/F)1003, a signal output I/F1004, and a communication device1005. The processor1001may execute a computer program stored in the memory1002so as to implement processes of blocks101to113and121to123included in the signal processing device100described above. The processor1001may be replaced with other terms such as a control unit, a control device, a control circuit, a controller, a central processing unit (CPU), a micro processing unit (MPU), a large scale integration (LSI), an application specific integrated circuit (ASIC), a programmable logic device (PLD), and a field-programmable gate array (FPGA). The memory1002stores computer programs and data handled by the signal processing device100. The memory1002may include a read-only memory (ROM) and a random access memory (RAM). In addition, the memory1002may include a volatile memory and a non-volatile memory. The signal input I/F1003may be connected to the antenna6. The signal input I/F1003may output the received signal input from the antenna6to the processor1001. The signal output I/F1004may be connected to the radio receiver7. The signal output I/F1004may output the output signal input from the processor1001to the radio receiver7. The communication device1005may be connected to a communication network in the vehicle1.
Examples of the communication network include a controller area network (CAN), LIN, and FlexRay. The processor1001may transmit and receive information to and from each device included in the vehicle1through the communication device1005and the communication network. It should be noted that at least a part of the blocks101to113and121to123of the signal processing device100may be implemented as an LSI that is an integrated circuit. The blocks101to123may each be formed into an individual chip, or a part or all of the blocks101to123may be formed into one chip. Here, the term LSI may also be referred to as an IC, a system LSI, a super LSI, or an ultra LSI depending on a degree of integration. Further, if an integrated circuit technology that replaces the LSI emerges due to progress in semiconductor technology or another derivative technology, the technology may naturally be used to integrate the blocks. Although the embodiments have been described with reference to the accompanying drawings, the present disclosure is not limited thereto. It will be apparent to those skilled in the art that various modifications, corrections, substitutions, additions, removals, and equivalents can be conceived within the scope described in the claims, and it is to be understood that such modifications, corrections, substitutions, additions, removals, and equivalents also fall within the technical scope of the present disclosure. In addition, the constituent elements in the above-described embodiments may be combined as desired within a range not departing from the spirit of the invention. The technology of the present disclosure is useful for removing noise included in a signal received by an antenna. This application is based on Japanese Patent Application No. 2021-030458 filed on Feb. 26, 2021, the entire contents of which are incorporated herein by reference. | 78,016 |
11863213 | DETAILED DESCRIPTION Reference now will be made in detail to embodiments, one or more examples of which are illustrated in the drawings. Each example is provided by way of explanation of the embodiments, not limitation of the present disclosure. In fact, it will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments without departing from the scope or spirit of the present disclosure. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that aspects of the present disclosure cover such modifications and variations. Example aspects of the present disclosure are directed to methods for configuring a multi-mode antenna system for use with a multi-channel communication system implementing any suitable protocol (e.g., UHF, VHF, Wifi, cellular, etc.). The multi-mode antenna system can include an antenna and a controller. The antenna can include one or more antenna elements. The controller can be configured to implement methods for configuring the multi-mode antenna system in one of a plurality of operating modes, with each operating mode of the plurality of operating modes having a distinct radiation pattern. In some implementations, a method for configuring the antenna system includes obtaining channel selection data indicating the antenna system is tuned to a first frequency channel of a plurality of frequency channels. The method can include configuring the antenna system in at least one of the plurality of operating modes. The method can include obtaining data indicative of a channel quality indicator for the at least one operating mode. The method can include determining a selected operating mode for the antenna system for the first channel based, at least in part, on the data indicative of the channel quality indicator for the at least one operating mode. The method can include configuring the antenna system in the selected operating mode. In some implementations, a method for configuring the antenna system can include configuring the antenna system in each operating mode of the plurality of operating modes. The method can include obtaining data indicative of a channel quality indicator for each operating mode. The method can include determining configuration data for the antenna system based, at least in part, on the data indicative of the channel quality indicator. The configuration data can link each frequency channel of the plurality of frequency channels with an operating mode of the plurality of operating modes. For instance, the first frequency channel may be linked with a first operating mode, whereas a second frequency channel may be linked with a different operating mode, such as a second or third operating mode. The configuration data can be stored in memory associated with the controller. The method can include obtaining channel selection data indicating the antenna system is tuned to the first frequency channel. The method can include determining a selected operating mode for the antenna system based, at least in part, on the configuration data and the channel selection data. The method can include configuring the antenna system in the selected operating mode. As used in the specification and the appended claims, the terms “first” and “second” may be used interchangeably to distinguish one component from another and are not intended to signify location or importance of the individual components. 
The singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise. The use of the term “about” in conjunction with a numerical value is intended to refer to within ten percent (10%) of the stated numerical value. As used herein, a “multi-mode antenna” refers to an antenna capable of operating in a plurality of modes wherein each mode is associated with a distinct radiation pattern. As used herein, a “receiver” refers to a receiver capable of being selectively tuned to one of a plurality of frequency channels. Additionally, the “receiver” refers to a receiver capable of obtaining one or more metrics while tuned to one of the plurality of frequency channels. In some embodiments, the “receiver” includes an analog front end comprising a RF analog power detector and a plurality of filters. Additionally, in some embodiments, the “receiver” can include a digital back-end comprising a demodulator. It should be appreciated that the digital back end can be in communication with the analog front end via an analog-to-digital converter. It should be appreciated that, in some embodiments, the receiver may not include the demodulator. For instance, the antenna system may not include a demodulator. Alternatively, the demodulator may be a separate component of the antenna system. Referring now toFIG.1, a block diagram of a multi-mode antenna system100is provided according to example embodiments of the present disclosure. As shown, the multi-mode antenna system100can include a multi-mode antenna110. It should be appreciated that the multi-mode antenna system100can be configured to receive radio frequency (RF) signals within any suitable frequency band. The multi-mode antenna system100can further include a controller120. As will be discussed below in more detail, the controller120can implement various operations (e.g., processes) to configure the multi-mode antenna system100for use with a connected device130. Examples of the connected device130can include, without limitation, a media device (e.g., television), a set-top box, a smartphone, or any other suitable computing device.FIG.2illustrates an example embodiment of the multi-mode antenna110according to example embodiments of the present disclosure. The multi-mode antenna110can include a circuit board212(e.g., including a ground plane) and a driven antenna element214disposed on the circuit board212. An antenna volume may be defined between the circuit board212(e.g., and the ground plane) and the driven antenna element214. As shown, the multi-mode antenna110can include a first parasitic element215positioned at least partially within the antenna volume. The multi-mode antenna110can further include a first tuning element216coupled with the first parasitic element215. The first tuning element216can be a passive or active component or series of components and can be configured to alter a reactance on the first parasitic element215either by way of a variable reactance or shorting to ground. It should be appreciated that altering the reactance of the first parasitic element215results in a frequency shift of the antenna. It should also be appreciated that the first tuning element216can include at least one of a tunable capacitor, MEMS device, tunable inductor, switch, a tunable phase shifter, a field-effect transistor, or a diode. In example embodiments, the multi-mode antenna110can include a second parasitic element218disposed adjacent the driven antenna element214and outside of the antenna volume. 
The multi-mode antenna110can further include a second tuning element220. In example embodiments, the second tuning element220can be a passive or active component or series of components and may be configured to alter a reactance on the second parasitic element218by way of a variable reactance or shorting to ground. It should be appreciated that altering the reactance of the second parasitic element218results in a frequency shift of the antenna. It should also be appreciated that the second tuning element220can include at least one of a tunable capacitor, MEMS device, tunable inductor, switch, a tunable phase shifter, a field-effect transistor, or a diode. In example embodiments, operation of the first tuning element216and/or the second tuning element220can shift the radiation pattern characteristics of the driven antenna element214by varying a reactance thereof. Shifting the antenna radiation pattern can be referred to as "beam steering". However, in instances where the antenna radiation pattern includes a null, a similar operation, commonly referred to as "null steering", can be performed to shift the null to an alternative position about the antenna element214(e.g., to reduce interference). Referring now toFIG.3, an example embodiment of the multi-mode antenna system100is provided. As shown, the multi-mode antenna110of the multi-mode antenna system100can include a first antenna element312and a second antenna element314. It should be appreciated, however, that the multi-mode antenna110can include more or fewer antenna elements. In some embodiments, the first antenna element312and the second antenna element314can each have a fixed radiation pattern and/or polarization. For example, an antenna polarization of the first antenna element312can be different than an antenna polarization of the second antenna element314. For instance, the first antenna element312can have a horizontal polarization, whereas the second antenna element314can have a vertical polarization. It should be appreciated, however, that the first antenna element312and the second antenna element314can have any suitable antenna polarization. It should also be appreciated that the first antenna element312and the second antenna element314can each be associated with an independent RF feed. For instance, the first antenna element312can be associated with a first RF feed, whereas the second antenna element314can be associated with a second RF feed that is different than the first RF feed. As shown, the multi-mode antenna system100can include a switching device340coupled to the first antenna element312and the second antenna element314via conductors350and352, respectively. Additionally, the switching device340can be coupled to a receiver360of the multi-mode antenna system100via one or more conductors362. As will be discussed below, the switching device340can be configurable in at least two different states to configure the multi-mode antenna system100in a first operating mode or a second operating mode. When the multi-mode antenna system100is configured in the first operating mode, the switching device340couples the first antenna element312to the receiver360. In contrast, when the multi-mode antenna system100is configured in the second operating mode, the switching device340couples the second antenna element314to the receiver360.
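As an illustration of the relationship between the operating modes and the switch states just described forFIG.3, the following is a hypothetical Python sketch of how a controller might represent the two modes. The names OperatingMode and set_switch_state, and the switch-driver interface itself, are assumptions for illustration only and are not part of the disclosed hardware.

from enum import Enum

class OperatingMode(Enum):
    FIRST = 1   # switching device couples the first antenna element to the receiver
    SECOND = 2  # switching device couples the second antenna element to the receiver

def configure_mode(switch_driver, mode: OperatingMode) -> None:
    # set_switch_state() stands in for whatever control line or register
    # actually selects the state of the switching device
    switch_driver.set_switch_state(1 if mode is OperatingMode.FIRST else 2)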
Furthermore, since the polarization of the first antenna element312is different than the polarization of the second antenna element314, it should be appreciated that the radiation pattern associated with the first operating mode is different than the radiation pattern associated with the second operating mode. In example embodiments, the receiver360can be tuned to any one of a plurality of different frequency channels. For instance, when the receiver360is coupled to the first antenna element312and tuned to a first frequency channel of the plurality of frequency channels, the receiver360can be configured to process one or more RF signals received at the first antenna element312and corresponding to the first frequency channel. In this manner, the receiver360can process the one or more RF signals received at the first antenna element312to obtain data indicative of a channel quality indicator (CQI) for the first operating mode of the multi-mode antenna system100. In contrast, when the receiver360is coupled to the second antenna element314and tuned to the first frequency channel, the receiver360can be configured to process one or more RF signals received at the second antenna element314and corresponding to the first frequency channel. In this manner, the receiver360can process the one or more RF signals received at the second antenna element314to obtain data indicative of a channel quality indicator for the second operating mode of the multi-mode antenna system100. It should be appreciated that examples of data indicative of the CQI can include at least one of a received signal strength indicator (RSSI), a signal to noise ratio (SNR), a signal to interference plus noise ratio (SNIR), a magnitude error ratio (MER), an error vector magnitude (EVM), a bit error rate (BER), a block error rate (BLER), and a packet error rate (PER). In example embodiments, the receiver360can include an analog front end (not shown) configured to process the one or more RF signals. The analog front end can include a RF analog power detector and a plurality of filters. In example embodiments, the receiver360, specifically the analog front end thereof, can obtain a RSSI measurement based, at least in part, on the one or more RF signals. Additionally, in some embodiments, the receiver360can include a digital back end. The digital back end can be in communication with the analog front end via an analog-to-digital (A/D) converter. The A/D converter can be configured to receive analog signals from the analog front end, convert the analog signals into digital signals, and provide the digital signals to the digital back end. In example embodiments, the digital back end can include a demodulator. The demodulator can be configured to demodulate the digital signals received from the A/D converter. In example embodiments, data indicative of the performance of the current operating mode of the antenna system can be obtained based, at least in part, on the demodulated signals output by the demodulator. More specifically, the data indicative of the performance of the current operating mode can include, without limitation, at least one of SNR, SINR, EVN, MER, EVM, BER, BLER, and PER. As shown, the controller120can be in communication with the receiver360. In this manner, the controller120can receive one or more signals382from the receiver360. In example embodiments, the one or more signals382can include data indicative of the channel quality indicator for the first operating mode or the second operating mode. 
For instance, the one or more signals382can include the RSSI measurement obtained based, at least in part, on the RF signal(s) processed via the analog front end of the receiver360. Alternatively and/or additionally, the one or more signals382can include data (e.g., SNR, SINR, EVN, MER, EVM, BER, BLER, PER) obtained based, at least in part, on the demodulated signal(s) output via the digital back end of the receiver360. In example embodiments, the controller120can be in communication with the switching device340. In this manner, the controller120can control operation of the switching device340to configure the multi-mode antenna system100in the first operating mode or the second operating mode. As will be discussed below in more detail, the controller120can be configured to implement any of the processes discussed below with reference toFIGS.6through11to determine a selected operating mode for the multi-mode antenna system100and configure the multi-mode antenna system100in the selected operating mode. Referring now toFIG.4, another example embodiment of the multi-mode antenna system100is provided according to example embodiments of the present disclosure. The multi-mode antenna system100is configured in substantially the same manner as the multi-mode antenna system100ofFIG.3. However, in contrast to the multi-mode antenna system100ofFIG.3, the multi-mode antenna system100ofFIG.4includes a first parasitic element416and a second parasitic element418. As shown, the first parasitic element416can be positioned adjacent the first antenna element312and coupled to ground GND via a first shunt switch417. The second parasitic element418can be positioned adjacent the second antenna element314and coupled to GND via a second shunt switch419. In example embodiments, the first parasitic element416can be configured to modify (e.g., adjust) the radiation pattern associated with the first antenna element312. In this manner, the radiation pattern associated with the first antenna element312can be modified via the first parasitic element416to obtain a third operating mode of the multi-mode antenna system100. Additionally, the second parasitic element418can modify the radiation pattern associated with the second antenna element314. In this manner, the radiation pattern associated with the second antenna element314can be modified via the second parasitic element418to obtain a fourth operating mode of the multi-mode antenna system100. It should be appreciated that the first, second, third, and fourth operating modes of the multi-mode antenna system100can each have a distinct radiation pattern. It should also be appreciated that the first parasitic element416and the second parasitic element418can each be configured to provide any suitable number of operating modes beyond the first operating mode and the second operating mode discussed above with reference toFIG.3. Referring now toFIG.5, yet another example embodiment of the multi-mode antenna system100is provided according to the present disclosure. As shown, the multi-mode antenna system100can include a first switching device510and a second switching device512. It should be appreciated, however, that the multi-mode antenna system100can include more or fewer switching devices. As shown, the first switching device510can be coupled to the first antenna element312and the second antenna element314via a first conductor520and a second conductor522, respectively.
Additionally, the second switching device512can be coupled to the first antenna element312and the second antenna element314via the first conductor520and the second conductor522, respectively. In example embodiments, the multi-mode antenna system100can include a first receiver530and a second receiver532. It should be appreciated, however, that the multi-mode antenna system100can include more or fewer receivers. It should also be appreciated that the first receiver530and the second receiver532can be configured in substantially the same manner as the receiver360(FIG.3) discussed above with reference toFIG.3. For instance, in some embodiments, the first receiver530and the second receiver532can each include an analog front end. In alternative implementations, the first receiver530and the second receiver532can each include the analog front end and a digital back end. More specifically, the digital back end can be in communication with the analog front end via an A/D converter. In some implementations, the first receiver530and the second receiver532can be disposed on the same printed circuit board. In alternative implementations, the first receiver530and the second receiver532can be disposed on separate printed circuit boards. As shown, the first receiver530can be coupled to the first switching device510via a third conductor534. Additionally, the second receiver532can be coupled to the second switching device512via a fourth conductor536. The first receiver530can be selectively coupled to one of the first antenna element312and the second antenna element314via the first switching device510. Additionally, the second receiver532can be selectively coupled to one of the first antenna element312and the second antenna element314via the second switching device512. In example embodiments, the first receiver530is couplable to a first media device540(e.g., television) via any suitable wired or wireless communication link. Additionally, the second receiver532is couplable to a second media device542(e.g., television) via any suitable wired or wireless communication link. In this manner, content (e.g., local programming) associated with RF signals received at one of the first antenna element312and the second antenna element314can be provided to the first media device540(e.g., via the first receiver530) and the second media device542(e.g., via the second receiver532). When the first switching device510is configured in a first state, the first receiver530is coupled to the first antenna element312. In this manner, RF signals received at the first antenna element312can be provided to the first receiver530via the first switching device510. In contrast, when the first switching device510is configured in a second state, the first receiver530can be coupled to the second antenna element314. In this manner, the plurality of RF signals received at the second antenna element314can be provided to the first receiver530via the first switching device510. In some implementations, the first receiver530can be tuned to one of a plurality of frequency channels. For instance, the first receiver530can be tuned to a first frequency channel of the plurality of frequency channels. In this manner, the first receiver530can process one or more RF signals corresponding to the first frequency channel to obtain data indicative of a channel quality indicator for one of the operating modes of the multi-mode antenna system100.
For instance, if the first receiver530is coupled to the first antenna element312via the first switching device510, the first receiver530can obtain data indicative of a channel quality indicator for the first operating mode of the multi-mode antenna system100. Alternatively, if the first receiver530is coupled to the second antenna element314via the first switching device510, the first receiver530can obtain data indicative of a channel quality indicator for the second operating mode of the multi-mode antenna system100. Examples of the one or more metrics can include, without limitation, RSSI, SNR, SINR, MER, EVM, BER, BLER, and PER. When the second switching device512is configured in a first state, the second receiver532can be coupled to the first antenna element312. In this manner, the plurality of RF signals received at the first antenna element312can be provided to the second receiver532via the second switching device512. When the second switching device512is configured in a second state, the second receiver532can be coupled to the second antenna element314. In this manner, the plurality of RF signals received at the second antenna element314can be provided to the second receiver532via the second switching device512. In some implementations, the second receiver532can be tuned to one of the plurality of different frequency channels. For instance, the second receiver532can be tuned to a second frequency channel of the plurality of frequency channels. In this manner, the second receiver532can process one or more RF signals corresponding to the second channel to obtain data indicative of a channel quality indicator for one of the operating modes of the multi-mode antenna system100. For instance, if the second receiver532is coupled to the first antenna element312via the second switching device512, the second receiver532can obtain data indicative of a channel quality indicator for the first operating mode of the multi-mode antenna system100. Alternatively, if the second receiver532is coupled to the second antenna element314via the second switching device512, the second receiver532can obtain data indicative of a channel quality indicator for the second operating mode of the multi-mode antenna system100. In example embodiments, the multi-mode antenna system100can include a first low noise amplifier550and a second low noise amplifier552. It should be appreciated that the multi-mode antenna system100can include more or fewer low noise amplifiers. As shown, the first low noise amplifier550can be coupled between the switching devices510,512and the antenna elements312,314. In this manner, the first low noise amplifier550can amplify RF signals received at the first antenna element312and the second antenna element314, respectively. Additionally, the second low noise amplifier552can be coupled between the switching devices510,512and the antenna elements312,314. In this manner, the second low noise amplifier552can amplify RF signals received at the first antenna element312and second antenna element314, respectively. Referring now toFIG.6, a flow diagram of a method600for configuring a multi-mode antenna system is provided according to example embodiments of the present disclosure. It should be appreciated that the method600can be implemented by the controller120(FIG.1) of the multi-mode antenna system100(FIG.1).FIG.6depicts steps performed in a particular order for purposes of illustration and discussion.
Those of ordinary skill in the art, using the disclosures provided herein, will understand that various steps of the method600may be adapted, modified, rearranged, performed simultaneously or modified in various ways without deviating from the scope of the present disclosure. At (602), the method600includes obtaining channel selection data indicating the multi-mode antenna system is tuned to a first channel of a plurality of channels. For example, the controller can obtain the channel selection data from a receiver of the antenna system. At (604), the method600includes configuring the multi-mode antenna system in at least one operating mode of a plurality of operating modes. In example embodiments, each operating mode of the plurality of operating modes can have a distinct radiation pattern. At (606), the method600includes obtaining data indicative of a channel quality indicator for the at least one operating mode. In example embodiments, data indicative of the channel quality indicator can include at least one of RSSI, SNR, SNIR, MER, EVM, BER, a BLER, and PER. At (608), the method600includes determining a selected operating mode for the multi-mode antenna system for the first channel of the plurality of channels based, at least in part, on the data obtained at (606). At (610), the method600includes configuring the multi-mode antenna system in the selected operating mode when the multi-mode antenna system is tuned to the first channel of the plurality of channels. Referring now toFIG.7, a flow diagram of a method700for configuring a multi-mode antenna system is provided according to example embodiments of the present disclosure. It should be appreciated that the method700can be implemented by a controller of the multi-mode antenna system.FIG.7depicts steps performed in a particular order for purposes of illustration and discussion. Those of ordinary skill in the art, using the disclosures provided herein, will understand that various steps of the method700may be adapted, modified, rearranged, performed simultaneously or modified in various ways without deviating from the scope of the present disclosure. At (702), the method700includes obtaining channel selection data indicating the multi-mode antenna system is tuned to a first channel of a plurality of channels. Additionally, a channel counter variable, n, is assigned a numerical value of 1. At (704), the method700includes configuring the multi-mode antenna system in a first operating mode of a plurality of operating modes. Additionally, a mode counter variable, m, can be assigned a numerical value of 1. At (706), the method700includes obtaining data indicative of a channel quality indicator (CQI) for mode m. In example embodiments, the data indicative of the CQI can include at least one of RSSI, SNR, SINR, MER, EVM, BER, BLER, and PER. After obtaining data indicative of the CQI for mode m, the method700proceeds to (708). At (708), the method700includes determining whether the CQI for mode m is below a threshold value. In example embodiments, the multi-mode antenna system can be configured for use with one or more media devices (e.g., television), and the threshold value can correspond to a value needed to view content (e.g., local news) associated with the first channel of the plurality of channels. More specifically, the threshold value can correspond to a predetermined value based, at least in part, on the data indicative of the CQI. 
For example, if the data indicative of the CQI includes SNR, the threshold value can correspond to a predetermined value based, at least in part, on the SNR. If the CQI for mode m is below the threshold value, the method700proceeds to (710). Otherwise, the method700proceeds to (718). At (710), the method700includes incrementing the mode counter variable, m. Once the mode counter variable has been incremented, the method700proceeds to (712). At (712), the method700includes determining whether the present value of the mode counter variable, m, is less than the total number of operating modes, M, in which the multi-mode antenna system can be configured. If the present value of the mode counter variable, m, is equal to M, the method proceeds to (714). Otherwise, the method700proceeds to (716). At (714), the method700includes generating a notification. In example embodiments, the notification can indicate that none of the plurality of operating modes are optimal or near optimal when the multi-mode antenna system is tuned to the first channel. Additionally and/or alternatively, the method700can revert to (704). At (716), the method700includes reconfiguring the antenna system based, at least in part, on the present value of the mode counter variable, m. After reconfiguring the antenna system at (716), the method reverts to (706). It should be appreciated that, in some implementations, multiple iterations of steps (706), (708), (710), (712), and (716) may be performed before determining that the CQI for one of the plurality of operating modes equals or exceeds the threshold value. At (718), the method700includes redetermining whether the CQI of the selected operating mode for the multi-mode antenna system is greater than or equal to the threshold value. If the CQI of the selected operating mode is now below the threshold value, the method700reverts to (704). Otherwise, the method700proceeds to (720). At (720), the method700includes determining whether the multi-mode antenna system is still tuned to the first channel. If the multi-mode antenna system is still tuned to the first channel, the multi-mode antenna system remains configured in the selected operating mode and the method700reverts to (718). If, however, the multi-mode antenna system is no longer tuned to the first channel, the method700reverts to (704). Referring now toFIG.8, a flow diagram of a method800for configuring a multi-mode antenna system is provided according to example embodiments of the present disclosure. It should be appreciated that the method800can be implemented by the controller120(FIG.1) of the multi-mode antenna system100(FIG.1).FIG.8depicts steps performed in a particular order for purposes of illustration and discussion. Those of ordinary skill in the art, using the disclosures provided herein, will understand that various steps of the method800may be adapted, modified, rearranged, performed simultaneously or modified in various ways without deviating from the scope of the present disclosure. At (802), the method800includes obtaining channel selection data indicating the multi-mode antenna system is tuned to a first channel of a plurality of channels. In example embodiments, the channel selection data can be obtained from a receiver of the antenna system that is tuned to the first frequency channel. At (804), the method800includes configuring the multi-mode antenna system in each operating mode of a plurality of operating modes.
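Method700described above amounts to a first-acceptable-mode search: modes are tried in order until one meets the CQI threshold, after which the selection is re-checked while the channel stays tuned. The following Python sketch is a simplified, hypothetical illustration of steps (704) through (716); get_cqi() and configure_mode() stand in for the receiver measurement and the switch/tuning control described earlier and are not part of the disclosure.

def select_mode_by_threshold(num_modes, threshold, configure_mode, get_cqi):
    # Steps (704)-(716): try each operating mode until the CQI meets the threshold.
    for m in range(1, num_modes + 1):
        configure_mode(m)            # (704)/(716): configure/reconfigure mode m
        if get_cqi(m) >= threshold:  # (706)/(708): measure the CQI and compare
            return m                 # acceptable mode found; stay in it
    # (714): no mode met the threshold for the tuned channel
    return None

Steps (718) and (720) would then periodically re-run get_cqi() on the returned mode and restart the search if the CQI drops below the threshold or the tuned channel changes.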
In example embodiments, each operating mode of the plurality of operating modes can have a distinct radiation pattern. At (806), the method800includes obtaining data indicative of a channel quality indicator for each operating mode. In example embodiments, the data indicative of the channel quality indicator can include at least one of RSSI, SNR, SNIR, MER, EVM, BER, BLER, and PER. At (808), the method800includes determining a selected operating mode for the antenna system for the first channel based, at least in part, on the data indicative of the channel quality indicator for each operating mode. At (810), the method800includes configuring the multi-mode antenna system in the selected operating mode when the multi-mode antenna system is tuned to the first channel of the plurality of channels. Referring now toFIG.9, a flow diagram of a method900for configuring a multi-mode antenna system is provided according to example embodiments of the present disclosure. It should be appreciated that the method900can be implemented by the controller120(FIG.1) of the multi-mode antenna system100(FIG.1).FIG.9depicts steps performed in a particular order for purposes of illustration and discussion. Those of ordinary skill in the art, using the disclosures provided herein, will understand that various steps of the method900may be adapted, modified, rearranged, performed simultaneously or modified in various ways without deviating from the scope of the present disclosure. At (902), the method900includes obtaining channel selection data indicating the multi-mode antenna system is tuned to a first channel of a plurality of channels. Additionally, a channel counter variable, n, can be assigned a numerical value of 1. At (904), the method900includes configuring the multi-mode antenna system in one of a plurality of operating modes. Additionally, a mode counter variable, m, can be assigned a numerical value of 1. At (906), the method900includes obtaining data indicative of a channel quality indicator for the current operating mode. For example, the data indicative of the CQI can include at least one of RSSI, SNR, SINR, MER, EVM, BER, BLER, and PER. After determining the CQI for mode m, the method900proceeds to (908). At (908), the method900includes determining a mode score, Sm, for the current operating mode of the multi-mode antenna system. In example embodiments, the mode score Smcan be determined as shown in the below Equation: Sm=wn×CQIm,n (Equation) In the Equation, wncorresponds to the weighting factor assigned to channel n. Additionally, CQIm,ncorresponds to the channel quality indicator for mode m when the multi-mode antenna system is tuned to channel n. After determining the mode score, Sm, for the current operating mode of the multi-mode antenna system, the method900proceeds to (910). At (910), the method900includes determining whether the present value of the mode counter variable, m, is less than the total number of operating modes, M, of the antenna system. If the present value of the mode counter variable, m, is less than M, the method proceeds to (912). Otherwise, the method900proceeds to (916). At (912), the method900includes incrementing the mode counter variable, m. Once the mode counter variable has been incremented at (912), the method900proceeds to (914). At (914), the method900includes reconfiguring the multi-mode antenna system based, at least in part, on the present value of the mode counter variable, m. After reconfiguring the antenna system at (914), the method reverts to (906).
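The score-based selection of method900(steps904through918) can be illustrated with the following Python sketch. This is a hypothetical, simplified rendering of the Equation above: each mode m is configured in turn, its channel quality indicator CQIm,n is measured for the tuned channel n, the score Sm=wn×CQIm,n is computed with the channel weighting factor wn, and the mode with the highest score is selected. configure_mode() and get_cqi() are assumed interfaces, not part of the disclosure.

def select_mode_by_score(num_modes, channel_weight, configure_mode, get_cqi):
    # Steps (904)-(916): score every operating mode for the tuned channel
    # and pick the mode with the highest score Sm = wn x CQI(m, n).
    scores = {}
    for m in range(1, num_modes + 1):
        configure_mode(m)                        # (904)/(914)
        scores[m] = channel_weight * get_cqi(m)  # (906)/(908)
    best_mode = max(scores, key=scores.get)      # (916)
    configure_mode(best_mode)                    # (918)
    return best_mode, scores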
It should be appreciated that, in some implementations, multiple iterations of steps (906), (908), (910), (912), and (914) may be performed until the present value of the mode counter variable, m, is equal to the total number of operating modes, M. At (916), the method900includes determining a selected operating mode for the multi-mode antenna system for channel n based, at least in part, on the mode score (Sm) determined for each operating mode of the plurality of operating modes. In example embodiments, the selected operating mode can correspond to the operating mode having the highest mode score, Sm. After determining the selected operating mode for the multi-mode antenna system for channel n, the method900proceeds to (918). At (918), the method900includes configuring the multi-mode antenna to operate in a selected operating mode. In some implementations, the selected operating mode can correspond to the operating mode (e.g., 1 through M) with the highest mode score (Sm) determined at (908). Once the multi-mode antenna is configured in the selected operating mode, the method proceeds to (920). At (920), the method900includes determining whether the multi-mode antenna system is still tuned to the first channel. If the multi-mode antenna system is no longer tuned to the first channel, the method900proceeds to (902). If, however, the multi-mode antenna system is still tuned to the first channel, the multi-mode antenna system remains in the selected operating mode and the method900proceeds to (922). At (922), the method900includes entering a standby mode for a predetermined amount of time. When the predetermined amount of time lapses, the method900can revert to (920) to determine whether the multi-mode antenna system is still tuned to the first channel. Referring now toFIG.10, a flow diagram of a method1000for configuring a multi-mode antenna system is provided according to example embodiments of the present disclosure. It should be appreciated that the method1000can be implemented by the controller120(FIG.1) of the multi-mode antenna system100(FIG.1).FIG.10depicts steps performed in a particular order for purposes of illustration and discussion. Those of ordinary skill in the art, using the disclosures provided herein, will understand that various steps of the method1000may be adapted, modified, rearranged, performed simultaneously or modified in various ways without deviating from the scope of the present disclosure. At (1002), the method1000includes configuring the multi-mode antenna system in each operating mode of a plurality of operating modes. In example embodiments, each operating mode of the plurality of operating modes can have a distinct radiation pattern. At (1004), the method1000includes obtaining data indicative of a CQI for each operating mode. In example embodiments, the data indicative of the CQI can include at least one of RSSI, SNR, SNIR, MER, EVM, BER, BLER, and PER. At (1006), the method1000includes determining configuration data for the multi-mode antenna system for each channel of a plurality of channels based, at least in part, on the data indicative of the CQI for each operating mode. As illustrated in the below Table, the configuration data can link each channel of the plurality of channels with one of the plurality of operating modes.

TABLE
Configuration Data for Multi-Mode Antenna System
Channel    Operating Mode
1          First
2          First
3          Second
4          Third

In example embodiments, configuration data can include data indicative of the CQI for the selected operating mode.
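The configuration data of the Table above is essentially a lookup from frequency channel to preferred operating mode, optionally annotated with the CQI values observed for each mode. A minimal Python sketch of such a structure, with hypothetical names chosen for illustration only, might look as follows.

from dataclasses import dataclass, field

@dataclass
class ChannelConfig:
    selected_mode: int                               # operating mode linked to the channel
    cqi_by_mode: dict = field(default_factory=dict)  # optional CQI per operating mode

# Example matching the Table above: channel -> preferred operating mode
configuration_data = {
    1: ChannelConfig(selected_mode=1),
    2: ChannelConfig(selected_mode=1),
    3: ChannelConfig(selected_mode=2),
    4: ChannelConfig(selected_mode=3),
}

def lookup_mode(channel: int) -> int:
    # Steps (1008)-(1010): given channel selection data, return the linked mode
    return configuration_data[channel].selected_mode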
For instance, the configuration data for channel one shown in the above Table can include the data indicative of the CQI for the first operating mode. Additionally, the configuration data can include data indicative of the CQI for other operating modes (e.g., second, third, etc.) of the antenna system that were not selected as the operating mode for the antenna system when tuned to channel one. At (1008), the method1000includes obtaining channel selection data indicating the multi-mode antenna system is tuned to a first channel of a plurality of channels. At (1010), the method1000includes determining a selected operating mode for the antenna system based, at least in part, on the channel selection data obtained at (1008) and the configuration data determined at (1006). Referring now toFIG.11, a flow diagram of a method1100for determining configuration data for a multi-mode antenna system is provided according to example embodiments of the present disclosure. It should be appreciated that the method1100can be implemented by the controller120(FIG.1) of the multi-mode antenna system100(FIG.1).FIG.11depicts steps performed in a particular order for purposes of illustration and discussion. Those of ordinary skill in the art, using the disclosures provided herein, will understand that various steps of the method1100may be adapted, modified, rearranged, performed simultaneously or modified in various ways without deviating from the scope of the present disclosure. At (1102), the method1100includes tuning the multi-mode antenna system to a first channel of a plurality of frequency channels. Additionally, a channel counter variable, n, can be assigned a numerical value of 1. At (1104), the method1100includes configuring the multi-mode antenna system in one of a plurality of operating modes. Additionally, a mode counter variable, m, can be assigned a numerical value of 1. At (1106), the method1100includes obtaining data indicative of a channel quality indicator (CQI) for mode m. In example embodiments, the data indicative of the CQI can include at least one of RSSI, SNR, SINR, MER, EVM, BER, BLER, and PER. After obtaining data indicative of the CQI for mode m, the method1100proceeds to (1108). At (1108), the method1100includes determining whether the present value of the mode counter variable, m, is less than the total number of operating modes, M. If the present value of the mode counter variable, m, is less than M, the method1100proceeds to (1110). Otherwise, the method1100proceeds to (1114). At (1110), the method1100includes incrementing the mode counter variable, m. Once the mode counter variable has been incremented, the method1100proceeds to (1112). At (1112), the method1100includes reconfiguring the antenna system based, at least in part, on the present value of the mode counter variable, m. After reconfiguring the antenna system at (1112), the method1100reverts to (1106). It should be appreciated that, in some implementations, multiple iterations of steps (1106), (1108), (1110), and (1112) may be performed until the present value of the mode counter variable, m, is equal to the total number of operating modes, M. At (1114), the method1100includes generating configuration data for the multi-mode antenna system based, at least in part, on the data obtained at (1106) for each operating mode of the plurality of operating modes.
In example embodiments, the controller can compare the obtained data indicative of the CQI for each operating mode of the multi-mode antenna system to generate the configuration data. For example, if the data indicative of the CQI for a first operating mode of the multi-mode antenna system is better than the data indicative of the CQI obtained for every other operating mode of the multi-mode antenna system, then the controller can generate configuration data linking channel n (e.g., the first channel) to the first operating mode of the multi-mode antenna. In this manner, the multi-mode antenna system can be configured in the first operating mode when tuned to the first channel of the plurality of channels. At (1116), the method1100includes storing the configuration data. In example embodiments, configuration data can be stored in one or more memory devices (FIG.12) associated with the controller of the antenna system. It should be appreciated, however, that the configuration data can be stored at any suitable location. Once the configuration data is stored, the method1100proceeds to (1118). At (1118), the method1100includes comparing the present value of the channel counter variable, n, against a total number of channels, N. If the present value of the channel counter variable, n, is less than the total number of channels N, then the method1100proceeds to (1120). Otherwise, the method1100continues to (1124). At (1120), the method1100includes incrementing the present value of the channel counter variable, n. After the channel counter variable, n, is incremented at (1120), the method1100proceeds to (1122). At (1122), the method1100includes tuning the multi-mode antenna system based, at least in part, on the present value of the channel counter variable, n. After tuning the multi-mode antenna system at (1122), the method1100reverts to (1104). It should be appreciated that multiple iterations of steps (1104) through (1122) may be performed until the present value of the channel counter variable, n, is equal to the total number of channels, N. At (1124), the method1100may continue. In example embodiments, the method1100may enter a wait period at (1124) until the channel selection data is obtained. During the wait period, the method1100may revert to (1102). In this manner, configuration data can be updated to account for various conditions (e.g., weather, interference, etc.) affecting performance of the multi-mode antenna system. In some implementations, two or more sets of configuration data can be generated and/or updated at various portions of the day. For example, configuration data can include a first set of configuration data corresponding to a first portion of the day (e.g., morning), a second set of configuration data corresponding to a second portion of the day (e.g., afternoon), and a third set of configuration data corresponding to a third portion (e.g., evening) of the day. In example embodiments, the controller can be configured to access one of the first set of configuration data, the second set of configuration data, and the third set of configuration data based, at least in part, on a time of day at which channel selection data is obtained. For instance, if channel selection data is obtained during the morning (e.g., between 6 AM and noon), the controller can be configured to determine a selected operating mode based, at least in part, on the channel selection data and the first set of configuration data.
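Method1100and the time-of-day variant just described can be summarized in the following Python sketch. It is a hypothetical illustration only: tune_channel(), configure_mode(), and get_cqi() stand in for the tuner, the switch/tuning control, and the receiver measurements described earlier, and the morning/afternoon/evening boundaries simply follow the example times given above.

from datetime import datetime

def build_configuration_data(channels, num_modes, tune_channel, configure_mode, get_cqi):
    # Steps (1102)-(1122): for every channel, measure the CQI of every mode
    # and link the channel to the best-performing mode.
    config = {}
    for n in channels:
        tune_channel(n)
        cqi_by_mode = {}
        for m in range(1, num_modes + 1):
            configure_mode(m)
            cqi_by_mode[m] = get_cqi(m)
        config[n] = max(cqi_by_mode, key=cqi_by_mode.get)  # (1114)
    return config  # (1116): stored in memory associated with the controller

def pick_config_set(morning_cfg, afternoon_cfg, evening_cfg, now=None):
    # Time-of-day selection among three configuration data sets
    hour = (now or datetime.now()).hour
    if 6 <= hour < 12:
        return morning_cfg    # e.g., 6 AM to noon
    if 12 <= hour < 17:
        return afternoon_cfg  # e.g., noon to 5 PM
    return evening_cfg        # e.g., 5 PM to 6 AM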
Alternatively, if the channel selection data is obtained during the afternoon (e.g., between noon and 5 PM), the controller can be configured to determine the selected operating mode based, at least in part, on the channel selection data and the second set of configuration data. Furthermore, if the channel selection data is obtained during the evening (e.g., between 5 PM and 6 AM), the controller can be configured to determine the selected operating mode based, at least in part, on the channel selection data and the third set of configuration data. It should be appreciated that the first, second, and third sets of configuration data may differ from one another. For instance, the first set of configuration data may link a first channel to a first operating mode. In contrast, the second set of configuration data may, due to weather conditions, indicate that a second operating mode of the antenna system is better than the first operating mode when the antenna system is tuned to the first channel. As such, the second set of configuration data may link the first channel with the second operating mode. In some embodiments, the wait period at (1124) expires when the controller receives channel selection data indicating the antenna system is tuned to one of the plurality of channels. However, in some implementations, multiple iterations of steps (1102) through (1122) can be performed even after expiration of the wait period. More specifically, data indicative of the channel quality indicator for each of the operating modes can be obtained via one or more idle receivers (e.g., receivers not tuned to one of the plurality of channels). For example, if the first receiver of the antenna system is tuned to one of the plurality of channels, the controller may continue to obtain data indicative of the channel quality indicator from a second receiver of the antenna system that is not currently tuned to one of the plurality of channels. In this manner, data indicative of the channel quality indicator for each operating mode can be reobtained and used to update the selected operating mode of the antenna system based on time-dependent changes (e.g., noise and interference) in the selected frequency channel. Referring now toFIG.12, a block diagram of the controller120of the multi-mode antenna system100(FIG.1) is provided according to example embodiments of the present disclosure. As shown, the controller120can include one or more processors122configured to perform a variety of computer-implemented functions (e.g., performing the methods, steps, calculations and the like disclosed herein). As used herein, the term “processor” refers not only to integrated circuits referred to in the art as being included in a computer, but also refers to a controller, a microcontroller, a microcomputer, a programmable logic controller (PLC), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), and other programmable circuits. As shown, the controller120can include a memory device124. Examples of the memory device124can include computer-readable media including, but not limited to, non-transitory computer-readable media, such as RAM, ROM, hard drives, flash drives, or other suitable memory devices. The memory device124can store information accessible by the one or more processors122, including computer-readable instructions126that can be executed by the one or more processors122.
The computer-readable instructions126can be any set of instructions that, when executed by the one or more processors122, cause the processor(s)122to perform operations. The computer-readable instructions126can be software written in any suitable programming language or can be implemented in hardware. The memory device124may also store data accessible by the one or more processors122, such as configuration data for the multi-mode antenna system100(FIG.1). While the present subject matter has been described in detail with respect to specific example embodiments thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, the scope of the present disclosure is by way of example rather than by way of limitation, and the subject disclosure does not preclude inclusion of such modifications, variations and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art. | 49,720 |
11863214 | DETAILED DESCRIPTION OF EMBODIMENTS The technical solutions in the embodiments of the present disclosure are described below clearly with reference to the accompanying drawings in the embodiments of the present disclosure. Apparently, the described embodiments are some rather than all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure shall fall within the protection scope of the present disclosure. At present, a network state of playing games in landscape mode is one of the key complaints, which can be effectively solved by four-antenna switching. A new switch needs to be introduced for four-antenna switching. At present, according to switching rules of a four-antenna switching algorithm, a selected radio frequency architecture is as follows: first, a double pole double throw (DPDT) switch is combined with a three pole three throw (3P3T) switch; or second, only a four pole four throw (4P4T) switch is used. However, the parameters of 4P4T switches available in the industry have not yet met application requirements. In addition, in terms of switch architecture, an insertion loss of a 4P4T switch is similar to that of a 3P3T switch and a DPDT switch, and a path insertion loss will be greater than that of a 3P3T switch. However, if two 3P3T switches are used to build the architecture, the lines will be extremely complicated, which will lead to a serious increase in wiring loss and device insertion loss. Embodiments of the present disclosure provide an antenna switching circuit. As shown inFIG.1andFIG.2, the antenna switching circuit includes: a first switching circuit, where the first switching circuit is electrically connected with at least two first radio frequency paths and at least two first antennas4, respectively, and the first switching circuit includes at least one first state; in a first state, one of the first radio frequency paths is connected with one of the first antennas4, and an operating band of the first radio frequency path is a first frequency band; and a second switching circuit2, where the second switching circuit2is electrically connected with at least two second radio frequency paths and at least two second antennas5, respectively, and the second switching circuit2includes at least one second state; in a second state, one of the second radio frequency paths is connected with one of the second antennas5, and an operating band of the second radio frequency path is a second frequency band, where the first frequency band is lower than the second frequency band, that is, all frequency points of the first frequency band are smaller than a minimum frequency point of the second frequency band. Alternatively, the first frequency band is a 4G frequency band, and the second frequency band is a SUB 6G frequency band. In other words, the first frequency band is 698 MHz-960 MHz, and the second frequency band is 1710 MHz-5000 MHz. Alternatively, the first frequency band is 960 MHz-1710 MHz (including 1710 MHz), and the second frequency band is 1710 MHz-5000 MHz (excluding 1710 MHz). Operating frequency points of any two of the at least two first radio frequency paths are different, and operating frequency points of any two of the at least two second radio frequency paths are different.
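For illustration only, the band split described above can be sketched as follows, using the first alternative band definition (698 MHz-960 MHz for the first frequency band and 1710 MHz-5000 MHz for the second frequency band); the function is a sketch and not part of the disclosed circuit.

```python
# Illustrative only: route a signal to the first or second switching circuit
# based on its operating band, using the first alternative band definition
# given above (4G low band 698-960 MHz, SUB 6G band 1710-5000 MHz).

def route_to_switching_circuit(freq_mhz):
    if 698 <= freq_mhz <= 960:
        return "first switching circuit (first radio frequency paths / first antennas)"
    if 1710 <= freq_mhz <= 5000:
        return "second switching circuit (second radio frequency paths / second antennas)"
    raise ValueError("frequency outside the bands handled by this antenna switching circuit")
```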
In addition, the first switching circuit is configured to conduct a path between any of the at least two first radio frequency paths and any of the at least two first antennas4, and the second switching circuit2is configured to conduct a path between any of the at least two second radio frequency paths and any of the at least two second antennas5. Therefore, according to this embodiment of the present disclosure, if a path between a first antenna4and a first radio frequency path operating in a low frequency band and a path between a second antenna5and a second radio frequency path operating in a high frequency band are split, the first radio frequency path and the second radio frequency path will not be combined at a front end, thus reducing an insertion loss caused by a combiner. Furthermore, paths between the first radio frequency path and the second radio frequency path operating in different frequency bands and the antenna are split, so that the lines of the antenna switching circuit are simpler. Alternatively, the first switching circuit includes: a first control switch1, where the first control switch1includes two first input terminals, two first output terminals, and a first connection switch. The first connection switch can switch a connection state between any of the two first input terminals and any of the two first output terminals. The two first input terminals are respectively electrically connected with two of the first radio frequency paths in one-to-one correspondence. The two first output terminals are respectively electrically connected with two of the first antennas4in one-to-one correspondence. As shown inFIG.1, the first control switch1is a DPDT switch. The first control switch1can conduct a path between any of two of the first radio frequency paths and any of two of the first antennas4. Alternatively, if a first radio frequency path operates in a low band (LB), switching between two antennas in the low frequency band may be implemented through the first control switch1. Alternatively, the second switching circuit2includes: a second control switch201, where the second control switch201includes two second input terminals, two second output terminals, and a second connection switch, where the second connection switch can switch a connection state between any of the two second input terminals and any of the two second output terminals; and a third control switch202, where the third control switch202includes three third input terminals, three third output terminals, and a third connection switch, where the third connection switch can switch a connection state between any of the three third input terminals and any of the three third output terminals. The two second input terminals and two first target input terminals are electrically connected with four of the second radio frequency paths in one-to-one correspondence. A first target output terminal and the three third output terminals are electrically connected with four of the second antennas5in one-to-one correspondence, and a second target output terminal is electrically connected with a second target input terminal. The second target input terminal is one of the three third input terminals, and the first target input terminal is one of the three third input terminals except the second target input terminal. The first target output terminal is one of the two second output terminals, and the second target output terminal is the other of the two second output terminals.
As shown inFIG.2, a second control switch201is a DPDT switch, and a third control switch202is a 3P3T switch. The second control switch201and the third control switch202cooperate with each other to conduct a path between any of four of the second radio frequency paths and any of four of the second antennas5. Alternatively, a second radio frequency path may operate in a medium-high band (MHB). Therefore, four-antenna switching in the medium-high band may be implemented through the cooperation of the second control switch201and the third control switch202. Alternatively, as shown inFIG.1, the antenna switching circuit further includes:a first combiner3, where the first combiner3includes a fourth input terminal, a fifth input terminal, and a fourth output terminal. The fourth input terminal is electrically connected with the first switching circuit. The fourth output terminal is electrically connected with a first target antenna. The first target antenna is one of the at least two first antennas4. The fifth input terminal is electrically connected with the second switching circuit2. The second switching circuit2further includes a third state. In the third state, one of the at least two second radio frequency paths is connected with the first target antenna. The combiner is configured to combine input multi-band signals and output them together. Alternatively, as shown inFIG.1, the first combiner3combines input signals in LB and MHB, and then is connected with a first antenna4by a feeder, which not only saves one feeder and reduces the number of second antennas5, but also avoids switching between different antennas. Alternatively, when a path between a first radio frequency path operating in a first frequency band and the first antenna4connected with the first combiner3is conducted, the first antenna4operates in the first frequency band; when a path between a second radio frequency path operating in a second frequency band and the first antenna4connected with the first combiner3is conducted, the first antenna4operates in the second frequency band, that is, the first combiner3may be directly controlled to output a signal in the first frequency band or a signal in the second frequency band, so that one antenna may operate in different frequency bands, avoiding switching between different antennas when signals in different frequency bands need to be transmitted. Alternatively, the antenna switching circuit further includes:a second combiner6, where the second combiner6includes a sixth input terminal, a seventh input terminal, and a fifth output terminal. The sixth input terminal is electrically connected with the first switching circuit. The fifth output terminal is electrically connected with a second target antenna. The second target antenna is one of the at least two second antennas5. The seventh input terminal is electrically connected with the second switching circuit2. The first switching circuit further includes a fourth state. In the fourth state, one of the at least two first radio frequency paths is connected with the second target antenna. The combiner may further be disposed between the second antenna5and the second radio frequency path. For example, as shown inFIG.2, the second combiner6combines input signals in LB and MHB, and then is connected with a second antenna5by a feeder, which not only saves one feeder and reduces the number of first antennas4, but also avoids switching between different antennas. 
Alternatively, when a path between a first radio frequency path operating in a first frequency band and a second antenna5connected with the second combiner6is conducted, the second antenna5operates in the first frequency band; when a path between a second radio frequency path operating in a second frequency band and the second antenna5connected with the second combiner6is conducted, the second antenna5operates in the second frequency band, that is, the second combiner6may be directly controlled to output a signal in the first frequency band or a signal in the second frequency band, so that one antenna may operate in different frequency bands, avoiding switching between different antennas when signals in different frequency bands need to be transmitted. Alternatively, the two first input terminals are electrically connected with a first terminal and a second terminal, respectively. The first terminal is a common terminal for primary transmitting and receiving of one of the first radio frequency paths, and the second terminal is a diversity receiving terminal of one of the first radio frequency paths. Alternatively, because the first radio frequency path may operate in an LB, two terminals electrically connected with the first control switch1may be a low band transceiver terminal (LB TRX) and a low band diversity receiving terminal (LB DRX). In addition, for example, as shown inFIG.1, a first port K and a second port L form two first input terminals of the first control switch1, and a third port M and a fourth port N form two first output terminals of the first control switch1. When the first port K is electrically connected with the third port M, a path between LB TRX and the first combiner3or the first antenna4electrically connected with the first combiner3is conducted. When the second port L is electrically connected with the fourth port N, a path between LB DRX and the first antenna4is conducted. When the first port K is electrically connected with the fourth port N, a path between LB TRX and the first antenna4is conducted. When the second port L is electrically connected with the third port M, a path between LB DRX and the first combiner3or the first antenna4electrically connected with the first combiner3is conducted. In addition, conduction of the first control switch1inFIG.2is similar to that inFIG.1, which will not be repeated again. In other words, switching among a first port K, a second port L, a third port M, and a fourth port N is performed through the first control switch1, so that switching between two antennas in a low frequency band may be implemented.
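For illustration only, the four conduction states of the first control switch (ports K, L, M, and N) described above for FIG. 1 can be restated as a small routing table; the port and terminal labels follow the text, and the table is a sketch rather than part of the disclosed circuit.

```python
# Illustrative routing table for the DPDT first control switch of FIG. 1,
# restating the four conduction states described above. Port M leads to the
# first combiner 3 (and the first antenna 4 behind it); port N leads to the
# other first antenna 4 directly.

LB_ROUTES = {
    ("K", "M"): ("LB TRX", "first combiner 3 / first antenna 4"),
    ("L", "N"): ("LB DRX", "first antenna 4 (not via the combiner)"),
    ("K", "N"): ("LB TRX", "first antenna 4 (not via the combiner)"),
    ("L", "M"): ("LB DRX", "first combiner 3 / first antenna 4"),
}

def conduct(input_port, output_port):
    """Return (low-band terminal, antenna path) for a given port pairing."""
    return LB_ROUTES[(input_port, output_port)]
```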
Alternatively, because the second radio frequency path may operate in an MHB, two terminals electrically connected with a second control switch201may be a common terminal for primary receiving and transmitting in a medium-high band (MHB TRX) and a third multiple-input multiple-output terminal (MIMO3) (i.e. a diversity receiving terminal). Two terminals electrically connected with a third control switch202may be a diversity receiving terminal in a medium-high band (MHB DRX) and a second multi-input multi-output terminal (MIMO2) (that is, a diversity receiving terminal). In addition, for example, as shown inFIG.1, a fifth port A and a sixth port B form two second input terminals of the second control switch201, a seventh port C and an eighth port D form two second output terminals of the second control switch201, a ninth port E, a tenth port F, and an eleventh port G form three third input terminals of the third control switch202, and a twelfth port H, a thirteenth port I, and a fourteenth port J form three third output terminals of the third control switch202. For example, one conduction state of the second control switch201and the third control switch202is described below. When the fifth port A is electrically connected with the eighth port D, a path between MHB TRX and a second antenna5is conducted. When the sixth port B is electrically connected with the seventh port C, and the ninth port E is electrically connected with the fourteenth port J, a path between MIMO3 and a second antenna5is conducted. When the tenth port F is electrically connected with the thirteenth port I, a path between MHB DRX and a second antenna5is conducted. When the eleventh port G is electrically connected with the twelfth port H, a path between MIMO2 and the first combiner3or the first antenna4electrically connected with the first combiner3is conducted. Certainly, it can be understood that other conduction states for the second control switch201and the third control switch202are not listed herein. Similarly, conduction of the second control switch201and the third control switch202inFIG.2is similar to the principle inFIG.1, which will not be repeated herein again. In addition, a DPDT switch (i.e. the second control switch201inFIG.1andFIG.2) needs to be added to the MHB TRX path and the MIMO3 path. Due to a long radio frequency path and a via hole, a large path insertion loss will be caused, which will affect the conduction performance. Therefore, to reduce the insertion loss, the DPDT switch and a radio frequency front-end device may be combined to reduce the complexity of wiring. For example, if a QM77038 power amplifier is used to replace the second control switch201inFIG.2, an insertion loss caused by using this architecture may be effectively reduced. To sum up, through the antenna switching circuit according to the embodiments of the present disclosure, the radio frequency path in an LB is separated from the radio frequency path in an MHB, that is, the radio frequency path in an LB and the radio frequency path in an MHB are not combined at the front end, thus reducing a loss of a combiner for antenna combination. Moreover, two-antenna switching may be used in the LB situation, while four-antenna switching may be used in the MHB situation. An embodiment of the present disclosure further provides a terminal, including the antenna switching circuit.
The antenna switching circuit splits a path between a first antenna and a first radio frequency path operating in a first frequency band from a path between a second antenna and a second radio frequency path operating in a second frequency band, so that the first radio frequency path operating in the first frequency band and the second radio frequency path operating in the second frequency band will not be combined at a front end, thus reducing an insertion loss caused by a combiner. Furthermore, splitting of a path between the first radio frequency path operating in a frequency band and an antenna from a path between the second radio frequency path operating in a different frequency band and an antenna helps to simplify lines of the antenna switching circuit. Therefore, the terminal according to this embodiment of the present disclosure can operate by using the antenna switching circuit with a smaller insertion loss, improving the communication performance of the terminal. The foregoing descriptions are merely the optional implementations of the present disclosure. It should be noted that those of ordinary skill in the art may further make several improvements and refinements without departing from the principles described in the present disclosure, and these improvements and refinements also fall within the protection scope of the present disclosure. | 18,547 |
11863215 | DETAILED DESCRIPTION OF THE DISCLOSURE Hereinafter, embodiments of the present disclosure will be described in detail using working examples and the drawings. It should be noted that all embodiments described below indicate comprehensive or specific examples. Numerical values, shapes, materials, constituent elements, arrangement and connection forms of the constituent elements, and the like, which will be described in the following embodiments, are examples, and are not intended to limit the present disclosure. Constituent elements which are not described in independent claims among the constituent elements in the following embodiments are described as arbitrary constituent elements. In addition, sizes or size ratios of the constituent elements illustrated in the drawings are not necessarily strict. Embodiment 1 1.1 Configurations of High-Frequency Front End Module2A and Communication Device1A FIG.1is a circuit configuration diagram of a communication device1A according to Embodiment 1. As illustrated in the diagram, the communication device1A includes a high-frequency front end module2A, an RF signal processing circuit (RFIC)3, and a baseband signal processing circuit (BBIC)4. The RFIC3is an RF signal processing circuit that processes a high-frequency signal transmitted/received through an antenna of the high-frequency front end module2A. Specifically, the RFIC3performs signal processing on a high-frequency reception signal inputted through the high-frequency front end module2A by down-conversion or the like, and outputs a reception signal generated by the signal processing to the BBIC4. Furthermore, the RFIC3performs signal processing on a transmission signal inputted from the BBIC4by up-conversion or the like, and outputs a high-frequency transmission signal generated by the signal processing to the transmission-side signal path of the high-frequency front end module2A. The BBIC4is a circuit that performs signal processing by using an intermediate frequency band having a lower frequency than that of a high-frequency signal propagating in the high-frequency front end module2A. A signal processed by the BBIC4is used, for example, as an image signal for image display, or is used as a voice signal for a call through a speaker. Furthermore, the RFIC3also has a function as a control unit that controls the connection of a switch circuit (described later) included in the high-frequency front end module2A on the basis of a band (frequency band) to be used. Specifically, the RFIC3switches the connection of the switch circuit included in the high-frequency front end module2A by a control signal (not illustrated). Note that the control unit may be provided outside the RFIC3, or may be provided in, for example, the high-frequency front end module2A or the BBIC4. Next, the detailed configuration of the high-frequency front end module2A will be described. As illustrated inFIG.1, the high-frequency front end module2A includes a primary antenna11and a secondary antenna12, switch circuits20and50, transmission filters31T and32T, reception filters31R and32R, and transmission amplifiers41and42. 
According to the configuration described above, the high-frequency front end module2A can execute two-uplink in which a signal in a first transmission band (A-Tx) included in a first frequency band (Band A) and a signal in a second transmission band (B-Tx) included in a second frequency band (Band B) which is different from the first frequency band are simultaneously transmitted, and two-downlink in which a signal in a first reception band (A-Rx) included in the first frequency band (Band A) and a signal in a second reception band (B-Rx) included in the second frequency band (Band B) are simultaneously received. The primary antenna11is an antenna that is used in preference to the secondary antenna12in terms of antenna performance and the like, and is an antenna element capable of transmitting and receiving signals in Band A and Band B. Furthermore, the secondary antenna12is an antenna element capable of transmitting and receiving signals in Band A and Band B. The transmission filter31T is a first transmission filter whose input terminal is connected to the transmission amplifier41, whose output terminal is connected to the switch circuit20, and which takes A-Tx as a pass band. The transmission filter32T is a second transmission filter whose input terminal is connected to the transmission amplifier42, whose output terminal is connected to the switch circuit20, and which takes B-Tx as a pass band. The reception filter31R is a first reception filter whose input terminal is connected to the switch circuit20, and which takes A-Rx as a pass band. The reception filter32R is a second reception filter whose input terminal is connected to the switch circuit20, and which takes B-Rx as a pass band. The transmission filter31T and the reception filter31R constitute a first multiplexer that selectively transmits and receives a high-frequency signal in Band A. Note that the first multiplexer does not have a transmission filter which takes B-Tx as a pass band. Furthermore, the first multiplexer does not have a reception filter which takes B-Rx as a pass band. The transmission filter32T and the reception filter32R constitute a second multiplexer that selectively transmits and receives a high-frequency signal in Band B. Note that the second multiplexer does not have a transmission filter which takes A-Tx as a pass band. Furthermore, the second multiplexer does not have a reception filter which takes A-Rx as a pass band. Note that in the present specification, the first multiplexer and the second multiplexer are each defined as a portion including a duplexer in which the output terminal of the transmission filter and the input terminal of the reception filter are commonly connected at the switch circuit20, as in the present embodiment. The switch circuit20is a first switch circuit having a terminal20a(third terminal), a terminal20b(fourth terminal), a terminal20c(first terminal), and a terminal20d(second terminal). The terminal20cis connected to the primary antenna11, and the terminal20dis connected to the secondary antenna12. Furthermore, the terminal20ais connected to the output terminal of the transmission filter31T and the input terminal of the reception filter31R, and the terminal20bis connected to the output terminal of the transmission filter32T and the input terminal of the reception filter32R. 
In the switch circuit20, conduction between the terminal20aand the terminal20cand conduction between the terminal20aand the terminal20dare exclusively switched, and conduction between the terminal20band the terminal20cand conduction between the terminal20band the terminal20dare exclusively switched. Note that in the switch circuit, “conduction between a terminal A and a terminal B and conduction between a terminal C and a terminal D are exclusively switched” means that (1) in a state in which the terminal A and the terminal B are conductive to each other, the terminal C and the terminal D are non-conductive to each other, and (2) in a state in which the terminal C and the terminal D are conductive to each other, the terminal A and the terminal B are non-conductive to each other. The switch circuit20is, for example, a DPDT (Double Pole Double Throw) type switch circuit having the terminals20aand20b, and the terminals20cand20d. Note that the switch circuit20may be a switch circuit of a DP3T type, a DP4T type, or the like, and in this case, necessary terminals may be used in accordance with the number of bands to be used. The high-frequency front end module2A includes the primary antenna11and the secondary antenna12, the switch circuit20, the first multiplexer, and the second multiplexer described above, thereby making it possible to arbitrarily distribute high-frequency signals in Band A and Band B to the primary antenna11and the secondary antenna12by switching the connection state of the switch circuit20, and execute CA of two-uplink two-downlink. Here, since the first multiplexer does not have a transmission filter and a reception filter of Band B, and the second multiplexer does not have a transmission filter and a reception filter of Band A, it is possible to provide the high-frequency front end module2A which is reduced in size and in which CA of two-uplink two-downlink can be performed. Note that the high-frequency front end module2A can execute so-called one-uplink two-downlink CA in which only one of a high-frequency signal in Band A and a high-frequency signal in Band B is transmitted and a high-frequency signal in Band A and a high-frequency signal in Band B are simultaneously received by the above-described configuration. The transmission amplifier41is a first amplifier whose output terminal is connected to the input terminal of the transmission filter31T, and is a power amplifier constituted by a transistor or the like, for example. Furthermore, the transmission amplifier42is a second amplifier whose output terminal is connected to the input terminal of the transmission filter32T, and is a power amplifier constituted by a transistor or the like, for example. The switch circuit50is a second switch circuit having a terminal50a(seventh terminal), a terminal50b(eighth terminal), a terminal50c(fifth terminal), and a terminal50d(sixth terminal). The terminal50cis connected to an input terminal of the transmission amplifier41, and the terminal50dis connected to an input terminal of the transmission amplifier42. Furthermore, the terminal50ais connected to an output terminal3aof the RFIC3, and a transmission signal for the primary antenna11is inputted thereto. Furthermore, the terminal50bis connected to an output terminal3bof the RFIC3, and a transmission signal for the secondary antenna12is inputted thereto. 
In the switch circuit50, when conduction between the terminal20aand the terminal20cof the switch circuit20is selected, conduction between the terminal50aand the terminal50cis selected, and when conduction between the terminal20aand the terminal20dof the switch circuit20is selected, conduction between the terminal50band the terminal50cis selected. Furthermore, when conduction between the terminal20band the terminal20cof the switch circuit20is selected, conduction between the terminal50aand the terminal50dis selected, and when conduction between the terminal20band the terminal20dof the switch circuit20is selected, conduction between the terminal50band the terminal50dis selected. The switch circuit50is, for example, a DPDT type switch circuit having the terminals50aand50b, and the terminals50cand50d. Note that the switch circuit50may be a switch circuit of a DP3T type, a DP4T type, or the like, and in this case, necessary terminals may be used in accordance with the number of bands to be used. With this configuration, since the switch circuit50achieves a connection state corresponding to a connection state of the switch circuit20, it is possible to output or input a signal for the primary antenna11and a signal for the secondary antenna12without changing terminal arrangement of the RFIC3. Accordingly, it is possible to simplify the circuit configurations of the high-frequency front end module2A and the communication device1A. Note that the RFIC3may be constituted of two RF signal processing circuits, for example, may be constituted of a circuit that processes a signal for Band A and a circuit that processes a signal for Band B, or may be constituted of a circuit that processes a signal for the primary antenna11and a circuit that processes a signal for the secondary antenna12. 1.2 Connection State of High-Frequency Front End Module2A FIG.2is a circuit state diagram in CA of the high-frequency front end module2A according to Embodiment 1. This diagram illustrates a circuit connection state in (1) a case of two-uplink of Band A and Band B and two-downlink of Band A and Band B (mode 1: two-uplink two-downlink), and (2) a case of one-uplink of Band A or Band B and two-downlink of Band A and Band B (mode 2: one-uplink two-downlink). In both the mode 1 and the mode 2, as illustrated inFIG.2, by the control unit, the terminal20aand the terminal20care connected to each other, and the terminal20band the terminal20dare connected to each other, in the switch circuit20(first connection state). Furthermore, the terminal50aand the terminal50care connected to each other, and the terminal50band the terminal50dare connected to each other, in the switch circuit50. In this connection state, in the mode 1, a transmission signal in one of Band A and Band B is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier41, the first multiplexer, the switch circuit20, and the primary antenna11, and a transmission signal in the other of Band A and Band B is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier42, the second multiplexer, the switch circuit20, and the secondary antenna12. Furthermore, a reception signal in one of Band A and Band B is received by the RFIC3through the primary antenna11, the switch circuit20, and the first multiplexer, and a reception signal in the other of Band A and Band B is received by the RFIC3through the secondary antenna12, the switch circuit20, and the second multiplexer. 
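For illustration only, the pairing between the connection states of the switch circuit 20 and the switch circuit 50 described above can be restated as a lookup table before turning to the mode 2 case; the terminal labels follow the text, and the table is a sketch rather than part of the disclosed module.

```python
# Illustrative restatement of the coordinated switch states described above:
# for each connection selected in switch circuit 20, the corresponding
# connection selected in switch circuit 50, so each RFIC output always reaches
# the antenna currently serving its multiplexer.

SWITCH50_STATE_FOR_SWITCH20 = {
    ("20a", "20c"): ("50a", "50c"),  # first multiplexer to primary antenna 11
    ("20a", "20d"): ("50b", "50c"),  # first multiplexer to secondary antenna 12
    ("20b", "20c"): ("50a", "50d"),  # second multiplexer to primary antenna 11
    ("20b", "20d"): ("50b", "50d"),  # second multiplexer to secondary antenna 12
}
```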
Furthermore, in the mode 2, when a transmission signal in one of Band A and Band B is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier41, the first multiplexer, the switch circuit20, and the primary antenna11, a reception signal in one of Band A and Band B is received by the RFIC3through the primary antenna11, the switch circuit20, and the first multiplexer, and a reception signal in the other of Band A and Band B is received by the RFIC3through the secondary antenna12, the switch circuit20, and the second multiplexer. Alternatively, in both the mode 1 and the mode 2, as illustrated inFIG.2, by the control unit, the terminal20aand the terminal20dare connected to each other, and the terminal20band the terminal20care connected to each other, in the switch circuit20(second connection state). Furthermore, the terminal50aand the terminal50dare connected to each other, and the terminal50band the terminal50care connected to each other, in the switch circuit50. In this connection state, in the mode 1, a transmission signal in one of Band A and Band B is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier42, the second multiplexer, the switch circuit20, and the primary antenna11, and a transmission signal in the other of Band A and Band B is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier41, the first multiplexer, the switch circuit20, and the secondary antenna12. Furthermore, a reception signal in one of Band A and Band B is received by the RFIC3through the primary antenna11, the switch circuit20, and the second multiplexer, and a reception signal in the other of Band A and Band B is received by the RFIC3through the secondary antenna12, the switch circuit20, and the first multiplexer. Furthermore, in the mode 2, when a transmission signal in one of Band A and Band B is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier42, the second multiplexer, the switch circuit20, and the primary antenna11, a reception signal in one of Band A and Band B is received by the RFIC3through the primary antenna11, the switch circuit20, and the second multiplexer, and a reception signal in the other of Band A and Band B is received by the RFIC3through the secondary antenna12, the switch circuit20, and the first multiplexer. 1.3 Comparison of High-Frequency Front End Modules According to Embodiment 1 and Comparative Example 1 FIG.3is a circuit configuration diagram of a high-frequency front end module502according to Comparative Example 1. Note that the diagram also illustrates the RFIC3connected to the high-frequency front end module502according to Comparative Example 1. As illustrated in the diagram, the high-frequency front end module502includes a primary circuit502aand a secondary circuit502b. The primary circuit502aincludes the primary antenna11, a switch circuit561, transmission filters31T1and32T1, reception filters31R1and32R1, and the transmission amplifier41. The transmission filters31T1and32T1and the reception filters31R1and32R1constitute a first multiplexer. The secondary circuit502bincludes the secondary antenna12, a switch circuit562, transmission filters31T2and32T2, reception filters31R2and32R2, and the transmission amplifier42. The transmission filters31T2and32T2and the reception filters31R2and32R2constitute a second multiplexer. 
The high-frequency front end module502according to Comparative Example 1 is different from the high-frequency front end module2A according to Embodiment 1 in the configurations of the first multiplexer, the second multiplexer, and the switch circuits. Hereinafter, the high-frequency front end module502according to Comparative Example 1 will be described focusing on the differences from the high-frequency front end module2A according to Embodiment 1. The switch circuit561is an SPDT (Single Pole Double Throw) type switch circuit having a common terminal561aand selection terminals561cand561d. The common terminal561ais connected to the output terminal of the transmission amplifier41. The switch circuit562is an SPDT type switch circuit having a common terminal562aand selection terminals562cand562d. The common terminal562ais connected to the output terminal of the transmission amplifier42. The transmission filter31T1is a transmission filter whose input terminal is connected to the selection terminal561c, whose output terminal is connected to the primary antenna11, and which takes A-Tx as a pass band. The transmission filter32T1is a transmission filter whose input terminal is connected to the selection terminal561d, whose output terminal is connected to the primary antenna11, and which takes B-Tx as a pass band. The reception filter31R1is a reception filter whose input terminal is connected to the primary antenna11, and which takes A-Rx as a pass band. The reception filter32R1is a reception filter whose input terminal is connected to the primary antenna11, and which takes B-Rx as a pass band. The transmission filter31T2is a transmission filter whose input terminal is connected to the selection terminal562c, whose output terminal is connected to the secondary antenna12, and which takes A-Tx as a pass band. The transmission filter32T2is a transmission filter whose input terminal is connected to the selection terminal562d, whose output terminal is connected to the secondary antenna12, and which takes B-Tx as a pass band. The reception filter31R2is a reception filter whose input terminal is connected to the secondary antenna12, and which takes A-Rx as a pass band. The reception filter32R2is a reception filter whose input terminal is connected to the secondary antenna12, and which takes B-Rx as a pass band. According to the configuration described above, the high-frequency front end module502can execute two-uplink in which a signal in the first transmission band (A-Tx) included in Band A and a signal in the second transmission band (B-Tx) included in Band B are simultaneously transmitted, and two-downlink in which a signal in the first reception band (A-Rx) included in Band A and a signal in the second reception band (B-Rx) included in Band B are simultaneously received. For example, in a state in which the common terminal561aand the selection terminal561care connected to each other and the common terminal562aand the selection terminal562dare connected to each other, a transmission signal in Band A is transmitted through the output terminal3a, the transmission amplifier41, the first multiplexer, and the primary antenna11, and a transmission signal in Band B is transmitted through the output terminal3b, the transmission amplifier42, the second multiplexer, and the secondary antenna12. 
Furthermore, a reception signal in Band A is received by the RFIC3through the primary antenna11and the first multiplexer, and a reception signal in Band B is received by the RFIC3through the secondary antenna12and the second multiplexer. Furthermore, in a state in which the common terminal561aand the selection terminal561dare connected to each other and the common terminal562aand the selection terminal562care connected to each other, a transmission signal in Band B is transmitted through the output terminal3a, the transmission amplifier41, the first multiplexer, and the primary antenna11, and a transmission signal in Band A is transmitted through the output terminal3b, the transmission amplifier42, the second multiplexer, and the secondary antenna12. Furthermore, a reception signal in Band B is received by the RFIC3through the primary antenna11and the first multiplexer, and a reception signal in Band A is received by the RFIC3through the secondary antenna12and the second multiplexer. In the high-frequency front end module502according to Comparative Example 1, in order to ensure signal quality such as isolation and the like of high-frequency signals in Band A and Band B simultaneously transmitted/received, two antenna elements, such as the primary antenna11which is preferentially used and the secondary antenna12which is secondarily used, are disposed. In this case, because of necessity of making it possible to transmit/receive each of the high-frequency signals in Band A and Band B even by any of the antennas, a transmission path and a reception path of Band A and a transmission path and a reception path of Band B are connected to the primary antenna11, and a transmission path and a reception path of Band A and a transmission path and a reception path of Band B are connected and disposed also to the secondary antenna12. A filter for selectively allowing a desired frequency band to pass therethrough is arranged in each signal path, and in the configuration of the high-frequency front end module502according to Comparative Example 1, four filters of the transmission filters31T1and32T1and the reception filters31R1and32R1are connected to the primary antenna11. Furthermore, four filters of the transmission filters31T2and32T2and the reception filters31R2and32R2are connected to the secondary antenna12. That is, in the front end module to which the primary antenna11and the secondary antenna12are applied, in order to achieve two-uplink two-downlink of the two frequency bands of Band A and Band B, a total of eight filters are required, and the circuit is enlarged. In contrast, the high-frequency front end module2A according to the present embodiment includes the primary antenna11and the secondary antenna12, the switch circuit20, the first multiplexer, and the second multiplexer, thereby making it possible to arbitrarily distribute high-frequency signals in Band A and Band B to the primary antenna11and the secondary antenna12by switching the connection state of the switch circuit20, and execute CA of two-uplink two-downlink. Therefore, in the first multiplexer connected to one of the antennas, the transmission filter of Band B can be reduced. In the same manner, in the second multiplexer connected to the other of the antennas, the transmission filter of Band A can be reduced. That is, two or more filters can be reduced as compared with the configuration of the high-frequency front end module502according to Comparative Example 1. 
In the configuration of the high-frequency front end module2A according to the present embodiment, in comparison with the high-frequency front end module502according to Comparative Example 1, the one switch circuit20of a two-input two-output type is added, but the switch circuit20is sufficiently smaller than the transmission filter and the reception filter. Accordingly, it is possible to provide the high-frequency front end module2A which is reduced in size and in which CA of two-uplink two-downlink can be performed. Furthermore, in the high-frequency front end module2A according to the present embodiment, by including the primary antenna11and the secondary antenna12, the switch circuit20, the first multiplexer, and the second multiplexer, even in the case of one-uplink two-downlink, by using both the primary antenna11and the secondary antenna12, it is possible to reduce the reception filter of Band B in the first multiplexer connected to one of the antennas. Furthermore, in the second multiplexer connected to the other of the antennas, the reception filter of Band A can be reduced. That is, four or more filters in total can be reduced as compared with the configuration of the high-frequency front end module502according to Comparative Example 1. Accordingly, it is possible to provide the high-frequency front end module which is further reduced in size and in which CA of two-uplink two-downlink and one-uplink two-downlink can be performed. 1.4 Configurations of High-Frequency Front End Module2B and Communication Device1B According to Modification 1 FIG.4Ais a circuit configuration diagram of a communication device1B according to Modification 1 of Embodiment 1. As illustrated in the diagram, the communication device1B includes a high-frequency front end module2B, the RFIC3, and the BBIC4. The communication device1B according to the present modification differs from the communication device1A according to Embodiment 1 in the configuration of the high-frequency front end module. Hereinafter, the communication device1B according to the present modification will be described focusing on the differences from the communication device1A according to Embodiment 1. As illustrated inFIG.4A, the high-frequency front end module2B includes the primary antenna11and the secondary antenna12, the switch circuits20and50, the transmission filters31T1and32T2, the reception filters31R1,31R2,32R1, and32R2, and the transmission amplifiers41and42. According to the configuration described above, the high-frequency front end module2B can execute two-uplink in which a signal in the first transmission band (A-Tx) included in the first frequency band (Band A) and a signal in the second transmission band (B-Tx) included in the second frequency band (Band B) which is different from the first frequency band are simultaneously transmitted, and two-downlink in which a signal in the first reception band (A-Rx) included in the first frequency band (Band A) and a signal in the second reception band (B-Rx) included in the second frequency band (Band B) are simultaneously received. The high-frequency front end module2B according to Modification 1 is different from the high-frequency front end module2A according to Embodiment 1 in the configurations of the first multiplexer and the second multiplexer. Hereinafter, the high-frequency front end module2B according to Modification 1 will be described focusing on the differences from the high-frequency front end module2A according to Embodiment 1. 
The transmission filter31T1is a first transmission filter whose input terminal is connected to the transmission amplifier41, whose output terminal is connected to the switch circuit20, and which takes A-Tx as a pass band. The transmission filter32T2is a second transmission filter whose input terminal is connected to the transmission amplifier42, whose output terminal is connected to the switch circuit20, and which takes B-Tx as a pass band. The reception filter31R1is a first reception filter whose input terminal is connected to the switch circuit20, and which takes A-Rx as a pass band. The reception filter32R1is a fourth reception filter whose input terminal is connected to the switch circuit20, and which takes B-Rx as a pass band. The reception filter32R2is a second reception filter whose input terminal is connected to the switch circuit20, and which takes B-Rx as a pass band. The reception filter31R2is a third reception filter whose input terminal is connected to the switch circuit20, and which takes A-Rx as a pass band. The transmission filter31T1and the reception filters31R1and32R1constitute a first multiplexer that can transmit a high-frequency signal in Band A and receive high-frequency signals in Band A and Band B. Note that the first multiplexer does not have a transmission filter which takes B-Tx as a pass band. The transmission filter32T2and the reception filters32R2and31R2constitute a second multiplexer that can transmit a high-frequency signal in Band B and receive high-frequency signals in Band A and Band B. Note that the second multiplexer does not have a transmission filter which takes A-Tx as a pass band. 1.5 Connection State of High-Frequency Front End Module2B According to Modification 1 FIG.4Bis a circuit state diagram in a case of two-uplink two-downlink of the high-frequency front end module2B according to Modification 1 of Embodiment 1. This diagram illustrates a circuit connection state in a case of two-uplink of Band A and Band B and two-downlink of Band A and Band B (mode 1: two-uplink two-downlink). In the mode 1, as illustrated inFIG.4B, by the control unit, the terminal20aand the terminal20care connected to each other, and the terminal20band the terminal20dare connected to each other, in the switch circuit20(first connection state). Furthermore, the terminal50aand the terminal50care connected to each other, and the terminal50band the terminal50dare connected to each other, in the switch circuit50. In this connection state, in the mode 1, a transmission signal in one of Band A and Band B is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier41, the first multiplexer, the switch circuit20, and the primary antenna11, and a transmission signal in the other of Band A and Band B is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier42, the second multiplexer, the switch circuit20, and the secondary antenna12. Furthermore, a reception signal in one of Band A and Band B is received by the RFIC3through the primary antenna11, the switch circuit20, and the first multiplexer, and a reception signal in the other of Band A and Band B is received by the RFIC3through the secondary antenna12, the switch circuit20, and the second multiplexer. Alternatively, in the mode 1, as illustrated inFIG.4B, by the control unit, the terminal20aand the terminal20dare connected to each other, and the terminal20band the terminal20care connected to each other, in the switch circuit20(second connection state). 
Furthermore, the terminal50aand the terminal50dare connected to each other, and the terminal50band the terminal50care connected to each other, in the switch circuit50. In this connection state, in the mode 1, a transmission signal in one of Band A and Band B is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier42, the second multiplexer, the switch circuit20, and the primary antenna11, and a transmission signal in the other of Band A and Band B is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier41, the first multiplexer, the switch circuit20, and the secondary antenna12. Furthermore, a reception signal in one of Band A and Band B is received by the RFIC3through the primary antenna11, the switch circuit20, and the second multiplexer, and a reception signal in the other of Band A and Band B is received by the RFIC3through the secondary antenna12, the switch circuit20, and the first multiplexer. FIG.4Cis a circuit state diagram in a case of one-uplink two-downlink of the high-frequency front end module2B according to Modification 1 of Embodiment 1. This diagram illustrates a circuit connection state in a case of one-uplink of Band A or Band B and two-downlink of Band A and Band B (mode 2: one-uplink two-downlink). In the mode 2, as illustrated inFIG.4C, by the control unit, the terminal20aand the terminal20care connected to each other in the switch circuit20(third connection state). Furthermore, the terminal50aand the terminal50care connected to each other in the switch circuit50. In this connection state, in the mode 2, a transmission signal in Band A is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier41, the first multiplexer, the switch circuit20, and the primary antenna11, and reception signals in Band A and Band B are received by the RFIC3through the primary antenna11, the switch circuit20, and the first multiplexer. Alternatively, in the mode 2, as illustrated inFIG.4C, by the control unit, the terminal20band the terminal20care connected to each other in the switch circuit20(fifth connection state). Furthermore, the terminal50aand the terminal50dare connected to each other in the switch circuit50. In this connection state, in the mode 2, a transmission signal in Band B is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier42, the second multiplexer, the switch circuit20, and the primary antenna11, and reception signals in Band A and Band B are received by the RFIC3through the primary antenna11, the switch circuit20, and the second multiplexer. Note that in both the above-described two types of connection forms, the case of one-uplink two-downlink by passing through the output terminal3aand the primary antenna11has been described as an example, but one-uplink two-downlink by passing through the output terminal3band the secondary antenna12(fourth connection state or sixth connection state) is also possible. The high-frequency front end module2B according to Modification 1 includes the primary antenna11and the secondary antenna12, the switch circuit20, the first multiplexer, and the second multiplexer, thereby making it possible to arbitrarily distribute high-frequency signals in Band A and Band B to the primary antenna11and the secondary antenna12by switching the connection state of the switch circuit20, and execute CA of two-uplink two-downlink. 
Therefore, in the first multiplexer connected to one of the antennas, the transmission filter for Band B can be omitted. In the same manner, in the second multiplexer connected to the other of the antennas, the transmission filter for Band A can be omitted. That is, the number of filters can be reduced by two as compared with the configuration of the high-frequency front end module502according to Comparative Example 1. In the configuration of the high-frequency front end module2B according to Modification 1, in comparison with the high-frequency front end module502according to Comparative Example 1, one switch circuit20of the two-input two-output type is added, but the switch circuit20is sufficiently smaller than the transmission filter and the reception filter. Accordingly, it is possible to provide the high-frequency front end module2B which is reduced in size and in which CA of two-uplink two-downlink can be performed.

Furthermore, in the high-frequency front end module2B according to Modification 1, in comparison with the high-frequency front end module2A according to Embodiment 1, since the first multiplexer further includes the reception filter32R1corresponding to Band B, in the case of one-uplink two-downlink in which a high-frequency signal in Band A is transmitted, it is sufficient to use only one of the primary antenna11and the secondary antenna12. Furthermore, since the second multiplexer further includes the reception filter31R2corresponding to Band A, in the case of one-uplink two-downlink in which a high-frequency signal in Band B is transmitted, it is sufficient to use only one of the primary antenna11and the secondary antenna12. Accordingly, it is possible to simplify the CA operation of one-uplink two-downlink.

1.6 Configurations of High-Frequency Front End Module2H and Communication Device1H According to Modification 2

FIG.5is a circuit configuration diagram of a communication device1H according to Modification 2 of Embodiment 1. As illustrated in the diagram, the communication device1H includes a high-frequency front end module2H, the RFIC3, and the BBIC4. The communication device1H according to the present modification differs from the communication device1A according to Embodiment 1 in the configuration of the high-frequency front end module. Hereinafter, the communication device1H according to the present modification will be described focusing on the differences from the communication device1A according to Embodiment 1.

As illustrated inFIG.5, the high-frequency front end module2H includes the primary antenna11and the secondary antenna12, switch circuits20,50,67,68,69, and73, the transmission filters31T and32T, the reception filters31R and32R, a transmission/reception filter38TR, the transmission amplifiers41and42, and a reception amplifier51. Furthermore, the secondary antenna12, the switch circuits67,68,69, and73, the transmission filter32T, the reception filter32R, the transmission/reception filter38TR, the transmission amplifier42, and the reception amplifier51constitute a sub-module5H. The transmission/reception filter38TR is a filter whose input terminal is connected to the switch circuit73, whose output terminal is connected to the switch circuit69, and which takes C-Rx as a pass band. The reception amplifier51is an amplifier whose input terminal is connected to a common terminal of the switch circuit68, and whose output terminal is connected to the RFIC3.
The switch circuit67has a common terminal connected to the output terminal of the transmission amplifier42, one selection terminal connected to the input terminal of the transmission filter32T, and another selection terminal connected to one selection terminal of the switch circuit69. The switch circuit68has a common terminal connected to the input terminal of the reception amplifier51, one selection terminal connected to the output terminal of the reception filter32R, and another selection terminal connected to another selection terminal of the switch circuit69. The switch circuit69has a common terminal connected to the transmission/reception filter38TR. The switch circuit73has a common terminal connected to the terminal20b, one selection terminal connected to the output terminal of the transmission filter32T and the input terminal of the reception filter32R, and another selection terminal connected to the transmission/reception filter38TR.

An existing sub-module has a circuit configuration consisting only of reception systems, but in the high-frequency front end module2H according to the present modification, the sub-module5H includes the transmission system circuit, which makes it possible to execute two-uplink CA. That is, the sub-module5H has the transmission filter32T and the reception filter32R (duplexer) used in the case of frequency division duplex (FDD) and the transmission/reception filter38TR used in the case of time division duplex (TDD). Note that, in the existing configuration, to the switch circuit73, a reception filter for the reception band B-Rx is connected instead of the transmission filter32T and the reception filter32R (duplexer), and a reception filter for the reception band C-Rx is connected instead of the transmission/reception filter38TR, but by disposing the transmission filter32T and the reception filter32R (duplexer) and the transmission/reception filter38TR, it is not necessary to connect a duplicate reception filter to the switch circuit73. With this, miniaturization of the sub-module5H and the high-frequency front end module2H is achieved.

According to the configuration described above, the high-frequency front end module2H can execute two-uplink in which a signal in the transmission band (A-Tx) included in Band A, and a signal in the transmission band (B-Tx) included in Band B or a signal in the transmission band (C-Tx) included in Band C are simultaneously transmitted, and two-downlink in which a signal in the reception band (A-Rx) included in Band A, and a signal in the reception band (B-Rx) included in Band B or a signal in the reception band (C-Rx) included in Band C are simultaneously received.

Embodiment 2

2.1 Configurations of High-Frequency Front End Module2C and Communication Device1C

Although Embodiment 1 has described the configurations of the communication device and the high-frequency front end module for executing CA in two frequency bands, the present embodiment describes the configurations of a communication device and a high-frequency front end module for executing CA of two frequency bands among three frequency bands. FIG.6is a circuit configuration diagram of a communication device1C according to Embodiment 2. As illustrated in the diagram, the communication device1C includes a high-frequency front end module2C, the RFIC3, and the BBIC4. The communication device1C according to the present embodiment differs from the communication device1A according to Embodiment 1 in the configuration of the high-frequency front end module.
Hereinafter, the communication device1C according to the present embodiment will be described focusing on the differences from the communication device1A according to Embodiment 1. As illustrated inFIG.6, the high-frequency front end module2C includes a primary antenna13and a secondary antenna14, switch circuits20,50,61, and62, transmission filters31T1,32T1,32T2, and33T2, reception filters31R1,32R1,33R1,31R2,32R2, and33R2, and transmission amplifiers43and44. According to the configuration described above, the high-frequency front end module2C can execute two-uplink in which two signals among a signal in the first transmission band (A-Tx) included in the first frequency band (Band A), a signal in the second transmission band (C-Tx) included in the second frequency band (Band C in the present embodiment) which is different from the first frequency band, and a signal in a third transmission band (B-Tx) included in a third frequency band (Band B in the present embodiment) which is different from the first frequency band and the second frequency band are simultaneously transmitted, and two-downlink in which two signals among a signal in the first reception band (A-Rx) included in the first frequency band (Band A), a signal in the second reception band (C-Rx) included in the second frequency band (Band C), and a signal in a third reception band (B-Rx) included in the third frequency band (Band B) which is different from the first frequency band and the second frequency band are simultaneously received. The high-frequency front end module2C according to the present embodiment is different from the high-frequency front end module2A according to Embodiment 1 in a point that the configuration for transmitting/receiving signals in three frequency bands is included. Hereinafter, the high-frequency front end module2C according to the present embodiment will be described focusing on the differences from the high-frequency front end module2A according to Embodiment 1. The primary antenna13is an antenna that is used in preference to the secondary antenna14in terms of antenna performance and the like, and is an antenna element capable of transmitting and receiving signals in Band A, Band B, and Band C. The secondary antenna14is an antenna element capable of transmitting and receiving signals in Band A, Band B, and Band C. The switch circuit61is an SPDT type switch circuit having a common terminal61aand selection terminals61cand61d. The common terminal61ais connected to an output terminal of the transmission amplifier43. The switch circuit62is an SPDT type switch circuit having a common terminal62aand selection terminals62cand62d. The common terminal62ais connected to an output terminal of the transmission amplifier44. The transmission filter31T1is a first transmission filter whose input terminal is connected to the selection terminal61c, whose output terminal is connected to the switch circuit20, and which takes A-Tx as a pass band. The transmission filter32T1is a fifth transmission filter whose input terminal is connected to the selection terminal61d, whose output terminal is connected to the switch circuit20, and which takes B-Tx as a pass band. The reception filter31R1is a first reception filter whose input terminal is connected to the switch circuit20, and which takes A-Rx as a pass band. The reception filter32R1is a fifth reception filter whose input terminal is connected to the switch circuit20, and which takes B-Rx as a pass band. 
The reception filter33R1is a fourth reception filter whose input terminal is connected to the switch circuit20, and which takes C-Rx as a pass band. The transmission filter32T2is a sixth transmission filter whose input terminal is connected to the selection terminal62c, whose output terminal is connected to the switch circuit20, and which takes B-Tx as a pass band. The transmission filter33T2is a second transmission filter whose input terminal is connected to the selection terminal62d, whose output terminal is connected to the switch circuit20, and which takes C-Tx as a pass band. The reception filter31R2is a third reception filter whose input terminal is connected to the switch circuit20, and which takes A-Rx as a pass band. The reception filter32R2is a sixth reception filter whose input terminal is connected to the switch circuit20, and which takes B-Rx as a pass band. The reception filter33R2is a second reception filter whose input terminal is connected to the switch circuit20, and which takes C-Rx as a pass band.

The transmission filters31T1and32T1and the reception filters31R1,32R1, and33R1constitute a first multiplexer that can selectively transmit high-frequency signals in Band A and Band B and receive high-frequency signals in Band A, Band B, and Band C. Note that the first multiplexer does not have a transmission filter which takes C-Tx as a pass band. The transmission filters32T2and33T2and the reception filters31R2,32R2, and33R2constitute a second multiplexer that can selectively transmit high-frequency signals in Band B and Band C and receive high-frequency signals in Band A, Band B, and Band C. Note that the second multiplexer does not have a transmission filter which takes A-Tx as a pass band.

The high-frequency front end module2C described above includes the primary antenna13and the secondary antenna14, the switch circuits20,61, and62, the first multiplexer, and the second multiplexer described above, thereby making it possible to arbitrarily distribute high-frequency signals in Band A, Band B, and Band C to the primary antenna13and the secondary antenna14by switching the connection state of the switch circuits20,61, and62, and execute CA of two-uplink two-downlink. Here, since the first multiplexer does not have a transmission filter of Band C and the second multiplexer does not have a transmission filter of Band A, it is possible to provide the high-frequency front end module2C which is reduced in size and in which CA of two-uplink two-downlink can be performed.

2.2 Connection State of High-Frequency Front End Module2C

FIG.7Ais a circuit state diagram in a case of two-uplink two-downlink of the high-frequency front end module2C according to Embodiment 2. This diagram illustrates a circuit connection state in a case of two-uplink of Band A and Band C and two-downlink of Band A and Band C (mode 1: two-uplink two-downlink).

In the mode 1, as illustrated inFIG.7A, by the control unit, the terminal20aand the terminal20care connected to each other, and the terminal20band the terminal20dare connected to each other, in the switch circuit20(first connection state). Furthermore, the terminal50aand the terminal50care connected to each other, and the terminal50band the terminal50dare connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal61aand the selection terminal61care connected to each other in the switch circuit61, and the common terminal62aand the selection terminal62dare connected to each other in the switch circuit62.
In this connection state, in the mode 1, a transmission signal in Band A is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier43, the switch circuit61, the first multiplexer, the switch circuit20, and the primary antenna13, and a transmission signal in Band C is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier44, the switch circuit62, the second multiplexer, the switch circuit20, and the secondary antenna14. Furthermore, a reception signal in Band A is received by the RFIC3through the primary antenna13, the switch circuit20, and the first multiplexer, and a reception signal in Band C is received by the RFIC3through the secondary antenna14, the switch circuit20, and the second multiplexer. Alternatively, in the mode 1, as illustrated inFIG.7A, by the control unit, the terminal20aand the terminal20dare connected to each other, and the terminal20band the terminal20care connected to each other, in the switch circuit20(second connection state). Furthermore, the terminal50aand the terminal50dare connected to each other, and the terminal50band the terminal50care connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal61aand the selection terminal61care connected to each other in the switch circuit61, and the common terminal62aand the selection terminal62dare connected to each other in the switch circuit62. In this connection state, in the mode 1, a transmission signal in Band C is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier44, the switch circuit62, the second multiplexer, the switch circuit20, and the primary antenna13, and a transmission signal in Band A is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier43, the switch circuit61, the first multiplexer, the switch circuit20, and the secondary antenna14. Furthermore, a reception signal in Band C is received by the RFIC3through the primary antenna13, the switch circuit20, and the second multiplexer, and a reception signal in Band A is received by the RFIC3through the secondary antenna14, the switch circuit20, and the first multiplexer. Note that although not illustrated inFIG.7A, a circuit connection state in a case of two-uplink two-downlink of Band B and Band C is as follows. That is, by the control unit, the terminal20aand the terminal20care connected to each other, and the terminal20band the terminal20dare connected to each other, in the switch circuit20(first connection state). Furthermore, the terminal50aand the terminal50care connected to each other, and the terminal50band the terminal50dare connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal61aand the selection terminal61dare connected to each other in the switch circuit61, and the common terminal62aand the selection terminal62dare connected to each other in the switch circuit62. In this connection state, a transmission signal in Band B is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier43, the switch circuit61, the first multiplexer, the switch circuit20, and the primary antenna13, and a transmission signal in Band C is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier44, the switch circuit62, the second multiplexer, the switch circuit20, and the secondary antenna14. 
Furthermore, a reception signal in Band B is received by the RFIC3through the primary antenna13, the switch circuit20, and the first multiplexer, and a reception signal in Band C is received by the RFIC3through the secondary antenna14, the switch circuit20, and the second multiplexer. Alternatively, by the control unit, the terminal20aand the terminal20dare connected to each other, and the terminal20band the terminal20care connected to each other, in the switch circuit20(second connection state). Furthermore, the terminal50aand the terminal50dare connected to each other, and the terminal50band the terminal50care connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal61aand the selection terminal61dare connected to each other in the switch circuit61, and the common terminal62aand the selection terminal62dare connected to each other in the switch circuit62. In this connection state, a transmission signal in Band C is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier44, the switch circuit62, the second multiplexer, the switch circuit20, and the primary antenna13, and a transmission signal in Band B is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier43, the switch circuit61, the first multiplexer, the switch circuit20, and the secondary antenna14. Furthermore, a reception signal in Band C is received by the RFIC3through the primary antenna13, the switch circuit20, and the second multiplexer, and a reception signal in Band B is received by the RFIC3through the secondary antenna14, the switch circuit20, and the first multiplexer. Furthermore, although not illustrated inFIG.7A, a circuit connection state in a case of two-uplink two-downlink of Band A and Band B is as follows. That is, by the control unit, the terminal20aand the terminal20care connected to each other, and the terminal20band the terminal20dare connected to each other, in the switch circuit20(first connection state). Furthermore, the terminal50aand the terminal50care connected to each other, and the terminal50band the terminal50dare connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal61aand the selection terminal61care connected to each other in the switch circuit61, and the common terminal62aand the selection terminal62care connected to each other in the switch circuit62. In this connection state, a transmission signal in Band A is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier43, the switch circuit61, the first multiplexer, the switch circuit20, and the primary antenna13, and a transmission signal in Band B is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier44, the switch circuit62, the second multiplexer, the switch circuit20, and the secondary antenna14. Furthermore, a reception signal in Band A is received by the RFIC3through the primary antenna13, the switch circuit20, and the first multiplexer, and a reception signal in Band B is received by the RFIC3through the secondary antenna14, the switch circuit20, and the second multiplexer. Alternatively, by the control unit, the terminal20aand the terminal20dare connected to each other, and the terminal20band the terminal20care connected to each other, in the switch circuit20(second connection state). 
Furthermore, the terminal50aand the terminal50dare connected to each other, and the terminal50band the terminal50care connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal61aand the selection terminal61care connected to each other in the switch circuit61, and the common terminal62aand the selection terminal62care connected to each other in the switch circuit62. In this connection state, a transmission signal in Band B is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier44, the switch circuit62, the second multiplexer, the switch circuit20, and the primary antenna13, and a transmission signal in Band A is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier43, the switch circuit61, the first multiplexer, the switch circuit20, and the secondary antenna14. Furthermore, a reception signal in Band B is received by the RFIC3through the primary antenna13, the switch circuit20, and the second multiplexer, and a reception signal in Band A is received by the RFIC3through the secondary antenna14, the switch circuit20, and the first multiplexer. FIG.7Bis a circuit state diagram in a case of one-uplink two-downlink of the high-frequency front end module2C according to Embodiment 2. This diagram illustrates a circuit connection state in a case of one-uplink of Band A and two-downlink of Band A and Band C (mode 2: one-uplink two-downlink). In the mode 2, as illustrated inFIG.7B, by the control unit, the terminal20aand the terminal20care connected to each other in the switch circuit20. Furthermore, the terminal50aand the terminal50care connected to each other in the switch circuit50. Furthermore, by the control unit, the common terminal61aand the selection terminal61care connected to each other in the switch circuit61. In this connection state, in the mode 2, a transmission signal in Band A is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier43, the switch circuit61, the first multiplexer, the switch circuit20, and the primary antenna13, and reception signals in Band A and Band C are received by the RFIC3through the primary antenna13, the switch circuit20, and the first multiplexer. Alternatively, in the mode 2, one-uplink of Band C and two-downlink of Band A and Band C (mode 2: one-uplink two-downlink) can be performed. That is, by the control unit, the terminal20band the terminal20care connected to each other in the switch circuit20. Furthermore, the terminal50aand the terminal50dare connected to each other in the switch circuit50. Furthermore, by the control unit, the common terminal62aand the selection terminal62dare connected to each other in the switch circuit62. In this connection state, in the mode 2, a transmission signal in Band C is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier44, the switch circuit62, the second multiplexer, the switch circuit20, and the primary antenna13, and reception signals in Band A and Band C are received by the RFIC3through the primary antenna13, the switch circuit20, and the second multiplexer. Note that in both the above-described two types of connection forms, the case of one-uplink two-downlink by passing through the output terminal3aand the primary antenna13has been described as an example, but one-uplink two-downlink by passing through the output terminal3band the secondary antenna14is also possible. 
Furthermore, although not illustrated inFIG.7B, a circuit connection state in a case of one-uplink of Band B and two-downlink of Band B and Band C (mode 2: one-uplink two-downlink) is as follows. In the mode 2, as illustrated inFIG.7B, by the control unit, the terminal20aand the terminal20care connected to each other in the switch circuit20. Furthermore, the terminal50aand the terminal50care connected to each other in the switch circuit50. Furthermore, by the control unit, the common terminal61aand the selection terminal61dare connected to each other in the switch circuit61. In this connection state, in the mode 2, a transmission signal in Band B is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier43, the switch circuit61, the first multiplexer, the switch circuit20, and the primary antenna13, and reception signals in Band B and Band C are received by the RFIC3through the primary antenna13, the switch circuit20, and the first multiplexer. Alternatively, in the mode 2, one-uplink of Band C and two-downlink of Band B and Band C (mode 2: one-uplink two-downlink) can be performed. That is, by the control unit, the terminal20band the terminal20care connected to each other in the switch circuit20. Furthermore, the terminal50aand the terminal50dare connected to each other in the switch circuit50. Furthermore, by the control unit, the common terminal62aand the selection terminal62dare connected to each other in the switch circuit62. In this connection state, in the mode 2, a transmission signal in Band C is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier44, the switch circuit62, the second multiplexer, the switch circuit20, and the primary antenna13, and reception signals in Band B and Band C are received by the RFIC3through the primary antenna13, the switch circuit20, and the second multiplexer. Note that in both the above-described two types of connection forms, the case of one-uplink two-downlink by passing through the output terminal3aand the primary antenna13has been described as an example, but one-uplink two-downlink by passing through the output terminal3band the secondary antenna14is also possible. Furthermore, although not illustrated inFIG.7B, a circuit connection state in a case of one-uplink of Band A and two-downlink of Band A and Band B (mode 2: one-uplink two-downlink) is as follows. In the mode 2, as illustrated inFIG.7B, by the control unit, the terminal20aand the terminal20care connected to each other in the switch circuit20. Furthermore, the terminal50aand the terminal50care connected to each other in the switch circuit50. Furthermore, by the control unit, the common terminal61aand the selection terminal61care connected to each other in the switch circuit61. In this connection state, in the mode 2, a transmission signal in Band A is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier43, the switch circuit61, the first multiplexer, the switch circuit20, and the primary antenna13, and reception signals in Band A and Band B are received by the RFIC3through the primary antenna13, the switch circuit20, and the first multiplexer. Alternatively, in the mode 2, one-uplink of Band B and two-downlink of Band A and Band B (mode 2: one-uplink two-downlink) can be performed. That is, by the control unit, the terminal20band the terminal20care connected to each other in the switch circuit20. 
Furthermore, the terminal50aand the terminal50dare connected to each other in the switch circuit50. Furthermore, by the control unit, the common terminal62aand the selection terminal62care connected to each other in the switch circuit62. In this connection state, in the mode 2, a transmission signal in Band B is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier44, the switch circuit62, the second multiplexer, the switch circuit20, and the primary antenna13, and reception signals in Band A and Band B are received by the RFIC3through the primary antenna13, the switch circuit20, and the second multiplexer. Alternatively, in the mode 2, by the control unit, the terminal20aand the terminal20care connected to each other in the switch circuit20. Furthermore, the terminal50aand the terminal50care connected to each other in the switch circuit50. Furthermore, by the control unit, the common terminal61aand the selection terminal61dare connected to each other in the switch circuit61. In this connection state, in the mode 2, a transmission signal in Band B is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier43, the switch circuit61, the first multiplexer, the switch circuit20, and the primary antenna13, and reception signals in Band A and Band B are received by the RFIC3through the primary antenna13, the switch circuit20, and the first multiplexer.

Note that, in all the above-described connection forms of one-uplink two-downlink, the case of one-uplink two-downlink by passing through the output terminal3aand the primary antenna13has been described as an example, but one-uplink two-downlink by passing through the output terminal3band the secondary antenna14is also possible.

2.3 Comparison of High-Frequency Front End Modules According to Embodiment 2 and Comparative Example 2

FIG.8is a circuit configuration diagram of a high-frequency front end module503according to Comparative Example 2. Note that the diagram also illustrates the RFIC3connected to the high-frequency front end module503according to Comparative Example 2. As illustrated in the diagram, the high-frequency front end module503includes a primary circuit503aand a secondary circuit503b. The primary circuit503aincludes the primary antenna13, a switch circuit563, transmission filters31T1,32T1, and33T1, the reception filters31R1,32R1, and33R1, and the transmission amplifier43. The transmission filters31T1,32T1, and33T1and the reception filters31R1,32R1, and33R1constitute a first multiplexer. The secondary circuit503bincludes the secondary antenna14, a switch circuit564, the transmission filters31T2,32T2, and33T2, the reception filters31R2,32R2, and33R2, and the transmission amplifier44. The transmission filters31T2,32T2, and33T2and the reception filters31R2,32R2, and33R2constitute a second multiplexer.

The high-frequency front end module503according to Comparative Example 2 is different from the high-frequency front end module2C according to Embodiment 2 in the configurations of the first multiplexer, the second multiplexer, and the switch circuit. Hereinafter, the high-frequency front end module503according to Comparative Example 2 will be described focusing on the differences from the high-frequency front end module2C according to Embodiment 2.

The switch circuit563is an SP3T (Single Pole 3 Throw) type switch circuit having a common terminal563aand selection terminals563c,563d, and563e.
The common terminal563ais connected to the output terminal of the transmission amplifier43. The switch circuit564is an SP3T type switch circuit having a common terminal564aand selection terminals564c,564d, and564e. The common terminal564ais connected to the output terminal of the transmission amplifier44. The transmission filter31T1is a transmission filter whose input terminal is connected to the selection terminal563c, whose output terminal is connected to the primary antenna13, and which takes A-Tx as a pass band. The transmission filter32T1is a transmission filter whose input terminal is connected to the selection terminal563d, whose output terminal is connected to the primary antenna13, and which takes B-Tx as a pass band. The transmission filter33T1is a transmission filter whose input terminal is connected to the selection terminal563e, whose output terminal is connected to the primary antenna13, and which takes C-Tx as a pass band. The reception filter31R1is a reception filter whose input terminal is connected to the primary antenna13, and which takes A-Rx as a pass band. The reception filter32R1is a reception filter whose input terminal is connected to the primary antenna13, and which takes B-Rx as a pass band. The reception filter33R1is a reception filter whose input terminal is connected to the primary antenna13, and which takes C-Rx as a pass band. The transmission filter31T2is a transmission filter whose input terminal is connected to the selection terminal564c, whose output terminal is connected to the secondary antenna14, and which takes A-Tx as a pass band. The transmission filter32T2is a transmission filter whose input terminal is connected to the selection terminal564d, whose output terminal is connected to the secondary antenna14, and which takes B-Tx as a pass band. The transmission filter33T2is a transmission filter whose input terminal is connected to the selection terminal564e, whose output terminal is connected to the secondary antenna14, and which takes C-Tx as a pass band. The reception filter31R2is a reception filter whose input terminal is connected to the secondary antenna14, and which takes A-Rx as a pass band. The reception filter32R2is a reception filter whose input terminal is connected to the secondary antenna14, and which takes B-Rx as a pass band. The reception filter33R2is a reception filter whose input terminal is connected to the secondary antenna14, and which takes C-Rx as a pass band. According to the configuration described above, the high-frequency front end module503can execute two-uplink in which two signals among a signal in the first transmission band (A-Tx) included in Band A, a signal in the second transmission band (C-Tx) included in Band C, and a signal in the third transmission band (B-Tx) included in Band B are simultaneously transmitted, and two-downlink in which two signals among a signal in the first reception band (A-Rx) included in Band A, a signal in the second reception band (C-Rx) included in Band C, and a signal in the third reception band (B-Rx) included in Band B are simultaneously received. For example, in a state in which the common terminal563aand the selection terminal563care connected to each other and the common terminal564aand the selection terminal564eare connected to each other, it is possible to execute two-uplink two-downlink of Band A and Band C. 
That is, a transmission signal in Band A is transmitted through the output terminal3a, the transmission amplifier43, the first multiplexer, and the primary antenna13, and a transmission signal in Band C is transmitted through the output terminal3b, the transmission amplifier44, the second multiplexer, and the secondary antenna14. Furthermore, a reception signal in Band A is received by the RFIC3through the primary antenna13and the first multiplexer, and a reception signal in Band C is received by the RFIC3through the secondary antenna14and the second multiplexer. Furthermore, in a state in which the common terminal563aand the selection terminal563eare connected to each other and the common terminal564aand the selection terminal564care connected to each other as well, it is possible to execute two-uplink two-downlink of Band A and Band C. Furthermore, in a state in which the common terminal563aand the selection terminal563care connected to each other and the common terminal564aand the selection terminal564dare connected to each other, or in a state in which the common terminal563aand the selection terminal563dare connected to each other and the common terminal564aand the selection terminal564care connected to each other, it is possible to execute two-uplink two-downlink of Band A and Band B. Furthermore, in a state in which the common terminal563aand the selection terminal563dare connected to each other and the common terminal564aand the selection terminal564eare connected to each other, or in a state in which the common terminal563aand the selection terminal563eare connected to each other and the common terminal564aand the selection terminal564dare connected to each other, it is possible to execute two-uplink two-downlink of Band B and Band C. In the high-frequency front end module503according to Comparative Example 2, in order to ensure signal quality such as isolation and the like of high-frequency signals in two bands among Band A, Band B, and Band C simultaneously transmitted/received, two antenna elements, such as the primary antenna13which is preferentially used and the secondary antenna14which is secondarily used, are disposed. In this case, because of necessity of making it possible to transmit/receive each of the high-frequency signals in Band A, Band B, and Band C even by any of the antennas, a transmission path and a reception path of Band A, a transmission path and a reception path of Band B, and a transmission path and a reception path of Band C are connected to the primary antenna13, and a transmission path and a reception path of Band A, a transmission path and a reception path of Band B, and a transmission path and a reception path of Band C are connected and disposed also to the secondary antenna14. A filter for selectively allowing a desired frequency band to pass therethrough is arranged in each signal path, and in the configuration of the high-frequency front end module503according to Comparative Example 2, it is necessary to connect six filters to the primary antenna13, and to similarly connect six filters to the secondary antenna14. That is, in the front end module to which the primary antenna13and the secondary antenna14are applied, in order to achieve two-uplink two-downlink of two arbitrary frequency bands among Band A, Band B, and Band C, a total of 12 filters are required, and the circuit is enlarged. 
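The filter bookkeeping behind this comparison can be illustrated with a short sketch, assuming only the filter complements enumerated above for Comparative Example 2 and for the high-frequency front end module2C; the dictionary names and the helper function are illustrative and not part of the disclosure.

```python
# Illustrative bookkeeping only: counting the filters per multiplexer in
# Comparative Example 2 versus Embodiment 2, as enumerated in the text.

comparative_example_2 = {
    "first multiplexer":  ["31T1", "32T1", "33T1", "31R1", "32R1", "33R1"],
    "second multiplexer": ["31T2", "32T2", "33T2", "31R2", "32R2", "33R2"],
}

embodiment_2 = {
    # No C-Tx transmission filter in the first multiplexer.
    "first multiplexer":  ["31T1", "32T1", "31R1", "32R1", "33R1"],
    # No A-Tx transmission filter in the second multiplexer.
    "second multiplexer": ["32T2", "33T2", "31R2", "32R2", "33R2"],
}


def total(filters: dict) -> int:
    """Total number of filters across all multiplexers."""
    return sum(len(v) for v in filters.values())


if __name__ == "__main__":
    print("Comparative Example 2:", total(comparative_example_2), "filters")  # 12
    print("Embodiment 2:", total(embodiment_2), "filters")                    # 10
    print("Difference:", total(comparative_example_2) - total(embodiment_2))  # 2
```

Running the sketch prints 12 filters for Comparative Example 2 and 10 for the high-frequency front end module2C, i.e., a difference of two filters.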
In contrast, in the high-frequency front end module2C according to the present embodiment, it is possible to arbitrarily distribute high-frequency signals in Band A, Band B, and Band C to the primary antenna13and the secondary antenna14by switching the connection state of the switch circuit20, and execute CA of two-uplink two-downlink. Therefore, in the first multiplexer connected to one of the antennas, for example, the transmission filter for Band C can be omitted. In the same manner, in the second multiplexer connected to the other of the antennas, for example, the transmission filter for Band A can be omitted. That is, in the high-frequency front end module2C according to the present embodiment, a total of ten filters are disposed, so the number of filters can be reduced by two as compared with the configuration of the high-frequency front end module503according to Comparative Example 2. In the configuration of the high-frequency front end module2C according to the present embodiment, in comparison with the high-frequency front end module503according to Comparative Example 2, one switch circuit20of the two-input two-output type is added, but the switch circuit20is sufficiently smaller than the transmission filter and the reception filter. Accordingly, it is possible to provide the high-frequency front end module2C which is reduced in size and in which CA of two-uplink two-downlink can be performed.

2.4 Configurations of High-Frequency Front End Module2D and Communication Device1D According to Modification

FIG.9is a circuit configuration diagram of a communication device1D according to Modification of Embodiment 2. As illustrated in the diagram, the communication device1D includes a high-frequency front end module2D, the RFIC3, and the BBIC4. The communication device1D according to the present modification differs from the communication device1C according to Embodiment 2 in the configuration of the high-frequency front end module. Hereinafter, the communication device1D according to the present modification will be described focusing on the differences from the communication device1C according to Embodiment 2.

As illustrated inFIG.9, the high-frequency front end module2D includes the primary antenna13and the secondary antenna14, the switch circuits20,62, and50, the transmission filters31T1,32T2, and33T2, reception filters31R1,32R1,31R2, and35R2, and the transmission amplifiers43and44. Note that the communication device1D according to the present modification is applied in a case where Band A, Band B, and Band C have the following frequency relationship. That is, the relationship is such that Band A overlaps with neither Band B nor Band C in the frequency band, and the reception band of Band B includes the reception band of Band C. According to the configuration described above, the high-frequency front end module2D can execute (1) two-uplink two-downlink of Band A and Band B, and (2) two-uplink two-downlink of Band A and Band C. Note that since the reception band of Band B includes the reception band of Band C, two-uplink two-downlink of Band B and Band C is not executed.

The high-frequency front end module2D according to the present modification is different from the high-frequency front end module2C according to Embodiment 2 in the configurations of the first multiplexer and the second multiplexer. Hereinafter, the high-frequency front end module2D according to the present modification will be described focusing on the differences from the high-frequency front end module2C according to Embodiment 2.
The switch circuit62is an SPDT type switch circuit having the common terminal62aand the selection terminals62cand62d. The common terminal62ais connected to the output terminal of the transmission amplifier44. The transmission filter31T1is a first transmission filter whose input terminal is connected to the transmission amplifier43, whose output terminal is connected to the switch circuit20, and which takes A-Tx as a pass band. The reception filter31R1is a first reception filter whose input terminal is connected to the switch circuit20, and which takes A-Rx as a pass band. The reception filter32R1is a fifth reception filter whose input terminal is connected to the switch circuit20, and which takes B-Rx and C-Rx as a pass band. The transmission filter32T2is a sixth transmission filter whose input terminal is connected to the selection terminal62c, whose output terminal is connected to the switch circuit20, and which takes B-Tx as a pass band. The transmission filter33T2is a second transmission filter whose input terminal is connected to the selection terminal62d, whose output terminal is connected to the switch circuit20, and which takes C-Tx as a pass band. The reception filter31R2is a third reception filter whose input terminal is connected to the switch circuit20, and which takes A-Rx as a pass band. The reception filter35R2is a second reception filter whose input terminal is connected to the switch circuit20, and which takes, as a pass band, a band which includes B-Rx and C-Rx. The transmission filter31T1and the reception filters31R1and32R1constitute a first multiplexer that can selectively transmit a high-frequency signal in Band A and receive high-frequency signals in Band A, Band B, and Band C. Note that the first multiplexer does not have a transmission filter which takes B-Tx as a pass band and a transmission filter which takes C-Tx as a pass band. The transmission filters32T2and33T2and the reception filters31R2and35R2constitute a second multiplexer that can selectively transmit high-frequency signals in Band B and Band C and receive high-frequency signals in Band A, Band B, and Band C. Note that the second multiplexer does not have a transmission filter which takes A-Tx as a pass band and a reception filter which takes C-Rx as a pass band and does not take part of B-Rx as a pass band. The high-frequency front end module2D described above includes the primary antenna13and the secondary antenna14, the switch circuits20and62, the first multiplexer, and the second multiplexer described above, thereby making it possible to arbitrarily distribute high-frequency signals in Band A, Band B, and Band C to the primary antenna13and the secondary antenna14by switching the connection state of the switch circuits20and62, and execute two-uplink two-downlink of Band A and Band B, and two-uplink two-downlink of Band A and Band C. Here, since the first multiplexer does not have a transmission filter of Band B, a transmission filter of Band C, and a reception filter of Band C, and the second multiplexer does not have a transmission filter of Band A and a reception filter dedicated to Band C, it is possible to provide the high-frequency front end module2D which is reduced in size and in which CA of two-uplink two-downlink in three bands including two bands in an overlapping relationship can be performed. 
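As a rough sketch of the band relationship that governs which carrier-aggregation pairs are executed in this modification, the following Python fragment uses hypothetical placeholder frequency ranges; the disclosure does not specify concrete frequencies for Band A, Band B, and Band C here, and only requires that Band A overlap neither Band B nor Band C and that the reception band of Band B include that of Band C. The helper names are likewise illustrative.

```python
# Illustrative sketch only: checking which band pairs qualify for two-uplink
# two-downlink CA in the Modification of Embodiment 2.  The reception-band
# ranges below are hypothetical placeholders, not values from the patent.

from itertools import combinations

# (rx_low, rx_high) in MHz -- placeholder values chosen so that Band A is
# disjoint from Band B and Band C, and Band B's reception band contains
# Band C's reception band.
RX_BANDS = {
    "Band A": (700.0, 750.0),
    "Band B": (1800.0, 1900.0),
    "Band C": (1830.0, 1860.0),
}


def overlaps(x, y) -> bool:
    """True if the two frequency ranges overlap."""
    return x[0] < y[1] and y[0] < x[1]


def ca_pairs(bands: dict):
    """Yield band pairs whose reception bands do not overlap (CA candidates)."""
    for a, b in combinations(bands, 2):
        if not overlaps(bands[a], bands[b]):
            yield (a, b)


if __name__ == "__main__":
    print(list(ca_pairs(RX_BANDS)))
    # [('Band A', 'Band B'), ('Band A', 'Band C')]
```

The pair that the sketch excludes, Band B with Band C, corresponds to the two-uplink two-downlink combination that the high-frequency front end module2D does not execute.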
2.5 Connection State of High-Frequency Front End Module2D According to Modification

FIG.10Ais a circuit state diagram in a case of two-uplink two-downlink of the high-frequency front end module2D according to Modification of Embodiment 2. This diagram illustrates a circuit connection state in a case of two-uplink of Band A and Band C and two-downlink of Band A and Band C (mode 1: two-uplink two-downlink).

In the mode 1, as illustrated inFIG.10A, by the control unit, the terminal20aand the terminal20care connected to each other, and the terminal20band the terminal20dare connected to each other, in the switch circuit20(first connection state). Furthermore, the terminal50aand the terminal50care connected to each other, and the terminal50band the terminal50dare connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal62aand the selection terminal62dare connected to each other in the switch circuit62. In this connection state, in the mode 1, a transmission signal in Band A is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier43, the first multiplexer, the switch circuit20, and the primary antenna13, and a transmission signal in Band C is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier44, the switch circuit62, the second multiplexer, the switch circuit20, and the secondary antenna14. Furthermore, a reception signal in Band A is received by the RFIC3through the primary antenna13, the switch circuit20, and the first multiplexer, and a reception signal in Band C is received by the RFIC3through the secondary antenna14, the switch circuit20, and the second multiplexer (reception filter35R2).

Alternatively, in the mode 1, as illustrated inFIG.10A, by the control unit, the terminal20aand the terminal20dare connected to each other, and the terminal20band the terminal20care connected to each other, in the switch circuit20(second connection state). Furthermore, the terminal50aand the terminal50dare connected to each other, and the terminal50band the terminal50care connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal62aand the selection terminal62dare connected to each other in the switch circuit62. In this connection state, in the mode 1, a transmission signal in Band C is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier44, the switch circuit62, the second multiplexer, the switch circuit20, and the primary antenna13, and a transmission signal in Band A is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier43, the first multiplexer, the switch circuit20, and the secondary antenna14. Furthermore, a reception signal in Band C is received by the RFIC3through the primary antenna13, the switch circuit20, and the second multiplexer (reception filter35R2), and a reception signal in Band A is received by the RFIC3through the secondary antenna14, the switch circuit20, and the first multiplexer.

Furthermore, in the high-frequency front end module2D, two-uplink of Band A and Band B and two-downlink of Band A and Band B (mode 1: two-uplink two-downlink) can be performed. That is, by the control unit, the terminal20aand the terminal20care connected to each other, and the terminal20band the terminal20dare connected to each other, in the switch circuit20(first connection state).
Furthermore, the terminal50aand the terminal50care connected to each other, and the terminal50band the terminal50dare connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal62aand the selection terminal62care connected to each other in the switch circuit62. In this connection state, in the mode 1, a transmission signal in Band A is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier43, the first multiplexer, the switch circuit20, and the primary antenna13, and a transmission signal in Band B is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier44, the switch circuit62(through the selection terminal62c), the second multiplexer, the switch circuit20, and the secondary antenna14. Furthermore, a reception signal in Band A is received by the RFIC3through the primary antenna13, the switch circuit20, and the first multiplexer, and a reception signal in Band B is received by the RFIC3through the secondary antenna14, the switch circuit20, and the second multiplexer (reception filter35R2). Alternatively, by the control unit, the terminal20aand the terminal20dare connected to each other, and the terminal20band the terminal20care connected to each other, in the switch circuit20(second connection state). Furthermore, the terminal50aand the terminal50dare connected to each other, and the terminal50band the terminal50care connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal62aand the selection terminal62care connected to each other in the switch circuit62. In this connection state, in the mode 1, a transmission signal in Band B is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier44, the switch circuit62(through the selection terminal62c), the second multiplexer, the switch circuit20, and the primary antenna13, and a transmission signal in Band A is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier43, the first multiplexer, the switch circuit20, and the secondary antenna14. Furthermore, a reception signal in Band B is received by the RFIC3through the primary antenna13, the switch circuit20, and the second multiplexer (reception filter35R2), and a reception signal in Band A is received by the RFIC3through the secondary antenna14, the switch circuit20, and the first multiplexer. FIG.10Bis a circuit state diagram in a case of one-uplink two-downlink of the high-frequency front end module2D according to Modification of Embodiment 2. This diagram illustrates a circuit connection state in a case of one-uplink of Band A and two-downlink of Band A and Band B (mode 2: one-uplink two-downlink). In the mode 2, as illustrated inFIG.10B, by the control unit, the terminal20aand the terminal20care connected to each other in the switch circuit20(third connection state). Furthermore, the terminal50aand the terminal50care connected to each other in the switch circuit50. In this connection state, in the mode 2, a transmission signal in Band A is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier43, the first multiplexer, the switch circuit20, and the primary antenna13, and reception signals in Band A and Band B are received by the RFIC3through the primary antenna13, the switch circuit20, and the first multiplexer. 
Furthermore, in the above-described connection state, one-uplink of Band A and two-downlink of Band A and Band C (mode 2: one-uplink two-downlink) can also be performed. That is, a transmission signal in Band A is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier43, the first multiplexer, the switch circuit20, and the primary antenna13, and reception signals in Band A and Band C are received by the RFIC3through the primary antenna13, the switch circuit20, and the first multiplexer. Furthermore, in the high-frequency front end module2D, one-uplink of Band B and two-downlink of Band A and Band B (mode 2: one-uplink two-downlink) can be performed. That is, by the control unit, the terminal20band the terminal20care connected to each other in the switch circuit20(fifth connection state). Furthermore, the terminal50aand the terminal50dare connected to each other in the switch circuit50. Furthermore, by the control unit, the common terminal62aand the selection terminal62care connected to each other in the switch circuit62. In this connection state, in the mode 2, a transmission signal in Band B is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier44, the switch circuit62(through the selection terminal62c), the second multiplexer, the switch circuit20, and the primary antenna13, and reception signals in Band A and Band B are received by the RFIC3through the primary antenna13, the switch circuit20, and the second multiplexer. Furthermore, in the above-described connection state, one-uplink of Band C and two-downlink of Band A and Band C (mode 2: one-uplink two-downlink) can also be performed. That is, by the control unit, the terminal20band the terminal20care connected to each other in the switch circuit20(fifth connection state). Furthermore, the terminal50aand the terminal50dare connected to each other in the switch circuit50. Furthermore, by the control unit, the common terminal62aand the selection terminal62dare connected to each other in the switch circuit62. That is, a transmission signal in Band C is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier44, the switch circuit62, the second multiplexer, the switch circuit20, and the primary antenna13, and reception signals in Band A and Band C are received by the RFIC3through the primary antenna13, the switch circuit20, and the second multiplexer. Note that in the above-described two types of connection forms, the case of one-uplink two-downlink by passing through the output terminal3aand the primary antenna13has been described as an example, but one-uplink two-downlink by passing through the output terminal3band the secondary antenna14is also possible. The high-frequency front end module2D according to the present modification includes the primary antenna13and the secondary antenna14, the switch circuit20, the first multiplexer, and the second multiplexer, thereby making it possible to execute two-uplink two-downlink of two bands of Band A and Band B, and two-uplink two-downlink of two bands of Band A and Band C, by switching the connection state of the switch circuit20. By arbitrarily distributing high-frequency signals in Band A and Band B to the primary antenna13and the secondary antenna14, and arbitrarily distributing high-frequency signals in Band A and Band C to the primary antenna13and the secondary antenna14, CA of two-uplink two-downlink can be performed. 
Therefore, in the first multiplexer connected to one of the antennas, the transmission filters for Band B and Band C can be omitted. In the same manner, in the second multiplexer connected to the other of the antennas, the transmission filter for Band A can be omitted. That is, the number of filters can be reduced by three or more as compared with the configuration of the high-frequency front end module503according to Comparative Example 2. Furthermore, in the configuration of the high-frequency front end module2D according to the present modification, as compared with the high-frequency front end module2C according to Embodiment 2, it is possible to omit three further filters: the transmission filter for Band B and the reception filter dedicated to Band C in the first multiplexer, and the reception filter dedicated to Band C in the second multiplexer. Furthermore, in the case of one-uplink two-downlink, it is sufficient to use only one of the primary antenna13and the secondary antenna14. Accordingly, it is possible to provide the high-frequency front end module2D which is further reduced in size and in which CA of two-uplink two-downlink can be performed and the CA operation of one-uplink two-downlink is simplified.

Embodiment 3

3.1 Configurations of High-Frequency Front End Module2E and Communication Device1E

Embodiment 1 has described the configurations of the communication device and the high-frequency front end module for executing CA in two frequency bands, and Embodiment 2 has described the configurations of the communication device and the high-frequency front end module for executing CA in two frequency bands among three frequency bands. In contrast, the present embodiment describes the configurations of a communication device and a high-frequency front end module for executing CA of two frequency bands among four frequency bands.

FIG.11is a circuit configuration diagram of a communication device1E according to Embodiment 3. As illustrated in the diagram, the communication device1E includes a high-frequency front end module2E, the RFIC3, and the BBIC4. The communication device1E according to the present embodiment differs from the communication device1C according to Embodiment 2 in the configuration of the high-frequency front end module. Hereinafter, the communication device1E according to the present embodiment will be described focusing on the differences from the communication device1C according to Embodiment 2.

As illustrated inFIG.11, the high-frequency front end module2E includes a primary antenna15and a secondary antenna16, switch circuits20,50,63,64,71, and72, transmission filters31T1,34T1,32T2, and33T2, reception filters31R1,32R1,33R1,34R1,31R2,32R2,33R2, and34R2, and transmission amplifiers45and46.
According to the configuration described above, the high-frequency front end module2E can execute (1) two-uplink in which a transmission signal in a first transmission band (B66-Tx) included in a first frequency band (Band 66) and a transmission signal in a second transmission band (B25-Tx) included in a second frequency band (Band 25) are simultaneously transmitted, (2) two-downlink in which a reception signal in a first reception band (B66-Rx) included in the first frequency band (Band 66) and a reception signal in a second reception band (B25-Rx) included in the second frequency band (Band 25) are simultaneously received, (3) two-uplink in which a transmission signal in a third transmission band (B1-Tx) included in a third frequency band (Band 1) and a transmission signal in a fourth transmission band (B3-Tx) included in a fourth frequency band (Band 3) are simultaneously transmitted, and (4) two-downlink in which a reception signal in a third reception band (B1-Rx) included in the third frequency band (Band 1) and a reception signal in a fourth reception band (B3-Rx) included in the fourth frequency band (Band 3) are simultaneously received. Note that in the present embodiment, a working example in which each of the four frequency bands is allocated to a specific band of LTE (Long Term Evolution) is described. Note that Band 66 has a transmission band (about 1710-1780 MHz) and a reception band (about 2110-2200 MHz). Band 25 has a transmission band (about 1850-1915 MHz) and a reception band (about 1930-1995 MHz). Band 1 has a transmission band (about 1920-1980 MHz) and a reception band (about 2110-2170 MHz). Band 3 has a transmission band (about 1710-1785 MHz) and a reception band (about 1805-1880 MHz). In the frequency allocation described above, a relationship in which the transmission band of Band 3 includes the transmission band of Band 66 is established, and a relationship in which the reception band of Band 66 includes the reception band of Band 1 is established. In the four frequency bands, there is no other overlapping and inclusion relationship. Because of these frequency band relationships, the high-frequency front end module2E according to the present embodiment is configured such that two-uplink of Band 66 and Band 3 is not executed, and two-downlink of Band 66 and Band 1 is not executed. The high-frequency front end module2E according to the present embodiment is different from the high-frequency front end module2C according to Embodiment 2 in that it includes a configuration for transmitting/receiving signals in four frequency bands. Hereinafter, the high-frequency front end module2E according to the present embodiment will be described focusing on the differences from the high-frequency front end module2C according to Embodiment 2. The primary antenna15is an antenna that is used in preference to the secondary antenna16in terms of antenna performance and the like, and is an antenna element capable of transmitting and receiving signals in Band 66, Band 25, Band 1, and Band 3. The secondary antenna16is an antenna element capable of transmitting and receiving signals in Band 66, Band 25, Band 1, and Band 3. The switch circuit63is an SPDT type switch circuit having a common terminal63aand selection terminals63cand63d. The common terminal63ais connected to an output terminal of the transmission amplifier45. The switch circuit64is an SPDT type switch circuit having a common terminal64aand selection terminals64cand64d. 
The common terminal64ais connected to an output terminal of the transmission amplifier46. The switch circuit71is an SPDT type switch circuit having a common terminal71cand selection terminals71aand71b. The common terminal71cis connected to the terminal20aof the switch circuit20. The switch circuit72is an SPDT type switch circuit having a common terminal72cand selection terminals72aand72b. The common terminal72cis connected to the terminal20bof the switch circuit20. The transmission filter31T1is a first transmission filter whose input terminal is connected to the selection terminal63c, whose output terminal is connected to the selection terminal71a, and which takes B66-Tx as a pass band. The transmission filter34T1is a seventh transmission filter whose input terminal is connected to the selection terminal63d, whose output terminal is connected to the selection terminal71b, and which takes B3-Tx as a pass band. The reception filter31R1is a first reception filter whose input terminal is connected to the selection terminal71a, and which takes B66-Rx as a pass band. The reception filter32R1is a fourth reception filter whose input terminal is connected to the selection terminal71a, and which takes B25-Rx as a pass band. The reception filter33R1is a fifth reception filter whose input terminal is connected to the selection terminal71b, and which takes B1-Rx as a pass band. The reception filter34R1is a seventh reception filter whose input terminal is connected to the selection terminal71b, and which takes B3-Rx as a pass band. The transmission filter32T2is a second transmission filter whose input terminal is connected to the selection terminal64c, whose output terminal is connected to the selection terminal72a, and which takes B25-Tx as a pass band. The transmission filter33T2is a sixth transmission filter whose input terminal is connected to the selection terminal64d, whose output terminal is connected to the selection terminal72b, and which takes B1-Tx as a pass band. The reception filter31R2is a third reception filter whose input terminal is connected to the selection terminal72a, and which takes B66-Rx as a pass band. The reception filter32R2is a second reception filter whose input terminal is connected to the selection terminal72a, and which takes B25-Rx as a pass band. The reception filter33R2is a sixth reception filter whose input terminal is connected to the selection terminal72b, and which takes B1-Rx as a pass band. The reception filter34R2is an eighth reception filter whose input terminal is connected to the selection terminal72b, and which takes B3-Rx as a pass band. The transmission filters31T1and34T1and the reception filters31R1,32R1,33R1, and34R1constitute a first multiplexer that can selectively transmit high-frequency signals in Band 66 and Band 3 and receive high-frequency signals in Band 66, Band 25, Band 1, and Band 3. Note that the first multiplexer does not have a transmission filter which takes B25-Tx as a pass band and a transmission filter which takes B1-Tx as a pass band. The transmission filters32T2and33T2and the reception filters31R2,32R2,33R2, and34R2constitute a second multiplexer that can selectively transmit high-frequency signals in Band 25 and Band 1 and receive high-frequency signals in Band 66, Band 25, Band 1, and Band 3. Note that the second multiplexer does not have a transmission filter which takes B66-Tx as a pass band and a transmission filter which takes B3-Tx as a pass band. 
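The band allocation and the CA restrictions recited above can be summarized in a short sketch. The following Python snippet is illustrative only and is not part of the original disclosure; it assumes, as the passage implies, that two-uplink (or two-downlink) is avoided whenever the two transmission (or reception) bands overlap, and it uses the approximate LTE band edges given above.

# Illustrative sketch only (not part of the original disclosure): models the
# LTE band allocation recited above and checks which band pairs permit
# simultaneous two-uplink or two-downlink, assuming that simultaneous
# transmission (reception) is avoided whenever the two transmission
# (reception) bands overlap.

BANDS = {
    # band: (Tx low, Tx high, Rx low, Rx high) in MHz, approximate values
    "Band 66": (1710, 1780, 2110, 2200),
    "Band 25": (1850, 1915, 1930, 1995),
    "Band 1":  (1920, 1980, 2110, 2170),
    "Band 3":  (1710, 1785, 1805, 1880),
}

def _overlaps(lo1, hi1, lo2, hi2):
    """Return True when two frequency ranges share any spectrum."""
    return lo1 <= hi2 and lo2 <= hi1

def can_two_uplink(a, b):
    """Two-uplink is assumed possible only when the Tx bands do not overlap."""
    a_lo, a_hi, _, _ = BANDS[a]
    b_lo, b_hi, _, _ = BANDS[b]
    return not _overlaps(a_lo, a_hi, b_lo, b_hi)

def can_two_downlink(a, b):
    """Two-downlink is assumed possible only when the Rx bands do not overlap."""
    _, _, a_lo, a_hi = BANDS[a]
    _, _, b_lo, b_hi = BANDS[b]
    return not _overlaps(a_lo, a_hi, b_lo, b_hi)

if __name__ == "__main__":
    # Matches the passage: Band 66/Band 25 and Band 1/Band 3 support CA,
    # while two-uplink of Band 66/Band 3 and two-downlink of Band 66/Band 1 do not.
    assert can_two_uplink("Band 66", "Band 25") and can_two_downlink("Band 66", "Band 25")
    assert can_two_uplink("Band 1", "Band 3") and can_two_downlink("Band 1", "Band 3")
    assert not can_two_uplink("Band 66", "Band 3")    # B3-Tx includes B66-Tx
    assert not can_two_downlink("Band 66", "Band 1")  # B66-Rx includes B1-Rx

Running the snippet confirms that Band 66 with Band 25 and Band 1 with Band 3 are the usable two-uplink two-downlink pairs, consistent with the restrictions stated above.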
The high-frequency front end module2E described above includes the primary antenna15and the secondary antenna16, the switch circuits20,63,64,71, and72, the first multiplexer, and the second multiplexer described above, thereby making it possible to arbitrarily distribute high-frequency signals in Band 66, Band 25, Band 1, and Band 3 to the primary antenna15and the secondary antenna16by switching the connection state of the switch circuits20,63,64,71, and72, and execute CAs of two-uplink two-downlink cited in (1)-(4) described above. Here, since the first multiplexer does not have the transmission filter of Band 25 and the transmission filter of Band 1, and the second multiplexer does not have the transmission filter of Band 66 and the transmission filter of Band 3, it is possible to provide the high-frequency front end module2E which is reduced in size and in which CA of two-uplink two-downlink can be performed. 3.2 Connection State of High-Frequency Front End Module2E FIG.12Ais a circuit state diagram in a case of two-uplink two-downlink of the high-frequency front end module2E according to Embodiment 3. This diagram illustrates a circuit connection state in a case of two-uplink of Band 66 and Band 25 and two-downlink of Band 66 and Band 25 (mode 1: two-uplink two-downlink). In the mode 1, as illustrated inFIG.12A, by the control unit, the terminal20aand the terminal20care connected to each other, and the terminal20band the terminal20dare connected to each other, in the switch circuit20(first connection state). Furthermore, the terminal50aand the terminal50care connected to each other, and the terminal50band the terminal50dare connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal63aand the selection terminal63care connected to each other in the switch circuit63, and the common terminal64aand the selection terminal64care connected to each other in the switch circuit64. In this connection state, in the mode 1, a transmission signal in Band 66 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier45, the switch circuit63, the first multiplexer, the switch circuit71, the switch circuit20, and the primary antenna15, and a transmission signal in Band 25 is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier46, the switch circuit64, the second multiplexer, the switch circuit72, the switch circuit20, and the secondary antenna16. Furthermore, a reception signal in Band 66 is received by the RFIC3through the primary antenna15, the switch circuit20, the switch circuit71, and the first multiplexer, and a reception signal in Band 25 is received by the RFIC3through the secondary antenna16, the switch circuit20, the switch circuit72, and the second multiplexer. Alternatively, in the mode 1, as illustrated inFIG.12A, by the control unit, the terminal20aand the terminal20dare connected to each other, and the terminal20band the terminal20care connected to each other, in the switch circuit20(second connection state). Furthermore, the terminal50aand the terminal50dare connected to each other, and the terminal50band the terminal50care connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal63aand the selection terminal63care connected to each other in the switch circuit63, and the common terminal64aand the selection terminal64care connected to each other in the switch circuit64. 
In this connection state, in the mode 1, a transmission signal in Band 25 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier46, the switch circuit64, the second multiplexer, the switch circuit72, the switch circuit20, and the primary antenna15, and a transmission signal in Band 66 is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier45, the switch circuit63, the first multiplexer, the switch circuit71, the switch circuit20, and the secondary antenna16. Furthermore, a reception signal in Band 25 is received by the RFIC3through the primary antenna15, the switch circuit20, the switch circuit72, and the second multiplexer, and a reception signal in Band 66 is received by the RFIC3through the secondary antenna16, the switch circuit20, the switch circuit71, and the first multiplexer. FIG.12Bis a circuit state diagram in a case of two-uplink two-downlink of the high-frequency front end module2E according to Embodiment 3. This diagram illustrates a circuit connection state in a case of two-uplink of Band 1 and Band 3 and two-downlink of Band 1 and Band 3 (mode 1: two-uplink two-downlink). In the mode 1, as illustrated inFIG.12B, by the control unit, the terminal20aand the terminal20care connected to each other, and the terminal20band the terminal20dare connected to each other, in the switch circuit20(first connection state). Furthermore, the terminal50aand the terminal50care connected to each other, and the terminal50band the terminal50dare connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal63aand the selection terminal63dare connected to each other in the switch circuit63, and the common terminal64aand the selection terminal64dare connected to each other in the switch circuit64. In this connection state, in the mode 1, a transmission signal in Band 1 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier45, the switch circuit63, the first multiplexer, the switch circuit71, the switch circuit20, and the primary antenna15, and a transmission signal in Band 3 is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier46, the switch circuit64, the second multiplexer, the switch circuit72, the switch circuit20, and the secondary antenna16. Furthermore, a reception signal in Band 1 is received by the RFIC3through the primary antenna15, the switch circuit20, the switch circuit71, and the first multiplexer, and a reception signal in Band 3 is received by the RFIC3through the secondary antenna16, the switch circuit20, the switch circuit72, and the second multiplexer. Alternatively, in the mode 1, as illustrated inFIG.12B, by the control unit, the terminal20aand the terminal20dare connected to each other, and the terminal20band the terminal20care connected to each other, in the switch circuit20(second connection state). Furthermore, the terminal50aand the terminal50dare connected to each other, and the terminal50band the terminal50care connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal63aand the selection terminal63dare connected to each other in the switch circuit63, and the common terminal64aand the selection terminal64dare connected to each other in the switch circuit64. 
In this connection state, in the mode 1, a transmission signal in Band 3 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier46, the switch circuit64, the second multiplexer, the switch circuit72, the switch circuit20, and the primary antenna15, and a transmission signal in Band 1 is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier45, the switch circuit63, the first multiplexer, the switch circuit71, the switch circuit20, and the secondary antenna16. Furthermore, a reception signal in Band 3 is received by the RFIC3through the primary antenna15, the switch circuit20, the switch circuit72, and the second multiplexer, and a reception signal in Band 1 is received by the RFIC3through the secondary antenna16, the switch circuit20, the switch circuit71, and the first multiplexer. FIG.12Cis a circuit state diagram in a case of one-uplink two-downlink of the high-frequency front end module2E according to Embodiment 3. This diagram illustrates a circuit connection state in a case of one-uplink of Band 66 and two-downlink of Band 66 and Band 25 (mode 2: one-uplink two-downlink). In the mode 2, as illustrated inFIG.12C, by the control unit, the terminal20aand the terminal20care connected to each other in the switch circuit20(third connection state). Furthermore, the terminal50aand the terminal50care connected to each other in the switch circuit50. Furthermore, by the control unit, the common terminal63aand the selection terminal63care connected to each other in the switch circuit63. In this connection state, in the mode 2, a transmission signal in Band 66 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier45, the switch circuit63, the first multiplexer, the switch circuit71, the switch circuit20, and the primary antenna15, and reception signals in Band 66 and Band 25 are received by the RFIC3through the primary antenna15, the switch circuit20, the switch circuit71, and the first multiplexer. Note that in the above-described connection form, the case of one-uplink two-downlink by passing through the output terminal3aand the primary antenna15has been described as an example, but one-uplink two-downlink by passing through the output terminal3band the secondary antenna16is also possible. Alternatively, in the mode 2, one-uplink of Band 25 and two-downlink of Band 66 and Band 25 (mode 2: one-uplink two-downlink) can be performed. That is, by the control unit, the terminal20band the terminal20care connected to each other in the switch circuit20(fifth connection state). Furthermore, the terminal50aand the terminal50dare connected to each other in the switch circuit50. Furthermore, by the control unit, the common terminal64aand the selection terminal64care connected to each other in the switch circuit64. In this connection state, in the mode 2, a transmission signal in Band 25 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier46, the switch circuit64, the second multiplexer, the switch circuit72, the switch circuit20, and the primary antenna15, and reception signals in Band 66 and Band 25 are received by the RFIC3through the primary antenna15, the switch circuit20, the switch circuit72, and the second multiplexer. 
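The connection states described so far for FIG.12A to FIG.12C can be collected into a simple lookup table. The following sketch is illustrative only and not part of the original disclosure; the table structure and the helper function are hypothetical, while the terminal pairings are taken from the passage above. The one-uplink cases of Band 3 and Band 1 described next follow the same pattern.

# Illustrative sketch only (not part of the original disclosure): records, as a
# lookup table, the switch settings recited above for the high-frequency front
# end module2E. Terminal names follow the reference numerals in the text.

CONNECTION_STATES = {
    # mode 1: two-uplink two-downlink of Band 66 and Band 25 (FIG. 12A)
    ("mode1", "B66+B25", "first"): {
        "sw20": [("20a", "20c"), ("20b", "20d")],
        "sw50": [("50a", "50c"), ("50b", "50d")],
        "sw63": [("63a", "63c")],
        "sw64": [("64a", "64c")],
    },
    ("mode1", "B66+B25", "second"): {
        "sw20": [("20a", "20d"), ("20b", "20c")],
        "sw50": [("50a", "50d"), ("50b", "50c")],
        "sw63": [("63a", "63c")],
        "sw64": [("64a", "64c")],
    },
    # mode 1: two-uplink two-downlink of Band 1 and Band 3 (FIG. 12B)
    ("mode1", "B1+B3", "first"): {
        "sw20": [("20a", "20c"), ("20b", "20d")],
        "sw50": [("50a", "50c"), ("50b", "50d")],
        "sw63": [("63a", "63d")],
        "sw64": [("64a", "64d")],
    },
    ("mode1", "B1+B3", "second"): {
        "sw20": [("20a", "20d"), ("20b", "20c")],
        "sw50": [("50a", "50d"), ("50b", "50c")],
        "sw63": [("63a", "63d")],
        "sw64": [("64a", "64d")],
    },
    # mode 2: one-uplink two-downlink via the primary antenna15 (FIG. 12C)
    ("mode2", "uplink B66", "third"): {
        "sw20": [("20a", "20c")],
        "sw50": [("50a", "50c")],
        "sw63": [("63a", "63c")],
    },
    ("mode2", "uplink B25", "fifth"): {
        "sw20": [("20b", "20c")],
        "sw50": [("50a", "50d")],
        "sw64": [("64a", "64c")],
    },
}

def settings_for(mode, bands, state):
    """Return the terminal pairs the control unit closes for a given case."""
    return CONNECTION_STATES[(mode, bands, state)]

# Example: the first connection state of two-uplink two-downlink of Band 66/Band 25.
print(settings_for("mode1", "B66+B25", "first")["sw20"])  # [('20a', '20c'), ('20b', '20d')]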
Note that in both the above-described two types of connection forms, the case of one-uplink two-downlink by passing through the output terminal3aand the primary antenna15has been described as an example, but one-uplink two-downlink by passing through the output terminal3band the secondary antenna16is also possible. Furthermore, although not illustrated inFIG.12C, a circuit connection state in a case of one-uplink of Band 3 and two-downlink of Band 1 and Band 3 (mode 2: one-uplink two-downlink) is as follows. That is, by the control unit, the terminal20aand the terminal20care connected to each other in the switch circuit20(third connection state). Furthermore, the terminal50aand the terminal50care connected to each other in the switch circuit50. Furthermore, by the control unit, the common terminal63aand the selection terminal63dare connected to each other in the switch circuit63. In this connection state, in the mode 2, a transmission signal in Band 3 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier45, the switch circuit63, the first multiplexer, the switch circuit71, the switch circuit20, and the primary antenna15, and reception signals in Band 1 and Band 3 are received by the RFIC3through the primary antenna15, the switch circuit20, the switch circuit71, and the first multiplexer. Alternatively, in the mode 2, one-uplink of Band 1 and two-downlink of Band 1 and Band 3 (mode 2: one-uplink two-downlink) can be performed. That is, by the control unit, the terminal20band the terminal20care connected to each other in the switch circuit20(fifth connection state). Furthermore, the terminal50aand the terminal50dare connected to each other in the switch circuit50. Furthermore, by the control unit, the common terminal64aand the selection terminal64dare connected to each other in the switch circuit64. In this connection state, in the mode 2, a transmission signal in Band 1 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier46, the switch circuit64, the second multiplexer, the switch circuit72, the switch circuit20, and the primary antenna15, and reception signals in Band 1 and Band 3 are received by the RFIC3through the primary antenna15, the switch circuit20, the switch circuit72, and the second multiplexer. Note that in both the above-described two types of connection forms, the case of one-uplink two-downlink by passing through the output terminal3aand the primary antenna15has been described as an example, but one-uplink two-downlink by passing through the output terminal3band the secondary antenna16is also possible. 3.3 Comparison of High-Frequency Front End Modules According to Embodiment 3 and Comparative Example 3 FIG.13is a circuit configuration diagram of a high-frequency front end module504according to Comparative Example 3. Note that the diagram also illustrates the RFIC3connected to the high-frequency front end module504according to Comparative Example 3. As illustrated in the diagram, the high-frequency front end module504includes a primary circuit504aand a secondary circuit504b. The primary circuit504aincludes the primary antenna15, switch circuits565and71, the transmission filters31T1,32T1,33T1, and34T1, the reception filters31R1,32R1,33R1, and34R1, and the transmission amplifier45. The transmission filters31T1,32T1,33T1, and34T1and the reception filters31R1,32R1,33R1, and34R1constitute a first multiplexer. 
The secondary circuit504bincludes the secondary antenna16, switch circuits566and72, transmission filters31T2,32T2,33T2, and34T2, the reception filters31R2,32R2,33R2, and34R2, and the transmission amplifier46. The transmission filters31T2,32T2,33T2, and34T2and the reception filters31R2,32R2,33R2, and34R2constitute a second multiplexer. The high-frequency front end module504according to Comparative Example 3 is different from the high-frequency front end module2E according to Embodiment 3 in the configurations of the first multiplexer, the second multiplexer, and the switch circuit. Hereinafter, the high-frequency front end module504according to Comparative Example 3 will be described focusing on the differences from the high-frequency front end module2E according to Embodiment 3. The switch circuit565is an SP4T (Single Pole 4 Throw) type switch circuit having a common terminal and four selection terminals. The common terminal is connected to the output terminal of the transmission amplifier45. The switch circuit566is an SP4T type switch circuit having a common terminal and four selection terminals. The common terminal is connected to the output terminal of the transmission amplifier46. The switch circuit71is an SPDT type switch circuit having the common terminal71cand the selection terminals71aand71b. The common terminal71cis connected to the primary antenna15. The switch circuit72is an SPDT type switch circuit having the common terminal72cand the selection terminals72aand72b. The common terminal72cis connected to the secondary antenna16. The transmission filter31T1is a transmission filter whose input terminal is connected to the selection terminal of the switch circuit565, whose output terminal is connected to the selection terminal71a, and which takes B66-Tx as a pass band. The transmission filter32T1is a transmission filter whose input terminal is connected to the selection terminal of the switch circuit565, whose output terminal is connected to the selection terminal71a, and which takes B25-Tx as a pass band. The transmission filter33T1is a transmission filter whose input terminal is connected to the selection terminal of the switch circuit565, whose output terminal is connected to the selection terminal71b, and which takes B1-Tx as a pass band. The transmission filter34T1is a transmission filter whose input terminal is connected to the selection terminal of the switch circuit565, whose output terminal is connected to the selection terminal71b, and which takes B3-Tx as a pass band. The reception filter31R1is a reception filter whose input terminal is connected to the selection terminal71a, and which takes B66-Rx as a pass band. The reception filter32R1is a reception filter whose input terminal is connected to the selection terminal71a, and which takes B25-Rx as a pass band. The reception filter33R1is a reception filter whose input terminal is connected to the selection terminal71b, and which takes B1-Rx as a pass band. The reception filter34R1is a reception filter whose input terminal is connected to the selection terminal71b, and which takes B3-Rx as a pass band. The transmission filter31T2is a transmission filter whose input terminal is connected to the selection terminal of the switch circuit566, whose output terminal is connected to the selection terminal72a, and which takes B66-Tx as a pass band. 
The transmission filter32T2is a transmission filter whose input terminal is connected to the selection terminal of the switch circuit566, whose output terminal is connected to the selection terminal72a, and which takes B25-Tx as a pass band. The transmission filter33T2is a transmission filter whose input terminal is connected to the selection terminal of the switch circuit566, whose output terminal is connected to the selection terminal72b, and which takes B1-Tx as a pass band. The transmission filter34T2is a transmission filter whose input terminal is connected to the selection terminal of the switch circuit566, whose output terminal is connected to the selection terminal72b, and which takes B3-Tx as a pass band. The reception filter31R2is a reception filter whose input terminal is connected to the selection terminal72a, and which takes B66-Rx as a pass band. The reception filter32R2is a reception filter whose input terminal is connected to the selection terminal72a, and which takes B25-Rx as a pass band. The reception filter33R2is a reception filter whose input terminal is connected to the selection terminal72b, and which takes B1-Rx as a pass band. The reception filter34R2is a reception filter whose input terminal is connected to the selection terminal72b, and which takes B3-Rx as a pass band. According to the configuration described above, the high-frequency front end module504can execute (1) two-uplink in which a transmission signal in B66-Tx included in Band 66 and a transmission signal in B25-Tx included in Band 25 are simultaneously transmitted, (2) two-downlink in which a reception signal in B66-Rx included in Band 66 and a reception signal in B25-Rx included in Band 25 are simultaneously received, (3) two-uplink in which a transmission signal in B1-Tx included in Band 1 and a transmission signal in B3-Tx included in Band 3 are simultaneously transmitted, and (4) two-downlink in which a reception signal in B1-Rx included in Band 1 and a reception signal in B3-Rx included in Band 3 are simultaneously received. In the high-frequency front end module504according to Comparative Example 3, in order to ensure signal quality such as isolation and the like of high-frequency signals in Band 66 and Band 25 simultaneously transmitted/received, and signal quality such as isolation and the like of high-frequency signals in Band 1 and Band 3 simultaneously transmitted/received, two antenna elements, such as the primary antenna15which is preferentially used and the secondary antenna16which is secondarily used, are disposed. In this case, because of necessity of making it possible to transmit/receive each of the high-frequency signals in Band 66, Band 25, Band 1, and Band 3 even by any of the antennas, transmission paths and reception paths of all bands are connected to the primary antenna15, and transmission paths and reception paths of all bands are connected and disposed also to the secondary antenna16. A filter for selectively allowing a desired frequency band to pass therethrough is arranged in each signal path, and in the configuration of the high-frequency front end module504according to Comparative Example 3, it is necessary to connect eight filters to the primary antenna15, and to similarly connect eight filters to the secondary antenna16. 
That is, in the front end module to which the primary antenna15and the secondary antenna16are applied, in order to achieve two-uplink two-downlink of two arbitrary frequency bands among Band 66, Band 25, Band 1, and Band 3, a total of 16 filters are required, and the circuit is enlarged. In contrast, according to the high-frequency front end module2E according to the present embodiment, it is possible to arbitrarily distribute high-frequency signals in Band 66, Band 25, Band 1, and Band 3 to the primary antenna15and the secondary antenna16by switching the connection state of the switch circuit20, and execute CA of two-uplink two-downlink. Therefore, in the first multiplexer connected to one of the antennas, for example, the transmission filter of Band 25 and the transmission filter of Band 1 can be reduced. In the same manner, in the second multiplexer connected to the other of the antennas, for example, the transmission filter of Band 66 and the transmission filter of Band 3 can be reduced. That is, four or more filters can be reduced as compared with the configuration of the high-frequency front end module504according to Comparative Example 3. In the configuration of the high-frequency front end module2E according to the present embodiment, in comparison with the high-frequency front end module504according to Comparative Example 3, the one switch circuit20of the two-input two-output type is added, but the switch circuit20is sufficiently smaller than the transmission filter and the reception filter. Accordingly, it is possible to provide the high-frequency front end module2E which is reduced in size and in which CA of two-uplink two-downlink can be performed. 3.4 Configurations of High-Frequency Front End Module2F and Communication Device1F According to Modification 1 FIG.14is a circuit configuration diagram of a communication device1F according to Modification 1 of Embodiment 3. As illustrated in the diagram, the communication device1F includes a high-frequency front end module2F, the RFIC3, and the BBIC4. The communication device1F according to the present modification differs from the communication device1E according to Embodiment 3 in the configuration of the high-frequency front end module. Hereinafter, the communication device1F according to the present modification will be described focusing on the differences from the communication device1E according to Embodiment 3. As illustrated inFIG.14, the high-frequency front end module2F includes the primary antenna15and the secondary antenna16, switch circuits20,50,65,66, and72, transmission filters37T1,31T2,32T2,33T2, and34T2, reception filters36R1,34R1,32R1,31R2,32R2,33R2, and34R2, and transmission amplifiers47and48. Note that in the communication device1F according to the present modification, in frequency band allocation, a relationship in which the transmission band of Band 3 includes the transmission band of Band 66 is established, and a relationship in which the reception band of Band 66 includes the reception band of Band 1 is established. In the four frequency bands, there is no other overlapping and inclusion relationship. 
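The inclusion relationships restated in the note above are what allow the shared filters described below. The following minimal sketch is illustrative only and not part of the original disclosure, and its names are hypothetical; it simply checks that a single filter whose pass band is B3-Tx also covers B66-Tx, and that a single filter whose pass band is B66-Rx also covers B1-Rx, which is why one shared transmission filter and one shared reception filter can each replace two separate filters.

# Illustrative sketch only (not part of the original disclosure): verifies the
# inclusion relationships that permit a shared transmission filter (B3-Tx pass
# band) and a shared reception filter (B66-Rx pass band).

B3_TX = (1710, 1785)   # MHz, approximate
B66_TX = (1710, 1780)
B66_RX = (2110, 2200)
B1_RX = (2110, 2170)

def covers(pass_band, signal_band):
    """True when every frequency of signal_band lies inside pass_band."""
    return pass_band[0] <= signal_band[0] and signal_band[1] <= pass_band[1]

# One transmission filter with pass band B3-Tx serves both Band 3 and Band 66 uplink.
assert covers(B3_TX, B66_TX)
# One reception filter with pass band B66-Rx serves both Band 66 and Band 1 downlink.
assert covers(B66_RX, B1_RX)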
According to the configuration described above, the high-frequency front end module2F can execute (1) two-uplink in which a transmission signal in B66-Tx included in Band 66 and a transmission signal in B25-Tx included in Band 25 are simultaneously transmitted, (2) two-downlink in which a reception signal in B66-Rx included in Band 66 and a reception signal in B25-Rx included in Band 25 are simultaneously received, (3) two-uplink in which a transmission signal in B1-Tx included in Band 1 and a transmission signal in B3-Tx included in Band 3 are simultaneously transmitted, and (4) two-downlink in which a reception signal in B1-Rx included in Band 1 and a reception signal in B3-Rx included in Band 3 are simultaneously received. Note that two-uplink of Band 66 and Band 3 is not executed, and two-downlink of Band 66 and Band 1 is not executed. The high-frequency front end module2F according to the present modification is different from the high-frequency front end module2E according to Embodiment 3 in the configurations of the first multiplexer, the second multiplexer, and the switch circuit. Hereinafter, the high-frequency front end module2F according to the present modification will be described focusing on the differences from the high-frequency front end module2E according to Embodiment 3. The switch circuit65is an SP3T type switch circuit having a common terminal65aand selection terminals65c,65d, and65e. The common terminal65ais connected to an output terminal of the transmission amplifier47. The switch circuit66is an SPDT type switch circuit having a common terminal66aand selection terminals66cand66d. The common terminal66ais connected to an output terminal of the transmission amplifier48. The switch circuit72is an SPDT type switch circuit having the common terminal72cand the selection terminals72aand72b. The common terminal72cis connected to the terminal20bof the switch circuit20. The transmission filter37T1is a first transmission filter whose input terminal is connected to the selection terminal65c, whose output terminal is connected to the terminal20a, and which takes, as a pass band, B3-Tx which includes B66-Tx. The reception filter36R1is a first reception filter whose input terminal is connected to the terminal20a, and which takes, as a pass band, B66-Rx which includes B1-Rx. The reception filter34R1is a seventh reception filter whose input terminal is connected to the terminal20a, and which takes B3-Rx as a pass band. The reception filter32R1is a fourth reception filter whose input terminal is connected to the terminal20a, and which takes B25-Rx as a pass band. The transmission filter31T2is a transmission filter whose input terminal is connected to the selection terminal65d, whose output terminal is connected to the selection terminal72a, and which takes B66-Tx as a pass band. The transmission filter32T2is a second transmission filter whose input terminal is connected to the selection terminal66c, whose output terminal is connected to the selection terminal72a, and which takes B25-Tx as a pass band. The transmission filter33T2is a sixth transmission filter whose input terminal is connected to the selection terminal66d, whose output terminal is connected to the selection terminal72b, and which takes B1-Tx as a pass band. The transmission filter34T2is an eighth transmission filter whose input terminal is connected to the selection terminal65e, whose output terminal is connected to the selection terminal72b, and which takes B3-Tx as a pass band. 
The reception filter31R2is a third reception filter whose input terminal is connected to the selection terminal72a, and which takes B66-Rx as a pass band. The reception filter32R2is a second reception filter whose input terminal is connected to the selection terminal72a, and which takes B25-Rx as a pass band. The reception filter33R2is a sixth reception filter whose input terminal is connected to the selection terminal72b, and which takes B1-Rx as a pass band. The reception filter34R2is an eighth reception filter whose input terminal is connected to the selection terminal72b, and which takes B3-Rx as a pass band. The transmission filters37T1and31T2and the reception filters36R1,34R1, and32R1constitute a first multiplexer that can selectively transmit high-frequency signals in Band 3 and Band 66 and receive high-frequency signals in Band 66, Band 25, Band 1, and Band 3. Note that the first multiplexer does not have a transmission filter which takes B25-Tx as a pass band and a transmission filter which takes B1-Tx as a pass band. Furthermore, a transmission filter which takes B3-Tx as a pass band and a transmission filter which takes B66-Tx as a pass band are made as one transmission filter, and a reception filter which takes B1-Rx as a pass band and a reception filter which takes B66-Rx as a pass band are made as one reception filter. The transmission filters32T2,33T2, and34T2and the reception filters31R2,32R2,33R2, and34R2constitute a second multiplexer that can selectively transmit high-frequency signals in Band 25, Band 1, and Band 3 and receive high-frequency signals in Band 66, Band 25, Band 1, and Band 3. Note that the transmission filters31T2and32T2and the reception filters31R2and32R2constitute a first quadplexer of Band 66 and Band 25, and the transmission filters33T2and34T2and the reception filters33R2and34R2constitute a second quadplexer of Band 1 and Band 3. According to the configuration described above, the high-frequency front end module2F can execute (1) two-uplink in which a transmission signal in the first transmission band (B66-Tx) included in the first frequency band (Band 66) and a transmission signal in the second transmission band (B25-Tx) included in the second frequency band (Band 25) are simultaneously transmitted, (2) two-downlink in which a reception signal in the first reception band (B66-Rx) included in the first frequency band (Band 66) and a reception signal in the second reception band (B25-Rx) included in the second frequency band (Band 25) are simultaneously received, (3) two-uplink in which a transmission signal in B1-Tx included in Band 1 and a transmission signal in B3-Tx included in Band 3 are simultaneously transmitted, and (4) two-downlink in which a reception signal in B1-Rx included in Band 1 and a reception signal in B3-Rx included in Band 3 are simultaneously received. The high-frequency front end module2F described above includes the primary antenna15and the secondary antenna16, the switch circuits20,65,66, and72, the first multiplexer, and the second multiplexer described above, thereby making it possible to arbitrarily distribute high-frequency signals in Band 66, Band 25, Band 1, and Band 3 to the primary antenna15and the secondary antenna16by switching the connection state of the switch circuits20,65,66, and72, and execute two-uplink two-downlink of Band 66 and Band 25, and two-uplink two-downlink of Band 1 and Band 3. Here, the first multiplexer does not have a transmission filter of Band 25 and a transmission filter of Band 1. 
Furthermore, instead of individually having the transmission filter of Band 66, the transmission filter of Band 3 and the transmission filter of Band 66 are made as one transmission filter, and the reception filter of Band 1 and the reception filter of Band 66 are made as one reception filter. Furthermore, the second multiplexer does not have a transmission filter of Band 66. Therefore, four filters can be reduced as compared with the configuration of the high-frequency front end module504according to Comparative Example 3. Accordingly, in comparison with the high-frequency front end module504according to Comparative Example 3, it is possible to provide the high-frequency front end module2F which is reduced in size and in which CA of two-uplink two-downlink in four bands including three bands in the overlapping relationship can be performed. Note that in the high-frequency front end module2F according to the present modification, the first quadplexer and the second quadplexer, the switch circuits65,66,72,20, and50, and the transmission amplifiers47and48constitute a front end module100A supporting multi-band of Band 66, Band 25, Band 1, and Band 3. The front end module100A is a basic circuit capable of selecting one band among the above-described four bands by switching the switch circuits65,66, and72. The high-frequency front end module2F according to the present modification is capable of supporting two-uplink two-downlink of Band 66 and Band 25 and two-uplink two-downlink of Band 1 and Band 3 by adding a multiplexer100B constituted by the transmission filter37T1and the reception filters36R1,34R1, and32R1to the basic front end module100A described above. 3.5 Connection State of High-Frequency Front End Module2F According to Modification 1 FIG.15Ais a circuit state diagram in a case of two-uplink two-downlink of the high-frequency front end module2F according to Modification 1 of Embodiment 3. This diagram illustrates a circuit connection state in a case of two-uplink of Band 66 and Band 25 and two-downlink of Band 66 and Band 25 (mode 1: two-uplink two-downlink). In the mode 1, as illustrated inFIG.15A, by the control unit, the terminal20aand the terminal20care connected to each other, and the terminal20band the terminal20dare connected to each other, in the switch circuit20(first connection state). Furthermore, the terminal50aand the terminal50care connected to each other, and the terminal50band the terminal50dare connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal65aand the selection terminal65care connected to each other in the switch circuit65, the common terminal66aand the selection terminal66care connected to each other in the switch circuit66, and the common terminal72cand the selection terminal72aare connected to each other in the switch circuit72. In this connection state, in the mode 1, a transmission signal in Band 66 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier47, the switch circuit65, the first multiplexer, the switch circuit20, and the primary antenna15, and a transmission signal in Band 25 is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier48, the switch circuit66, the second multiplexer, the switch circuit72, the switch circuit20, and the secondary antenna16. 
Furthermore, a reception signal in Band 66 is received by the RFIC3through the primary antenna15, the switch circuit20, and the first multiplexer, and a reception signal in Band 25 is received by the RFIC3through the secondary antenna16, the switch circuit20, the switch circuit72, and the second multiplexer. Alternatively, in the mode 1, as illustrated inFIG.15A, by the control unit, the terminal20aand the terminal20dare connected to each other, and the terminal20band the terminal20care connected to each other, in the switch circuit20(second connection state). Furthermore, the terminal50aand the terminal50dare connected to each other, and the terminal50band the terminal50care connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal65aand the selection terminal65care connected to each other in the switch circuit65, the common terminal66aand the selection terminal66care connected to each other in the switch circuit66, and the common terminal72cand the selection terminal72aare connected to each other in the switch circuit72. In this connection state, in the mode 1, a transmission signal in Band 25 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier48, the switch circuit66, the second multiplexer, the switch circuit72, the switch circuit20, and the primary antenna15, and a transmission signal in Band 66 is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier47, the switch circuit65, the first multiplexer, the switch circuit20, and the secondary antenna16. Furthermore, a reception signal in Band 25 is received by the RFIC3through the primary antenna15, the switch circuit20, the switch circuit72, and the second multiplexer, and a reception signal in Band 66 is received by the RFIC3through the secondary antenna16, the switch circuit20, and the first multiplexer. FIG.15Bis a circuit state diagram in a case of two-uplink two-downlink of the high-frequency front end module2F according to Modification 1 of Embodiment 3. This diagram illustrates a circuit connection state in a case of two-uplink of Band 1 and Band 3 and two-downlink of Band 1 and Band 3 (mode 1: two-uplink two-downlink). In the mode 1, as illustrated inFIG.15B, by the control unit, the terminal20aand the terminal20care connected to each other, and the terminal20band the terminal20dare connected to each other, in the switch circuit20(first connection state). Furthermore, the terminal50aand the terminal50care connected to each other, and the terminal50band the terminal50dare connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal65aand the selection terminal65care connected to each other in the switch circuit65, the common terminal66aand the selection terminal66dare connected to each other in the switch circuit66, and the common terminal72cand the selection terminal72bare connected to each other in the switch circuit72. In this connection state, in the mode 1, a transmission signal in Band 3 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier47, the switch circuit65, the first multiplexer, the switch circuit20, and the primary antenna15, and a transmission signal in Band 1 is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier48, the switch circuit66, the second multiplexer, the switch circuit72, the switch circuit20, and the secondary antenna16. 
Furthermore, a reception signal in Band 3 is received by the RFIC3through the primary antenna15, the switch circuit20, and the first multiplexer, and a reception signal in Band 1 is received by the RFIC3through the secondary antenna16, the switch circuit20, the switch circuit72, and the second multiplexer. Alternatively, in the mode 1, as illustrated inFIG.15B, by the control unit, the terminal20aand the terminal20dare connected to each other, and the terminal20band the terminal20care connected to each other, in the switch circuit20(second connection state). Furthermore, the terminal50aand the terminal50dare connected to each other, and the terminal50band the terminal50care connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal65aand the selection terminal65care connected to each other in the switch circuit65, the common terminal66aand the selection terminal66dare connected to each other in the switch circuit66, and the common terminal72cand the selection terminal72bare connected to each other in the switch circuit72. In this connection state, in the mode 1, a transmission signal in Band 1 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier48, the switch circuit66, the second multiplexer, the switch circuit72, the switch circuit20, and the primary antenna15, and a transmission signal in Band 3 is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier47, the switch circuit65, the first multiplexer, the switch circuit20, and the secondary antenna16. Furthermore, a reception signal in Band 1 is received by the RFIC3through the primary antenna15, the switch circuit20, the switch circuit72, and the second multiplexer, and a reception signal in Band 3 is received by the RFIC3through the secondary antenna16, the switch circuit20, and the first multiplexer. That is, in the high-frequency front end module2F according to Modification 1, in addition to the fact that (1) two-uplink two-downlink of a high-frequency signal in the first frequency band (Band 66) and a high-frequency signal in the second frequency band (Band 25) can be executed, it is possible to execute two-uplink two-downlink of a high-frequency signal in Band 1 and a high-frequency signal in Band 3. Furthermore, the high-frequency front end module2F according to the present modification is applied also to the one-uplink two-downlink, in the same manner as the high-frequency front end module2E according to Embodiment 3. That is, by switching the switch circuits20,50,65,66, and72, it is possible to achieve (1) one-uplink of Band 66 and two-downlink of Band 66 and Band 25, (2) one-uplink of Band 25 and two-downlink of Band 66 and Band 25, (3) one-uplink of Band 1 and two-downlink of Band 1 and Band 3, and (4) one-uplink of Band 3 and two-downlink of Band 1 and Band 3 (mode 2: one-uplink two-downlink). 3.6 Configurations of High-Frequency Front End Module2G and Communication Device1G According to Modification 2 FIG.16is a circuit configuration diagram of a communication device1G according to Modification 2 of Embodiment 3. As illustrated in the diagram, the communication device1G includes a high-frequency front end module2G, the RFIC3, and the BBIC4. The communication device1G according to the present modification differs from the communication device1F according to Modification 1 of Embodiment 3 in the configuration of the high-frequency front end module. 
Hereinafter, the communication device1G according to the present modification will be described focusing on the differences from the communication device1F according to Modification 1 of Embodiment 3. As illustrated inFIG.16, the high-frequency front end module2G includes the primary antenna15and the secondary antenna16, the switch circuits20,50,66, and72, the transmission filters37T1,32T2, and33T2, the reception filters36R1,34R1,32R1,31R2,32R2,33R2, and34R2, and the transmission amplifiers47and48. Note that in the communication device1G according to the present modification, in frequency band allocation, a relationship in which the transmission band of Band 3 includes the transmission band of Band 66 is established, and a relationship in which the reception band of Band 66 includes the reception band of Band 1 is established. In the four frequency bands, there is no other overlapping and inclusion relationship. According to the configuration described above, the high-frequency front end module2G can execute (1) two-uplink in which a transmission signal in the first transmission band (B66-Tx) included in the first frequency band (Band 66) and a transmission signal in the second transmission band (B25-Tx) included in the second frequency band (Band 25) are simultaneously transmitted, and (2) two-downlink in which a reception signal in the first reception band (B66-Rx) included in the first frequency band (Band 66) and a reception signal in the second reception band (B25-Rx) included in the second frequency band (Band 25) are simultaneously received. Furthermore, when Band 1 is taken as the first frequency band and Band 3 is taken as the second frequency band, it is possible to execute (3) two-uplink in which a transmission signal in the first transmission band (B1-Tx) included in the first frequency band (Band 1) and a transmission signal in the second transmission band (B3-Tx) included in the second frequency band (Band 3) are simultaneously transmitted, and (4) two-downlink in which a reception signal in the first reception band (B1-Rx) included in the first frequency band (Band 1) and a reception signal in the second reception band (B3-Rx) included in the second frequency band (Band 3) are simultaneously received. Note that two-uplink of Band 66 and Band 3 is not executed, and two-downlink of Band 66 and Band 1 is not executed. The high-frequency front end module2G according to the present modification is different from the high-frequency front end module2F according to Modification 1 of Embodiment 3 in the configuration of the second multiplexer and in that the switch circuit65is removed. Hereinafter, the high-frequency front end module2G according to the present modification will be described focusing on the differences from the high-frequency front end module2F according to Modification 1 of Embodiment 3. The transmission filter37T1is a transmission filter whose input terminal is connected to the transmission amplifier47, whose output terminal is connected to the terminal20a, and which takes, as a pass band, B3-Tx which includes B66-Tx. The transmission filter37T1and the reception filters36R1,34R1, and32R1constitute a first multiplexer that can selectively transmit high-frequency signals in Band 3 and Band 66 and receive high-frequency signals in Band 66, Band 25, Band 1, and Band 3. Note that the first multiplexer does not have a transmission filter which takes B25-Tx as a pass band and a transmission filter which takes B1-Tx as a pass band. 
Furthermore, a transmission filter which takes B3-Tx as a pass band and a transmission filter which takes B66-Tx as a pass band are made as one transmission filter, and a reception filter which takes B1-Rx as a pass band and a reception filter which takes B66-Rx as a pass band are made as one reception filter. The transmission filters32T2and33T2and the reception filters31R2,32R2,33R2, and34R2constitute a second multiplexer that can selectively transmit high-frequency signals in Band 1 and Band 25 and receive high-frequency signals in Band 66, Band 25, Band 1, and Band 3. Note that the second multiplexer does not have a transmission filter which takes B66-Tx as a pass band and a transmission filter which takes B3-Tx as a pass band. Therefore, six filters can be reduced as compared with the configuration of the high-frequency front end module504according to Comparative Example 3. The high-frequency front end module2G according to the present modification has a configuration in which simplification and miniaturization of the circuit is prioritized without ensuring the basic function of the front end module100A as compared with the high-frequency front end module2F according to Modification 1. Accordingly, in comparison with the high-frequency front end module504according to Comparative Example 3 and the high-frequency front end module2F according to Modification 1 of Embodiment 3, it is possible to provide the high-frequency front end module2G which is reduced in size and in which CA of two-uplink two-downlink in four bands including three bands in an overlapping relationship can be performed. 3.7 Connection State of High-Frequency Front End Module2G According to Modification 2 FIG.17Ais a circuit state diagram in a case of one-uplink two-downlink of the high-frequency front end module2G according to Modification 2 of Embodiment 3. This diagram illustrates a circuit connection state in a case of one-uplink of Band 1 and two-downlink of Band 1 and Band 3 (mode 2: one-uplink two-downlink). In the mode 2, as illustrated inFIG.17A, by the control unit, the terminal20band the terminal20care connected to each other in the switch circuit20(fifth connection state). Furthermore, the terminal50aand the terminal50dare connected to each other in the switch circuit50. Furthermore, by the control unit, the common terminal66aand the selection terminal66dare connected to each other in the switch circuit66, and the common terminal72cand the selection terminal72bare connected to each other in the switch circuit72. In this connection state, in the mode 2, a transmission signal in Band 1 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier48, the switch circuit66, the second multiplexer, the switch circuit72, the switch circuit20, and the primary antenna15, and reception signals in Band 1 and Band 3 are received by the RFIC3through the primary antenna15, the switch circuit20, the switch circuit72, and the second multiplexer. Note that in the above-described connection form, the case of one-uplink two-downlink by passing through the output terminal3aand the primary antenna15has been described as an example, but one-uplink two-downlink by passing through the output terminal3band the secondary antenna16is also possible. FIG.17Bis a circuit state diagram in a case of one-uplink two-downlink of the high-frequency front end module2G according to Modification 2 of Embodiment 3. 
This diagram illustrates a circuit connection state in a case of one-uplink of Band 3 and two-downlink of Band 1 and Band 3 (mode 2: one-uplink two-downlink). In the mode 2, as illustrated inFIG.17B, by the control unit, the terminal20aand the terminal20care connected to each other in the switch circuit20(third connection state). Furthermore, the terminal50aand the terminal50care connected to each other in the switch circuit50. In this connection state, in the mode 2, a transmission signal in Band 3 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier47, the first multiplexer, the switch circuit20, and the primary antenna15, and reception signals in Band 1 and Band 3 are received by the RFIC3through the primary antenna15, the switch circuit20, and the first multiplexer. Note that in the above-described connection form, the case of one-uplink two-downlink by passing through the output terminal3aand the primary antenna15has been described as an example, but one-uplink two-downlink by passing through the output terminal3band the secondary antenna16is also possible. FIG.17Cis a circuit state diagram in a case of one-uplink two-downlink of the high-frequency front end module2G according to Modification 2 of Embodiment 3. This diagram illustrates a circuit connection state in a case of one-uplink of Band 25 and two-downlink of Band 66 and Band 25 (mode 2: one-uplink two-downlink). In the mode 2, as illustrated inFIG.17C, by the control unit, the terminal20band the terminal20care connected to each other in the switch circuit20(fifth connection state). Furthermore, the terminal50aand the terminal50dare connected to each other in the switch circuit50. Furthermore, by the control unit, the common terminal66aand the selection terminal66care connected to each other in the switch circuit66, and the common terminal72cand the selection terminal72aare connected to each other in the switch circuit72. In this connection state, in the mode 2, a transmission signal in Band 25 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier48, the switch circuit66, the second multiplexer, the switch circuit72, the switch circuit20, and the primary antenna15, and reception signals in Band 66 and Band 25 are received by the RFIC3through the primary antenna15, the switch circuit20, the switch circuit72, and the second multiplexer. Note that in the above-described connection form, the case of one-uplink two-downlink by passing through the output terminal3aand the primary antenna15has been described as an example, but one-uplink two-downlink by passing through the output terminal3band the secondary antenna16is also possible. FIG.17Dis a circuit state diagram in a case of one-uplink two-downlink of the high-frequency front end module2G according to Modification 2 of Embodiment 3. This diagram illustrates a circuit connection state in a case of one-uplink of Band 66 and two-downlink of Band 66 and Band 25 (mode 2: one-uplink two-downlink). In the mode 2, as illustrated inFIG.17D, by the control unit, the terminal20aand the terminal20care connected to each other in the switch circuit20(third connection state). Furthermore, the terminal50aand the terminal50care connected to each other in the switch circuit50. 
In this connection state, in the mode 2, a transmission signal in Band 66 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier47, the first multiplexer, the switch circuit20, and the primary antenna15, and reception signals in Band 66 and Band 25 are received by the RFIC3through the primary antenna15, the switch circuit20, and the first multiplexer. Note that in the above-described connection form, the case of one-uplink two-downlink by passing through the output terminal3aand the primary antenna15has been described as an example, but one-uplink two-downlink by passing through the output terminal3band the secondary antenna16is also possible. FIG.17Eis a circuit state diagram in a case of two-uplink two-downlink of the high-frequency front end module2G according to Modification 2 of Embodiment 3. This diagram illustrates a circuit connection state in a case of two-uplink of Band 1 and Band 3 and two-downlink of Band 1 and Band 3 (mode 1: two-uplink two-downlink). In the mode 1, as illustrated inFIG.17E, by the control unit, the terminal20aand the terminal20care connected to each other, and the terminal20band the terminal20dare connected to each other, in the switch circuit20(first connection state). Furthermore, the terminal50aand the terminal50care connected to each other, and the terminal50band the terminal50dare connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal66aand the selection terminal66dare connected to each other in the switch circuit66, and the common terminal72cand the selection terminal72bare connected to each other in the switch circuit72. In this connection state, in the mode 1, a transmission signal in Band 3 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier47, the first multiplexer, the switch circuit20, and the primary antenna15, and a transmission signal in Band 1 is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier48, the switch circuit66, the second multiplexer, the switch circuit72, the switch circuit20, and the secondary antenna16. Furthermore, a reception signal in Band 3 is received by the RFIC3through the primary antenna15, the switch circuit20, and the first multiplexer, and a reception signal in Band 1 is received by the RFIC3through the secondary antenna16, the switch circuit20, the switch circuit72, and the second multiplexer. Alternatively, in the mode 1, as illustrated inFIG.17E, by the control unit, the terminal20aand the terminal20dare connected to each other, and the terminal20band the terminal20care connected to each other, in the switch circuit20(second connection state). Furthermore, the terminal50aand the terminal50dare connected to each other, and the terminal50band the terminal50care connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal66aand the selection terminal66dare connected to each other in the switch circuit66, and the common terminal72cand the selection terminal72bare connected to each other in the switch circuit72. 
In this connection state, in the mode 1, a transmission signal in Band 1 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier48, the switch circuit66, the second multiplexer, the switch circuit72, the switch circuit20, and the primary antenna15, and a transmission signal in Band 3 is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier47, the first multiplexer, the switch circuit20, and the secondary antenna16. Furthermore, a reception signal in Band 1 is received by the RFIC3through the primary antenna15, the switch circuit20, the switch circuit72, and the second multiplexer, and a reception signal in Band 3 is received by the RFIC3through the secondary antenna16, the switch circuit20, and the first multiplexer. FIG.17Fis a circuit state diagram in a case of two-uplink two-downlink of the high-frequency front end module2G according to Modification 2 of Embodiment 3. This diagram illustrates a circuit connection state in a case of two-uplink of Band 66 and Band 25 and two-downlink of Band 66 and Band 25 (mode 1: two-uplink two-downlink). In the mode 1, as illustrated inFIG.17F, by the control unit, the terminal20aand the terminal20care connected to each other, and the terminal20band the terminal20dare connected to each other, in the switch circuit20(first connection state). Furthermore, the terminal50aand the terminal50care connected to each other, and the terminal50band the terminal50dare connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal66aand the selection terminal66care connected to each other in the switch circuit66, and the common terminal72cand the selection terminal72aare connected to each other in the switch circuit72. In this connection state, in the mode 1, a transmission signal in Band 66 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier47, the first multiplexer, the switch circuit20, and the primary antenna15, and a transmission signal in Band 25 is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier48, the switch circuit66, the second multiplexer, the switch circuit72, the switch circuit20, and the secondary antenna16. Furthermore, a reception signal in Band 66 is received by the RFIC3through the primary antenna15, the switch circuit20, and the first multiplexer, and a reception signal in Band 25 is received by the RFIC3through the secondary antenna16, the switch circuit20, the switch circuit72, and the second multiplexer. Alternatively, in the mode 1, as illustrated inFIG.17F, by the control unit, the terminal20aand the terminal20dare connected to each other, and the terminal20band the terminal20care connected to each other, in the switch circuit20(second connection state). Furthermore, the terminal50aand the terminal50dare connected to each other, and the terminal50band the terminal50care connected to each other, in the switch circuit50. Furthermore, by the control unit, the common terminal66aand the selection terminal66care connected to each other in the switch circuit66, and the common terminal72cand the selection terminal72aare connected to each other in the switch circuit72. 
In this connection state, in the mode 1, a transmission signal in Band 25 is transmitted through the output terminal3a, the switch circuit50, the transmission amplifier48, the switch circuit66, the second multiplexer, the switch circuit72, the switch circuit20, and the primary antenna15, and a transmission signal in Band 66 is transmitted through the output terminal3b, the switch circuit50, the transmission amplifier47, the first multiplexer, the switch circuit20, and the secondary antenna16. Furthermore, a reception signal in Band 25 is received by the RFIC3through the primary antenna15, the switch circuit20, the switch circuit72, and the second multiplexer, and a reception signal in Band 66 is received by the RFIC3through the secondary antenna16, the switch circuit20, and the first multiplexer. Other Embodiments Although the high-frequency front end modules and the communication devices according to the embodiments have been described above using the embodiments and the modifications thereof, the high-frequency front end module and the communication device according to the present disclosure are not limited to the above-described embodiments and the modifications thereof. The present disclosure also encompasses other embodiments that are implemented by combining desired constituent elements in the above-described embodiments and modifications thereof, modifications obtained by adding various changes to the above-described embodiments and modifications thereof, which are conceived by those skilled in the art, without departing from the gist of the present disclosure, and various apparatuses incorporating the high-frequency front end module and the communication device according to the present disclosure. Note that, the above-described embodiments and modifications thereof have described the configuration of two-uplink two-downlink in which a high-frequency signal in the first frequency band and a high-frequency signal in the second frequency band are simultaneously used as an example, but the configurations of the high-frequency front end module and the communication device according to the present disclosure can also be applied to the configuration of uplink and/or downlink (for example, three-uplink three-downlink) in which three or more different frequency bands are simultaneously used. That is, the present disclosure also includes a high-frequency front end module or a communication device including the configuration for executing uplink and/or downlink in which three or more different frequency bands are simultaneously used, the configuration of the high-frequency front end module or the communication device according to the above-described embodiments and modification thereof. For example, in the high-frequency front end modules and the communication devices according to the above-described embodiments and modifications thereof, other high-frequency circuit elements, wirings, and the like may be inserted between the paths connecting the respective circuit elements and signal paths disclosed in the drawings. INDUSTRIAL APPLICABILITY The present disclosure can be widely used for communication apparatuses, such as mobile phones, as a multi-band/multi-mode compatible front end module that adopts a carrier aggregation system. While preferred embodiments of the disclosure have been described above, it is to be understood that variations and modifications will be apparent to those skilled in the art without departing from the scope and spirit of the disclosure. 
The scope of the disclosure, therefore, is to be determined solely by the following claims. | 160,927 |
11863216 | DETAILED DESCRIPTION The present embodiments relate to improved receiver architectures for wireless communication systems. Wireless communication systems can provide various types of communication content such as voice, video, packet data, messaging, broadcast, and so on. These systems may be multiple-access systems capable of supporting communication with multiple users by sharing the available system resources (e.g., time, frequency, and power). Examples of such multiple-access systems include code-division multiple access (CDMA) systems, time-division multiple access (TDMA) systems, frequency-division multiple access (FDMA) systems, and orthogonal frequency-division multiple access (OFDMA) systems. A wireless multiple-access communication system may include a number of base stations, each simultaneously supporting communication for multiple communication devices, such as user equipment (e.g., cell phones, laptops, networked consumer electronics, etc.). In a Long-Term Evolution (LTE) or LTE-Advanced (LTE-A) network, a set of one or more base stations may define an eNodeB (eNB). In a next generation, New Radio (NR), millimeter wave (mmW), or 5G network, a base station may take the form of a smart radio head (or radio head (RH)) or access node controller (ANC), with a set of smart radio heads in communication with an ANC defining a gNodeB (gNB). A base station may communicate with a set of UE on downlink channels (e.g., for transmissions from a base station to one or more UE devices) and uplink channels (e.g., for transmissions from one or more UE devices to a base station). Wireless communication systems such as those described above can use carrier waves having millimeter wave (mmW) frequency ranges (e.g., between 10 GHz and 100 GHz or between 20 GHz and 80 GHz). Such frequencies may incur increased signal attenuation (also referred to as path loss), which can be due to environmental conditions in some cases. For example, path loss may be affected by temperature, barometric pressure, signal diffraction from objects, etc., which can vary over time and can change based on the UE's location. New radio systems have increased in complexity to transmit and receive communication signals over multiple signal paths. To do so, a user's equipment can include multiple receiver heads that are used to receive and/or transmit signals over different signal paths. Each receiver head may have one or more antennas, which may be referred to as an "antenna sub-array". In wireless communication systems of the present embodiments, a transmitter system can encode information onto one or more radio frequency (RF) carrier waves and then wirelessly transmit the RF signals using one or more antennas. A receiver system can detect the RF signals using one or more receiver heads and process the received signals to recover the encoded information. In some cases, transmission and/or reception can be done simultaneously over two or more signal paths. For high data rate wireless communication, a transmitter may be adapted to simultaneously transmit portions of the RF signals in different frequency ranges to increase data transfer rates over the communication link. For example, Long Term Evolution (LTE) and Long Term Evolution Advanced (LTE-A) systems employ a carrier aggregation scheme where multiple RF signals are transmitted simultaneously in different frequency ranges (called component carrier signals).
Each of these component carrier signals may support a data transfer rate of up to 150 megabits per second (Mbps). Accordingly, the data transfer rate of a communication link using multiple component carrier signals will increase as additional component carrier signals are added to the link (e.g., two component carrier signals at 150 Mbps can provide a total data transfer rate for the link of up to 300 Mbps). The term "communication link" may be used herein to generally describe one or more communication sub-links between a UE and a remote device. For example, a communication link may comprise two or more sub-links between the UE and remote device that occur using different carrier waves and/or different signal paths. Because of variable path loss, a UE should regularly scan alternative signal paths and monitor them for improved signal quality. Signal quality may be determined based on one or more factors, such as signal strength, signal-to-noise ratio, error rate, etc. To avoid disrupting an ongoing communication session, the scanning of signal paths and monitoring for improved signal quality should occur during scheduled gaps in the communications (sometimes referred to as "transmission gaps" or "measurement gaps"). Because there can be a substantial number of signal paths to scan due to multiple receiver heads, frequency bands, and polarizations, the inventors have recognized and appreciated that it can be very difficult or impossible to scan all signal paths and detect signal quality on each path with conventional receiver architectures during each scheduled gap in a communication session. Accordingly, the inventors have conceived of receiver architectures and methods that can provide scanning and monitoring of signal quality for multiple receiver heads simultaneously during a scheduled gap in communications and can quickly switch between a majority of or all signal paths during the gap. Examples of such receiver systems and methods are described below in connection withFIG.1throughFIG.4, though the invention is not limited to only the systems and methods as illustrated. Referring now toFIG.1, an example of a receiver architecture in accordance with the present teachings is depicted with a block diagram. Some or all of the components shown inFIG.1may be implemented as integrated circuit (IC) components on one or more semiconductor IC chips. A receiver system100may comprise a plurality of antenna sub-arrays101,102,103,104that have outputs from antennas connected to two or more switching networks110,112. Outputs from each switching network can connect to a corresponding receiver network120,122that processes selected signals received from the antenna sub-arrays. Each switching network is configured to connect an output from a selected antenna sub-array to a selected input of a receiver network. A receiver system100can also include (either on chip or off chip) a controller150that is in communication with the switching networks and/or receiver networks. The controller can provide control signals to the switching networks110,112and receiver networks120,122during operation of the receiver system100. For example, the controller150may provide signals to control components of these networks when routing signals from the antenna sub-arrays to receiver paths OUT1′, OUT2′, OUT3′, OUT4′ for signal processing. The antenna sub-arrays101,102,103,104may be implemented as integrated circuit components on a chip and/or printed circuit board.
For example, each sub-array may comprise one or more antennas that are each implemented as a conductive loop antenna, a conductive horn antenna, a conductive dipole antenna, or one or more conductive shapes formed on a chip or printed circuit board. According to some embodiments, an antenna sub-array may comprise two or more antennas that are each shaped and/or oriented to preferentially receive a particular polarization of an RF signal (e.g., horizontal, vertical, circular, etc.). FIG.2depicts an example of an antenna sub-array101that includes a first dipole antenna101voriented to preferentially receive vertically polarized RF signals and a second dipole antenna101horiented to preferentially receive horizontally polarized RF signals. However, the antenna sub-arrays are not limited to the illustrated and described embodiments. Other antenna designs and orientations may be used additionally or alternatively to preferentially receive other types of polarizations. A switching network110,112may have N input ports115(four shown in the example ofFIG.1) and K output ports117(two shown in the illustrated example) where N and K are positive integers greater than 1. A switching network110,112can be implemented with a plurality of transistors, according to some embodiments. For example, control signals can be applied to a gate or base of one or more transistors to connect a selected input port115of the switching network to, or disconnect it from, a selected output port117of the switching network via the current-carrying terminals (e.g., source and drain or emitter and collector) of the one or more transistors. There may be M switches in a switching network110,112, and each of the M switches may be configured to receive a control signal from a controller150to configure and reconfigure connections between input ports115and output ports117of the switching network for routing signals from the input ports to the output ports and receive paths. In some cases, a switching network may include additional circuit components such as diodes, resistors, capacitors, logic circuits, amplifiers, and buffers. According to some implementations, high-speed transistors may be used in a switching network110,112. In some cases, the transistors may be high-electron-mobility transistors (HEMTs). The transistors may enable each switch to transition from a fully off state to a fully on state in no more than 15 nanoseconds, according to some embodiments. In some implementations, the transistors may enable each switch to transition from a fully off state to a fully on state in no more than 5 nanoseconds. The transistors may exhibit essentially equal switching speeds for transitions from fully off to fully on states and from fully on to fully off states. With high-speed transistors, a switching network110,112may reconfigure its internal connections between one or more input ports and one or more output ports within a time span that is between 5 nanoseconds and 20 nanoseconds, though shorter or longer reconfiguration times are possible in some implementations. According to some embodiments, each switch may be implemented as those described in connection with FIG. 4A and FIG. 4B in U.S. Pat. No. 10,516,432 issued Dec. 24, 2019 and titled "Communication System with Switchable Devices," which patent is incorporated by reference herein in its entirety. Further details of an example of a switching network110,112are depicted in the block circuit diagram ofFIG.3.
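As a purely illustrative aid (not part of the disclosed circuitry), the routing behavior of such an N-input, K-output switching network can be sketched in software. The minimal Python model below assumes N=4 inputs, K=2 outputs, and one switch per input/output pair, mirroring the illustrated 4x2 example; the class and method names are hypothetical and chosen only for this sketch.

```python
# Minimal software model of an N-input, K-output switching network (illustrative only).
# Assumes N = 4 antenna sub-array inputs, K = 2 outputs, and M = N * K switches.

class SwitchingNetworkModel:
    def __init__(self, num_inputs=4, num_outputs=2):
        self.num_inputs = num_inputs
        self.num_outputs = num_outputs
        # One switch per (input, output) pair; True means the switch is "on"/connected.
        self.switches = {(i, k): False
                         for i in range(num_inputs) for k in range(num_outputs)}

    def connect(self, input_port, output_port):
        """Close the switch joining input_port to output_port (a controller command)."""
        # Open any other switch driving the same output so only one input feeds it.
        for (i, k) in self.switches:
            if k == output_port:
                self.switches[(i, k)] = (i == input_port)

    def disconnect_all(self):
        for key in self.switches:
            self.switches[key] = False

    def routes(self):
        """Return the currently active (input, output) through-connections."""
        return [pair for pair, on in self.switches.items() if on]


# Example: route sub-array input 0 to output 0 and sub-array input 1 to output 1 concurrently.
network = SwitchingNetworkModel()
network.connect(input_port=0, output_port=0)
network.connect(input_port=1, output_port=1)
print(network.routes())  # [(0, 0), (1, 1)]
```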
In some embodiments, a switching network may provide only one through-connection at a time. For example, only one signal on an input port (in1, in2, in3, in4) may connect through to one output port (out1, out2) during a span of time. According to other implementations, a switching network110,112may provide multiple through-connections simultaneously. For example, switch110-1may be in an “on” or connected state to connect a signal on input port in1to output port out1, while switches110-2,110-3,110-4,110-5,110-6and110-8are in an “off” or disconnected state. The signal from output port out1may be processed by a mixer and filter on a path to an output port of the receiver network120(OUT1′ or OUT2′, depending on which mixer212-1or212-2is activated). At the same time that switch110-1is in an “on” or connected state, switch110-7can be in an “on” or connected state to connect a signal on input port in2to output port out2. The signal from output port out2may be processed by a mixer and filter on a path to an output port of the receiver network120(OUT1′ or OUT2′, depending on which mixer222-1or222-2is activated). In such cases, all receiver heads and/or receive paths may be active at a same time, e.g., to enable fast head and receive path selection during an initial cell search, neighboring cell measurement, and link failure recovery. In some embodiments, time-division multiplexing (TDM) may be used to alternatingly connect and disconnect, in an interleaved manner, two or more input ports to the two outputs out1, out2during a communication session. In such cases, only one through-connection may be active at a time, and different through-connections are rapidly cycled through by the receiver system100. In embodiments of the present invention, any input port of a switching network may connect to any output port of the same switching network. An example of such reconfigurability is evident from the illustration ofFIG.3. For example, any input port (in1, in2, in3, in4) of the switching network110can be connected to any output port (out1, out2) of the switching network110. Such reconfigurability is beneficial for rapidly scanning different signal paths and detecting quality of signals from different antenna sub-arrays during scheduled gaps in a communication link. An additional benefit of the switching networks of the present invention is that they are readily scalable from three to four or more inputs (in1, in2, in3, . . . ), and additional output ports can be added for additional receive signal paths. For the illustrated example inFIG.3, an additional input port can be added by adding two more switches in parallel to the existing switches, one for each bank of switches connected to an output port. Additional output ports can be added by adding an additional bank of switches in parallel with the two banks of switches and that connect to the input ports in a same manner as the existing banks of switches. According to some embodiments, a switching network110,112can have M switches, N inputs connected to the M switches and configured to connect to N antenna sub-arrays, and K outputs connected to the M switches. M may be greater than N and K, and M, N, and K are integers greater than 1. In some cases, the value of M may equal N×K. The N inputs can receive RF signals from the antenna sub-arrays, and the K outputs can provide the RF signals for processing on a receive path. According to some implementations, all components of a switching network110can be fabricated on a single integrated-circuit chip. 
In some cases, the amplifiers210,220of the receiver network120may also be fabricated on the same chip as the switches. In yet other cases, the amplifiers210,220and mixers or synthesizers212-1,212-2,222-1,222-2of the receiver network120may also be fabricated on the same chip as the switches. In yet further implementations, the amplifiers210,220, mixers or synthesizers212-1,212-2,222-1,222-2, and filters230-1,230-2of the receiver network120may also be fabricated on the same chip as the switches. Although two switching networks110,112are depicted in the illustration ofFIG.1, a receiver system100may have more than two switching networks. For example, more than two switching networks may be used if additional types of polarization are used (e.g., right circular and/or left circular in addition to vertical and horizontal). As another example, more than two switching networks may be used if additional antenna sub-arrays are used. For example, if four additional antenna sub-arrays are used, the receiving system ofFIG.1may be duplicated and the two receiving systems operated in parallel. Outputs from the switching networks110,112may connect to receiver networks120,122as shown inFIG.1. Details of an example receiver network120are depicted inFIG.3. According to some implementations, a receiver network may comprise one or more amplifiers210,220each connected to an output from the switching network110. The amplifiers210,220may each have outputs that connect to two or more mixers or synthesizers212-1,222-2,212-2,222-1. Outputs from the mixers or synthesizers may be provided to filters230-1,230-2in the receiver network120, according to some embodiments. The amplifiers210,220may be RF amplifiers and configured to receive signals from the K outputs of a switching network110. An amplifier can have two outputs that provide a same signal on each output. In some cases, the signals on the outputs may be phase shifted with respect to each other (e.g., by 90 degrees) but otherwise have essentially the same modulations and encoded data. In another embodiment, an amplifier may have a single output that is provided to a signal splitter, which splits the signal into two copies, or phase-shifted copies, on two output ports. Outputs from the amplifiers (or signal splitters) can be provided to two or more mixers or synthesizers. For example, a first output from a first amplifier210can be provided to a first mixer212-1, and a second output from the first amplifier210can be provided to a second mixer212-2. The first mixer212-1may connect to a first output OUT1′ receive path of the receiver network120and the second mixer212-2may connect to a separate second output OUT2′ receive path. The first mixer212-1may mix the first signal with an RF signal from a local oscillator operating at a first frequency. If data is encoded on a carrier wave of the same first frequency, the first mixer212-1can beat the signal down to an intermediate frequency that can be operated on by the filter230-1and from which the data can be decoded downstream in the first receive path that connects to a first output port OUT1′. The second mixer212-2may mix the first signal with an RF signal from a local oscillator operating at a second frequency. If data is encoded on a carrier wave of the same second frequency, the second mixer212-2can beat the signal down to an intermediate frequency that can be operated on by the filter230-2and from which the data can be decoded downstream in the second receive path that connects to a second output port OUT2′.
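A simplified numerical sketch can illustrate why only the mixer whose local oscillator matches the incoming carrier frequency produces energy that survives the following filter. The Python example below is illustrative only: the sample rate, carrier frequencies, and moving-average "filter" are arbitrary stand-ins rather than values from the described receiver.

```python
import numpy as np

fs = 8e6                       # sample rate (arbitrary, for illustration only)
f1, f2 = 1.0e6, 1.5e6          # hypothetical first and second carrier frequencies
t = np.arange(4096) / fs
received = np.cos(2 * np.pi * f1 * t)   # a carrier at the first frequency reaches the amplifier

def mix_and_filter(signal, lo_frequency):
    """Mix with a local oscillator and apply a crude low-pass (moving average) filter."""
    mixed = signal * np.cos(2 * np.pi * lo_frequency * t)
    kernel = np.ones(64) / 64
    return np.convolve(mixed, kernel, mode="same")

path_1 = mix_and_filter(received, f1)   # analogous to the mixer whose LO matches the carrier
path_2 = mix_and_filter(received, f2)   # analogous to the mixer with the other LO frequency

print(round(float(np.mean(np.abs(path_1))), 3))  # large: carrier beat down into the filter band
print(round(float(np.mean(np.abs(path_2))), 3))  # small: mixing products fall outside the band
```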
The filters230-1,230-2may comprise noise filters and may be implemented with digital circuitry, analog circuitry, or some combination thereof. According to some embodiments, a first receiver network120may receive signals of only a same type of first polarization (e.g., vertical according to the illustration ofFIG.1). A second receiver network122may receive signals of only a same type of second polarization (e.g., horizontal) that is different from the first polarization. In some implementations, the type of polarization received by a first switching network and first receiver network may be orthogonal to a type of polarization received by a second switching network and second receiver network. In some embodiments, the mixers or synthesizers in a receiver network120,122may each be individually activated and deactivated by control signals received from a controller150. For example, the local oscillators may be turned on and off by control signals from the controller150and applied to the mixers. Alternatively or additionally, a switch in the receive path may be placed in a connected or disconnected state to switch a mixer in or out of a receive path. In some implementations, a mixer or synthesizer in a receiver network120may be activated immediately prior to a scheduled gap in a communication link. Alternatively or additionally, a mixer or synthesizer may be activated immediately prior to reconfiguring switches in a switching network110to apply a signal from an input port to the activated mixer. In some cases, the time that a mixer is activated before being switched into a receive path by the switching network or before a scheduled gap is between 5 microseconds and 50 microseconds. As may be appreciated from the receiver system architecture ofFIG.1andFIG.3, both polarized signals from two receiver heads may be monitored simultaneously. For example, vertically-polarized signals from two antenna sub-arrays101,104can be provided to two separate output receive paths OUT1′, OUT2′ from a first receiver network120by putting only switches110-1and110-5in a connected state and placing all other switches in a disconnected state. Additionally, to monitor for signal quality on a first carrier-wave frequency, mixers212-1and222-1can be activated and the other two mixers deactivated. Simultaneously, the horizontally-polarized signals for the same two antenna sub-arrays101,104can be provided to two separate output receive paths OUT3′, OUT4′ from a second receiver network122in a similar manner. To switch from monitoring signal quality of the first carrier-wave frequency to a second carrier-wave frequency, the activated mixers can be deactivated and the other mixers212-2,222-2can be activated. In some cases, signals output from more than two antenna sub-arrays can be monitored simultaneously. For the illustrated example inFIG.3, vertically polarized signals from two different antenna sub-arrays (e.g.,102,103) can be provided to two receive paths OUT1′, OUT2′, while horizontally polarized signals from two other different antenna sub-arrays (e.g.,101,104) can be provided to two other receive paths OUT3′, OUT4′. Monitoring outputs from more than two antenna sub-arrays is possible by simply reconfiguring the switches in the switching networks110,112. In some implementations, each receive path connecting to an output port OUT1′, OUT2′, OUT3′, OUT4′ of a receiver network120,122can connect to any antenna sub-array for a particular polarization and particular carrier frequency.
For example, any antenna sub-array101,102,103,104can provide vertical polarization received from an antenna101vto either of the receive paths OUT1′, OUT2′ of a first receiver network. The provided vertical polarization can be mixed with local oscillator outputs at either of two carrier frequencies by mixers or synthesizers212-1,212-2,222-1,222-2, according to the example architecture depicted inFIG.1andFIG.3. Further, signal quality on two carrier frequencies for a same antenna sub-array can be monitored simultaneously on two separate receive paths. For example, a vertical polarization signal received from an antenna sub-array102can be provided to a first amplifier210by placing switch110-2in a connected state. A first output from the amplifier210can be provided to a first mixer212-1for mixing with an output from a local oscillator at a first frequency f1, and then provided to filter230-1on a receive path connected to a first output port OUT1′. A second output from the amplifier210can be provided to a second mixer212-2for mixing with an output from a local oscillator at a second frequency f2, and then provided to filter230-2on a receive path connected to a second output port OUT2′. By including a second amplifier220and second bank of switches in a switching network, each receive path can connect to any antenna sub-array without restriction. Accordingly, the receiver architecture of the present embodiments can monitor signal strengths for two antenna sub-arrays simultaneously. The receiver architecture of the present embodiments can also support non-contiguous carrier aggregation (NCCA) and multiple-input, multiple-output communication (MIMO) links. FIG.4depicts an example of acts associated with a method400of wireless reception of communication signals that can be implemented with receiver architectures of the present embodiments. According to some embodiments, a user equipment (UE) having a receiver architecture in accordance with the present embodiments can transmit and/or receive (act410) one or more signals over one or more signal paths during a communication session between the UE and a remote device. For example, time-division duplexing (TDD) may be used to separate transmitted and received signals. During such a communication session, there can be a plurality of scheduled gaps in the communication link, during which the UE may scan several signal paths and monitor signal quality on those paths. In some implementations, prior to a scheduled gap, the receiver system may activate (act420) one or more selected components in one or more receive paths before the gap begins, while the communication link is still active. For example, the receiver system may activate a mixer and/or amplifier in a receive path immediately before a start time of the gap. The activated mixer and/or amplifier may not be in an active receive path during the ongoing communication link before the gap. By activating the mixer and/or amplifier, these components can be placed in a ready state for when the scheduled gap begins. The activation of the one or more selected components may occur between 5 microseconds and 50 microseconds before a start of the scheduled gap. A method400may further comprise monitoring (act430) signal quality for at least a majority of the receiver heads of the receiving system during a scheduled gap in the communication link. In some embodiments, signal quality for all of the receiver heads of the receiving system can be monitored (act430) during the scheduled gap.
For example, the signals from the different antenna sub-arrays101,102,103,104can be scanned by sequentially activating switches and mixers as described above in connection withFIG.1andFIG.3and signal quality can be monitored on each signal path. The monitoring of signal quality may comprise one or more acts of: evaluating signal strength (e.g., signal amplitude), evaluating signal-to-noise ratio, and evaluating signal error rate. If it is determined (act435) that there are no other signal paths with improved signal quality, then the receiver system may continue transceiving (act440) using the same signal path(s). Alternatively, if it is determined (act435) that there is one or more other signal paths with improved signal quality, then the receiver system may change reception (act450) to one or more receiver heads on the other signal path(s) having improved signal quality. The acts of a method400may repeat in a cyclical manner, as indicated inFIG.4, throughout a communication session. When changing reception (act450) to one or more receiver heads on one or more other signal paths, the wireless receiver system may activate a switching network to disconnect from at least one antenna sub-array that is currently active and used for a communication session immediately before the scheduled gap. The system may further activate the switching network to connect to at least one other antenna sub-array that was not active immediately before the scheduled gap. In some cases, the receiver system may turn on one or more of the mixers in the receiver network that will be connected in one or more receive paths for the at least one other antenna sub-array immediately before activating the switching network to connect to the at least one other antenna sub-array. In some embodiments, the turning on is between 5 microseconds and 50 microseconds before activating the switching network to connect to the at least one other antenna sub-arrays. The method400of wireless communication described above in connection withFIG.4includes various functionalities that can be implemented, at least in part, with logic circuitry, analog circuitry, and/or one or more processor(s) and code. For example, acts of activating selected components, scanning signal paths, monitoring signal quality, determining whether better signal quality exists on some signal paths, and changing reception to one or more better signal paths can be implemented, at least in part, with logic circuitry and/or one or more processor(s) and code. Code written to perform such functionalities, or part of such functionalities, can be stored on non-transitory computer-readable media, so that it can be loaded onto one or more processors (or used to configure circuitry) to adapt the one or more processors (or circuits) to perform the functionalities or parts thereof. Such circuitry may or may not include at least one field-programmable gate array (FPGA), application specific integrated circuit (ASIC), and/or digital signal processor (DSP). A processor may be a microprocessor or microcontroller, in some embodiments. Such circuitry and/or one or more processors may be part of controller150, referring again toFIG.1. In some cases, the controller150may be implemented using hardware or some combination of hardware, firmware, and code (software). When implemented in part using code, suitable code can be executed on a suitable processor (e.g., a microprocessor) or collection of processors. 
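As one purely illustrative sketch of how such code might sequence the acts ofFIG.4, the Python loop below models gap-based scanning. The helper functions are hypothetical placeholders for hardware access (stubbed out here so the sketch runs) and are not functions defined by this disclosure.

```python
import random

# Hypothetical stand-ins for hardware drivers; a real system would command the
# switching networks, mixers, and baseband instead of returning random numbers.
def activate_receive_components(path):      # e.g., turn on a mixer shortly before use
    pass

def measure_signal_quality(path):           # e.g., signal strength, SNR, or error rate
    return random.uniform(0.0, 1.0)

def switch_to(path):                        # reconfigure the switching network
    pass

signal_paths = ["sub_array_1", "sub_array_2", "sub_array_3", "sub_array_4"]
active_path = signal_paths[0]

def scheduled_gap_handler():
    """Scan candidate paths during a scheduled gap and keep the best one (acts 420-450)."""
    global active_path
    qualities = {}
    for path in signal_paths:
        activate_receive_components(path)                 # act 420: pre-activate components
        qualities[path] = measure_signal_quality(path)    # act 430: monitor signal quality
    best_path = max(qualities, key=qualities.get)
    if best_path != active_path and qualities[best_path] > qualities[active_path]:
        switch_to(best_path)                              # act 450: change reception
        active_path = best_path
    # otherwise act 440: continue transceiving on the same path

scheduled_gap_handler()
print("active path:", active_path)
```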
The one or more processors can be implemented in numerous ways, such as with dedicated hardware, or with general purpose hardware (e.g., one or more microprocessors) that can be programmed using code to perform the functions described above. In this respect, it should be appreciated that one implementation of at least a portion of the embodiments described herein may comprise at least one computer-readable storage medium (e.g., RAM, ROM, EEPROM, flash memory or other memory technology, or other tangible, non-transitory computer-readable storage medium) encoded with computer code (i.e., a plurality of executable instructions) that, when executed on one or more processors, performs at least some of the above-discussed functionalities of one or more embodiments. In addition, it should be appreciated that the reference to code which, when executed, performs any of the above-discussed functionalities, is not limited to an application program running on a host computer. Rather, the terms code and software are used herein in a generic sense to reference any type of computer code (e.g., application software, firmware, microcode, machine language, or any other form of computer instruction) that can be employed to program one or more processors and/or logic circuitry to implement functionalities described herein. Various aspects of the apparatus and techniques described herein may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing description, and their application is therefore not limited to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. For example, aspects described in one embodiment may be combined with aspects described in other embodiments. Use of ordinal terms such as "first," "second," "third," etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed, but is used merely as a label to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term). Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of "including," "comprising," "having," "containing," "involving," and variations thereof is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.
11863217 | DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS The following description of certain embodiments presents various descriptions of specific embodiments. However, the innovations described herein can be embodied in a multitude of different ways, for example, as defined and covered by the claims. In this description, reference is made to the drawings where like reference numerals can indicate identical or functionally similar elements. It will be understood that elements illustrated in the figures are not necessarily drawn to scale. Moreover, it will be understood that certain embodiments can include more elements than illustrated in a drawing and/or a subset of the elements illustrated in a drawing. Further, some embodiments can incorporate any suitable combination of features from two or more drawings. The headings provided herein are for convenience only and are not intended to affect the meaning or scope of the claims. Cellular telephony radio front-ends can include feedback receivers that are used to measure the forward and/or reflected power on a given active transmit path for a variety of applications. A feedback receiver can alternatively be referred to as a measurement receiver. An example application includes measuring forward power and determining whether absolute power is above a threshold limit for a specific absorption rate (SAR) regulatory specification, and taking action such as reducing the power to prevent excessive radiated power amplitudes. Another example application includes measuring forward power and determining a relative amplitude with respect to a 50 Ohm in-factory reference measurement made during initial calibration of a product to determine how close the power is to the specific target power, and taking action such as adjusting the power up and/or down to get back to a reference value. Another example application includes measuring reflected power (perhaps with knowledge of the forward power setting or a specific measurement of the forward power as well) and determining whether the difference between forward and reflected power exceeds a threshold and the antenna loading/de-tuning/performance has become unacceptably poor and a different antenna should be used for the given use case/signal. The radio can then perform a change in connectivity or an antenna swap. Another example application includes measuring a detailed complex real time ratio of the forward and reflected signal in order to assess the actual complex impedance of the antenna. Based on this ratio, the radio can adjust antenna tuning settings to better set aperture/complex input impedance/both for the antenna, or to make adjustments to the digital pre-distortion of the transmitter (DPD) to better optimize the linearity/efficiency/power capability into the variable antenna load. Such forward and/or reflected power measurements can be achieved by coupling off a relatively small amount of the forward and/or reflected power and directing the coupled energy to a radio frequency (RF) path that can be connected back to one or more feedback receiver inputs on a transceiver. The amount of coupled RF power can be relatively small so as not to incur significant insertion loss from losing useful signal energy. A radio frequency coupler can provide a coupled power signal that is an indication of radio frequency power.
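For a concrete feel for such measurements, the short Python sketch below computes quantities commonly derived from forward and reflected power readings (reflection-coefficient magnitude, return loss, and VSWR) and applies example thresholds of the kinds described above. The numerical values and thresholds are illustrative assumptions, not values from this disclosure.

```python
import math

forward_power_dbm = 24.0      # hypothetical coupled forward power reading
reflected_power_dbm = 19.0    # hypothetical coupled reflected power reading

# Convert to linear power (mW) and derive the reflection coefficient magnitude.
forward_mw = 10 ** (forward_power_dbm / 10)
reflected_mw = 10 ** (reflected_power_dbm / 10)
gamma = math.sqrt(reflected_mw / forward_mw)              # |Gamma|
return_loss_db = forward_power_dbm - reflected_power_dbm  # forward minus reflected, in dB
vswr = (1 + gamma) / (1 - gamma)

print(f"|Gamma| = {gamma:.2f}, return loss = {return_loss_db:.1f} dB, VSWR = {vswr:.2f}")

# Hypothetical control decisions of the kinds described above.
ABSOLUTE_POWER_LIMIT_DBM = 26.0   # example radiated-power limit
RETURN_LOSS_FLOOR_DB = 6.0        # example threshold for an unacceptably poor antenna match

if forward_power_dbm > ABSOLUTE_POWER_LIMIT_DBM:
    print("Reduce transmit power to respect the radiated-power limit.")
if return_loss_db < RETURN_LOSS_FLOOR_DB:
    print("Antenna match is poor; consider an antenna swap or re-tuning.")
```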
Technical solutions for connecting a coupled RF path back to a feedback receiver include (1) a star connection where each individual transmit coupler is connected in parallel back to a consolidation switch and to the appropriate feedback receiver input(s) or (2) a daisy chain where each coupler has a local switch to connect either (a) the local coupler or (b) an input to the output. The daisy chain path from input to output can be a bypass path to enable several radio frequency modules to be connected together and eventually connect to a feedback receiver input of a transceiver. In a daisy chain, circuitry of several radio frequency modules is connected in series and can then be connected to a dedicated feedback receiver input. In certain applications, multiple clusters of daisy chained modules can be connected to a consolidation switch, which then connects to a feedback receiver input of a transceiver. A challenge for non-stand-alone (NSA) or E-UTRAN New Radio-Dual Connectivity (EN-DC) operation in Fifth Generation (5G) cellular communications is intermodulation distortion (IMD) and coupling/leakage of transmit carrier power when more than one transmit carrier is active concurrently. In such EN-DC dual connectivity for NSA operation or uplink (UL) carrier aggregation (CA) operation, leakage of second transmit path power onto a first transmit path measurement can reduce accuracy and/or limit the useful dynamic range of a feedback receiver. For example, isolation can be challenging when a low power reflected power signal is being routed in the presence of a high power blocker. Some schemes further flexibly program the supply domains and the assignment of first transmit path or second transmit path operation among different modules, in such a way that the coupled path isolation and transmit leakage management are also flexibly programmed to provide isolation and to avoid routing a first transmit path coupled signal through a module with an active second transmit path, and vice versa. This can be further complicated by the availability of only a single feedback receiver input. Aspects of this disclosure relate to a radio frequency module with input/output ports for providing and/or passing an indication of radio frequency power. Two input/output ports of the radio frequency module can be reconfigurable such that each of these input/output ports can (a) function as an input to receive a coupled power signal from another radio frequency module and provide the coupled power signal to the other input/output port, (b) function as an output to provide a coupled power signal from another radio frequency module received at the other input/output port, and (c) output a coupled power signal from a radio frequency coupler of the radio frequency module. A coupled power signal provides an indication of radio frequency power. A forward power measurement or a reflected power measurement from a radio frequency coupler, such as a directional coupler, of the radio frequency module can be output at either of the two input/output ports. A coupler switching circuit can be configured to (a) enable a daisy chain to bypass from a first of the two input/output ports as an input to a second of the two input/output ports as an output or (b) enable the daisy chain to bypass from the second of the two input/output ports as an input to the first of the two input/output ports as an output. Circuitry of a plurality of such radio frequency modules can be connected together in a daisy chain.
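As elaborated in the following paragraphs, such a daisy chain can route coupled power in either direction around a loop toward a feedback receiver. As a purely illustrative sketch of that routing choice (assuming a loop of modules with a single feedback receiver node adjacent to one of them), the Python helper below picks, for each of two concurrently active modules, the direction that avoids passing through the other active module. The names and the routine are hypothetical simplifications, not part of the disclosed circuitry.

```python
# Toy routing helper for a daisy-chain loop of modules (illustrative only).
# Modules are arranged in a ring; index 0 sits next to the feedback receiver input.

def route_directions(loop, active_a, active_b):
    """Return {module: 'clockwise' | 'counter-clockwise'} for the two active modules."""
    n = len(loop)
    ia, ib = loop.index(active_a), loop.index(active_b)

    def passes_through(start, other, step):
        # Walk from `start` toward index 0 (the receiver) in the given direction and
        # check whether the walk crosses the other concurrently active module.
        idx = start
        while idx != 0:
            idx = (idx + step) % n
            if idx == other:
                return True
        return False

    directions = {}
    for me, other in ((ia, ib), (ib, ia)):
        step = -1 if not passes_through(me, other, -1) else +1
        directions[loop[me]] = "counter-clockwise" if step == -1 else "clockwise"
    return directions

loop = ["mod0", "mod1", "mod2", "mod3"]      # mod0 is adjacent to the feedback receiver
print(route_directions(loop, "mod1", "mod3"))
# {'mod1': 'counter-clockwise', 'mod3': 'clockwise'}: each coupled signal reaches the
# receiver without passing through the module carrying the other active transmit path.
```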
The daisy chain can be a loop between two different feedback receiver inputs. Coupled power from two actively transmitting modules of the plurality of radio frequency modules can be routed in opposite directions to an appropriate respective feedback receiver input without overlapping or running one transmit path coupled power through a radio frequency module with another concurrently active transmit path for isolation and/or other considerations. A coupler switching circuit in a radio frequency module can enable bidirectional routing for a coupled power signal generated by a radio frequency coupler of the radio frequency module. With bidirectional routing, the coupled power signal can be routed either clockwise or counter-clockwise around the daisy chain back to a feedback receiver input. In certain instances, a daisy chain loop including circuitry of a plurality of radio frequency modules can be connected to a single node that is electrically connected to a single feedback receiver input. By maintaining programmable bidirectionality of the coupled power around the daisy chain loop, a first transmit path coupled power signal can be routed to the single feedback receiver input without passing through a radio frequency module with a concurrently active second transmit path. Similarly, the second transmit path coupled power signal can be routed to the single feedback receiver input without passing through a radio frequency module with the concurrently active first transmit path. Radio frequency modules and radio frequency systems disclosed herein can achieve a variety of advantages over other technical solutions. For example, a daisy chain can be connected to a feedback receiver input without a consolidation switch external to radio frequency modules of the daisy chain. Technical solutions disclosed herein can scale with an arbitrary number of connected radio frequency modules. There can be no significant isolation issues between two concurrently active transmit paths because a coupled power signal from one transmit path can be routed away from a module with another concurrently active transmit path. Technical solutions disclosed herein can be realized in a relatively small physical area and can reduce and/or minimize long routes on a phone board from a radio frequency module positioned relatively far away from a feedback receiver. Technical solutions disclosed herein can implement daisy chaining for coupled power signals with simplified overhead, control, and/or timing. 5G Technology and Example Communication Network The International Telecommunication Union (ITU) is a specialized agency of the United Nations (UN) responsible for global issues concerning information and communication technologies, including the shared global use of radio spectrum. The 3rd Generation Partnership Project (3GPP) is a collaboration between groups of telecommunications standard bodies across the world, such as the Association of Radio Industries and Businesses (ARIB), the Telecommunications Technology Committee (TTC), the China Communications Standards Association (CCSA), the Alliance for Telecommunications Industry Solutions (ATIS), the Telecommunications Technology Association (TTA), the European Telecommunications Standards Institute (ETSI), and the Telecommunications Standards Development Society, India (TSDSI). 
Working within the scope of the ITU, 3GPP develops and maintains technical specifications for a variety of mobile communication technologies, including, for example, second generation (2G) technology (for instance, Global System for Mobile Communications (GSM) and Enhanced Data Rates for GSM Evolution (EDGE)), third generation (3G) technology (for instance, Universal Mobile Telecommunications System (UMTS) and High Speed Packet Access (HSPA)), and fourth generation (4G) technology (for instance, Long Term Evolution (LTE) and LTE-Advanced). The technical specifications controlled by 3GPP can be expanded and revised by specification releases, which can span multiple years and specify a breadth of new features and evolutions. In one example, 3GPP introduced carrier aggregation (CA) for LTE in Release 10. Although initially introduced with two downlink carriers, 3GPP expanded carrier aggregation in Release 14 to include up to five downlink carriers and up to three uplink carriers. Other examples of new features and evolutions provided by 3GPP releases include, but are not limited to, License Assisted Access (LAA), enhanced LAA (eLAA), Narrowband Internet of things (NB-IOT), Vehicle-to-Everything (V2X), and High Power User Equipment (HPUE). 3GPP introduced Phase 1 of fifth generation (5G) technology in Release 15, and is currently in the process of developing Phase 2 of 5G technology in Release 16. Subsequent 3GPP releases will further evolve and expand 5G technology. 5G technology is also referred to herein as 5G New Radio (NR). 5G NR supports or plans to support a variety of features, such as communications over millimeter wave spectrum, beamforming capability, high spectral efficiency waveforms, low latency communications, multiple radio numerology, and/or non-orthogonal multiple access (NOMA). Although such RF functionalities offer flexibility to networks and enhance user data rates, supporting such features can pose a number of technical challenges. The teachings herein are applicable to a wide variety of communication systems, including, but not limited to, communication systems using advanced cellular technologies, such as LTE-Advanced, LTE-Advanced Pro, and/or 5G NR. FIG.1is a schematic diagram of one example of a communication network10. The communication network10includes a macro cell base station1, a mobile device2, a small cell base station3, and a stationary wireless device4. Embodiments disclosed herein can be implemented in the communication network10, for example. The illustrated communication network10ofFIG.1supports communications using a variety of technologies, including, for example, 4G LTE, 5G NR, and wireless local area network (WLAN), such as WiFi. In the communication network10, dual connectivity can be implemented with concurrent 4G LTE and 5G NR communication with the mobile device2. Although various examples of supported communication technologies are shown, the communication network10can be adapted to support a wide variety of communication technologies. Various communication links of the communication network10have been depicted inFIG.1. The communication links can be duplexed in a wide variety of ways, including, for example, using frequency-division duplexing (FDD) and/or time-division duplexing (TDD). FDD is a type of radio frequency communications that uses different frequencies for transmitting and receiving signals. FDD can provide a number of advantages, such as high data rates and low latency. 
In contrast, TDD is a type of radio frequency communications that uses about the same frequency for transmitting and receiving signals, and in which transmit and receive communications are switched in time. TDD can provide a number of advantages, such as efficient use of spectrum and variable allocation of throughput between transmit and receive directions. As shown inFIG.1, the mobile device2communicates with the macro cell base station1over a communication link that uses a combination of 4G LTE and 5G NR technologies. The mobile device2also communicates with the small cell base station3. In the illustrated example, the mobile device2and small cell base station3communicate over a communication link that uses 5G NR, 4G LTE, and WiFi technologies. In certain implementations, enhanced license assisted access (eLAA) is used to aggregate one or more licensed frequency carriers (for instance, licensed 4G LTE and/or 5G NR frequencies) with one or more unlicensed carriers (for instance, unlicensed WiFi frequencies). In certain implementations, the mobile device2communicates with the macro cell base station1and the small cell base station3using 5G NR technology over one or more frequency bands that are within Frequency Range 1 (FR1) and/or over one or more frequency bands that are above FR1. The one or more frequency bands within FR1 can be less than 6 GHz. For example, wireless communications can utilize FR1, Frequency Range 2 (FR2), or a combination thereof. In one embodiment, the mobile device2supports an HPUE power class specification. The illustrated small cell base station3also communicates with a stationary wireless device4. The small cell base station3can be used, for example, to provide broadband service using 5G NR technology. In certain implementations, the small cell base station3communicates with the stationary wireless device4over one or more millimeter wave frequency bands in the frequency range of 30 GHz to 300 GHz and/or upper centimeter wave frequency bands in the frequency range of 24 GHz to 30 GHz. In certain implementations, the small cell base station3communicates with the stationary wireless device4using beamforming. For example, beamforming can be used to focus signal strength to overcome path losses, such as high loss associated with communicating over millimeter wave frequencies. The communication network10ofFIG.1includes the macro cell base station1and the small cell base station3. In certain implementations, the small cell base station3can operate with relatively lower power, shorter range, and/or with fewer concurrent users relative to the macro cell base station1. The small cell base station3can also be referred to as a femtocell, a picocell, or a microcell. Although the communication network10is illustrated as including two base stations, the communication network10can be implemented to include more or fewer base stations and/or base stations of other types. As shown inFIG.1, base stations can communicate with one another using wireless communications to provide a wireless backhaul. Additionally or alternatively, base stations can communicate with one another using wired and/or optical links. The communication network10ofFIG.1is illustrated as including one mobile device and one stationary wireless device. The mobile device2and the stationary wireless device4illustrate two examples of user devices or user equipment (UE).
Although the communication network10is illustrated as including two user devices, the communication network10can be used to communicate with more or fewer user devices and/or user devices of other types. For example, user devices can include mobile phones, tablets, laptops, IoT devices, wearable electronics, and/or a wide variety of other communications devices. User devices of the communication network10can share available network resources (for instance, available frequency spectrum) in a wide variety of ways. In one example, frequency division multiple access (FDMA) is used to divide a frequency band into multiple frequency carriers. Additionally, one or more carriers are allocated to a particular user. Examples of FDMA include, but are not limited to, single carrier FDMA (SC-FDMA) and orthogonal FDMA (OFDMA). OFDMA is a multicarrier technology that subdivides the available bandwidth into multiple mutually orthogonal narrowband subcarriers, which can be separately assigned to different users. Other examples of shared access include, but are not limited to, time division multiple access (TDMA) in which a user is allocated particular time slots for using a frequency resource, code division multiple access (CDMA) in which a frequency resource is shared amongst different users by assigning each user device a unique code, space-divisional multiple access (SDMA) in which beamforming is used to provide shared access by spatial division, and non-orthogonal multiple access (NOMA) in which the power domain is used for multiple access. For example, NOMA can be used to serve multiple user devices at the same frequency, time, and/or code, but with different power levels. Enhanced mobile broadband (eMBB) refers to technology for growing system capacity of LTE networks. For example, eMBB can refer to communications with a peak data rate of at least 10 Gbps and a minimum of 100 Mbps for each user device. Ultra-reliable low latency communications (uRLLC) refers to technology for communication with very low latency, for instance, less than 2 milliseconds. uRLLC can be used for mission-critical communications such as for autonomous driving and/or remote surgery applications. Massive machine-type communications (mMTC) refers to low cost and low data rate communications associated with wireless connections to everyday objects, such as those associated with Internet of Things (IoT) applications. The communication network10ofFIG.1can be used to support a wide variety of advanced communication features, including, but not limited to eMBB, uRLLC, and/or mMTC. A peak data rate of a communication link (for instance, between a base station and a user device) depends on a variety of factors. For example, peak data rate can be affected by channel bandwidth, modulation order, a number of component carriers, and/or a number of antennas used for communications. For instance, in certain implementations, a data rate of a communication link can be about equal to M*B*log2(1+S/N), where M is the number of communication channels, B is the channel bandwidth, and S/N is the signal-to-noise ratio (SNR). Accordingly, data rate of a communication link can be increased by increasing the number of communication channels (for instance, transmitting and receiving using multiple antennas), using wider bandwidth (for instance, by aggregating carriers), and/or improving SNR (for instance, by increasing transmit power and/or improving receiver sensitivity). 
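The data rate relationship above can be illustrated with a short numerical sketch. The following Python snippet is only a minimal illustration of the M*B*log2(1+S/N) approximation described in the preceding paragraph; the carrier count, bandwidth, and SNR values are arbitrary example inputs rather than values taken from this disclosure.

```python
import math

def approx_data_rate_bps(num_channels: int, bandwidth_hz: float, snr_linear: float) -> float:
    """Approximate link data rate as M * B * log2(1 + S/N).

    num_channels: number of communication channels (e.g., MIMO streams), M.
    bandwidth_hz: aggregate channel bandwidth in Hz, B.
    snr_linear:   linear (not dB) signal-to-noise ratio, S/N.
    """
    return num_channels * bandwidth_hz * math.log2(1.0 + snr_linear)

def db_to_linear(snr_db: float) -> float:
    """Convert an SNR expressed in dB to a linear ratio."""
    return 10.0 ** (snr_db / 10.0)

# Example: 2 spatial streams over 100 MHz of aggregated bandwidth at 20 dB SNR.
rate = approx_data_rate_bps(num_channels=2,
                            bandwidth_hz=100e6,
                            snr_linear=db_to_linear(20.0))
print(f"Approximate peak data rate: {rate / 1e9:.2f} Gbps")
```

Doubling the number of channels, widening the aggregated bandwidth, or raising the SNR in this sketch increases the computed rate, which mirrors the three levers listed above.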
5G NR communication systems can employ a wide variety of techniques for enhancing data rate and/or communication performance. Carrier Aggregation FIG.2Ais a schematic diagram of one example of a communication link using carrier aggregation. Carrier aggregation can be used to widen bandwidth of the communication link by supporting communications over multiple frequency carriers, thereby increasing user data rates and enhancing network capacity by utilizing fragmented spectrum allocations. Carrier aggregation can present technical challenges for measuring power of individual carriers. Radio frequency systems disclosed herein can measure power associated with one or more transmit paths in carrier aggregation applications. Embodiments disclosed herein can be implemented in carrier aggregation applications. In the illustrated example, the communication link is provided between a base station21and a mobile device22. As shown inFIG.2A, the communications link includes a downlink channel used for RF communications from the base station21to the mobile device22, and an uplink channel used for RF communications from the mobile device22to the base station21. AlthoughFIG.2Aillustrates carrier aggregation in the context of FDD communications, carrier aggregation can also be used for TDD communications. In certain implementations, a communication link can provide asymmetrical data rates for a downlink channel and an uplink channel. For example, a communication link can be used to support a relatively high downlink data rate to enable high speed streaming of multimedia content to a mobile device, while providing a relatively slower data rate for uploading data from the mobile device to the cloud. In the illustrated example, the base station21and the mobile device22communicate via carrier aggregation, which can be used to selectively increase bandwidth of the communication link. Carrier aggregation includes contiguous aggregation, in which contiguous carriers within the same operating frequency band are aggregated. Carrier aggregation can also be non-contiguous, and can include carriers separated in frequency within a common band or in different bands. In the example shown inFIG.2A, the uplink channel includes three aggregated component carriers fUL1, fUL2, and fUL3. Additionally, the downlink channel includes five aggregated component carriers fDL1, fDL2, fDL3, fDL4, and fDL5. Although one example of component carrier aggregation is shown, more or fewer carriers can be aggregated for uplink and/or downlink. Moreover, a number of aggregated carriers can be varied over time to achieve desired uplink and downlink data rates. For example, a number of aggregated carriers for uplink and/or downlink communications with respect to a particular mobile device can change over time. For example, the number of aggregated carriers can change as the device moves through the communication network and/or as network usage changes over time. FIG.2Billustrates various examples of uplink carrier aggregation for the communication link ofFIG.2A.FIG.2Bincludes a first carrier aggregation scenario31, a second carrier aggregation scenario32, and a third carrier aggregation scenario33, which schematically depict three types of carrier aggregation. The carrier aggregation scenarios31-33illustrate different spectrum allocations for a first component carrier fUL1, a second component carrier fUL2, and a third component carrier fUL3. 
AlthoughFIG.2Bis illustrated in the context of aggregating three component carriers, carrier aggregation can be used to aggregate more or fewer carriers. Moreover, although illustrated in the context of uplink, the aggregation scenarios are also applicable to downlink. The first carrier aggregation scenario31illustrates intra-band contiguous carrier aggregation, in which component carriers that are adjacent in frequency and in a common frequency band are aggregated. For example, the first carrier aggregation scenario31depicts aggregation of component carriers fUL1, fUL2, and fUL3that are contiguous and located within a first frequency band BAND1. With continuing reference toFIG.2B, the second carrier aggregation scenario32illustrates intra-band non-continuous carrier aggregation, in which two or more components carriers that are non-adjacent in frequency and within a common frequency band are aggregated. For example, the second carrier aggregation scenario32depicts aggregation of component carriers fUL1, fUL2, and fUL3that are non-contiguous, but located within a first frequency band BAND1. The third carrier aggregation scenario33illustrates inter-band non-contiguous carrier aggregation, in which component carriers that are non-adjacent in frequency and in multiple frequency bands are aggregated. For example, the third carrier aggregation scenario33depicts aggregation of component carriers fUL1and fUL2of a first frequency band BAND1with component carrier fUL3of a second frequency band BAND2. With reference toFIGS.2A and2B, the individual component carriers used in carrier aggregation can be of a variety of frequencies, including, for example, frequency carriers in the same band or in multiple bands. Additionally, carrier aggregation is applicable to implementations in which the individual component carriers are of about the same bandwidth as well as to implementations in which the individual component carriers have different bandwidths. Certain communication networks allocate a particular user device with a primary component carrier (PCC) or anchor carrier for uplink and a PCC for downlink. Additionally, when the mobile device communicates using a single frequency carrier for uplink or downlink, the user device communicates using the PCC. To enhance bandwidth for uplink communications, the uplink PCC can be aggregated with one or more uplink secondary component carriers (SCCs). Additionally, to enhance bandwidth for downlink communications, the downlink PCC can be aggregated with one or more downlink SCCs. In certain implementations, a communication network provides a network cell for each component carrier. Additionally, a primary cell can operate using a PCC, while a secondary cell can operate using a SCC. The primary and secondary cells may have different coverage areas, for instance, due to differences in frequencies of carriers and/or network environment. License assisted access (LAA) refers to downlink carrier aggregation in which a licensed frequency carrier associated with a mobile operator is aggregated with a frequency carrier in unlicensed spectrum, such as WiFi. LAA employs a downlink PCC in the licensed spectrum that carries control and signaling information associated with the communication link, while unlicensed spectrum is aggregated for wider downlink bandwidth when available. LAA can operate with dynamic adjustment of secondary carriers to avoid WiFi users and/or to coexist with WiFi users. 
Enhanced license assisted access (eLAA) refers to an evolution of LAA that aggregates licensed and unlicensed spectrum for both downlink and uplink. Dual Connectivity With the introduction of the 5G NR air interface standards, 3GPP has allowed for the simultaneous operation of 5G and 4G standards in order to facilitate the transition. This mode can be referred to as Non-Stand-Alone (NSA) operation or E-UTRAN New Radio-Dual Connectivity (EN-DC) and can involve both 4G and 5G carriers being simultaneously transmitted from a user equipment (UE). EN-DC can present technical challenges for measuring power associated with individual transmit paths. Radio frequency systems disclosed herein can measure power associated with one or more transmit paths in dual connectivity applications. Embodiments disclosed herein can be implemented in dual connectivity applications. In certain EN-DC applications, dual connectivity NSA involves overlaying 5G systems onto an existing 4G core network. For dual connectivity in such applications, the control and synchronization between the base station and the UE can be performed by the 4G network while the 5G network is a complementary radio access network tethered to the 4G anchor. The 4G anchor can connect to the existing 4G network with the overlay of 5G data/control. FIG.3is a diagram of an example dual connectivity network topology. This architecture can leverage LTE legacy coverage to ensure continuity of service delivery and the progressive rollout of 5G cells. A UE30can simultaneously transmit dual uplink LTE and NR carriers. The UE30can transmit an uplink LTE carrier Tx1to the eNB31while transmitting an uplink NR carrier Tx2to the gNB32to implement dual connectivity. Any suitable combination of uplink carriers Tx1, Tx2and/or downlink carriers Rx1, Rx2can be concurrently transmitted via wireless links in the example network topology ofFIG.3. The eNB31can provide a connection with a core network, such as an Evolved Packet Core (EPC). The gNB32can communicate with the core network via the eNB31. Control plane data can be wirelessly communicated between the UE30and eNB31. The eNB31can also communicate control plane data with the gNB32. In the example dual connectivity topology ofFIG.3, any suitable combinations of standardized bands and radio access technologies (e.g., FDD, TDD, SUL, SDL) can be wirelessly transmitted and received. This can present technical challenges related to having multiple separate radios and bands functioning in the UE30. With a TDD LTE anchor point, network operation may be synchronous, in which case the operating modes can be constrained to Tx1/Tx2and Rx1/Rx2, or asynchronous which can involve Tx1/Tx2, Tx1/Rx2, Rx1/Tx2, or Rx1/Rx2. When the LTE anchor is a frequency division duplex (FDD) carrier, the TDD/FDD inter-band operation can involve simultaneous Tx1/Rx1/Tx2and Tx1/Rx1/Rx2. Radio Frequency Modules with Coupler Switching Circuit and Daisy Chain Architecture of Coupler Switching Circuits Radio frequency modules disclosed herein include coupler switching circuits arranged to bidirectionally pass a coupled power signal between input/output ports of a radio frequency module. Coupler switching circuits in a plurality of modules can be arranged in a daisy chain. 
A coupler switching circuit of an individual radio frequency module can provide a coupled power signal from a radio frequency coupler to an input/output port to cause the coupled power signal to propagate in a particular direction through the daisy chain to a feedback receiver input port. The coupler switching circuit can cause the coupled power signal to propagate in opposite directions though the daisy chain in different states. Daisy chained coupler switching circuit architectures disclosed herein can be scalable and achieve high isolation between coupled power signals associated with different transmit paths. In some instances, daisy chained coupler switching circuit architectures disclosed herein can be implemented without a switch external to the radio frequency modules between the daisy chain and an input of a feedback receiver. The coupler switching circuits and related daisy chains can be implemented in applications where two or more transmit paths are concurrently active, such as carrier aggregation applications and/or dual connectivity applications. FIG.4Ais a schematic diagram of a portion of a radio frequency module40with a bidirectional daisy chain coupler interface according to an embodiment. The radio frequency module40can be implemented with one less contact (e.g., pin) compared to some other technical solutions. The radio frequency module40can improve isolation of coupled power signals. The radio frequency module40can be implemented without external switching for a daisy chain in certain applications. As illustrated, the radio frequency module40includes a multi-throw switch41, radio frequency couplers42A and42B, notch filters43A and43B, termination and output switches44A and44B, termination impedances45A,45B,45C,45D, a coupled power output switch46, and a bypass switch47. In the radio frequency module40, the termination and output switches44A and44B, the coupled power output switch46, and the bypass switch47together implement a coupler switching circuit. A bidirectional daisy chain coupler interface of the radio frequency module40enables two input/output ports CPL1and CPL2to each be (a) an output for a coupled power signal generated by a radio frequency coupler42A or42B of the radio frequency module40, (b) an input for receiving a coupled power signal from another module, or (c) an output for passing a coupled power signal from another module received at the other input/output port. For example, the coupled power output switch46can electrically connect either radio frequency coupler42A or radio frequency coupler42B to either a first input/output port CPL1or a second input/output port CPL2. This can enable the radio frequency module40to output coupled radio frequency power to propagate in either direction in a daisy chain. The coupled power output switch46can provide either a forward coupled power signal or a reverse coupled power signal depending on a state of a termination and output switch44A or44B. The bypass switch47can enable a coupled power signal received at the first input/output port CPL1to be passed to the second input/output port CPL2. Similarly, the bypass switch47can enable a coupled power signal received at the second input/output port CPL2to be passed to the first input/output port CPL1. Accordingly, the bidirectional daisy chain coupler interface of the radio frequency module40can implement a bidirectional pass through of coupled power signals. 
The input/output ports CPL1and CPL2can be implemented by any suitable contacts, such as one or more pins, one or more pads, one or more bumps, the like, or any suitable combination thereof. The multi-throw switch41can be a multi-throw double-pole switch as illustrated. The multi-throw switch41can receive a radio frequency signal for transmission. The radio frequency signal can be generated by a power amplifier of the radio frequency module40. In certain applications, different power amplifiers can be coupled to respective notch filters43A and43B via the multi-throw switch41. The multi-throw switch41can be coupled to a plurality of transmit signal paths, which can include different filters. The multi-throw switch41can provide an output from a selected transmit path to the first radio frequency coupler42A. The multi-throw switch41can provide an output from a selected transmit path to the second radio frequency coupler42B. The first radio frequency coupler42A can couple a relatively small amount of forward or reflected power propagating to or from the multi-throw switch41and provide the coupled power to the notch filter43A. The reflected power can be referred to as reverse power. The first termination and output switch44A can connect one port of the first radio frequency coupler42A to the coupled power output switch46and another port to one of the termination impedances45A or45B. The termination impedances45A,45B,45C, and45D can each include a resistor. The first termination and output switch44A can be set to a first state corresponding to providing a forward power measurement or a second state corresponding to providing a reverse power measurement. InFIG.4A, the first termination and output switch44A is shown in the first state where the termination impedance45B is connected to a port of the radio frequency coupler42A. The first termination and output switch44A can toggle between the first state and the second state. In the second state, the port of the first radio frequency coupler42A illustrated inFIG.4Aas being connected to the termination impedance45B is instead connected to the coupled power output switch46and the port of the first radio frequency coupler42A illustrated inFIG.4Aas being connected to the coupled power output switch46is instead connected to the termination impedance45A. The second radio frequency coupler42B can couple a relatively small amount of forward or reflected power propagating to or from the multi-throw switch41and provide the coupled power to the notch filter43B. The second termination and output switch44B can connect one port of the second radio frequency coupler42B to the coupled power output switch46and another port to one of the termination impedances45C or45D. The second termination and output switch44B can be set to a first state corresponding to providing a forward power measurement or a second state corresponding to providing a reverse power measurement. InFIG.4A, the second termination and output switch44B is shown in the second state. The second termination and output switch44B can toggle between the first state for providing forward coupled power and the second state for providing reflected coupled power. The coupled power output switch46can electrically connect the first termination and output switch44A to the first input/output port CPL1or the second input/output port CPL2. The coupled power output switch46can electrically connect the second termination and output switch44B to the first input/output port CPL1or the second input/output port CPL2. 
In certain instances, the coupled power output switch46can provide a single coupled power signal to either the input/output port CPL1or the input/output port CPL2. The coupled power output switch46can provide coupled power signals from radio frequency couplers42A and42B to different input/output ports CPL1and CPL2concurrently in some applications. The bypass switch47can provide a bypass path between the first input/output port CPL1and the second input/output port CPL2. A coupled power signal received at either one of these ports can be provided to the other one of these ports. The bypass switch47can be turned off when the radio frequency module40is providing a coupled power signal from a radio frequency coupler42A or42B to an input/output port CPL1or CPL2. When the bypass switch47is turned off, the input/output ports CPL1and CPL2can be electrically isolated from each other. FIG.4Bis a schematic diagram of a portion of a radio frequency module48with a bidirectional daisy chain coupler interface according to an embodiment. The radio frequency module48can be implemented with fewer contacts compared to some other technical solutions and/or improve isolation of coupled power signals. The radio frequency module48can be implemented without external switching for a daisy chain in certain applications. As illustrated, the radio frequency module48includes a power amplifier36, a filter38, a multi-throw switch41′, a radio frequency coupler42connected to a termination impedance45, a coupler switching circuit49, and a bypass switch47. The coupler switching circuit49can be implemented in accordance with any suitable principles and advantages disclosed herein.FIG.4Billustrates that the radio frequency coupler42and the coupler switching circuit49can be included in a radio frequency module that also includes the power amplifier36. One or more circuit elements can be included in a signal path between the power amplifier36and the radio frequency coupler42. For example, as illustrated, the filter38and switch41′ can be in a signal path between the power amplifier36and the radio frequency coupler42. The coupled power signal generated by the radio frequency coupler42can be indicative of power of a radio frequency signal generated by the power amplifier36. FIG.5is a schematic diagram of a radio frequency coupler42and a coupler switching circuit50according to an embodiment. The coupler switching circuit50includes a plurality of switches51,52,53,54,55,56,57,58, and59. The switches51to59of the coupler switching circuit50can be controlled by a control circuit to provide various connections between the radio frequency coupler42and the input/output ports CPL1and CPL2and between the input/output ports CPL1and CPL2.FIG.5also illustrates that termination impedances45A and45B can be tunable. Any of the termination impedances disclosed herein can be tunable as suitable. The coupler switching circuit50can provide a forward coupled power signal to the first input/output port CPL1. In this mode, the switches51,52, and55are On and the other illustrated switches of the coupler switching circuit50are Off. The coupler switching circuit50can provide a reverse coupled power signal to the first input/output port CPL1. In this mode, the switches53,54, and57are On and the other illustrated switches of the coupler switching circuit50are Off. The coupler switching circuit50can provide a forward coupled power signal to the second input/output port CPL2. 
In this mode, the switches51,52, and56are On and the other illustrated switches of the coupler switching circuit50are Off. The coupler switching circuit50can provide a reverse coupled power signal to the second input/output port CPL2. In this mode, the switches53,54, and58are On and the other illustrated switches of the coupler switching circuit50are Off. The coupler switching circuit50can pass a coupled power signal from the first input/output port CPL1to the second input/output port CPL2. Similarly, the coupler switching circuit50can pass a coupled power signal from the second input/output port CPL2to the first input/output port CPL1. In the modes where a coupled power signal is passed from one input/output port to the other, the switch59is On and the other illustrated switches of the coupler switching circuit50are Off. Table 1 below summarizes states of the switches51to59of the coupler switching circuit50for these modes. In Table 1, FWD CPL represents forward coupled power and REV CPL represents reverse coupled power.

TABLE 1
Mode/Switch        51/52   53/54   55    56    57    58    59
FWD CPL −> CPL1    On      Off     On    Off   Off   Off   Off
REV CPL −> CPL1    Off     On      Off   Off   On    Off   Off
FWD CPL −> CPL2    On      Off     Off   On    Off   Off   Off
REV CPL −> CPL2    Off     On      Off   Off   Off   On    Off
CPL1 −> CPL2       Off     Off     Off   Off   Off   Off   On
CPL2 −> CPL1       Off     Off     Off   Off   Off   Off   On

FIG.6Ais a schematic diagram of a radio frequency system60with a bidirectional daisy chain of coupler switching circuits for dual feedback receiver inputs according to an embodiment. The illustrated daisy chain is arranged in a loop between two feedback receiver inputs. A coupler switching circuit can bidirectionally pass a coupled power signal through the daisy chain to either of the two feedback receiver inputs. A coupled power signal associated with a first transmit path can be routed through the daisy chain to the feedback receiver without passing through a radio frequency module with a second transmit path that is concurrently active. Similarly, a coupled power signal associated with the second transmit path can be routed through the daisy chain to the feedback receiver without passing through a radio frequency module that includes the first transmit path when the first and second transmit paths are both active. High isolation can be achieved between coupled power signals regardless of a number of modules, a number of supply domains, or how transmit paths are connected. As illustrated, the radio frequency system60includes a plurality of radio frequency modules61,62,63,64,65, and66and a transceiver67. The radio frequency modules61,62,63,64,65, and66can be implemented with any suitable principles and advantages disclosed with reference to the radio frequency module40ofFIG.4Aand/or the radio frequency module48ofFIG.4B. Each of the radio frequency modules61to66can include a coupler switching circuit, where the coupler switching circuits are together arranged in a daisy chain. The daisy chain can form a loop between feedback receiver inputs FB Rx1and FB Rx2of the transceiver67. In the daisy chain, input/output ports of radio frequency modules are electrically connected to each other external to the radio frequency modules. The coupler switching circuits can be implemented in accordance with any suitable principles and advantages discussed with reference toFIGS.4A,4B, and/or5. Some or all of the radio frequency modules61to66can include one or more transmit paths. Two or more of the radio frequency modules61to66can include transmit paths that are concurrently active. 
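Referring back to Table 1 above, the switch settings can also be written as a small lookup structure. The following Python sketch simply encodes the table for illustration; the mode names, the ALL_SWITCHES tuple, and the switch_states helper are illustrative assumptions and are not part of the described circuitry.

```python
# Switch states of the coupler switching circuit 50 for each mode, per Table 1.
# Keys are mode names; values list the switches that are On (all others are Off).
# Switches 51/52 and 53/54 are treated as ganged pairs, as in the table.
COUPLER_MODES = {
    "FWD_CPL_TO_CPL1": {"51/52", "55"},
    "REV_CPL_TO_CPL1": {"53/54", "57"},
    "FWD_CPL_TO_CPL2": {"51/52", "56"},
    "REV_CPL_TO_CPL2": {"53/54", "58"},
    "CPL1_TO_CPL2":    {"59"},   # bypass: pass a coupled power signal through
    "CPL2_TO_CPL1":    {"59"},   # bypass in the opposite direction
}

ALL_SWITCHES = ("51/52", "53/54", "55", "56", "57", "58", "59")

def switch_states(mode: str) -> dict:
    """Return an On/Off map for every switch in the given mode."""
    on = COUPLER_MODES[mode]
    return {sw: ("On" if sw in on else "Off") for sw in ALL_SWITCHES}

if __name__ == "__main__":
    # Example: reverse coupled power routed to CPL1 turns on 53/54 and 57 only.
    for sw, state in switch_states("REV_CPL_TO_CPL1").items():
        print(sw, state)
```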
The transmit paths can be concurrently active in a carrier aggregation application and/or a dual connectivity application. In certain instances, radio frequency modules61to66can each have coupler switching circuits with any suitable combination of features disclosed herein. According to some other instances, radio frequency modules61and66at ends of the daisy chain can have simplified coupler switching circuits that are unidirectional. In some applications, the daisy chain can include one or more one or more radio frequency modules (e.g., one or more diversity receive modules) arranged to pass a coupled power signal between two input/output ports without the functionality to provide a coupled power signal generated by the radio frequency module to either of the two input/output ports. In physical layout, the radio frequency modules61to66can be arranged to reduce and/or minimize the length of routes (1) between input/output ports of different radio frequency modules and (2) between input/output ports of the radio frequency modules at the ends of the daisy chain to the feedback receiver inputs. Accordingly, all of these routes can be relatively short in certain physical layouts. This can reduce and/or minimize parasitic capacitance associated with such routes. The transceiver67includes a feedback receiver. The transceiver67can be implemented on an integrated circuit. The feedback receiver can process coupled power signals from the radio frequency modules61to66. The feedback receiver can process a plurality of coupled power signals concurrently. The feedback receiver can include one or more receive paths that each include any suitable circuitry arranged to process a coupled power signal. For example, a receive path of a feedback receiver can include a low noise amplifier, a mixer, a filter, and an analog-to-digital converter. One or more adjustments to a transmit path can be performed in response to an output of the receive path of the feedback receiver. FIG.6Bis a schematic diagram of the radio frequency system60ofFIG.6Ain a state where two radio frequency modules are actively transmitting. InFIG.6B, the radio frequency modules62and64are concurrently transmitting. The first transmitting radio frequency module62can provide a coupled power signal at a first input/output port CPL1. For example, a coupler switching circuit of the first transmitting radio frequency module62can electrically connect a radio frequency coupler of the first transmitting radio frequency module62to the first input/output port CPL1of the first transmitting radio frequency module62. The second transmitting radio frequency module64can provide a coupled power signal at a second input/output port CPL2. For example, a coupler switching circuit of the second transmitting radio frequency module64can electrically connect a radio frequency coupler of the second transmitting radio frequency module64to the second input/output port CPL2of the second transmitting radio frequency module64. As illustrated, the coupled power signals from the transmitting radio frequency modules62and64can propagate in different directions through the daisy chain to different respective feedback receiver inputs FB Rx1and FB Rx2. The different directions are opposite directions as illustrated inFIG.6B. Accordingly, in the state shown inFIG.6B, coupled power signals do not propagate through another active transmitting radio frequency module in the daisy chain. This can achieve high isolation. 
In the state shown inFIG.6B, coupled power signals do not propagate through coupler switching circuits of the radio frequency module63. FIG.7Ais a schematic diagram of a radio frequency system70with a bidirectional daisy chain of coupler switching circuits for a single feedback receiver input according to an embodiment. In the radio frequency system70, a transceiver67′ includes one feedback receiver input FB Rx. The daisy chain of coupler switching circuits in the radio frequency system70is like the daisy chain in the radio frequency system60except that ends of the daisy chain connect at T connection72in the radio frequency system70. The T connection72can be referred to as a T junction. The daisy chain of the radio frequency system70connects to a single feedback receiver input FB Rx. The daisy chain illustrated inFIG.7Ais implemented without switching external to the radio frequency modules61to66. FIG.7Bis a schematic diagram of the radio frequency system70ofFIG.7Ain a state where two radio frequency modules are actively transmitting. InFIG.7B, the radio frequency modules62and64are concurrently transmitting. As illustrated, the coupled power signals from the transmitting radio frequency modules62and64can propagate in different directions through the daisy chain to the T connection72. Accordingly, in the state shown inFIG.7B, coupled power signals from an active transmitting radio frequency module do not propagate through another active transmitting radio frequency module in the daisy chain. This can achieve high isolation. In some applications, a coupled power signal from the first actively transmitting radio frequency module62can propagate through the daisy chain at a different time than the coupled power signal form the second actively transmitting radio frequency module64in the radio frequency system70. In such applications, a coupled power signal from the first actively transmitting radio frequency module62received by a feedback receiver does not propagate through the second actively transmitting radio frequency module64. In certain applications, coupled power signals from the first actively transmitting radio frequency module62and the second actively transmitting radio frequency module64can propagate through the daisy chain concurrently in the radio frequency system70. In such applications, the feedback receiver can separate the coupled power signals from the different actively transmitting radio frequency modules62and64for further separate processing. For example, a diplexer of the transceiver67′ can separate such coupled power signals FIG.8is a schematic diagram of a radio frequency system80with a bidirectional daisy chain of coupler switching circuits coupled to a single feedback receiver input via a switch82according to an embodiment. The switch82can selectively electrically connect one end of the daisy chain to the feedback receiver input port FB Rx of the transceiver67′. The switch82can selectively electrically connect a port CPL of another radio frequency module81to the feedback receiver input port FB Rx. The principles and advantages disclosed herein can be implemented in radio frequency systems where any suitable number of daisy chains of coupler switching circuits and any suitable number of ports of individual radio frequency modules can be electrically connected to a feedback receiver. For example, in certain applications, two or more daisy chains of coupler switching circuits can be electrically connected to a feedback receiver. 
As another example, in some applications, one or more daisy chains of coupler switching circuits and two or more individual radio frequency modules can be electrically connected to a feedback receiver. As one more example, in various applications, two or more daisy chains of coupler switching circuits can each be coupled to a different feedback receiver port and/or a different feedback receiver. FIG.9is a schematic diagram of a portion of a radio frequency module90with a bidirectional daisy chain coupler interface for coupled power signals with direct current and radio frequency components according to an embodiment. In the radio frequency module90, a radio frequency coupler42can be electrically connected to either input/output port CPL1or CPL2via a termination and output switch44and a coupled power output switch92. Biasing circuits93and94can each provide a direct current (DC) bias to a respective coupled power signal provided to input/output port CPL1or CPL2when the radio frequency coupler42is electrically connected to an input/output port CPL1or CPL2. Accordingly, the coupled power signal provided to an input/output port CPL1or CPL2can have a DC component and an RF component. The radio frequency module90also includes an RF pass circuit95and a DC pass circuit96arranged to pass a coupled power signal between input/output ports CPL1and CPL2. With the RF pass circuit95and the DC pass circuit96, the radio frequency module90can pass a coupled power signal between input/output ports CPL1and CPL2when the radio frequency module90is otherwise inactive. Accordingly, control of the daisy chain of coupler switching circuits can be simplified. Power consumption of the radio frequency system can also be reduced. The RF pass circuit95and the DC pass circuit96can both be deactivated when the radio frequency coupler42is providing a coupled power signal to either of the input/output ports CPL1and CPL2. FIG.10is a schematic diagram of an example radio frequency pass circuit100that can be electrically connected between input/output ports of the radio frequency module90ofFIG.9according to an embodiment. The radio frequency pass circuit100can implement the RF pass circuit95ofFIG.9. A DC component of a coupled power signal received at the first input/output port CPL1can be applied to a control terminal of a first pass transistor101via a biasing element103to turn on the first pass transistor101. This can turn on the first pass transistor101when a radio frequency module that includes the radio frequency pass circuit100is otherwise inactive. A DC blocking element102can block the DC component of the coupled power signal received at the first input/output port CPL1so that the pass transistor101passes the RF component of the coupled power signal to the second input/output port CPL2when on. A DC component of a coupled power signal received at the second input/output port CPL2can be applied to a control terminal of a second pass transistor105via a biasing element107to turn on the second pass transistor105. This can turn on the second pass transistor105when a radio frequency module that includes the radio frequency pass circuit100is otherwise inactive. A DC blocking element106can block the DC component of the coupled power signal received at the second input/output port CPL2so that the pass transistor105passes the RF component of the coupled power signal to the first input/output port CPL1when on. 
When a radio frequency coupler is providing a coupled power signal to either of the input/output ports CPL1or CPL2, the active low enable signal Enable_L can turn off the pass transistors101and105to deactivate the radio frequency pass circuit100. This can decouple the input/output ports CPL1and CPL2from each other when one or more radio frequency couplers are providing a coupled power signal to at least one of these input/output ports. FIG.11is a schematic diagram of an example direct current pass circuit110that can be electrically connected between input/output ports of the radio frequency module ofFIG.9according to an embodiment. The DC pass circuit110can implement the DC pass circuit96ofFIG.9. A RF blocking element112can block an RF component of a coupled power signal received at the first input/output port CPL1. As illustrated, the RF blocking element112includes a resistor and a capacitor arranged as a low pass filter. The DC component of the coupled power signal received at the first input/output port CPL1can turn on a transistor113that in turn turns on a first pass transistor111. This can turn on the first pass transistor111when a radio frequency module that includes the direct current pass circuit110is otherwise inactive. The first pass transistor111can pass the DC component of the coupled power signal received at the first input/output port CPL1to the second input/output port CPL2when on. A RF blocking element116can block an RF component of a coupled power signal received at the second input/output port CPL2. As illustrated, the RF blocking element116includes a resistor and a capacitor arranged as a low pass filter. The DC component of the coupled power signal received at the second input/output port CPL2can turn on a transistor117that in turn turns on a second pass transistor115. This can turn on the second pass transistor115when a radio frequency module that includes the direct current pass circuit110is otherwise inactive. The second pass transistor115can pass the DC component of the coupled power signal from the second input/output port CPL2to the first input/output port CPL1when on. When a radio frequency coupler is providing a coupled power signal to either of the input/output ports CPL1or CPL2, the active high enable signal Enable_H can turn off the pass transistors111and115to deactivate the DC pass circuit110. This can decouple the input/output ports CPL1and CPL2from each other when one or more radio frequency couplers are providing a coupled power signal to at least one of these input/output ports. FIG.12is a schematic diagram of a radio frequency system120with a bidirectional daisy chain of coupler switching circuits with direct current blocking elements127and128included between the daisy chain and dual feedback receiver inputs according to an embodiment. The radio frequency system120includes radio frequency modules121,122,123,124,125, and126with respective coupler switching circuits arranged in a daisy chain. The coupler switching circuits can pass a coupled power signal having a DC component and a radio frequency component and to provide such a coupled power signal to an input/output port. The coupler switching circuits of the radio frequency modules121to126can be implemented in accordance with any suitable principles and advantages disclosed with reference toFIGS.9to11. 
The radio frequency system120is like the radio frequency system60ofFIG.6A, except that (1) the coupler switching circuits of radio frequency modules121to126are configured to pass a coupled power signal with a DC component and an RF component and (2) the direct current blocking elements127and128can block the DC component of a coupled power signal from the daisy chain provided to feedback receiver ports FB Rx1and FB Rx2, respectively, of a transceiver. As illustrated, direct current blocking elements127and128can be capacitors. Direct current blocking elements127and/or128can be implemented with a daisy chain of coupler switching circuits in accordance with any suitable principles and advantages disclosed herein. FIG.13is a schematic diagram of a radio frequency system130with a bidirectional daisy chain of coupler switching circuits with a direct current blocking element127between the daisy chain and a single feedback receiver input according to an embodiment. The radio frequency system130is like the radio frequency system70ofFIG.7A, except that (1) the coupler switching circuits of radio frequency modules121to126are configured to pass a coupled power signal with a DC component and an RF component and (2) the direct current blocking element127can block the DC component of the coupled power signal from the daisy chain provided to the feedback receiver port FB Rx of the transceiver67′. Radio frequency systems disclosed herein can perform methods of passing radio frequency power through a daisy chain of coupler switching circuits. Such methods can be performed in accordance with any suitable principles and advantages of the radio frequency modules and/or radio frequency systems disclosed herein. These methods can involve a coupled power signal propagating through circuitry of one or more radio frequency modules that are not actively transmitting while a plurality of radio frequency modules are actively transmitting. An example method can include providing two indications of radio frequency power to a daisy chain of coupler switching circuits while a plurality of radio frequency modules are concurrently transmitting. A first coupled power signal generated by a first actively transmitting radio frequency module can be provided to the daisy chain. A second coupled power signal generated by a second actively transmitting radio frequency module can also be provided to the daisy chain such that the first and second coupled power signals propagate in opposite directions in the daisy chain. The first and second coupled power signals can be received by a feedback receiver. The coupled signals can be processed by the feedback receiver. Then one or more adjustments to a transmit path of an actively transmitting radio frequency module can be performed based on an output signal of the feedback receiver. Wireless Communication Devices The radio frequency modules and radio frequency systems disclosed herein can be included in wireless communication devices, such as mobile devices. An example of such a wireless communication device will be discussed with reference toFIG.14. FIG.14is a schematic diagram of one embodiment of a mobile device800. The mobile device800includes a baseband system801, a transceiver802, a front end system803, antennas804, a power management system805, a memory806, a user interface807, and a battery808. 
The mobile device800can be used to communicate using a wide variety of communications technologies, including, but not limited to, 2G, 3G, 4G (including LTE, LTE-Advanced, and LTE-Advanced Pro), 5G NR, WLAN (for instance, WiFi), WPAN (for instance, Bluetooth and ZigBee), WMAN (for instance, WiMax), and/or GPS technologies. The transceiver802generates RF signals for transmission and processes incoming RF signals received from the antennas804. It will be understood that various functionalities associated with the transmission and receiving of RF signals can be achieved by one or more components that are collectively represented inFIG.14as the transceiver802. In one example, separate components (for instance, separate circuits or dies) can be provided for handling certain types of RF signals. The front end system803aids in conditioning signals transmitted to and/or received from the antennas804. In the illustrated embodiment, the front end system803includes antenna tuning circuitry810, power amplifiers (PAs)811, low noise amplifiers (LNAs)812, filters813, switches814, and signal splitting/combining circuitry815. However, other implementations are possible. The filters813can include one or more tunable filters with harmonic rejection that include one or more features of the embodiments disclosed herein. For example, the front end system803can provide a number of functionalities, including, but not limited to, amplifying signals for transmission, amplifying received signals, filtering signals, switching between different bands, switching between different power modes, switching between transmission and receiving modes, duplexing of signals, multiplexing of signals (for instance, diplexing or triplexing), or some combination thereof. In certain implementations, the mobile device800supports carrier aggregation, thereby providing flexibility to increase peak data rates. Carrier aggregation can be used for both Frequency Division Duplexing (FDD) and Time Division Duplexing (TDD), and may be used to aggregate a plurality of carriers or channels. Carrier aggregation includes contiguous aggregation, in which contiguous carriers within the same operating frequency band are aggregated. Carrier aggregation can also be non-contiguous, and can include carriers separated in frequency within a common band or in different bands. The antennas804can include antennas used for a wide variety of types of communications. For example, the antennas804can include antennas for transmitting and/or receiving signals associated with a wide variety of frequencies and communications standards. In certain implementations, the antennas804support MIMO communications and/or switched diversity communications. For example, MIMO communications use multiple antennas for communicating multiple data streams over a single radio frequency channel. MIMO communications benefit from higher signal to noise ratio, improved coding, and/or reduced signal interference due to spatial multiplexing differences of the radio environment. Switched diversity refers to communications in which a particular antenna is selected for operation at a particular time. For example, a switch can be used to select a particular antenna from a group of antennas based on a variety of factors, such as an observed bit error rate and/or a signal strength indicator. The mobile device800can operate with beamforming in certain implementations. 
For example, the front end system803can include amplifiers having controllable gain and phase shifters having controllable phase to provide beam formation and directivity for transmission and/or reception of signals using the antennas804. For example, in the context of signal transmission, the amplitude and phases of the transmit signals provided to the antennas804are controlled such that radiated signals from the antennas804combine using constructive and destructive interference to generate an aggregate transmit signal exhibiting beam-like qualities with more signal strength propagating in a given direction. In the context of signal reception, the amplitude and phases are controlled such that more signal energy is received when the signal is arriving to the antennas804from a particular direction. In certain implementations, the antennas804include one or more arrays of antenna elements to enhance beamforming. The baseband system801is coupled to the user interface807to facilitate processing of various user input and output (I/O), such as voice and data. The baseband system801provides the transceiver802with digital representations of transmit signals, which the transceiver802processes to generate RF signals for transmission. The baseband system801also processes digital representations of received signals provided by the transceiver802. As shown inFIG.14, the baseband system801is coupled to the memory806of facilitate operation of the mobile device800. The memory806can be used for a wide variety of purposes, such as storing data and/or instructions to facilitate the operation of the mobile device800and/or to provide storage of user information. The power management system805provides a number of power management functions of the mobile device800. In certain implementations, the power management system805includes a PA supply control circuit that controls the supply voltages of the power amplifiers811. For example, the power management system805can be configured to change the supply voltage(s) provided to one or more of the power amplifiers811to improve efficiency, such as power added efficiency (PAE). As shown inFIG.14, the power management system805receives a battery voltage from the battery808. The battery808can be any suitable battery for use in the mobile device800, including, for example, a lithium-ion battery. APPLICATIONS, TERMINOLOGY, AND CONCLUSION Any of the embodiments described above can be implemented in association with mobile devices such as cellular handsets. The principles and advantages of the embodiments can be used for any systems or apparatus, such as any uplink wireless communication device, that could benefit from any of the embodiments described herein. The teachings herein are applicable to a variety of systems. Although this disclosure includes example embodiments, the teachings described herein can be applied to a variety of modules, systems, devices, and methods. Any of the principles and advantages discussed herein can be implemented in association with RF circuits configured to process signals having a frequency in a range from about 30 kHz to 300 GHz, such as in a frequency range from about 450 MHz to 8.5 GHz. Aspects of this disclosure can be implemented in various electronic devices. 
Examples of the electronic devices can include, but are not limited to, consumer electronic products, parts of the consumer electronic products such as packaged radio frequency modules, radio frequency filter die, uplink wireless communication devices, wireless communication infrastructure, electronic test equipment, etc. Examples of the electronic devices can include, but are not limited to, a mobile phone such as a smart phone, a wearable computing device such as a smart watch or an ear piece, a telephone, a television, a computer monitor, a computer, a modem, a hand-held computer, a laptop computer, a tablet computer, a microwave, a refrigerator, a vehicular electronics system such as an automotive electronics system, a robot such as an industrial robot, an Internet of things device, a stereo system, a digital music player, a radio, a camera such as a digital camera, a portable memory chip, a home appliance such as a washer or a dryer, a peripheral device, a wrist watch, a clock, etc. Further, the electronic devices can include unfinished products. Unless the context indicates otherwise, throughout the description and the claims, the words “comprise,” “comprising,” “include,” “including” and the like are to generally be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” “for example,” “such as” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or states. The word “coupled”, as generally used herein, refers to two or more elements that may be either directly connected, or connected by way of one or more intermediate elements. Likewise, the word “connected”, as generally used herein, refers to two or more elements that may be either directly connected, or connected by way of one or more intermediate elements. Additionally, the words “herein,” “above,” “below,” and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the above Detailed Description using the singular or plural number may also include the plural or singular number respectively. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the disclosure. Indeed, the novel filters, wireless communication devices, apparatus, methods, and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the form of the filters, wireless communication devices, apparatus, methods, and systems described herein may be made without departing from the spirit of the disclosure. For example, while blocks are presented in a given arrangement, alternative embodiments may perform similar functionalities with different components and/or circuit topologies, and some blocks may be deleted, moved, added, subdivided, combined, and/or modified. Each of these blocks may be implemented in a variety of different ways. Any suitable combination of the elements and/or acts of the various embodiments described above can be combined to provide further embodiments. 
The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the disclosure. | 72,252 |
11863218 | DETAILED DESCRIPTION OF THE EMBODIMENTS In order to make the above purposes, technical solutions, and advantages of the present disclosure clearer, the present disclosure is described clearly and completely below with reference to the accompanying drawings and embodiments of the present disclosure. It should be understood that the embodiments described herein are intended only to explain the present disclosure and not to limit the present disclosure. FIG.1is a schematic diagram of an application environment of a gain control method in an embodiment. As shown inFIG.1, the application environment includes, but is not limited to, a system requiring gain adjustment, such as a repeater or a distributed antenna system (“DAS”). The following takes a digital fiber repeater as an example. A digital fiber repeater10includes a local machine110and remote machines120. Multiple remote machines120can be cascaded to one local machine110. A transmission principle of downlink of the digital fiber repeater10is as follows: a downlink signal can be coupled from a base station to the local machine110through a coupler. The local machine110receives the downlink signal by an antenna, and converts the downlink signal into an intermediate frequency signal through down conversion. The intermediate frequency signal is sent to an analog-to-digital converter (“AD”) after being adjusted by an attenuation module using an attenuation value from an automatic gain controller. After the gain adjustment, the intermediate frequency signal can be sampled by the AD and converted into a digital signal. The digital signal is processed by baseband signal processing, packaged to an optical fiber transceiver, and transmitted to the remote machine120through an optical fiber. The remote machine120can receive the digital signal of the local machine110through the optical fiber transceiver. After the baseband signal processing, the digital signal can be converted into an analog signal through a digital-to-analog converter (“DA”). The analog signal can be sent out through an antenna after up-conversion and power amplifier processing. A transmission principle of the uplink corresponds to that of the downlink, and is not described here. In an embodiment of the present disclosure, the system requiring gain adjustment can detect an input power of a primary synchronization signal (“PSS”) in an input signal in real time, acquire a rated power of a downlink PSS that acts as a gain control threshold of the automatic gain controller, and control the automatic gain controller to adjust a value of gain attenuation according to magnitudes of the input power of the PSS and the rated power of the downlink PSS, which is used to adjust an uplink gain and a downlink gain. The solution provided in the present disclosure can control levels of the input signal based on the magnitude relationship between the input power of the PSS and the rated power of the downlink PSS, avoiding frequent switching of the automatic level control function during application peak hours, and the downlink gain and the uplink gain are controlled in linkage at the same time to prevent interference with the base station due to an excessively high uplink noise floor. FIG.2is a flowchart diagram of a gain control method in an embodiment of the present disclosure. A gain control method is provided in the present disclosure, and the system requiring gain adjustment includes at least the automatic gain controller. 
As shown inFIG.2, the gain control method includes step201to step203. At step201, an input power of a PSS in an input signal is detected in real time. Specifically, in network communication, the input signal of the system requiring gain adjustment includes the PSS. The real-time input power of the PSS can be obtained by detecting the PSS in the input signal in real time. A physical-layer cell identity (“PCI”) is jointly determined by the PSS and a Secondary Synchronization Signal (“SSS”), i.e., PCI=3*SSS group identity+PSS identity. The PSS occupies 6 resource blocks (RBs), i.e., 72 subcarriers, of system bandwidth in the frequency domain, and the PSS indicates one of three physical-layer identities: 0, 1, or 2. At step202, a rated power of a downlink PSS that acts as a gain control threshold of the automatic gain controller is acquired. Specifically, the automatic gain controller (“AGC”) can use an effective combination of linear amplification and compression amplification to adjust the level of the input signal of the uplink and downlink of the system requiring gain adjustment. The rated power of the downlink PSS of the AGC is compared with the input signal information of the AGC. If the input signal information (power or voltage amplitude) does not match the rated power of the downlink PSS, an external attenuator is controlled to increase or decrease the value of gain attenuation to adjust the levels of the input signal of the uplink and downlink. The downlink PSS represents the PSS in downlink data. The rated power of the downlink PSS can be calculated according to the acquired communication parameters of the base station, and the rated power of the downlink PSS can be used as the rated power of the downlink PSS of the AGC. At step203, the automatic gain controller can be controlled to adjust a value of gain attenuation according to magnitudes of the input power of the PSS and the rated power of the downlink PSS, which is used to adjust an uplink gain and a downlink gain. Specifically, the rated power of the downlink PSS can be calculated according to the communication parameters of the base station, and the rated power of the downlink PSS can be used as the rated power of the downlink PSS of the AGC. The magnitude relationship between the input power of the PSS and the rated power of the downlink PSS is obtained to control the value of gain attenuation. The AGC can control the gains of the uplink and downlink of the system requiring gain adjustment by adjusting the value of gain attenuation. The value of gain attenuation can be used to adjust the levels of the input signal of the uplink and downlink by controlling gain devices of the local machine and the remote machine. Specifically, the value of gain attenuation is configured to adjust the level of the input signal of the downlink for gain adjustment. The value of gain attenuation corresponding to the local machine is transmitted to the remote machine synchronously, and the value of gain attenuation can also control the level of the input signal of the uplink of the remote machine, so as to realize the gain adjustment of the uplink, i.e., the value of gain attenuation can enable synchronous linkage control of the downlink gain and the uplink gain in the system.
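Purely as an illustration of step201to step203, the following Python sketch shows one gain control iteration in which a single attenuation value is shared by the downlink and uplink; the function names, the 1 dB step, and the numeric values are assumptions made for this sketch and are not taken from the disclosure.

    # Illustrative sketch of the gain control loop (steps 201-203).
    # measure_pss_power() and apply_attenuation() are hypothetical placeholders for the
    # real PSS detector and the gain devices of the local and remote machines.
    STEP_DB = 1.0   # step of each accumulation, chosen by the engineer

    def gain_control_iteration(measure_pss_power, apply_attenuation,
                               rated_pss_dbm, attenuation_db):
        pss_power_dbm = measure_pss_power()      # step 201: detect the PSS input power in real time
        if pss_power_dbm > rated_pss_dbm:        # steps 202-203: compare with the AGC threshold
            attenuation_db += STEP_DB            # raise attenuation to lower the signal level
        # The same attenuation value is mirrored to the remote machine, so the downlink
        # gain and the uplink gain are adjusted in linkage (step 203).
        apply_attenuation(attenuation_db)
        return attenuation_db

    new_att = gain_control_iteration(lambda: -27.0, lambda a: None,
                                     rated_pss_dbm=-30.0, attenuation_db=4.0)
    print(new_att)   # 5.0 in this illustrative run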
In the above gain control method, the input power of the PSS in an input signal is detected in real time, the rated power of the downlink PSS that acts as the gain control threshold of the automatic gain controller is acquired, and the automatic gain controller can be controlled to adjust the value of gain attenuation according to magnitudes of the input power of the PSS and the rated power of the downlink PSS, which is used to adjust the uplink gain and the downlink gain. In the present disclosure, the rated power of the downlink PSS can be used as the rated power of the downlink PSS of the AGC, and the magnitude relationship between the input power of the PSS and the rated power of the downlink PSS is detected to control the gain of the input signal of the uplink and downlink, avoiding frequent switching of the automatic level control function during application peak hours. The uplink gain and the downlink gain are further controlled. The method can prevent frequent switching of signal levels, and linkage control the downlink gain and the uplink gain at the same time to prevent interference with the base station due to excessively high uplink noise floor. FIG.3is a flowchart diagram of controlling the automatic gain controller to adjust the value of gain attenuation according to magnitudes of the input power of the PSS and the rated power of the downlink PSS in an embodiment of the present disclosure. The method includes step301to step302. At step301, when the input power of the PSS is greater than the rated power of the downlink PSS, the automatic gain controller is controlled to increase the value of gain attenuation until an adjusted input power of the PSS is detected to be less than the rated power of the downlink PSS. Specifically, the input power of the PSS and the rated power of the downlink PSS are acquired, and the rated power of the downlink PSS can be used as the gain control threshold of the automatic gain controller. When the input power of the PSS is greater than the rated power of the downlink PSS, the levels of the input signals of the downlink and uplink are automatically reduced according to the input power of the PSS in real-time detection. The automatic gain controller can increase the value of gain attenuation to ensure that the output power of the AGC is kept within an allowable error range of the maximum output power. The value of gain attenuation can be used to adjust the levels of the input signal of the uplink and downlink by controlling gain devices of the local machine and the remote machine. The value of gain attenuation can be accumulated through a counter, and a step of each accumulation can be defined by an engineer according to actual requirements. In the process of adjusting the value of gain attenuation, the adjusted input power of the PSS can be compared with the rated power of the downlink PSS. When the adjusted input power of the PSS is greater than the rated power of the downlink PSS, the value of gain attenuation is increased until the input power of the PSS does not exceed the rated power of the downlink PSS. At step302, when the input power of the PSS is less than or equal to the rated power of the downlink PSS, the automatic gain controller is controlled to adjust the value of gain attenuation, according to a difference between the rated power of the downlink PSS and the input power of the PSS during a preset time.
Specifically, the input power of the PSS and the rated power of the downlink PSS are acquired, and the rated power of the downlink PSS can be used as the gain control threshold of the automatic gain controller. When the input power of the PSS is less than the rated power of the downlink PSS, a power difference between the rated power of the downlink PSS and the input power of the PSS is acquired. Since the rated power of the downlink PSS is greater than the input power of the PSS, the power difference is defined as the rated power of the downlink PSS minus the input power of the PSS. The levels of the downlink input signal and the uplink input signal are increased according to the difference between the input power of the PSS detected during the preset time and the rated power of the downlink PSS, ensuring that the output power of the AGC is within the allowable error range of the maximum output power. The preset time can be 30 minutes, 60 minutes, 120 minutes, etc. The preset time can be set by the engineer according to a control accuracy of the automatic gain controller. The above list is for example only and does not limit the preset time. FIG.4is a flowchart diagram of controlling an automatic gain controller to adjust the value of gain attenuation according to a difference between the rated power of the downlink PSS and the input power of the PSS during a preset time in an embodiment of the present disclosure. The method includes step401to step402. At step401, a power difference between the rated power of the downlink PSS and the input power of the PSS during the preset time is acquired. Specifically, the input power of the PSS is detected in real time during the preset time, and the rated power of the downlink PSS can be used as the gain control threshold of the automatic gain controller. The preset time can be set by the engineer according to a signal period of the present network. The input power of the PSS can be detected in real time during the preset time, and a difference between the rated power of the downlink PSS and the input power of the PSS detected in real time during the preset time can be calculated. At step402, when the power difference during the preset time is greater than a power threshold, the automatic gain controller is controlled to reduce the value of gain attenuation. Specifically, the power threshold is configured to set an allowable error range of the input power of the PSS, and the power threshold can be set by the engineer according to actual requirements, such as 2 dB, 3 dB, 5 dB, 7 dB, or 10 dB, etc. It should be noted that the preceding power thresholds are for example only and do not limit the power threshold. During the preset time, when the power difference between the rated power of the downlink PSS and the input power of the PSS detected in real time is greater than the power threshold, it indicates that the currently detected input power of the PSS is not within the allowable error range of the input power of the PSS, i.e., the currently detected input power of the PSS is low, and the automatic gain controller needs to be controlled to reduce the value of gain attenuation, increasing the levels of the input signals of the downlink and uplink and then controlling the uplink gain and downlink gain. It should be understood that although the steps in the flowchart diagram inFIG.2toFIG.4are shown in sequence as indicated by arrows, the steps are not necessarily executed in the order indicated by the arrows.
Unless explicitly stated in the present disclosure, there is no strict order in which the steps can be performed, and the steps can be performed in any other order. In addition, at least a part of the steps inFIG.2toFIG.4can include multiple sub-steps or stages. The sub-steps or the stages are not necessarily executed at the same time, but can be executed at different times. An execution sequence of the sub-steps or the stages is not necessarily sequential. The sub-steps or the stages may be performed alternately or in turn with other steps or with at least a part of a sub-step or stage of other steps. In an embodiment of the present disclosure, the automatic gain controller can be controlled to reduce the value of gain attenuation, including the following steps: the value of gain attenuation can be detected while the automatic gain controller is controlled to reduce the value of gain attenuation; and when the value of gain attenuation is detected to be greater than zero, the value of gain attenuation continues to be reduced until the adjusted input power of the PSS is greater than the rated power of the downlink PSS. Specifically, when the power difference corresponding to the preset time is greater than the power threshold, it indicates that the currently detected input power of the PSS is not within the allowable error range of the input power of the PSS, i.e., the currently detected input power of the PSS is low, and the automatic gain controller needs to be controlled to reduce the value of gain attenuation. When the automatic gain controller is reducing the value of gain attenuation, the current value of gain attenuation of the automatic gain controller is repeatedly detected. When the value of gain attenuation of the automatic gain controller is greater than zero, the automatic gain controller is controlled to continue to reduce the value of gain attenuation. The current adjusted input power of the PSS is repeatedly detected until the adjusted input power of the PSS is greater than or equal to the rated power of the downlink PSS, and then the adjustment of the value of gain attenuation is stopped. In an embodiment of the present disclosure, after the value of gain attenuation is detected, the method further includes: when the value of gain attenuation is detected to be equal to zero, the adjustment of the value of gain attenuation is stopped. Specifically, when the power difference corresponding to the preset time is greater than the power threshold, it indicates that the currently detected input power of the PSS is not within the allowable error range of the input power of the PSS, i.e., the currently detected input power of the PSS is low, and the automatic gain controller needs to be controlled to reduce the value of gain attenuation. When the automatic gain controller is reducing the value of gain attenuation, the current value of gain attenuation of the automatic gain controller is repeatedly detected. When the value of gain attenuation of the automatic gain controller is greater than zero, the automatic gain controller is controlled to continue to reduce the value of gain attenuation. The adjustment of the value of gain attenuation is stopped when the value of gain attenuation is equal to zero. In an embodiment of the present disclosure, the method further includes: when the power difference during the preset time is always less than the power threshold, the automatic gain controller is controlled to maintain the current value of gain attenuation.
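Taken together, the rules of steps301,302,401and402and the zero floor just described amount to a small decision procedure. The sketch below is a minimal illustration only; the helper names, the 1 dB step, the 3 dB threshold, and the use of the minimum difference observed over the preset time as the measure of a persistently low input power are all assumptions, and the power threshold and the preset time themselves are elaborated in the paragraphs that follow.

    # Illustrative sketch of the attenuation adjustment rules (steps 301, 302, 401, 402).
    # All names and numeric values are assumptions; measure_adjusted_pss_power is a
    # hypothetical callable standing in for the repeated detection of the adjusted PSS power.

    def adjust_attenuation(pss_power_db, rated_pss_db, min_diff_over_preset_time_db,
                           attenuation_db, measure_adjusted_pss_power,
                           step_db=1.0, power_threshold_db=3.0):
        if pss_power_db > rated_pss_db:
            # Step 301: raise the attenuation until the adjusted PSS power no longer
            # exceeds the rated power of the downlink PSS.
            while measure_adjusted_pss_power() > rated_pss_db:
                attenuation_db += step_db
        elif min_diff_over_preset_time_db > power_threshold_db:
            # Steps 302/401-402: input power persistently low over the preset time ->
            # reduce the attenuation, but never below zero, and stop once the adjusted
            # PSS power reaches the rated power.
            while attenuation_db > 0.0:
                attenuation_db = max(0.0, attenuation_db - step_db)
                if measure_adjusted_pss_power() >= rated_pss_db:
                    break
        # Otherwise the difference stayed within the threshold: keep the current value.
        return attenuation_db

    readings = iter([-28.0, -31.0])            # made-up adjusted PSS powers after each step
    print(adjust_attenuation(-27.0, -30.0, 0.0, 4.0, lambda: next(readings)))   # 5.0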
Specifically, the power threshold is configured to set the allowable error range of the input power of the PSS, and the power threshold can be set by the engineer according to actual requirements, such as 2 dB, 3 dB, 5 dB, 7 dB, or 10 dB, etc. It should be noted that the preceding power thresholds are for example only and do not limit the power threshold. When the power difference between the rated power of the downlink PSS and the input power of the PSS detected in real time during the preset time is always less than the power threshold, it indicates that the currently detected input power of the PSS is within the allowable error range of the input power of the PSS, and the automatic gain controller is controlled to maintain the current value of gain attenuation. In an embodiment of the present disclosure, the rated power of the downlink PSS that acts as a gain control threshold of an automatic gain controller is acquired, including the following steps: the rated power of the downlink PSS is calculated according to estimated communication parameters of the base station, and a first calculation formula is defined by the following: Rated power of the downlink PSS = Input power of the downlink PSS/Maximum baseband power*Rated power of the downlink input, wherein the maximum baseband power is the maximum power in a preset period. Specifically, the communication parameters of the base station are estimated automatically, including the input power of the downlink PSS, the maximum baseband power, and the rated power of the downlink input. The maximum baseband power can be the maximum power in the preset period, and the preset period can be adjusted according to the present network. The rated power of the downlink PSS is acquired according to the estimated communication parameters of the base station using the formula: Rated power of the downlink PSS = Input power of the downlink PSS/Maximum baseband power*Rated power of the downlink input. An acquired rated power of the downlink PSS can be used as the rated power of the downlink PSS of the AGC. In an embodiment of the present disclosure, the rated power of the downlink PSS that acts as a gain control threshold of an automatic gain controller is acquired, including the following steps: the rated power of the downlink PSS is calculated according to set communication parameters of a base station using a formula: Rated power of the downlink PSS = Rated power of the downlink input − Distributed power ratio of the Long Term Evolution (“LTE”) carrier in which the PSS is located − 10*lg(total subcarriers of the LTE carrier in which the PSS is located/62). Specifically, the communication parameters of the base station are set manually, including the distributed power ratio of the LTE carrier in which the PSS is located and the total subcarriers of the LTE carrier in which the PSS is located. The rated power of the downlink PSS is acquired according to the manually set communication parameters of the base station using the formula: Rated power of the downlink PSS = Rated power of the downlink input − Distributed power ratio of the LTE carrier in which the PSS is located − 10*lg(total subcarriers of the LTE carrier in which the PSS is located/62). An acquired rated power of the downlink PSS can be used as the rated power of the downlink PSS of the AGC. FIG.5is a schematic diagram of a gain control apparatus in an embodiment of the present disclosure.
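Before turning to the apparatus ofFIG.5, the two formulas above can be illustrated with a small, purely hypothetical computation; the numeric values are made up for this sketch (1200 is the total number of subcarriers of a 20 MHz LTE carrier), and the function names are not part of the disclosure.

    import math

    # First formula, with automatically estimated communication parameters of the base
    # station (linear power units in this toy example).
    def rated_pss_estimated(pss_input_power, max_baseband_power, rated_downlink_input):
        return pss_input_power / max_baseband_power * rated_downlink_input

    # Second formula, with manually set communication parameters of the base station
    # (dB/dBm quantities).
    def rated_pss_set(rated_downlink_input_dbm, carrier_power_ratio_db, total_subcarriers):
        return (rated_downlink_input_dbm - carrier_power_ratio_db
                - 10.0 * math.log10(total_subcarriers / 62.0))

    print(rated_pss_estimated(0.02, 1.0, 0.5))   # 0.01 in linear units
    print(rated_pss_set(-10.0, 3.0, 1200))       # about -25.87 dBm for a 20 MHz carrier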
The gain control apparatus can be applied to the system requiring gain adjustment, and the system requiring gain adjustment includes at least the automatic gain controller, and the gain control apparatus includes: a detecting module501, configured for detecting an input power of a PSS in an input signal in real time, specifically, in network communication, the input signal of the system requiring gain adjustment includes the PSS. The detecting module501is configured for obtaining the real-time input power of the PSS by detecting the PSS in the input signal in real time. A physical-layer cell identity (“PCI”) is jointly determined by the PSS and a Secondary Synchronization Signal (“SSS”), i.e., PCI=3*SSS group identity+PSS identity. The PSS occupies 6 resource blocks (RBs), i.e., 72 subcarriers, of system bandwidth in the frequency domain, and the PSS indicates one of three physical-layer identities: 0, 1, or 2; an acquiring module502, configured for acquiring a rated power of a downlink PSS that acts as a gain control threshold of the automatic gain controller, specifically, the automatic gain controller (AGC) can use an effective combination of linear amplification and compression amplification to adjust the gain of the input signal of the system requiring gain adjustment. The rated power of the downlink PSS of the AGC is compared with the input signal information of the AGC. If the input signal information (power or voltage amplitude) does not match the rated power of the downlink PSS, an external attenuator is controlled to increase or decrease the value of gain attenuation to adjust the levels of the input signal of the uplink and downlink. The downlink PSS is defined as the PSS in downlink data. The acquiring module502is configured to calculate the rated power of the downlink PSS according to the acquired communication parameters of the base station, and take the rated power of the downlink PSS as the rated power of the downlink PSS of the AGC; a controlling module503, configured for controlling the automatic gain controller to adjust a value of gain attenuation according to magnitudes of the input power of the PSS and the rated power of the downlink PSS, which is used to adjust an uplink gain and a downlink gain, specifically, the controlling module503is configured to calculate the rated power of the downlink PSS according to the acquired communication parameters of the base station, and take the rated power of the downlink PSS as the rated power of the downlink PSS of the AGC. A magnitude relationship between the input power of the PSS and the rated power of the downlink PSS is obtained to control the value of gain attenuation. The AGC can control the gain of the uplink and downlink of the system requiring gain adjustment by adjusting the value of gain attenuation. The value of gain attenuation can be used to adjust the levels of the input signal of the uplink and downlink by controlling gain devices of the local machine and the remote machine. Specifically, the value of gain attenuation is configured to adjust the level of the input signal of the downlink for gain adjustment. The value of gain attenuation corresponding to the local machine is transmitted to the remote machine synchronously, and the value of gain attenuation can also control the level of the input signal of the uplink of the remote machine, so as to realize the gain adjustment of the uplink, i.e., the value of gain attenuation can enable synchronous linkage control of the downlink gain and the uplink gain in the system.
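A minimal object-oriented sketch of this apparatus, with the detecting module501, the acquiring module502and the controlling module503mapped to three methods, might read as follows; the class, its method names, and the numbers are illustrative assumptions only and do not describe the disclosed implementation.

    # Illustrative sketch of the gain control apparatus: detecting module (501),
    # acquiring module (502) and controlling module (503) mapped to three methods.

    class GainControlApparatus:
        def __init__(self, rated_pss_dbm):
            self.rated_pss_dbm = rated_pss_dbm   # acquired once from base station parameters

        def detect(self, pss_samples_dbm):
            # detecting module 501: real-time input power of the PSS (latest sample here)
            return pss_samples_dbm[-1]

        def acquire(self):
            # acquiring module 502: rated power of the downlink PSS as the AGC threshold
            return self.rated_pss_dbm

        def control(self, pss_power_dbm, attenuation_db, step_db=1.0):
            # controlling module 503: adjust the value of gain attenuation for both links
            if pss_power_dbm > self.acquire():
                attenuation_db += step_db
            return attenuation_db

    apparatus = GainControlApparatus(rated_pss_dbm=-26.0)
    print(apparatus.control(apparatus.detect([-30.0, -24.0]), attenuation_db=2.0))   # 3.0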
In an embodiment of the present disclosure, the controlling module503is further configured to control the automatic gain controller to increase the value of gain attenuation until the adjusted input power of the PSS is detected to be less than the rated power of the downlink PSS when the input power of the PSS is greater than the rated power of the downlink PSS, acquire a power difference between the rated power of the downlink PSS and the input power of the PSS during the preset time, and control the automatic gain controller to reduce the value of gain attenuation when the power difference during the preset time is greater than a power threshold. Specifically, the input power of the PSS and the rated power of the downlink PSS are acquired, and the rated power of the downlink PSS can be used as the gain control threshold of the automatic gain controller. When the input power of the PSS is greater than the rated power of the downlink PSS, the levels of the input signals of the downlink and uplink are automatically reduced according to the input power of the PSS in real-time detection. The automatic gain controller can increase the value of gain attenuation to ensure that the output power of the AGC is kept within the allowable error range of the maximum output power. The value of gain attenuation can be used to adjust the levels of the input signal of the uplink and downlink by controlling gain devices of the local machine and the remote machine. The value of gain attenuation can be accumulated through the counter, and the step of each accumulation can be defined by the engineer according to actual requirements. In the process of adjusting the value of gain attenuation, the adjusted input power of the PSS can be compared with the rated power of the downlink PSS. When the adjusted input power of the PSS is greater than the rated power of the downlink PSS, the value of gain attenuation is increased until the input power of the PSS does not exceed the rated power of the downlink PSS. When the adjusted input power of the PSS is less than or equal to the rated power of the downlink PSS, the input power of the PSS can be detected in real time during the preset time, and the difference between the rated power of the downlink PSS and the input power of the PSS detected in real time during the preset time can be calculated. During the preset time, when the power difference between the rated power of the downlink PSS and the input power of the PSS detected in real time is greater than the power threshold, it indicates that the currently detected input power of the PSS is not within the allowable error range of the input power of the PSS, i.e., the currently detected input power of the PSS is low, and the automatic gain controller needs to be controlled to reduce the value of gain attenuation, increasing the levels of the input signals of the downlink and uplink and then increasing the uplink gain and downlink gain. The above gain control apparatus can be applied to the system requiring gain adjustment, and the system requiring gain adjustment includes at least the automatic gain controller.
The gain control apparatus includes: the detecting module, configured for detecting the input power of the PSS in the input signal in real time; the acquiring module, configured for acquiring the rated power of the downlink PSS that acts as the gain control threshold of the automatic gain controller; and the controlling module, configured for controlling the automatic gain controller to adjust a value of gain attenuation according to magnitudes of the input power of the PSS and the rated power of the downlink PSS, which is used to adjust an uplink gain and a downlink gain. In the present disclosure, the rated power of the downlink PSS is used as the gain control threshold of the automatic gain controller, the magnitude relationship between the input power of the PSS and the rated power of the downlink PSS is detected to control the levels of the input signal of the uplink and downlink, and the uplink gain and the downlink gain are further controlled. The apparatus can prevent frequent switching of signal levels, and linkage control the downlink gain and the uplink gain at the same time to prevent interference with a base station due to excessively high uplink noise floor. The technical features of the above-described embodiments may be combined in any combination. For the sake of brevity of description, not all possible combinations of the technical features in the above embodiments are described. However, as long as there is no contradiction between the combinations of these technical features, all should be considered as within the scope of this disclosure. The above-described embodiments are merely illustrative of several embodiments of the present disclosure, and the description thereof is relatively specific and detailed, but is not to be construed as limiting the scope of the disclosure. It should be noted that a plurality of variations and modifications may be made by those skilled in the art without departing from the spirit and scope of the disclosure. Therefore, the scope of the disclosure should be determined by the appended claims. | 29,232 |
11863219 | DETAILED DESCRIPTION In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. However, it should be apparent to those skilled in the art that the present disclosure may be practiced without such details. In other instances, well known methods, procedures, systems, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present disclosure. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims. The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprise,” “comprises,” and/or “comprising,” “include,” “includes,” and/or “including,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that the terms “system,” “unit,” “module,” and/or “block” used herein are one method to distinguish different components, elements, parts, sections, or assemblies of different levels in ascending order. However, the terms may be replaced by other expressions if they achieve the same purpose. Generally, the word “module,” “unit,” or “block,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions. A module, a unit, or a block described herein may be implemented as software and/or hardware and may be stored in any type of non-transitory computer-readable medium or another storage device. In some embodiments, a software module/unit/block may be compiled and linked into an executable program. It will be appreciated that software modules can be callable from other modules/units/blocks or from themselves, and/or may be invoked in response to detected events or interrupts. Software modules/units/blocks configured for execution on computing devices (e.g., processor210as illustrated inFIG.2) may be provided on a computer readable medium, such as a compact disc, a digital video disc, a flash drive, a magnetic disc, or any other tangible medium, or as a digital download (and can be originally stored in a compressed or installable format that needs installation, decompression, or decryption prior to execution). Such software code may be stored, partially or fully, on a storage device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules/units/blocks may be composed of connected logic components, such as gates and flip-flops, and/or can be composed of programmable units, such as programmable gate arrays or processors.
The modules/units/blocks or computing device functionality described herein may be implemented as software modules/units/blocks, but may be represented in hardware or firmware. In general, the modules/units/blocks described herein refer to logical modules/units/blocks that may be combined with other modules/units/blocks or divided into sub-modules/sub-units/sub-blocks despite their physical organization or storage. It will be understood that when a unit, engine, module or block is referred to as being “on,” “connected to,” or “coupled to,” another unit, engine, module, or block, it may be directly on, connected or coupled to, or communicate with the other unit, engine, module, or block, or an intervening unit, engine, module, or block may be present, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. These and other features, and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, may become more apparent upon consideration of the following description with reference to the accompanying drawings, all of which form a part of this disclosure. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended to limit the scope of the present disclosure. It is understood that the drawings are not to scale. An aspect of the present disclosure relates to a radio frequency power amplifier (RFPA) control device configured to adjust an input signal to implement a nonlinear correction of a radio frequency power amplifier (RFPA). The RFPA control device may be coupled to a radio frequency power amplification module. The RFPA control device may include an input signal processing module, an adjustment module, a signal processing and control module, and a delay module. The adjustment module may be connected to the input signal processing module and the radio frequency power amplification module, respectively. The signal processing and control module may be connected to the input signal processing module and the adjustment module, respectively. The delay module may be disposed between the input signal processing module and the adjustment module. The input signal processing module may be configured to process an input signal into two signals. A first signal may be used for signal detection, and a second signal may be used for signal amplification. The adjustment module may be configured to adjust at least one feature of the second signal. The signal processing and control module may be configured to generate a control signal based on at least one feature of the first signal. The control signal may be configured to control a degree of the adjustment of the at least one feature of the second signal. The delay module may be configured to determine a delay of the second signal such that the second signal and the control signal roughly simultaneously reach the adjustment module. In some embodiments, the RFPA can be used in various applications. Examples of such applications may include broadcasting, satellite communications, cellular communications. In some embodiments, the RFPA may be used in a magnetic resonance imaging (MRI) system. For example, the RFPA may amplify an input signal and generate an amplified output signal. The amplified output signal may be transmitted to RF coils. 
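Returning to the RFPA control device outlined above, its module connections can be summarized in a short, purely illustrative structure; the dictionary below is a reading aid using assumed short names and is not part of the disclosure.

    # Illustrative summary of the module connections described above: the input signal
    # processing module feeds the first signal to the signal processing and control module
    # and the second signal to the delay module; both the delayed second signal and the
    # control signal then reach the adjustment module, which drives the RF power
    # amplification module.

    CONNECTIONS = {
        "input_signal_processing": ["delay_module", "signal_processing_and_control"],
        "delay_module": ["adjustment"],
        "signal_processing_and_control": ["adjustment"],
        "adjustment": ["rf_power_amplification"],
    }

    for source, targets in CONNECTIONS.items():
        for target in targets:
            print(f"{source} -> {target}")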
More descriptions may be found in, e.g.,FIG.1and the descriptions thereof. FIG.1is a schematic diagram illustrating an exemplary MRI system according to some embodiments of the present disclosure. As illustrated, the MRI system100may include an MRI scanner110, a network120, one or more terminals130, a processing device140, and a storage device150. The components in the MRI system100may be connected in various ways. Merely by way of example, as illustrated inFIG.1, the MRI scanner110may be connected to the processing device140through the network120. As another example, the MRI scanner110may be connected to the processing device140directly as indicated by the bi-directional arrow in dotted lines linking the MRI scanner and the processing device140. As a further example, the storage device150may be connected to the processing device140directly or through the network120. As still a further example, one or more terminals130may be connected to the processing device140directly (as indicated by the bi-directional arrow in dotted lines linking the terminal130and the processing device140) or through the network120. The MRI scanner110may scan a subject located within its detection region and generate a plurality of data relating to the subject. In the present disclosure, “subject” and “object” are used interchangeably. The MRI scanner110may include a magnet assembly, a gradient coil assembly, and a radiofrequency (RF) coil assembly (not shown inFIG.1). In some embodiments, the MRI scanner110may be a close-bore scanner or an open-bore scanner. The magnet assembly may generate a first magnetic field (also referred to as a main magnetic field) for polarizing the subject to be scanned. The magnet assembly may include a permanent magnet, a superconducting electromagnet, a resistive electromagnet, etc. In some embodiments, the magnet assembly may further include shim coils for controlling the homogeneity of the main magnetic field. The gradient coil assembly may generate a second magnetic field (also referred to as a gradient magnetic field). The gradient coil assembly may include X-gradient coils, Y-gradient coils, and Z-gradient coils. The gradient coil assembly may generate one or more magnetic field gradient pulses to the main magnetic field in the X direction (Gx), Y direction (Gy), and Z direction (Gz) to encode the spatial information of the subject. In some embodiments, the X direction may be designated as a frequency encoding direction, while the Y direction may be designated as a phase encoding direction. In some embodiments, Gx may be used for frequency encoding or signal readout, generally referred to as frequency encoding gradient or readout gradient. In some embodiments, Gy may be used for phase encoding, generally referred to as phase encoding gradient. In some embodiments, Gz may be used for slice selection for obtaining 2D k-space data. In some embodiments, Gz may be used for phase encoding for obtaining 3D k-space data. The RF coil assembly may include a plurality of RF coils. The RF coils may include one or more RF transmit coils and/or one or more RF receiver coils. The RF transmit coil(s) may transmit RF pulses to the subject. Under the coordinated action of the main magnetic field, the gradient magnetic field, and the RF pulses, MR signals relating to the subject may be generated. The RF receiver coils may receive MR signals from the subject. In some embodiments, one or more RF coils may both transmit RF pulses and receive MR signals at different times. 
In some embodiments, the function, size, type, geometry, position, amount, and/or magnitude of the RF coil(s) may be determined or changed according to one or more specific conditions. For example, according to the difference in function and size, the RF coil(s) may be classified as volume coils and local coils. In some embodiments, the MRI scanner110may also include a radio frequency power amplifier (RFPA). The RFPA may receive a series of pulses generated by an external RF source as the input signals, and generate a series of amplified pulses as the output signals. The output signals are used to drive RF coils. In some embodiments, the performance of the RFPA may affect the quality of image(s) generated by the MRI system. To ensure that the RFPA has a linear amplification ability, a RFPA control device is used to adjust the input signal before it is amplified by the RFPA. Details regarding the RFPA control device may be found elsewhere in the present disclosure (e.g.,FIGS.4A-6Cand the descriptions thereof). The network120may facilitate exchange of information and/or data. In some embodiments, one or more components of the MRI system100(e.g., the MRI scanner110, the terminal130, the processing device140, or the storage device150) may send information and/or data to another component(s) in the MRI system100via the network120. For example, the processing device140may cause, via the network120, an input signal processing module to process an input signal into at least two signals. In some embodiments, the network120may be any type of wired or wireless network, or a combination thereof. The network120may be and/or include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN), a wide area network (WAN), etc.), a wired network (e.g., an Ethernet network), a wireless network (e.g., an 802.11 network, a Wi-Fi network), a cellular network (e.g., a Long Term Evolution (LTE) network), a frame relay network, a virtual private network (“VPN”), a satellite network, a telephone network, routers, hubs, switches, server computers, and/or any combination thereof. Merely by way of example, the network120may include a cable network, a wireline network, an optical fiber network, a telecommunications network, an intranet, an Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a wide area network (WAN), a public telephone switched network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near field communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network120may include one or more network access points. For example, the network120may include wired or wireless network access points such as base stations and/or internet exchange points through which one or more components of the MRI system100may be connected to the network120to exchange data and/or information. The terminal130may include a mobile device130-1, a tablet computer130-2, a laptop computer130-3, or the like, or any combination thereof. In some embodiments, the mobile device130-1may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof.
In some embodiments, the smart home device may include a smart lighting device, a control device of an intelligent electrical apparatus, a smart monitoring device, a smart television, a smart video camera, an interphone, or the like, or any combination thereof. In some embodiments, the wearable device may include a bracelet, footgear, eyeglasses, a helmet, a watch, clothing, a backpack, an accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant (PDA), a gaming device, a navigation device, a point of sale (POS) device, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, a virtual reality glass, a virtual reality patch, an augmented reality helmet, an augmented reality glass, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and/or the augmented reality device may include a Google Glass, an Oculus Rift, a HoloLens, a Gear VR, etc. In some embodiments, the terminal130may remotely operate the MRI scanner110. In some embodiments, the terminal130may operate the MRI scanner110via a wireless connection. In some embodiments, the terminal130may receive information and/or instructions inputted by a user, and send the received information and/or instructions to the MRI scanner110or to the processing device140via the network120. In some embodiments, the terminal130may receive data and/or information from the processing device140. In some embodiments, the terminal130may be part of the processing device140. In some embodiments, the terminal130may be omitted. In some embodiments, the processing device140may process data obtained from the MRI scanner110, the terminal130, or the storage device150. For example, the processing device140may cause an adjustment module to adjust at least one feature of a signal (e.g., a second signal) based on a control signal. The processing device140may be a central processing unit (CPU), a digital signal processor (DSP), a system on a chip (SoC), a microcontroller unit (MCU), or the like, or any combination thereof. In some embodiments, the processing device140may be a single server or a server group. The server group may be centralized or distributed. In some embodiments, the processing device140may be local or remote. For example, the processing device140may access information and/or data stored in the MRI scanner110, the terminal130, and/or the storage device150via the network120. As another example, the processing device140may be directly connected to the MRI scanner110, the terminal130, and/or the storage device150, to access stored information and/or data. In some embodiments, the processing device140may be implemented on a cloud platform. Merely by way of example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof. In some embodiments, the processing device140may be implemented on a computing device200having one or more components illustrated inFIG.2in the present disclosure. The storage device150may store data and/or instructions. In some embodiments, the storage device150may store data obtained from the terminal130and/or the processing device140. 
In some embodiments, the storage device150may store data and/or instructions that the processing device140may execute or use to perform exemplary methods described in the present disclosure. In some embodiments, the storage device150may include a mass storage, removable storage, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc. Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. Exemplary volatile read-and-write memory may include a random-access memory (RAM). Exemplary RAM may include a dynamic RAM (DRAM), a double date rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), and a zero-capacitor RAM (Z-RAM), etc. Exemplary ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (PEROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), and a digital versatile disk ROM, etc. In some embodiments, the storage device150may be implemented on a cloud platform. Merely by way of example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof. In some embodiments, the storage device150may be connected to the network120to communicate with one or more components of the MRI system100(e.g., the terminal130, the processing device140). One or more components of the MRI system100may access the data or instructions stored in the storage device150via the network120. In some embodiments, the storage device150may be directly connected to or communicate with one or more components of the MRI system100(e.g., the terminal130, the processing device140). In some embodiments, the storage device150may be part of the processing device140. FIG.2is a schematic diagram illustrating exemplary hardware and/or software components of a computing device on which the processing device140may be implemented according to some embodiments of the present disclosure. As illustrated inFIG.2, the computing device200may include a processor210, a storage220, an input/output (I/O)230, and a communication port240. The processor210may execute computer instructions (program code) and, when executing the instructions, cause the processing device140to perform functions of the processing device140in accordance with techniques described herein. The computer instructions may include, for example, routines, programs, objects, components, signals, data structures, procedures, modules, and functions, which perform particular functions described herein. In some embodiments, the processor210may process data and/or images obtained from the MRI scanner110, the terminal130, the storage device150, and/or any other component of the MRI system100. For example, the processor210may cause an input signal processing module to process an input signal into at least two signals. A first signal of the at least two signals may be used for signal detection, and a second signal of the at least two signals may be used for signal amplification. 
In some embodiments, the processor210may include one or more hardware processors, such as a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application specific integrated circuits (ASICs), an application-specific instruction-set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device (PLD), any circuit or processor capable of executing one or more functions, or the like, or any combinations thereof. Merely for illustration, only one processor is described in the computing device200. However, it should be noted that the computing device200in the present disclosure may also include multiple processors. Thus operations and/or method steps that are performed by one processor as described in the present disclosure may also be jointly or separately performed by the multiple processors. For example, if in the present disclosure the processor of the computing device200executes both process A and process B, it should be understood that process A and process B may also be performed by two or more different processors jointly or separately in the computing device200(e.g., a first processor executes process A and a second processor executes process B, or the first and second processors jointly execute processes A and B). The storage220may store data/information obtained from the MRI scanner110, the terminal130, the storage device150, or any other component of the MRI system100. In some embodiments, the storage220may include a mass storage device, removable storage device, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. For example, the mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc. The removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. The volatile read-and-write memory may include a random access memory (RAM). The RAM may include a dynamic RAM (DRAM), a double date rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), and a zero-capacitor RAM (Z-RAM), etc. The ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (PEROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), and a digital versatile disk ROM, etc. In some embodiments, the storage220may store one or more programs and/or instructions to perform exemplary methods described in the present disclosure. The I/O230may input or output signals, data, and/or information. In some embodiments, the I/O230may enable a user interaction with the processing device140. In some embodiments, the I/O230may include an input device and an output device. Exemplary input devices may include a keyboard, a mouse, a touch screen, a microphone, or the like, or a combination thereof. Exemplary output devices may include a display device, a loudspeaker, a printer, a projector, or the like, or a combination thereof. Exemplary display devices may include a liquid crystal display (LCD), a light-emitting diode (LED)-based display, a flat panel display, a curved screen, a television device, a cathode ray tube (CRT), or the like, or a combination thereof. 
The communication port240may be connected to a network (e.g., the network120) to facilitate data communications. The communication port240may establish connections between the processing device140and the MRI scanner110, the terminal130, or the storage device150. The connection may be a wired connection, a wireless connection, or a combination of both that enables data transmission and reception. The wired connection may include an electrical cable, an optical cable, a telephone wire, or the like, or any combination thereof. The wireless connection may include Bluetooth, Wi-Fi, WiMAX, WLAN, ZigBee, mobile network (e.g., 3G, 4G, 5G, etc.), or the like, or a combination thereof. In some embodiments, the communication port240may be a standardized communication port, such as RS232, RS485, etc. In some embodiments, the communication port240may be a specially designed communication port. For example, the communication port240may be designed in accordance with the digital imaging and communications in medicine (DICOM) protocol. FIG.3is a schematic diagram illustrating exemplary hardware and/or software components of a mobile device according to some embodiments of the present disclosure. As illustrated inFIG.3, the mobile device300may include a communication platform310, a display320, a graphics processing unit (GPU)330, a central processing unit (CPU)340, an I/O350, a memory360, and a storage390. In some embodiments, any other suitable component, including but not limited to a system bus or a controller (not shown), may also be included in the mobile device300. In some embodiments, a mobile operating system370(e.g., iOS, Android, Windows Phone, etc.) and one or more applications380may be loaded into the memory360from the storage390in order to be executed by the CPU340. The applications380may include a browser or any other suitable mobile apps for receiving and rendering information relating to image processing or other information from the processing device140. User interactions with the information stream may be achieved via the I/O350and provided to the processing device140and/or other components of the MRI system100via the network120. To implement various modules, units, and their functionalities described in the present disclosure, computer hardware platforms may be used as the hardware platform(s) for one or more of the elements described herein. The hardware elements, operating systems and programming languages of such computers are conventional in nature, and it is presumed that those skilled in the art are adequately familiar therewith to adapt those technologies to the RFPA control device as described herein. A computer with user interface elements may be used to implement a personal computer (PC) or another type of work station or terminal device, although a computer may also act as a server if appropriately programmed. It is believed that those skilled in the art are familiar with the structure, programming and general operation of such computer equipment and as a result, the drawings should be self-explanatory. FIG.4Ais a schematic diagram illustrating an exemplary radio frequency power amplifier (RFPA) control device405according to some embodiments of the present disclosure.FIG.4Bis a schematic diagram illustrating an exemplary radio frequency power amplifier (RFPA)400according to some embodiments of the present disclosure. 
The RFPA400may include a RFPA control device (e.g., the RFPA control device405as illustrated inFIG.4A), a radio frequency power amplification module450, and a load460. The RFPA control device405may be configured to adjust an input signal to implement a nonlinear correction of the radio frequency power amplification module450. The radio frequency power amplification module450may be configured to amplify the adjusted input signal and generate an output signal. In some embodiments, an amplification function (which is nonlinear) of the radio frequency power amplification module450may be pre-determined via tests. Then a correction function may be determined based on the amplification function. The RFPA control device405may adjust the input signal according to the correction function. Then the radio frequency power amplification module450may amplify the adjusted input signal to generate the output signal that satisfies the user's demand. The output signal may be transmitted to the load460(e.g., RF coils). Unless otherwise stated, like reference numerals inFIGS.4A and4Brefer to like components having the same or similar functions. As shown inFIGS.4A and4B, the RFPA control device405may include an input signal processing module410, an adjustment module430, and a signal processing and control module440. The adjustment module430may be connected to the input signal processing module410and the radio frequency power amplification module450, respectively. Specifically, the first end of the adjustment module430may be connected to the input signal processing module410, and the second end of the adjustment module430may be connected to the radio frequency power amplification module450. The signal processing and control module440may be connected to the input signal processing module410and the adjustment module430, respectively. Specifically, the first end of the signal processing and control module440may be connected to the input signal processing module410, and the second end of the signal processing and control module440may be connected to the adjustment module430. The RFPA control device405may also include a delay module420. The delay module420may be disposed between the input signal processing module410and the adjustment module430. The input signal processing module410may be configured to process an input signal into at least two signals. A first signal (also referred to as first input signal) of the at least two signals may be used for signal detection, and a second signal (also referred to as second input signal) of the at least two signals may be used for signal amplification. In some embodiments, the input signal processing module410may be a power divider, a coupler, or the like, or a combination thereof. The power divider may divide the input signal into two equal or unequal signals (i.e., the first signal and the second signal). The coupler may extract a portion of the input signal as the first signal, and the remaining signal as the second signal. In some embodiments, the first signal may be transmitted to the signal processing and control module440. The second signal may be transmitted to the delay module420. The signal processing and control module440may be configured to generate a control signal based on at least one feature of the first signal. The control signal may be used to control a degree of the adjustment of the at least one feature of the second signal.
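The signal flow just described can be made concrete with a toy numeric model. Everything below (the 10% coupling ratio, the sample value, and the simple gain rule standing in for the control signal) is an assumption chosen only for illustration; none of it comes from the disclosure.

    # Toy model of the RFPA control device signal flow: the input signal processing module
    # splits the input into a detection branch (first signal) and an amplification branch
    # (second signal); the second signal is then adjusted per the control signal before
    # being handed to the radio frequency power amplification module.

    COUPLING = 0.1   # assumed coupler ratio: 10% of the input goes to signal detection

    def split_input(sample):
        first = sample * COUPLING             # first signal, used for signal detection
        second = sample * (1.0 - COUPLING)    # second signal, used for signal amplification
        return first, second

    def adjust(second_sample, control_gain):
        # adjustment module: scale the amplitude of the second signal per the control signal
        return second_sample * control_gain

    first, second = split_input(0.8)
    control_gain = 0.9                        # stand-in for the control signal derived from `first`
    print(adjust(second, control_gain))       # adjusted second signal sent to the power amplifier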
In some embodiments, the at least one feature of the first signal may include amplitude, phase, or the like, or any combination thereof. In some embodiments, the signal processing and control module440may include one or more sub-modules (such as an input signal detection sub-module, a signal processing and control sub-module, an output signal detection sub-module) configured to determine the control signal. More descriptions of the sub-modules may be found elsewhere in the present disclosure (e.g.,FIGS.5A,5B,6A, and6B, and the descriptions thereof). The adjustment module430may be configured to adjust at least one feature of the second signal. In some embodiments, the at least one feature of the second signal may include amplitude, phase, or the like, or any combination thereof. The adjustment module430may include a controllable attenuator, a phase shifter, or the like. The controllable attenuator may be used to adjust the amplitude of the second signal. The phase shifter may be used to adjust the phase of the second signal. In some embodiments, the adjustment module430may transmit the adjusted second signal to the radio frequency power amplification module450for amplification. For example, the radio frequency power amplification module450may amplify the amplitude of the adjusted second signal and/or adjust the phase of the second signal to generate the output signal. As shown inFIG.4B, the first signal may be transmitted to the signal processing and control module440and processed by the signal processing and control module440to generate the control signal. The control signal may then be transmitted to the adjustment module430. In some embodiments, the generation of the control signal may take a certain amount of time, while the second signal may be directly transmitted to the adjustment module430. Thus, the moment that the control signal reaches the adjustment module430may be significantly later than the moment that the second signal reaches the adjustment module430, which may result in poor nonlinear correction effects of the radio frequency power amplification module450(especially for fast response signals), and further may cause imaging artifacts in an MRI system (e.g., the MRI system100). Thus, the delay module420may be disposed between the input signal processing module410and the adjustment module430to delay the moment that the second signal reaches the adjustment module430. The delay module420may be configured to determine a delay of the second signal such that the second signal and the control signal may roughly simultaneously reach the adjustment module430. As used herein, “roughly simultaneously reaching” means that a time difference between the moment that the second signal reaches the adjustment module430and the moment that the control signal reaches the adjustment module430is less than several nanoseconds (e.g., 10 nanoseconds, 20 nanoseconds, 50 nanoseconds). The delay module420is configured to delay the time at which the to-be-amplified input signal (i.e., the second signal) reaches the adjustment module430. For example, the delay module420may adjust a time difference between the moment that the second signal reaches the adjustment module430and the moment that the control signal reaches the adjustment module430. In some embodiments, the time difference may be 0. In this case, the second signal and the control signal may simultaneously reach the adjustment module430. Alternatively, the time difference may be several nanoseconds, several microseconds, or the like.
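A short sketch of why the delay is chosen to match the control-path latency follows, using sample indices as a stand-in for time; the two-sample latency and the sample values are assumptions for illustration only and do not reflect the actual timing of the device.

    # Illustrative alignment check: if generating the control signal takes `control_latency`
    # samples, delaying the second signal by the same amount makes the two arrive at the
    # adjustment module at (roughly) the same sample index.

    def align(second_signal, control_latency):
        # delay module: prepend `control_latency` zero samples to the amplification branch
        return [0.0] * control_latency + list(second_signal)

    second = [0.5, 0.8, 0.3]
    control_latency = 2                       # assumed control-path processing delay, in samples
    delayed_second = align(second, control_latency)
    # Sample k of the delayed second signal now lines up with the control value computed
    # from sample k of the first signal, i.e. the residual time difference is ~0.
    print(delayed_second)                     # [0.0, 0.0, 0.5, 0.8, 0.3]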
In this case, the second signal and the control signal may be regarded as approximately simultaneously reaching the adjustment module430. The delay module420may include an LC filter, a surface acoustic wave filter (SAWF), a delay line, or the like, or any combination thereof. The delay time may be determined when the delay module420is fabricated. In some embodiments, a plurality of delay modules with different delay times may be fabricated according to actual demands. In some embodiments of the present disclosure, the delay module420may be disposed between the input signal processing module410and the adjustment module430to delay the transmission of the second signal, which may counteract the time difference between the moment that the control signal reaches the adjustment module430and the moment that the second signal reaches the adjustment module430, thus achieving good nonlinear correction effects and further avoiding imaging artifacts in an MRI system (e.g., the MRI system100). It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, the RFPA may be applied to an MRI system (e.g., the MRI system100). Alternatively or additionally, the RFPA may be used in various applications, such as broadcasting, satellite communications, cellular communications, or the like. FIG.5Ais a schematic diagram illustrating an exemplary radio frequency power amplifier (RFPA) control device505according to some embodiments of the present disclosure.FIG.5Bis a schematic diagram illustrating an exemplary radio frequency power amplifier (RFPA)500according to some embodiments of the present disclosure. The RFPA500may include a RFPA control device (e.g., the RFPA control device505as illustrated inFIG.5A), a radio frequency power amplification module550, and a load560. The RFPA control device505may be configured to adjust an input signal to implement a nonlinear correction of the radio frequency power amplification module550. The radio frequency power amplification module550may be configured to amplify the adjusted input signal and generate an output signal. The output signal may be transmitted to the load560(e.g., RF coils). Unless otherwise stated, like reference numerals inFIGS.5A and5Brefer to like components having the same or similar functions. As shown inFIGS.5A and5B, the RFPA control device505may include an input signal processing module510, a delay module520, an adjustment module530, and a signal processing and control module540. The connection of the modules of the RFPA control device505may be the same as that of the modules of the RFPA control device405. The functions of the input signal processing module510, the delay module520, the adjustment module530, and the signal processing and control module540may be the same as or similar to the functions of the input signal processing module410, the delay module420, the adjustment module430, and the signal processing and control module440, and the relevant descriptions are not repeated herein. As shown inFIGS.5A and5B, the signal processing and control module540may further include an input signal detection sub-module542and a signal processing and control sub-module544.
The input signal detection sub-module542may be connected to the input signal processing module510. The signal processing and control sub-module544may include a first end544aand a second end544b. The first end544amay be connected to the input signal detection sub-module542. The second end544bmay be connected to the adjustment module530. The input signal detection sub-module542may be configured to detect the at least one feature of the first signal. The at least one feature of the first signal may include amplitude, phase, or the like. In some embodiments, the at least one feature may only include amplitude. Alternatively, the at least one feature may only include phase. Alternatively, the at least one feature may include amplitude and phase. The input signal detection sub-module542may detect the amplitude and/or phase of the first signal, and transmit the detected amplitude and/or detected phase of the first signal into the signal processing and control sub-module544. In some embodiments, the input signal detection sub-module542may be a device configured to detect the at least one feature of the first signal. The input signal detection sub-module542may include a detector, a diode, a phase discriminator, or the like, or any combination thereof. For example, the detector may be used to detect the amplitude of the first signal. The phase discriminator may be used to detect the phase of the first signal. In some embodiments, the input signal detection sub-module542may be a signal detection circuit. For example, the at least one feature of the first signal may include the amplitude of the first signal. The signal detection circuit may detect the at least one feature (i.e., the amplitude) of the first signal via envelope demodulation. As another example, the at least one feature of the first signal may include the phase of the first signal. The signal detection circuit may detect the at least one feature (i.e., the phase) of the first signal via direct radio frequency sampling. The signal processing and control sub-module544may be configured to generate the control signal based on the at least one feature of the first signal. The control signal may be used to control the degree of the adjustment of the at least one feature of the second signal. In some embodiments, the signal processing and control sub-module544may determine the amplitude of the input signal based on the amplitude of the first signal. The signal processing and control sub-module544may determine the degree of the adjustment of the amplitude of the second signal based on the amplitude of the input signal and the user's demand (e.g., a required amplitude of the output signal outputted by the radio frequency power amplification module550). For example, the amplification function of the radio frequency power amplification module550may be predetermined. To obtain the required amplitude of the output signal, a correction function may be determined based on the amplification function. The degree of the adjustment of the amplitude of the second signal may be determined based on the amplitude of the input signal and the correction function. In some embodiments, the signal processing and control sub-module544may determine the degree of the adjustment of the phase of the second signal based on the degree of the adjustment of the amplitude of the second signal.
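Merely by way of illustration, the determination of the degree of the adjustment of the amplitude from a pre-determined amplification function and a corresponding correction function may be sketched as below. The memoryless, compressive gain curve and the numeric values are illustrative assumptions; the disclosure does not prescribe a particular amplification function.

import numpy as np

# Hypothetical measured amplification function of the radio frequency power amplification
# module550: output amplitude y as a function of input amplitude x (illustrative values).
x_meas = np.linspace(0.0, 1.0, 101)
y_meas = 10.0 * x_meas / (1.0 + 2.0 * x_meas)   # gain compresses at high drive levels

def correction(required_output_amplitude):
    # Correction function: invert the measured amplification function by interpolation,
    # returning the input amplitude that, after amplification, yields the required output.
    return np.interp(required_output_amplitude, y_meas, x_meas)

def amplitude_adjustment(input_amplitude, required_output_amplitude):
    # Degree of the adjustment of the amplitude: the scale factor applied to the second
    # signal before amplification.
    return correction(required_output_amplitude) / input_amplitude

# Example: the detected amplitude of the input is 0.5 and the required output amplitude is 3.0.
g = amplitude_adjustment(0.5, 3.0)
print(g)                                        # ~1.5
print(np.interp(0.5 * g, x_meas, y_meas))       # ~3.0, i.e., the required output amplitude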
Thus, the signal processing and control sub-module544may generate the control signal based on the degree of adjustment of the amplitude and/or phase of the second signal. The control signal may be applied to the adjustment module530, and configured to control the degree of adjustment of the amplitude and/or phase of the second signal. The amplitude and/or phase of the second signal may be adjusted by the adjustment module530, and further be amplified by the radio frequency power amplification module550to generate a required output signal. In the present disclosure, before the radio frequency power amplification module550amplifies the second signal (e.g., before amplifying the amplitude and/or phase of the second signal), the adjustment module530may adjust and/or correct (e.g., nonlinearly correct) the second signal (e.g., the amplitude and/or phase of the second signal) to ensure that the output signal (e.g., the amplitude and/or phase of the output signal) satisfies the user's demands. In some embodiments, the second signal is relatively small before the amplification, and thus, the adjustment and correction may be easier to achieve. In some embodiments, the delay module520may adjust a time difference between the moment that the second signal reaches the adjustment module530and the moment that the control signal reaches the adjustment module530. In some embodiments, the time difference may be 0, several nanoseconds (e.g., 5 nanoseconds, 10 nanoseconds), several microseconds (e.g., 0.01 microsecond, 0.05 microseconds), or the like. When the time difference is 0, the control signal and the second signal may simultaneously reach the adjustment module530. When the time difference is 0.05 microseconds, the control signal and the second signal may approximately simultaneously reach the adjustment module530, and the imaging stability (e.g., whether there is an image artifact) of the MRI system100may not be affected by the slight time difference. In the present disclosure, by introducing the delay module520into the RFPA control device505, the time difference by which the control signal lags behind the second signal may be compensated for, thereby avoiding poor imaging stability (or even image artifacts) of the MRI system100. In some embodiments, the delay time of the delay module520may be a predetermined value to delay the moment that the second signal reaches the adjustment module530. The delay of the second signal may be greater than 0.2 microseconds. Specifically, the delay of the second signal may be 0.4 microseconds, 0.5 microseconds, 0.7 microseconds, 1 microsecond, or the like. In some embodiments, a total transmission time from a moment that the first signal is transmitted to the signal processing and control module540to a moment that the control signal is transmitted to the adjustment module530may be composed of four transmission times, that is, a first transmission time, a second transmission time, a third transmission time, and a fourth transmission time. In some embodiments, the first transmission time may be a time that the input signal detection sub-module542detects the at least one feature of the first signal. The second transmission time may be a time that the signal processing and control sub-module544processes the first signal and generates the control signal. The third transmission time may be a time of transmitting the control signal to the adjustment module530.
The fourth transmission time may be an adjustable delay time imposed by an adjustable delay unit of the signal processing and control sub-module. In some embodiments, the transmission link of the second signal (i.e., the input signal processing module510—the delay module520—the adjustment module530—the radio frequency power amplification module550) may be a radio frequency (RF) transmission link. The delay module520may be an LC filter, a surface acoustic wave filter (SAWF), a delay line, or the like, or any combination thereof. The delay time may be determined when the delay module520is fabricated. Due to the radio frequency transmission link, the delay module520may provide an RF delay, which is a fixed value and cannot be changed in real-time. Therefore, the delay has to be preset, resulting in an inaccurate delay, which may further lead to the time difference between the moment that the second signal reaches the adjustment module530and the moment that the control signal reaches the adjustment module530being undesirably large or small. In some embodiments, the total transmission time from a moment that the first signal is transmitted to the signal processing and control module540to a moment that the control signal is transmitted to the adjustment module530may vary due to the situations described above. For example, for first signals with different amplitudes or phases, the second transmission times (i.e., a time that the signal processing and control sub-module544processes the first signal and generates the control signal) may be different. In order to ensure that the control signal and the second signal roughly simultaneously reach the adjustment module530, the signal processing and control sub-module544may include the adjustable delay unit. In some embodiments, the transmission link of the control signal may belong to an analog domain or a digital domain. A delay in the analog domain or the digital domain may be adjustable. The adjustable delay unit may be configured to impose an adjustable delay on the control signal to adjust the moment that the control signal reaches the adjustment module530. In some embodiments, the adjustable delay imposed on the control signal may be the fourth transmission time. The adjustable delay unit may cooperate with the delay module520to ensure that the control signal and the second signal roughly simultaneously reach the adjustment module530. In some embodiments, the adjustable delay unit may adjust the fourth transmission time to change a time difference between the total transmission time and the delay of the second signal such that the second signal and the control signal roughly simultaneously reach the adjustment module. For example, the time difference between the total transmission time and the delay of the second signal may be expressed as Equation (1) below: Ts=T−T0=T−(T1+T2+T3+T4), (1) wherein Ts refers to the time difference between the total transmission time and the delay of the second signal (or the time difference between the moment that the second signal reaches the adjustment module530and the moment that the control signal reaches the adjustment module530); T refers to the delay of the second signal; T0 refers to the total transmission time; T1 refers to the first transmission time; T2 refers to the second transmission time; T3 refers to the third transmission time; and T4 refers to the fourth transmission time.
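Merely by way of illustration, Equation (1) may be checked numerically as below; the adjustable delay unit chooses T4 so that Ts approaches 0. All values are in microseconds and are illustrative assumptions only.

# Illustrative check of Equation (1): Ts = T - T0 = T - (T1 + T2 + T3 + T4).
T = 0.5     # delay of the second signal provided by the delay module (fixed RF delay)
T1 = 0.05   # first transmission time: detecting the feature(s) of the first signal
T2 = 0.30   # second transmission time: processing the first signal, generating the control signal
T3 = 0.05   # third transmission time: transmitting the control signal to the adjustment module
T4 = T - (T1 + T2 + T3)          # fourth transmission time chosen by the adjustable delay unit
Ts = T - (T1 + T2 + T3 + T4)
print(T4, Ts)                    # T4 ~ 0.1, Ts ~ 0 (roughly simultaneous arrival)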
Merely by way of example, if the delay of the second signal is equal to the total transmission time (i.e., a sum of the first transmission time, the second transmission time, the third transmission time, and the fourth transmission time), the time difference between the moment that the second signal reaches the adjustment module530and the moment that the control signal reaches the adjustment module530may be 0, that is, Ts=0. In the present disclosure, by adjusting the fourth transmission time by the adjustable delay unit, the time difference between the moment that the second signal reaches the adjustment module530and the moment that the control signal reaches the adjustment module530may be changed such that the second signal and the control signal may roughly simultaneously reach the adjustment module530, which may improve the imaging stability of the MRI system100and avoid image artifacts. FIG.6Ais a schematic diagram illustrating an exemplary radio frequency power amplifier (RFPA) control device605according to some embodiments of the present disclosure.FIGS.6B and6Care schematic diagrams illustrating an exemplary radio frequency power amplifier (RFPA) according to some embodiments of the present disclosure. The RFPA600as illustrated inFIG.6Bmay include a RFPA control device (e.g., the RFPA control device605as illustrated inFIG.6A), a radio frequency power amplification module650, and a load660. The RFPA600′ as illustrated inFIG.6Cmay include a RFPA control device (e.g., the RFPA control device605as illustrated inFIG.6A), and a radio frequency power amplification module650. The RFPA control device605may be configured to adjust an input signal to implement a nonlinear correction of the radio frequency power amplification module650. The radio frequency power amplification module650may be configured to amplify the adjusted input signal and generate an output signal. The output signal may be transmitted to the load660(e.g., RF coils). Unless otherwise stated, like reference numerals inFIGS.6A-6Crefer to like components having the same or similar functions. As shown inFIGS.6A-6C, the RFPA control device605may include an input signal processing module610, a delay module620, an adjustment module630, and a signal processing and control module640. The connection of the modules of the RFPA control device605may be the same as that of the modules of the RFPA control device405. The functions of the input signal processing module610, the delay module620, the adjustment module630, and the signal processing and control module640may be the same as or similar to the functions of the input signal processing module410, the delay module420, the adjustment module430, and the signal processing and control module440, and the descriptions are not repeated herein. As described in connection withFIGS.5A and5B, the signal processing and control module640may detect the at least one feature of the first signal, and generate the control signal based on the feature of the first signal and the feature of the output radio frequency signal desired by the user (e.g., the feature of a required output signal). The adjustment module630may adjust the at least one feature of the second signal based on the control signal, and the radio frequency power amplification module650may further amplify the at least one feature of the second signal to generate the output signal.
However, in some cases, the actual output signal generated by the radio frequency power amplification module650may be different from the required output signal (e.g., a small error exists). That is, the detection of the first signal alone may not achieve the accurate nonlinear correction of the radio frequency power amplification module650. In some embodiments, to eliminate the small error between the actual output signal and the required output signal, the output radio frequency signal (i.e., the actual output signal) needs to be processed further by an output signal processing module670, and accordingly, the control signal may be adjusted based on the processed output signal. In some embodiments, the RFPA control device605may further include the output signal processing module670. The output signal processing module670may be connected to the radio frequency power amplification module650. The output signal processing module670may be configured to process the output signal (i.e., the actual output signal) into at least two output signals. A first output signal of the at least two output signals may be used for signal detection, and a second output signal of the at least two output signals may be transmitted to the load660. In some embodiments, the first output signal may be used to adjust the control signal. As shown inFIG.6A, the signal processing and control module640may include an input signal detection sub-module642, a signal processing and control sub-module644, and an output signal detection sub-module646. The input signal detection sub-module642may be connected to the input signal processing module610. The output signal detection sub-module646may be connected to the output signal processing module670. The signal processing and control sub-module644may include a first end644a, a second end644b, and a third end644c. The first end644amay be connected to the input signal detection sub-module642. The second end644bmay be connected to the adjustment module630. The third end644cmay be connected to the output signal detection sub-module646. In some embodiments, the input signal detection sub-module642may be configured to detect the at least one feature of the first signal (also referred to as first input signal) and transmit the first signal to the signal processing and control sub-module644. The output signal detection sub-module646may be configured to detect at least one feature of the first output signal and transmit the first output signal to the signal processing and control sub-module644. The signal processing and control sub-module644may determine the control signal based on the at least one feature of the first signal and the at least one feature of the first output signal. For example, the signal processing and control sub-module644may adjust the control signal based on an error between the at least one feature of the first signal and the at least one feature of the first output signal. The control signal may be used to control the degree of the adjustment of the at least one feature of the second signal. The at least one feature of the first signal and/or the first output signal may include amplitude, phase, or the like, or any combination thereof. In some embodiments, the signal processing and control sub-module644may determine the amplitude of the input signal based on the amplitude of the first signal. The signal processing and control sub-module644may determine the amplitude of the output signal based on the amplitude of the first output signal. 
The signal processing and control sub-module644may determine the degree of the adjustment of the amplitude of the second signal based on the amplitude of the input signal, the amplitude of the actual output signal, and the user's demand (e.g., the amplitude of the required output signal). The signal processing and control sub-module644may determine the degree of the adjustment of the phase of the second signal based on the degree of the adjustment of the amplitude of the second signal. Thus, the signal processing and control sub-module644may generate the control signal based on the degree of adjustment of the amplitude and/or phase of the second signal. When the RFPA control device605works, the signal processing and control sub-module644may continually adjust the control signal based on the input signal (e.g., the first signal), the actual output signal (e.g., the first output signal), and the user's demand (e.g., the output signal required by the user). The signal processing and control sub-module644may further continually adjust the second signal based on the control signal until the required output signal is obtained. In some embodiments of the present disclosure, the control signal may be determined or adjusted based on the feature(s) of the input signal and the feature(s) of the actual output signal. The second signal may be adjusted based on the control signal, and further be amplified by the radio frequency power amplification module650to generate the required output signal. The above process may form a closed-loop control, which may implement the precise nonlinear correction of the radio frequency power amplification module650. FIG.7is a flowchart illustrating an exemplary process for adjusting a second signal according to some embodiments of the present disclosure. For illustration purposes only, the processing device140may be described as a subject to perform the process700. However, one of ordinary skill in the art would understand that the process700may also be performed by other entities. In some embodiments, one or more operations of process700may be implemented in the MRI system100illustrated inFIG.1. For example, the process700may be stored in the storage device150and/or the storage220in the form of instructions (e.g., an application), and invoked and/or executed by one or more components of the RFPA control device405(or505,605) as illustrated inFIG.4A(orFIG.5A,FIG.6A). The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process700may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process700are performed, as illustrated inFIG.7and described below, is not intended to be limiting. In701, the processing device140(e.g., the input signal processing module410, the input signal processing module510, the input signal processing module610) may process an input signal into at least two signals. The processing device140may divide the input signal into two equal signals, or two unequal signals. A first signal of the at least two signals may be transmitted to a signal processing and control module (e.g., the signal processing and control module440, the signal processing and control module540, the signal processing and control module640) and used for signal detection.
A second signal of the at least two signals may be transmitted to a delay module (e.g., the delay module420, the delay module520, the delay module620) and used for signal amplification. In703, the processing device140(e.g., the signal processing and control module440, the signal processing and control module540, the signal processing and control module640) may generate a control signal based on at least one feature of the first signal. The control signal may be used to control a degree of the adjustment of at least one feature of the second signal. In some embodiments, the processing device140may detect the at least one feature of the first signal. The at least one feature of the first signal may include amplitude, phase, or the like, or any combination thereof. In some embodiments, the processing device140may detect the amplitude of the first signal via envelope demodulation. The processing device140may detect the phase of the first signal via direct radio frequency sampling. The processing device140may generate the control signal based on the at least one feature of the first signal. Specifically, the processing device140may determine the amplitude of the input signal based on the amplitude of the first signal. The processing device140may determine the degree of the adjustment of the amplitude of the second signal based on the amplitude of the input signal and the amplitude of the output signal desired by the user (e.g., a required output signal). The processing device140may also determine the degree of the adjustment of the phase of the second signal based on the degree of the adjustment of the amplitude of the second signal. Thus, the processing device140may determine the control signal based on the degree of the adjustment of the amplitude and/or phase of the second signal. In705, the processing device140(e.g., the delay module420, the delay module520, the delay module620) may determine a delay of the second signal such that the second signal and the control signal roughly simultaneously reach an adjustment module (e.g., the adjustment module430, the adjustment module530, the adjustment module630). In some embodiments, the processing device140may adjust a time difference between the moment that the second signal reaches the adjustment module and the moment that the control signal reaches the adjustment module. In some embodiments, the time difference may be 0, several nanoseconds (e.g., 2 nanoseconds, 5 nanoseconds, 10 nanoseconds), or several microseconds (e.g., 0.01 microseconds, 0.05 microseconds), or the like. More descriptions of the determination of the delay of the second signal may be found elsewhere in the present disclosure (e.g.,FIGS.5A and5Band the relevant descriptions thereof). In707, the processing device140(e.g., the adjustment module430, the adjustment module530, the adjustment module630) may adjust at least one feature of the second signal based on the control signal. The at least one feature of the second signal may include amplitude, phase, or the like, or any combination thereof. In some embodiments, the processing device140may adjust the amplitude of the second signal. Alternatively, the processing device140may adjust the phase of the second signal. Alternatively, the processing device140may adjust the amplitude and the phase of the second signal.
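Merely by way of illustration, operations701through707may be strung together in a short sketch as below, with an equal power split, amplitude-only detection, an integer-sample delay standing in for the delay module, and an assumed linear amplifier gain; all names and numbers are illustrative assumptions rather than the disclosed implementation.

import numpy as np

def process_700(x, required_output_amplitude, pa_gain=10.0, delay_samples=4):
    # 701: process the input signal into a first signal (detection) and a second signal (amplification).
    first = x / np.sqrt(2.0)
    second = x / np.sqrt(2.0)

    # 703: generate a control signal (here, an amplitude scale factor) from a feature of the first signal.
    detected_amplitude = np.max(np.abs(first))
    scale = required_output_amplitude / (pa_gain * detected_amplitude)

    # 705: delay the second signal so that it and the control signal reach the adjustment step together.
    delayed = np.concatenate([np.zeros(delay_samples, dtype=second.dtype), second])[: second.size]

    # 707: adjust the amplitude of the second signal based on the control signal.
    return scale * delayed

t = np.linspace(0.0, 1.0, 256)
x = np.exp(1j * 2.0 * np.pi * 8.0 * t)
adjusted = process_700(x, required_output_amplitude=2.0)
print(np.max(np.abs(adjusted)) * 10.0)   # ~2.0 after the assumed linear amplifier gain of 10

A closed-loop variant corresponding toFIGS.6A-6Cwould additionally feed a detected feature of the actual output signal back into the computation of the control signal, as described above.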
In some embodiments, the processing device140(e.g., the radio frequency power amplification module450, the radio frequency power amplification module550, the radio frequency power amplification module650) may amplify the adjusted second signal to generate an output signal. In some embodiments, the processing device140(e.g., the output signal processing module670) may process the output signal into at least two output signals. A first output signal of the at least two output signals may be used for signal detection, and a second output signal of the at least two output signals may be transmitted to a load (e.g., RF coils). In some embodiments, the first output signal may be used to adjust the control signal. For example, the processing device140may detect at least one feature of the first output signal. The at least one feature of the first output signal may include amplitude, phase, or the like, or any combination thereof. The processing device140may determine the control signal based on the at least one feature of the first signal and the at least one feature of the first output signal. It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, operations703and705may be performed simultaneously. As another example, operation705may be performed before operation703. Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur and are intended to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by this disclosure, and are within the spirit and scope of the exemplary embodiments of this disclosure. Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment,” “an embodiment,” and/or “some embodiments” mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the present disclosure. Further, it will be appreciated by one skilled in the art, aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or context including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely hardware, entirely software (including firmware, resident software, micro-code, etc.) 
or combining software and hardware implementation that may all generally be referred to herein as a “unit,” “module,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon. A non-transitory computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB. NET, Python or the like, conventional procedural programming languages, such as the “C” programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS). Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefore, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose, and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software only solution, e.g., an installation on an existing server or mobile device. 
Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure aiding in the understanding of one or more of the various inventive embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed object matter requires more features than are expressly recited in each claim. Rather, inventive embodiments lie in less than all features of a single foregoing disclosed embodiment. In some embodiments, the numbers expressing quantities, properties, and so forth, used to describe and claim certain embodiments of the application are to be understood as being modified in some instances by the term “about,” “approximate,” or “substantially.” For example, “about,” “approximate,” or “substantially” may indicate ±20% variation of the value it describes, unless otherwise stated. Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable. Each of the patents, patent applications, publications of patent applications, and other material, such as articles, books, specifications, publications, documents, things, and/or the like, referenced herein is hereby incorporated herein by this reference in its entirety for all purposes, excepting any prosecution file history associated with same, any of same that is inconsistent with or in conflict with the present document, or any of same that may have a limiting affect as to the broadest scope of the claims now or later associated with the present document. By way of example, should there be any inconsistency or conflict between the description, definition, and/or the use of a term associated with any of the incorporated material and that associated with the present document, the description, definition, and/or the use of the term in the present document shall prevail. In closing, it is to be understood that the embodiments of the application disclosed herein are illustrative of the principles of the embodiments of the application. Other modifications that may be employed may be within the scope of the application. Thus, by way of example, but not of limitation, alternative configurations of the embodiments of the application may be utilized in accordance with the teachings herein. Accordingly, embodiments of the present application are not limited to that precisely as shown and described. | 71,754 |
11863220 | DETAILED DESCRIPTION The accompanying drawings, which form a part hereof, show examples of the disclosure. It is to be understood that the examples shown in the drawings and/or discussed herein are non-exclusive and that there are other examples of how the disclosure may be practiced. FIG.1shows an example communication network100in which features described herein may be implemented. The communication network100may comprise one or more information distribution networks of any type, such as, without limitation, a telephone network, a wireless network (e.g., an LTE network, a 5G network, a WiFi IEEE 802.11 network, a WiMAX network, a satellite network, and/or any other network for wireless communication), an optical fiber network, a coaxial cable network, and/or a hybrid fiber/coax distribution network. The communication network100may use a series of interconnected communication links101(e.g., coaxial cables, optical fibers, wireless links, etc.) to connect multiple premises102(e.g., businesses, homes, consumer dwellings, train stations, airports, etc.) to a distribution network facility103(e.g., a headend). The distribution network facility103may send downstream information signals and receive upstream information signals via the communication links101. Each of the premises102may comprise devices, described below, to receive, send, and/or otherwise process those signals and information contained therein. The communication links101may originate from the distribution network facility103and may comprise components not shown, such as splitters, filters, amplifiers, etc., to help convey signals clearly. For example, the communication links101may comprise a hybrid fiber/coaxial (HFC) cable network. The HFC network may include a splitter for isolating upstream (US) signals from downstream (DS) signals, as described with examples inFIGS.3A,3B, and3C. The communication links101may be coupled to one or more wireless access points127configured to communicate with one or more mobile devices125via one or more wireless networks. The mobile devices125may comprise smart phones, tablets or laptop computers with wireless transceivers, tablets or laptop computers communicatively coupled to other devices with wireless transceivers, and/or any other type of device configured to communicate via a wireless network. The distribution network facility103may comprise one or more distribution devices. Distribution devices may comprise an interface104. The interface104may comprise one or more computing devices configured to send information downstream to, and to receive information upstream from, devices communicating with the distribution network facility103via the communications links101. The interface104may be configured to manage communications among those devices, to manage communications between those devices and other distributions devices such as servers105-107and122, and/or to manage communications between those devices and one or more external networks109. The interface104may, for example, comprise one or more routers, one or more base stations, one or more optical line terminals (OLTs), one or more termination systems (e.g., a modular cable modem termination system (M-CMTS), an integrated cable modem termination system (I-CMTS), or virtual cable modem termination system (vCMTS)), one or more digital subscriber line access modules (DSLAMs), and/or any other computing device(s). 
The distribution network facility103may comprise one or more network interfaces108that comprise circuitry needed to communicate via the external networks109. The external networks109may comprise networks of Internet devices, telephone networks, wireless networks, wired networks, fiber optic networks, and/or any other desired network. The distribution network facility103may also or alternatively communicate with the mobile devices125via the interface108and one or more of the external networks109, e.g., via one or more of the wireless access points127. The push notification server105may be configured to generate push notifications to deliver information to devices in the premises102and/or to the mobile devices125. The content server106may be configured to provide content to devices in the premises102and/or to the mobile devices125. This content may comprise, for example, video, audio, text, web pages, images, files, etc. The content server106(or, alternatively, an authentication server) may comprise software to validate user identities and entitlements, to locate and retrieve requested content, and/or to initiate delivery (e.g., streaming) of the content. The application server107may be configured to offer any desired service. For example, an application server may be responsible for collecting, and generating a download of, information for electronic program guide listings. Another application server may be responsible for monitoring user viewing habits and collecting information from that monitoring for use in selecting advertisements. Yet another application server may be responsible for formatting and inserting advertisements in a video stream being transmitted to devices in the premises102and/or to the mobile devices125. The distribution network facility103may comprise additional servers, such as a network access device (NAD) control server122(described below), additional push, content, and/or application servers, and/or other types of servers. Although shown separately, the push server105, the content server106, the application server107, the NAD control server122and/or other server(s) may be combined, and/or servers described herein may be distributed among servers or other devices in ways other than as indicated by examples included herein. Also or alternatively, one or more servers (not shown) may be part of the external network109and may be configured to communicate (e.g., via the distribution network facility103) with other computing devices (e.g., computing devices located in or otherwise associated with one or more premises102). Any of the servers105-107, and/or122, and/or other computing devices may also or alternatively be implemented as one or more of the servers that are part of the external network109. The servers105,106,107, and122, and/or other servers, may be computing devices and may comprise memory storing data and also storing computer executable instructions that, when executed by one or more processors, cause the server(s) to perform steps described herein. The NAD control server122may communicate with a network access device (NAD)121. The NAD121may be installed between the distribution network facility103and multiple premises102. For example, the NAD control server122may send a request to the NAD121for monitoring, via a number of ports of the NAD121, conditions (e.g., power level of signal, noise level of signal, etc.) affecting devices in the multiple premises102. 
The NAD121may (e.g., in response to communications from the NAD control server122and/or other computing device(s)) provide information including, for example, transmission power level, modulation error rate (MER), bit error rate (BER), etc. The NAD121may include a plurality of ports 1−n. The ports 1−n may connect to n corresponding premises102. The NAD control server122may determine and request appropriate actions by the NAD121, for example, based on the information, to mitigate and/or remove signal interference. For example, the NAD121may filter or disable a port n in response to a request from the NAD control server122. An example premises102amay comprise an interface120. The interface120may comprise circuitry used to communicate via the communication links101. The interface120may comprise a modem110, which may comprise transmitters and receivers used to communicate via the communication links101with the distribution network facility103. The modem110may comprise, for example, a coaxial cable modem (for coaxial cable lines of the communication links101), a fiber interface node (for fiber optic lines of the communication links101), twisted-pair telephone modem, a wireless transceiver, and/or any other desired modem device. One modem is shown inFIG.1, but a plurality of modems operating in parallel may be implemented within the interface120. The interface120may comprise a gateway111. The modem110may be connected to, or be a part of, the gateway111. The gateway111may be a computing device that communicates with the modem(s)110to allow one or more other devices in the premises102ato communicate with the distribution network facility103and/or with other devices beyond the distribution network facility103(e.g., via the distribution network facility103and the external network(s)109). The gateway111may comprise a set-top box (STB), digital video recorder (DVR), a digital transport adapter (DTA), a computer server, and/or any other desired computing device. The gateway111, and/or another computing device at a premises102, may interface with the NAD121. The NAD121may send a request to the premises computing device, for example, to provide information on different upstream and/or downstream channels. In response to the request, the premises computing device may provide information, for example, including signal-to-noise ratio (SNR), modulation error ratio (MER) on the upstream and/or downstream channels. The gateway111may also comprise one or more local network interfaces to communicate, via one or more local networks, with devices in the premises102a. Such devices may comprise, e.g., display devices112(e.g., televisions), other devices113(e.g., a DVR or STB), personal computers114, laptop computers115, wireless devices116(e.g., wireless routers, wireless laptops, notebooks, tablets and netbooks, cordless phones (e.g., Digital Enhanced Cordless Telephone—DECT phones), mobile phones, mobile televisions, personal digital assistants (PDA)), landline phones117(e.g., Voice over Internet Protocol—VoIP phones), and any other desired devices. Example types of local networks comprise Multimedia Over Coax Alliance (MoCA) networks, Ethernet networks, networks communicating via Universal Serial Bus (USB) interfaces, wireless networks (e.g., IEEE 802.11, IEEE 802.15, Bluetooth), networks communicating via in-premises power lines, and others. 
The lines connecting the interface120with the other devices in the premises102amay represent wired or wireless connections, as may be appropriate for the type of local network used. One or more of the devices at the premises102amay be configured to provide wireless communications channels (e.g., IEEE 802.11 channels) to communicate with one or more of the mobile devices125, which may be on- or off-premises. The mobile devices125, one or more of the devices in the premises102a, and/or other devices may receive, store, output, and/or otherwise use assets. An asset may comprise a video, a game, one or more images, software, audio, text, webpage(s), and/or other content. FIG.2shows hardware elements of a computing device200that may be used to implement any of the computing devices shown inFIG.1(e.g., the mobile devices125, any of the devices shown in the premises102a, any of the devices shown in the distribution network facility103, any of the wireless access points127, any devices associated with the external network109) and any other computing devices discussed herein (e.g., the NAD121and/or the controller430of the NAD121, the NAD control server122, the wireless computing device480, etc.). The computing device200may comprise one or more processors201, which may execute instructions of a computer program to perform any of the functions described herein. The instructions may be stored in a non-rewritable memory202such as a read-only memory (ROM), a rewritable memory203such as random access memory (RAM) and/or flash memory, removable media204(e.g., a USB drive, a compact disk (CD), a digital versatile disk (DVD)), and/or in any other type of computer-readable storage medium or memory. Instructions may also be stored in an attached (or internal) hard drive205or other types of storage media. The computing device200may comprise one or more output devices, such as a display device206(e.g., an external television and/or other external or internal display device) and a speaker214, and may comprise one or more output device controllers207, such as a video processor or a controller for an infra-red or BLUETOOTH, BLE (Bluetooth Low Energy), ZigBee, or HaLow/LoRaWAN transceivers. One or more user input devices208may comprise a remote control, a keyboard, a mouse, a touch screen (which may be integrated with the display device206), microphone, etc. The computing device200may also comprise one or more network interfaces, such as a network input/output (I/O) interface210(e.g., a network card) to communicate with an external network209. The network I/O interface210may be a wired interface (e.g., electrical, RF (via coax), optical (via fiber)), a wireless interface, or a combination of the two. The network I/O interface210may comprise a modem configured to communicate via the external network209. The external network209may comprise the communication links101discussed above, the external network109, an in-home network, a network provider's wireless, coaxial, fiber, or hybrid fiber/coaxial distribution system (e.g., a DOCSIS network), or any other desired network. The computing device200may comprise a location-detecting device, such as a global positioning system (GPS) microprocessor211, which may be configured to receive and process global positioning signals and determine, with possible assistance from an external server and antenna, a geographic position of the computing device200. 
The computing device200(e.g., the NAD121) may also comprise a spectrum analyzer with a wide range of spectrum (e.g., 5-1794 MHz). The spectrum analyzer may be used to detect signal issues, for example, signal interference affecting computing devices in the multiple premises102. AlthoughFIG.2shows an example hardware configuration, one or more of the elements of the computing device200may be implemented as software or a combination of hardware and software. Modifications may be made to add, remove, combine, divide, etc. components of the computing device200. Additionally, the elements shown inFIG.2may be implemented using basic computing devices and components that have been configured to perform operations such as are described herein. For example, a memory of the computing device200may store computer-executable instructions that, when executed by the processor201and/or one or more other processors of the computing device200, cause the computing device200to perform one, some, or all of the operations described herein. Such memory and processor(s) may also or alternatively be implemented through one or more Integrated Circuits (ICs). An IC may be, for example, a microprocessor that accesses programming instructions or other data stored in a ROM and/or hardwired into the IC. For example, an IC may comprise an Application Specific Integrated Circuit (ASIC) having gates and/or other logic dedicated to the calculations and other operations described herein. An IC may perform some operations based on execution of programming instructions read from ROM or RAM, with other operations hardwired into gates or other logic. Further, an IC may be configured to output image data to a display buffer. Bandwidth of a communication medium may be allocated. For example, part of a communication medium bandwidth may be allocated as upstream bandwidth (e.g., to be used for upstream communications) and part of the communication medium bandwidth may be allocated as downstream bandwidth (e.g., to be used for downstream communications). For example, a computing device (e.g., a computing device located at a premises such as the premises102) may be configured to send upstream transmission via one or more frequencies in a first frequency range and/or to receive downstream transmission via one or more frequencies in a second frequency range that is different from the first frequency range. The division between upstream and downstream transmission frequencies may sometimes be referred to as a “split.” Upstream transmission may be via frequencies below frequencies used for downstream transmission. In one type of low-band split (also known as a low-split or a sub-split), a cross-over between upstream and downstream transmission may occur between 42 MHz and 108 MHz. For example, upstream transmission may be via frequencies between 5 MHz and 30 MHz and downstream transmission may be via frequencies above 108-1002 MHz. Alternatively, for example, upstream transmission may be via frequencies between 5 MHz to 40 MHz (or 42 MHz) and downstream transmission may be via frequencies above 52-54 MHz. In one type of mid-band split (also known as an extended sub-split), a cross-over between upstream and downstream transmissions may occur between 85 MHz and 108 MHz. For example, upstream transmission may be via frequencies between 5 MHz and 85 MHz and downstream transmission may be via frequencies above 108 MHz. 
In one type of high-band split, a cross-over between upstream and downstream transmission may occur between 204 MHz and 258 MHz. For example, upstream transmission may be via frequencies between 5 MHz and 204 MHz and downstream transmission may be via frequencies above 370 MHz. Bandwidth of a communication medium may be reallocated (e.g., a different split may be used). FIGS.3A,3B, and3Cshow examples of coexisting premises computing devices with different band splits.FIG.3Ashows an example of a mid-band split premises computing device (MS-PCD)320coexisting in a premises with a low-band split premises computing device (LS-PCD)310. The LS-PCD310may comprise, for example, a video STB, a pre-DOCSIS 3.0 cable modem (CM), or a DOCSIS 3.0 CM with a fixed standard-split diplex filter. The MS-PCD320may be designed with software-selectable diplex filters which may switch between the standard-split and mid-split modes. A premises computing device (PCD) may be connected to a cable feed off of a splitter300within the premises. The splitter, without sufficient port-to-port isolation, may allow the MS-PCD320's upstream (US) transmission (e.g., 5-85 MHz) to interfere with the LS-PCD310's downstream (DS) transmission (e.g., 52-860 MHz). The mid-band part (e.g., 42-85 MHz) of the upstream transmission from the MS-PCD320may leak/reflect through the splitter into the downstream receiver of the LS-PCD310, unfiltered. FIG.3Bshows an example of a high-band split premises computing device (HS-PCD)350coexisting in a premises with a low-band split premises computing device (LS-PCD)340. A splitter330may allow the HS-PCD350's upstream transmission (e.g., 5-204 MHz) to interfere with the LS-PCD340's downstream transmission (e.g., 52-860 MHz). The mid-band part (e.g., 105-204 MHz) of the upstream transmission from the HS-PCD350may leak/reflect through the splitter330into the downstream receiver of the LS-PCD340, unfiltered. FIG.3Cshows an example of a high-band split premises computing device (HS-PCD)380coexisting in a premises with a mid-band split premises computing device (MS-PCD)370. A splitter360may allow the HS-PCD380's upstream transmission (e.g., 5-204 MHz) to interfere with the MS-PCD370's downstream transmission (e.g., 108-1000 MHz). The mid-band part (e.g., 105-204 MHz) of the upstream transmission from the HS-PCD380may leak/reflect through the splitter360into the downstream receiver of the MS-PCD370, unfiltered. The leaked/reflected signals, in the coexisting PCDs of different band-splits ofFIGS.3A,3B, and3C, may cause interference affecting computing devices located in other premises (e.g., premises that may be neighboring, or otherwise nearby, a premises in which the coexisting PCDs are located) with video and broadband gateways that may operate in a low-band split (5-42 MHz). Such interference may occur because there may not be enough isolation between the neighboring premises through existing splitters or taps. For example, energy from a high-band split node (e.g., a high-band split DOCSIS 3.1 modem) may swamp low noise amplifiers (LNAs) and programmable gain amplifiers (PGAs) in the front end of the gateways, and thus may cause interference affecting the gateways (e.g., automatic gain control of the gateways may be affected). One approach to mitigate the above issue could be installing fixed notch filters at the neighboring premises to block this energy, for example, in the 85-258 MHz band, which may be received by gateways still operating in a low-band split.
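Merely by way of illustration, the coexistence problem ofFIGS.3A-3Cmay be framed as an overlap check between one device's upstream band and another device's downstream band, using the example frequency ranges given above (values in MHz). The exact sub-band that leaks in practice depends on each device's diplex filtering, as the figure examples indicate; the code below is a simplified sketch, not a description of any particular deployment.

# Example band plans taken from the figures discussed above (MHz).
UPSTREAM = {"MS-PCD": (5, 85), "HS-PCD": (5, 204)}
DOWNSTREAM = {"LS-PCD": (52, 860), "MS-PCD": (108, 1000)}

def overlap(a, b):
    # Return the overlapping frequency range of two (low, high) bands, or None.
    low, high = max(a[0], b[0]), min(a[1], b[1])
    return (low, high) if low < high else None

# Upstream energy that falls inside another device's downstream band may leak/reflect
# through a splitter without sufficient port-to-port isolation into that device's receiver.
for aggressor, us_band in UPSTREAM.items():
    for victim, ds_band in DOWNSTREAM.items():
        if aggressor == victim:
            continue
        leak = overlap(us_band, ds_band)
        if leak:
            print(f"{aggressor} upstream may interfere with {victim} downstream around {leak} MHz")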
But installing a fixed notch filter at the neighboring premises may not be a procedure that is easily performed by many users. The solution may become prohibitively expensive if it has to be executed for every neighboring premises. In addition, over time, as the neighboring premises upgrade their service tiers that require the high-band split, removal of the previously installed filters may be appropriate. Further, it may be a challenging task, as time passes, to verify whether a filter has been installed or not at the neighboring premises. Such verification may involve dispatching a service vehicle or running targeted tests on the neighboring premises during a maintenance window to ascertain whether a premises computing device such as a modem in a home is able to detect signals, for example, in the 85-258 MHz band. An alternate approach to mitigate this issue may be upgrading all the neighboring premises to newer broadband gateways that also operate in high-band split nodes and replacing video gateways with IP gateways. However, this approach may require that all legacy equipment at all the neighboring premises be upgraded or swapped at substantially the same time. Thus, both of the approaches described above may be prohibitively expensive and operationally challenging.
FIG.4shows an example of a network access device (NAD)121. The NAD121(e.g., a line tap) may be installed between the distribution network facility103and the multiple premises102that are served by the distribution network facility103. The NAD121may include an RF splitter410, a plurality of RF switches420.1through420.n(collectively or generically, RF switch(es)420), a controller430, and a plurality of first ports440.1through440.n(collectively or generically, port(s)440) that are coupled (e.g., via a communication medium) to premises computing devices in the multiple premises102. For example, a port440.1may be connected to the gateway111in the premises102a. The controller430may selectively control the RF switches420and consequently control a plurality of transmission or signal paths450.1through450.n(collectively or generically, signal path(s)450) between the RF splitter410and the RF switches420. Each of the transmission paths450may form, with a corresponding RF switch420, a switchably-filtered signal path that connects (via the RF splitter410) a corresponding one of the first ports440with a second port435(e.g., an upstream-side port). The second port435may be connected to a cable connecting the NAD121to an upstream computing device A (e.g., the interface104). Each of the transmission paths450may comprise a plurality of alternate paths, between the RF splitter410and a corresponding RF switch420, to which that RF switch420may connect. Each of the transmission paths450may comprise a corresponding one of direct (e.g., unfiltered) connections459.1through459.n(collectively or generically, direct connection(s)459). Each of the transmission paths450may further comprise a plurality of notch filters460.1(1) through460.1(m) (collectively or generically, notch filter(s)460). The notch filters460may be bi-directional in that signals entering from either end of the notch filters460may be filtered in the same fashion. Each notch filter460of a signal path450may have its own range of frequencies, different from other notch filters of that signal path450, for filtering.
A notch filter460may pass signals with frequencies below a “notch” range of frequencies and above the notch range of frequencies while suppressing signals with frequencies within the notch range of frequencies. An RF switch420may switch to multiple notch filters460simultaneously so as to, for example, combine those filters and effectively form a notch filter with an expanded notch range. A plurality of ground terminals470.1through470.n(collectively or generically, ground terminal(s)470) may be used to ground (e.g., interrupt) one or more transmission paths450. An RF switch420, as controlled by the controller430, may be switched to connect a corresponding port440to any of: a corresponding ground terminal470, one or more of the corresponding notch filters460, and/or a corresponding direct connection459(e.g., the RF switch420.1may connect the port440.1to any of the ground terminal470.1, one or more of the notch filters460.1, and/or the direct connection459.1). Connecting a ground terminal470to a corresponding port440may short and interrupt the corresponding transmission path450and disable that corresponding port440. For example, the ground terminal470may be terminated through a resistor (e.g., 75 ohm resistor) to the ground to reduce signal reflections. Disconnecting an RF switch420from its corresponding ground terminal470and connecting to a portion of a corresponding transmission path450(e.g., the corresponding direct connection459and/or one or more of the corresponding notch filters460) may enable that corresponding transmission path450. An RF switch420may be switched to connect a port440(e.g., port440.1) to one or more filters460(e.g., filter460.1(1)) so that signals over a transmission path450(e.g., transmission path450.1) are filtered by the one or more filters460. Likewise, the RF switch420may be switched to disconnect the port440from the one or more filters460, and connect that port440to the splitter410via a direct connection459, so that the signals over the transmission path450are unfiltered. An RF switch420may make any connection (e.g., to a corresponding direct connection459, to one or more corresponding notch filters460, and/or to a corresponding ground terminal470) as part of an initial setup or after (or in conjunction with) any disconnection (e.g., disconnecting from a corresponding ground terminal470, from one or more corresponding notch filters, and/or from a corresponding direct connection459). The RF splitter410may split downstream signals from an upstream computing device A into the transmission paths450, which extend, via corresponding RF switches420, to the corresponding ports440. The RF splitter410may merge upstream signals from the ports440, received via the transmission paths450, and transmit via the second port435to the upstream computing device A. The controller430may communicate, via an upstream-side control interface432, with an upstream computing device B. The upstream-side control interface432may be via a separate out-of-band signal path or via the same medium over which other upstream and downstream communications are sent to/received from the distribution network facility103via the second port435. The upstream computing device B may be a distribution network device (e.g., the NAD control server122located at the distribution network facility103), the upstream computing device A, or different computing device. The controller430may communicate, via a local control interface434, with the RF switches420. 
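A minimal software model of the switchably-filtered signal path described above may help fix the terminology: each port's RF switch connects to a ground terminal, to one or more notch filters, or to the direct (unfiltered) connection. The sketch is illustrative only, assuming idealized brick-wall notches; the class and method names are hypothetical and not taken from the description above.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class NotchFilter:
    lo_mhz: float
    hi_mhz: float
    def passes(self, f_mhz: float) -> bool:
        # Pass below and above the notch, suppress inside it (bi-directional).
        return not (self.lo_mhz <= f_mhz <= self.hi_mhz)

@dataclass
class SignalPath:
    filters: List[NotchFilter]                       # notch filters available on this path
    engaged: Set[int] = field(default_factory=set)   # filters currently switched in
    grounded: bool = False                           # switched to the ground terminal

    def select_direct(self) -> None:                 # unfiltered connection to the splitter
        self.engaged.clear()
        self.grounded = False

    def engage(self, *idx: int) -> None:             # combining notches widens the blocked range
        self.grounded = False
        self.engaged.update(idx)

    def ground(self) -> None:                        # short/terminate the path, disabling the port
        self.grounded = True

    def passes(self, f_mhz: float) -> bool:
        if self.grounded:
            return False
        return all(self.filters[i].passes(f_mhz) for i in self.engaged)

path = SignalPath(filters=[NotchFilter(42, 258), NotchFilter(1002, 1657)])
path.engage(0)
print(path.passes(30.0), path.passes(100.0))  # True False
```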
The controller430may communicate, via a premises control interface433, with a premises computing device. The premises control interface433may be via the RF switches420and the ports440. The controller430may also wirelessly communicate, via a wireless interface431, with a wireless computing device480(e.g., smart phone, tablet, IoT device, etc.). The wireless interface431may, for example, comprise a WiFi interface, a ZigBee interface, a BTLE interface, a BLUETOOTH interface, and/or some other type of wireless interface. A network operator may send one or more messages, via the upstream-side control interface432, to the controller430of the NAD121. The network operator may send one or more messages, via the upstream-side control interface432, for example, to instruct the controller430in real-time to switch on or off one or more filters460of one or more signal paths450associated with one or more of the ports440of the NAD121. Further, the network operator may send one or more messages, via the upstream-side control interface432, for example, to query the controller430of the NAD121in real-time to obtain current statuses of one or more filters460(e.g., whether a filter460is switched into a signal path450). The controller430may store (e.g., in a memory) data indicating the status of each RF switch420(e.g., whether the RF switch420is connected to its corresponding direct connection459, to one or more of its corresponding notch filters460, and/or to its corresponding ground terminal470), data indicating associations between the RF switches420and the ports440, data associated with premises102to which each of the ports440is connected, data regarding one or more premises devices at each of those premises, and/or other data. A technician on site may use an application on the technician's wireless computing device480to wirelessly communicate, via the wireless control interface431, with the controller430of the NAD121. The technician may wirelessly communicate, via the wireless control interface431, with the controller430, and be able to perform similar operations as the network operator. The technician, for example, using a tablet or a smart phone, may be able to perform any maintenance, diagnostic, or troubleshooting operations without physically accessing the NAD121.
FIG.5shows an example of messaging between the upstream computing device B or the wireless computing device480and the controller430of the network access device121. Communications between the upstream computing device B and the controller430may be via the upstream-side control interface432. Communications between the wireless computing device480and the controller430may be via the wireless interface431. For example, the upstream computing device B may send, via the upstream-side control interface432, the controller430a request message510indicating and/or instructing that the controller430is to monitor one or more ports440of the NAD121. The request message510may comprise or otherwise indicate a number of parameters (e.g., a power level of signal, a noise level of signal, dropped packets, timeouts, etc.) to be monitored for the one or more ports. Also or alternatively, the request message510may indicate and/or instruct that the controller430is to receive monitoring results (e.g., RX power at premises computing device, TX power at premises computing device, etc.) from a premises computing device.
The controller430may send back an acknowledgement message (Ack)520, for example, to acknowledge the receipt of the request message, and start monitoring the one or more ports440of the NAD121. After a duration of time, the upstream computing device B may send another request message530to the controller430requesting reporting of the monitoring result(s). Also or alternatively, the controller430may periodically report the monitoring result(s), for example, based on a duration of time specified by the request message510and/or by the request message530, and/or based on a duration of time preset by the controller430(e.g., during setup of the NAD121). The controller430may send a report message540to the upstream computing device (or a wireless computing device) for reporting monitoring results. The message540may be sent in response to the report monitoring result request530or periodically, as indicated above. The report message540may comprise values for monitored parameters such as MER (e.g., MER 45 dB), a transmission power level (e.g., Tx Power 51 dBmV), etc. The upstream computing device B may, based on the report monitoring results540, send an adjustment request message550to the controller430. The adjustment request550may be used to provide instructions to the controller430to reconfigure one or more of the transmission paths450of the NAD121(e.g., for addressing any issues determined based on the monitoring). For example, the adjustment request550may include indicators of one or more filters460to be added or removed from a transmission path450, of one or more ground terminal connections470to be opened or closed, etc. The controller430may send an acknowledgement message (e.g., Ack560) to the upstream computing device, for example, after making the requested adjustment in response to the adjustment request550. A technician on site may use the technician's wireless computing device480and wirelessly interact with the controller430to perform the similar operations as the upstream computing device B or override the switch setting of one or more ports440. FIG.6shows an example of messaging between the controller430of the network access device121and a premises computing device. Communications between the controller430and the premises computing device may be via the premises control interface433. For example, the controller430may send a report channel condition(s) request610via the premises control interface433to a premises computing device (e.g., a broadband gateway) at a premises102. The report channel condition(s) request610may include parameters (e.g., SNR, MER on different upstream/downstream channels of one or more ports, etc.) to measure channel conditions. The premises computing device may send a report channel condition(s) message620to the controller430in response to the report channel condition(s) request610. The report channel condition(s) message620may return the result of the measurement based on the parameters. The controller430may also send report CM capabilities request630to the premises computing device. The report CM capabilities request630may include one or more ports' identifications. The premises computing device may send report CM capabilities640to the controller430. The report CM capabilities640may indicate band split configurations of the one or more ports440(e.g., ports 1-4 configured with the low-band split, ports 5-10 configured with the mid-band split, ports 11-15 configured with the high-band split, etc.). 
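An adjustment request of the kind described above (message 550) can be pictured as a small structured command applied to per-port switch state. The sketch below is illustrative only; the message fields, class names, and acknowledgement handling are assumptions, not the actual message format used by the NAD.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Set, Tuple

@dataclass
class PortState:
    engaged_filters: Set[int] = field(default_factory=set)  # notch filters switched in
    grounded: bool = False                                   # port switched to ground

@dataclass
class AdjustmentRequest:                  # cf. adjustment request 550
    port: int
    engage: Tuple[int, ...] = ()          # filters to switch into the signal path
    release: Tuple[int, ...] = ()         # filters to switch out of the signal path
    ground: Optional[bool] = None         # True disables the port, False re-enables it

def apply_adjustment(ports: Dict[int, PortState], req: AdjustmentRequest) -> str:
    state = ports[req.port]
    if req.ground is not None:
        state.grounded = req.ground
    state.engaged_filters.update(req.engage)
    state.engaged_filters.difference_update(req.release)
    return "Ack"                          # cf. acknowledgement 560 after the adjustment

ports = {1: PortState()}
print(apply_adjustment(ports, AdjustmentRequest(port=1, engage=(0,))), ports[1])
```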
FIG.7shows an example of messaging between the upstream computing device B, the controller430of the NAD121, the RF switch420.1, and a premises computing device. For convenience,FIG.7is described using an example of communications with (or associated with) a premises computing device associated with port440.1, but similar operations and communication may be performed and/or sent/received with regard to any port440. Operations ofFIG.7performed by the upstream computing device B, and/or communications sent/received by the upstream computing device B, may also or alternatively be performed and/or sent/received by the wireless computing device480. The upstream computing device B may cause the controller430to query a CM associated with a premises computing device associated with the port440.1, detect a signal interference on the CM, and further address the signal interference. For example, the upstream computing device B may send a query status message710to the controller430. The query status message710may identify a cable modem (CM) by its media access control address (MAC) (e.g., CM MAC1). The controller430may send an acknowledgement720to the upstream computing device B for acknowledging the receipt of the query status message710. As another example, the upstream computing device B may send a request to the controller430to measure a noise during a quiet time period (e.g., a period of time when modem(s) connected to a port440.nare not transmitting, to get a baseline of RF conditions (e.g., SNR) on port440.n). Alternatively, the upstream computing device B may send a request to the controller430to collect statistical measurement over a period of time. Further, the controller430may send a select request730to the RF switch420.1, for example, to select MAC1 on the port440.1. The controller430may subsequently send a report status request740to the premises computing device, for example, for the status of upstream/downstream channel-M. The premises computing device may send a status report750, for example, indicating upstream channel-M being impaired (e.g., signal interference), to the controller430. The controller430may send an activation request760to the RF switch420.1, for example, to switch one or more of the filters460.1-460.minto the transmission path450.1. The switched filter(s)460may block signals within a range of frequencies (e.g., 42-258 MHz) based on a bandwidth allocated for the upstream channel-M (e.g., the low-band split 5-42 MHz). The controller430may send a query result770to the upstream computing device B. For example, the query result770may indicate CM MAC1 being operational in the low-band split.
FIG.8shows an example of a finite state machine for the controller430. The controller430may be in stand-by state810, monitoring state820, or autonomous state830. The controller430may make transition T000 from the stand-by state810to the monitoring state820based on a message received, for example, from the upstream computing device B and/or the wireless computing device480. The message may be a command signal requesting a monitoring (e.g., the request message510, the query status message710). In the monitoring state820, the controller430may continue monitoring statuses of one or more ports440, for example, for a period of time or until receiving another message from the upstream computing device B and/or wireless computing device480(e.g., the request message530) requesting results of the monitoring.
The controller430may make transition T001 from the monitoring state820back to the stand-by state810, for example, based on the other command signal or expiration of the period of time. Also or alternatively, the controller430may make transition T006 from the monitoring state820to the autonomous state830based on a message from the upstream computing device B and/or wireless computing device480. The message may instruct the controller430to operate autonomously, for example, adjusting switch settings of the NAD121based on the results of the monitoring. The controller, in the stand-by state810at transition T002, may receive an adjustment request, for example, a command signal from the upstream computing device B and/or the wireless computing device480(e.g., the adjustment request550), to reconfigure switch settings of one or more of the RF switches420in the NAD121. The controller430may generate one or more control signals, for example, based on the command signal, for reconfiguring the switch settings. The controller430may make transition T003 from the stand-by state810to the autonomous state830based on a message received, for example, from the upstream computing device B and/or the wireless computing device480. The message may be a command requesting the controller430to operate autonomously, for example, for a time period or indefinitely until further instruction from the upstream computing device B and/or wireless computing device480. For example, the command may instruct the controller430to operate autonomously for the time period and make transition T005 to the stand-by state810or transition T007 to the monitoring state820. The controller430in the autonomous state830may continue monitoring statuses of one or more ports440and/or receive monitoring results from premises computing devices. Further, the controller430, in the autonomous state830at transition T004, may adjust, for example, switch settings of one or more of the RF switches420in the NAD121based on results of the monitoring without intervention from the upstream computing device B and/or the wireless computing device480. For example, the controller430, in the autonomous state830, may monitor or capture a spectrum of Long-term evolution (LTE) frequency bands and determine that a port440has LTE leakage. The controller430at transition T004 may set an RF switch420to disable that port440to block out the spectrum of the LTE frequency bands (e.g., 700 MHz). For example, the controller430, in the autonomous state830, may also monitor or capture a spectrum of MoCA frequencies and determine that one or more of the ports may not have MoCA Point of Entry (POE) filters and premises computing devices on those one or more ports (e.g., premises computing devices in two premises) may be forming a MoCA link. The controller430at transition T004 may set an RF switch420to filter the MoCA frequencies thereby breaking the MoCA link between the two premises. The controller430may make transition from the autonomous state830to the monitoring state820or the stand-by state810. For example, the controller430may make transition T005 from the autonomous state830to the stand-by state810, for example, based on a message (e.g., a command to stop autonomous mode of operation and stand-by) received from the upstream computing device B and/or the wireless computing device480, an expiration of the time period, or exception handling (e.g., exceptional issues requiring interventions from the upstream computing device B and/or the wireless computing device480). 
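The three operating states and the T00x transitions described above can be summarized as a small table-driven state machine. The sketch below is illustrative only; the event names are hypothetical, and the table is a simplification of the behavior described for FIG.8.

```python
# Illustrative only: table-driven version of the FIG. 8 state machine.
STANDBY, MONITORING, AUTONOMOUS = "stand-by", "monitoring", "autonomous"

TRANSITIONS = {
    (STANDBY,    "monitor_request"):     MONITORING,  # T000
    (MONITORING, "report_or_timeout"):   STANDBY,     # T001
    (STANDBY,    "adjust_request"):      STANDBY,     # T002: reconfigure switches, stay
    (STANDBY,    "autonomous_request"):  AUTONOMOUS,  # T003
    (AUTONOMOUS, "self_adjust"):         AUTONOMOUS,  # T004: adjust without intervention
    (AUTONOMOUS, "stop_or_exception"):   STANDBY,     # T005
    (MONITORING, "autonomous_request"):  AUTONOMOUS,  # T006
    (AUTONOMOUS, "monitor_only"):        MONITORING,  # T007
}

def step(state: str, event: str) -> str:
    if event == "reset":                 # a reset returns to stand-by from any state
        return STANDBY
    return TRANSITIONS.get((state, event), state)

state = STANDBY
for event in ("monitor_request", "autonomous_request", "self_adjust", "monitor_only"):
    state = step(state, event)
print(state)  # monitoring
```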
The exception handling may involve, for example, a number of ports440being disabled due to impairments that may be serviced/repaired by the upstream computing device B and/or a technician on site. The controller430may make transition T007 from the autonomous state830to the monitoring state820, for example, based on a message (e.g., a command instructing the controller to perform monitoring only) from the upstream computing device B and/or the wireless computing device480and/or an expiration of the time period.
The upstream computing device B and/or the wireless computing device480may send a step-back message to the controller430, for example, in the stand-by state810or the monitoring state820. The step-back message may request the controller430to go back to the last switch settings of the RF switches420in the NAD121. For this purpose, the controller430may store at least the last switch settings and be able to go back to the last switch settings upon such request. The step-back message may serve the network operator or the technician on site for performing troubleshooting or diagnostics. For example, if operations of the ports440of the NAD121degrade or may not improve after new switch settings, the network operator or the technician on site may be able to go back at least to the previous switch settings of the RF switches420in the NAD121and try again with different switch settings. For example, the step-back message may also cause the controller430to make transition to the stand-by state810. The upstream computing device B and/or the wireless computing device480may send a reset message to the controller430, for example, in any one of the three states. The reset message may cause the controller430to reset the switch settings of the RF switches420in the NAD121to default switch settings. In the default switch settings, signals may passively pass through the ports440of the NAD121without being filtered, grounded, terminated, or altered. For example, the reset message may also cause the controller430to make transition from any of the three states to the stand-by state810for fresh start or re-start.
FIGS.9A and9Bshow an example of a flow chart showing steps of an example method associated with a network access device (e.g., NAD121). For convenience,FIGS.9A and9Bare described by way of an example in which the steps are performed by the controller430of the NAD121. One, some, or all steps of the example method ofFIGS.9A and9B, or portions thereof, may be performed by one or more other computing devices (e.g., upstream computing device B, the wireless computing device480, a premises computing device, etc.). One, some, or all steps of the example method ofFIGS.9A and9Bmay be omitted, performed in other orders, and/or otherwise modified, and/or one or more additional steps may be added.
At step900, the NAD121may receive, via a communication medium connected to the ports440, and from one or more devices at one or more premises (e.g., the premises102a), one or more first upstream signals. The one or more first upstream signals may be received via at least a portion of an allocated upstream bandwidth of the communication medium. After step900, and prior to step905, there may be a reallocation of bandwidth of the communication medium. For example, the allocated upstream bandwidth in step900may correspond to a first bandwidth allocation. In the first bandwidth allocation, a first portion of the communication medium bandwidth (the allocated upstream bandwidth) may be allocated to upstream communications.
A second portion of the communication medium bandwidth (the allocated downstream bandwidth) may be allocated to downstream communications. The second portion may, for example, comprise a portion of the communication medium bandwidth that remains after exclusion of the allocated upstream bandwidth, or after exclusion of the allocated upstream bandwidth and of a first guard band between the allocated upstream bandwidth and the allocated downstream bandwidth. After the reallocation of the communication medium bandwidth, upstream and downstream bandwidth may be reallocated according to a second bandwidth allocation. In the second bandwidth allocation, a third portion of the communication medium bandwidth (the reallocated upstream bandwidth) may be allocated to upstream communications and a fourth portion of the communication medium bandwidth (the reallocated downstream bandwidth) may be allocated to downstream communications. The fourth portion may, for example, comprise a portion of the communication medium bandwidth that remains after exclusion of the reallocated upstream bandwidth, or after exclusion of the reallocated upstream bandwidth and of a second guard band between the reallocated upstream bandwidth and the reallocated downstream bandwidth. The reallocated upstream bandwidth may be larger than the allocated upstream bandwidth. For example, the reallocated upstream bandwidth may comprise the first part of the communication medium bandwidth (the allocated upstream bandwidth), as well as a portion of the first guard band (between the allocated upstream bandwidth and the allocated downstream bandwidth) and/or a portion of the second part of the communication medium bandwidth (the allocated downstream bandwidth). At step905, the NAD121may receive, via the communication medium connected to the ports440, and from one or more devices at one or more premises, one or more second upstream signals. The one or more second upstream signals may be received via at least a portion of the reallocated upstream bandwidth that was, prior to the reallocation, part of the allocated downstream bandwidth and/or part of the first guard band between the allocated upstream bandwidth and the allocated downstream bandwidth. At step910, for example, the NAD121may detect that the network operator or the technician on site upgraded a legacy computing device at the premises102a. For example, the upgraded computing device may be a mid-band split or high-band split premises computing device, as shown inFIGS.3A,3B, and3C. The upgraded computing device may cause one or more of the problems described inFIGS.3A,3B, and3C. At step920, the controller430may determine whether parts of upstream bandwidth allocated to a device (e.g., upgraded premises computing device) are blocked. For example, the blocking may be caused by a filter that was previously switched on for the legacy computing device that has been replaced by the upgraded computing device. In other words, the upgraded computing device may not be able to detect a full spectrum of the upstream bandwidth allocated due to the filter that was turned on to prevent signal interference for the legacy computing device. The controller430may determine that the parts of upstream bandwidth allocated to the upgraded computing device are blocked and perform step930. At step930, the controller430may identify the filter causing the blocking, switch off the filter, and perform step931. 
At step931, the controller430may determine whether a number of attempts to remove the blocking is less than a threshold quantity (e.g., a threshold-1). The controller430may determine that the quantity of attempts is less than the threshold-1, and perform step920for re-evaluation. Alternatively, the controller430may determine that the quantity of attempts is not less than the threshold-1, and perform step932. At step932, the controller430may send a support request, for example, to the upstream computing device B and/or the wireless computing device480, and perform step990to end the process. At step920, the controller430may determine that the parts of upstream bandwidth allocated to the upgraded computing device may not be blocked and perform step940. At step940, the controller430may monitor one or more of the ports440of the NAD121. At step950, the controller430may determine whether signal leakage from one or more of the ports440is detected. The controller430may comprise a spectrum analyzer with a wide range of spectrum (e.g., 5-1794 MHz). The spectrum analyzer may be used to detect signal anomalies, for example, signal leakage from one or more of the ports440. The controller430may determine that no signal leakage is detected and perform step990to end the process. The controller430may determine that signal leakage is detected and perform step960. At step960, the controller430may determine an interference level caused by the signal leakage on any of the ports440of the NAD121and perform step951, as described inFIG.9B. At step951, the controller430may determine whether a quantity of attempts to address a signal leakage problem has exceeded a threshold quantity of times (e.g., threshold-2). The controller430may determine that the quantity of attempts has exceeded the threshold-2, and perform step932to send a support request, for example, to the upstream computing device B and/or the wireless computing device480. For example, the network operator or the technician on site may address the signal leakage problem in response to the support request. The controller430may determine that the quantity of attempts has not exceeded the threshold-2, and perform step950for re-evaluation. At step950, the controller may determine that signal leakage is not detected (e.g., step960yielded successful remedy) and perform step990to end the process. InFIG.9B, at step970, the controller430may determine a level of impact (e.g., a level of signal interference), caused by the signal leakage, on devices coupled to one or more of the ports440. At step970, the controller430may compare the level of signal interference with a threshold, and may perform step980based on the result of the comparison (the level of signal interference exceeds the threshold). For example, the controller430may measure a parameter value indicative of the level of signal interference, (e.g., signal-to-noise ratio (SNR), modulation error rate (MER), bit error rate (BER), etc.), and compare the measured parameter value against a respective threshold. The threshold may be a maximum level of signal interference that may be manageable by filtering. As such, beyond the threshold, the signal interference may not be manageable by filtering, and the ports440impacted by the leaked signal, for example, may be disabled. At step980, the controller430may disable the ports440impacted by the leaked signal and perform step951, as described inFIG.9A. 
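The leakage-handling decisions described so far (detect leakage, gauge its impact, disable the port or filter it, and escalate after too many attempts) can be summarized as a short control loop. The sketch below is illustrative only; the helper functions, parameter names, and threshold values are hypothetical, and the split-specific filter selection of step971is described next.

```python
def handle_leakage(port, measure_interference_db, threshold_db, max_attempts,
                   ground_port, select_filter, request_support):
    """Illustrative loop over steps 950/960/970/980/951/932 for one port."""
    for _ in range(max_attempts):                 # cf. step 951 (threshold-2)
        level_db = measure_interference_db(port)  # cf. steps 950/960/970
        if level_db is None:                      # leakage no longer detected
            return "resolved"
        if level_db > threshold_db:               # not manageable by filtering
            ground_port(port)                     # cf. step 980: disable the port
        else:
            select_filter(port)                   # cf. step 971: split-based notch (below)
    request_support(port)                         # cf. step 932: escalate
    return "support requested"

# Toy run: interference stays above threshold, so the port ends up escalated.
print(handle_leakage(1, lambda p: 25.0, threshold_db=20.0, max_attempts=3,
                     ground_port=lambda p: None, select_filter=lambda p: None,
                     request_support=lambda p: None))
```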
At step970, the controller430may determine that the level of signal interference is below the threshold, and perform step971. At step971, the controller430may identify a bandwidth allocated for upstream transmission (e.g., the low-band split (case972), the mid-band split (case974), or high-band split (case976)) for the ports440impacted by the leaked signal. The controller430may also detect a spectrum of MoCA frequencies, determine that one or more ports440may not have MoCA POE filters, and further may determine that devices coupled with one or more ports440(e.g., premises computing devices in two premises) may be forming a MoCA link (case978). The controller430may perform steps973,975,977, and/or979based on different determinations made at step971. For example, the controller430may determine that the bandwidth allocated for upstream transmission on the port440impacted may be 5-42 MHz (case972of the low-band split) (e.g., a device associated with a premises connected to the port440may have the low-band split configuration) and at step973, may switch a first filter460into the signal path450associated with that port440to block a first range of frequencies (e.g., 42-258 MHz) accordingly. The controller430may determine that the bandwidth allocated for upstream transmission on the port440impacted may be 5-85 MHz (case974of the mid-band split) and at step975, may switch a second filter460into the signal path450associated with that port440to block a second range of frequencies (e.g., 85-258 MHz) accordingly. The controller430may determine that the bandwidth allocated for upstream transmission on the port440impacted may be 5-204 MHz (case976of the high-band split) and thus that computing device(s) at a premises may be configured with a high-band split. In case976, the port440may be impacted by LTE leakage. At step977, the controller430may disable the port440(e.g., switch the port440to a ground terminal470). Further, the controller430may determine that premises computing devices in the two premises may be forming a MoCA link (case978of MoCA link), and at step979, may switch third filters460into the signal paths450associated with those ports440to block a third range of frequencies (e.g., 1002-1657 MHz) accordingly. At the end, the signal leakage processing goes back to step951, as described inFIG.9A. Although examples are described above, features and/or steps of those examples may be combined, divided, omitted, rearranged, revised, and/or augmented in any desired manner. Various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be part of this description, though not expressly stated herein, and are intended to be within the spirit and scope of the disclosure. Accordingly, the foregoing description is by way of example only, and is not limiting. | 54,751 |
11863221 | DETAILED DESCRIPTION The present invention relates to signal processor and, more specifically, to a cognitive signal processor (CSP) formed as an efficient hardware implementation with a wide instantaneous bandwidth neuromorphic adaptive core (NeurACore). The following description is presented to enable one of ordinary skill in the art to make and use the invention and to incorporate it in the context of particular applications. Various modifications, as well as a variety of uses in different applications, will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to a wide range of aspects. Thus, the present invention is not intended to be limited to the aspects presented, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. In the following detailed description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without necessarily being limited to these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention. The reader's attention is directed to all papers and documents which are filed concurrently with this specification and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference. All the features disclosed in this specification (including any accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features. Furthermore, any element in a claim that does not explicitly state “means for” performing a specified function, or “step for” performing a specific function, is not to be interpreted as a “means” or “step” clause as specified in 35 U.S.C. Section 112(f). In particular, the use of “step of” or “act of” in the claims herein is not intended to invoke the provisions of 35 U.S.C. 112(f). Before describing the invention in detail, first a description of the various principal aspects of the present invention is provided. Subsequently, an introduction provides the reader with a general understanding of the present invention. Finally, specific details of various embodiment of the present invention are provided to give an understanding of the specific aspects. (2) Principal Aspects Various embodiments of the invention include at least three “principal” aspects. The first is a system embodied as a cognitive signal processor (CSP) formed as an efficient hardware implementation with a wide instantaneous bandwidth neuromorphic adaptive core (NeurACore). In one aspect, the CSP performs signal denoising and is typically in the form of a computer system operating software or in the form of a “hard-coded” instruction set. This system may be incorporated into a wide variety of devices that provide different functionalities. The second principal aspect is a method, typically in the form of software, operated using a data processing system (computer). The third principal aspect is a computer program product. 
The computer program product generally represents computer-readable instructions stored on a non-transitory computer-readable medium such as an optical storage device, e.g., a compact disc (CD) or digital versatile disc (DVD), or a magnetic storage device such as a floppy disk or magnetic tape. Other, non-limiting examples of computer-readable media include hard disks, read-only memory (ROM), and flash-type memories. These aspects will be described in more detail below. A block diagram depicting an example of a system (i.e., computer system100) of the present invention is provided inFIG.1. The computer system100is configured to perform calculations, processes, operations, and/or functions associated with a program or algorithm. In one aspect, certain processes and steps discussed herein are realized as a series of instructions (e.g., software program) that reside within computer readable memory units and are executed by one or more processors of the computer system100. When executed, the instructions cause the computer system100to perform specific actions and exhibit specific behavior, such as described herein. In various aspects, the computer system100can be embodied in any device(s) that operates to perform the functions as described herein as applicable to the particular application, such as a desktop computer, a mobile or smart phone, a tablet computer, a computer embodied in a mobile platform, or any other device or devices that can individually and/or collectively execute the instructions to perform the related operations/processes. The computer system100may include an address/data bus102that is configured to communicate information. Additionally, one or more data processing units, such as a processor104(or processors), are coupled with the address/data bus102. The processor104is configured to process information and instructions. In an aspect, the processor104is a microprocessor. Alternatively, the processor104may be a different type of processor such as a parallel processor, application-specific integrated circuit (ASIC), programmable logic array (PLA), complex programmable logic device (CPLD), or a field programmable gate array (FPGA) or any other processing component operable for performing the relevant operations. The computer system100is configured to utilize one or more data storage units. The computer system100may include a volatile memory unit106(e.g., random access memory (“RAM”), static RAM, dynamic RAM, etc.) coupled with the address/data bus102, wherein a volatile memory unit106is configured to store information and instructions for the processor104. The computer system100further may include a non-volatile memory unit108(e.g., read-only memory (“ROM”), programmable ROM (“PROM”), erasable programmable ROM (“EPROM”), electrically erasable programmable ROM “EEPROM”), flash memory, etc.) coupled with the address/data bus102, wherein the non-volatile memory unit108is configured to store static information and instructions for the processor104. Alternatively, the computer system100may execute instructions retrieved from an online data storage unit such as in “Cloud” computing. In an aspect, the computer system100also may include one or more interfaces, such as an interface110, coupled with the address/data bus102. The one or more interfaces are configured to enable the computer system100to interface with other electronic devices and computer systems. 
The communication interfaces implemented by the one or more interfaces may include wireline (e.g., serial cables, modems, network adaptors, etc.) and/or wireless (e.g., wireless modems, wireless network adaptors, etc.) communication technology. Further, one or more processors104(or devices, such as autonomous platforms or signal processors) can be associated with one or more associated memories, where each associated memory is a non-transitory computer-readable medium. Each associated memory can be associated with a single processor104(or device), or a network of interacting processors104(or devices). In one aspect, the computer system100may include an input device112coupled with the address/data bus102, wherein the input device112is configured to communicate information and command selections to the processor104. In accordance with one aspect, the input device112is an alphanumeric input device, such as a keyboard, that may include alphanumeric and/or function keys. Alternatively, the input device112may be an input device other than an alphanumeric input device. In an aspect, the computer system100may include a cursor control device114coupled with the address/data bus102, wherein the cursor control device114is configured to communicate user input information and/or command selections to the processor104. In an aspect, the cursor control device114is implemented using a device such as a mouse, a track-ball, a track-pad, an optical tracking device, or a touch screen. The foregoing notwithstanding, in an aspect, the cursor control device114is directed and/or activated via input from the input device112, such as in response to the use of special keys and key sequence commands associated with the input device112. In an alternative aspect, the cursor control device114is configured to be directed or guided by voice commands. In an aspect, the computer system100further may include one or more optional computer usable data storage devices, such as a storage device116, coupled with the address/data bus102. The storage device116is configured to store information and/or computer executable instructions. In one aspect, the storage device116is a storage device such as a magnetic or optical disk drive (e.g., hard disk drive (“HDD”), floppy diskette, compact disk read only memory (“CD-ROM”), digital versatile disk (“DVD”)). Pursuant to one aspect, a display device118is coupled with the address/data bus102, wherein the display device118is configured to display video and/or graphics. In an aspect, the display device118may include a cathode ray tube (“CRT”), liquid crystal display (“LCD”), field emission display (“FED”), plasma display, or any other display device suitable for displaying video and/or graphic images and alphanumeric characters recognizable to a user. The computer system100presented herein is an example computing environment in accordance with an aspect. However, the non-limiting example of the computer system100is not strictly limited to being a computer system. For example, an aspect provides that the computer system100represents a type of data processing analysis that may be used in accordance with various aspects described herein. Moreover, other computing systems may also be implemented. Indeed, the spirit and scope of the present technology is not limited to any single data processing environment. 
Thus, in an aspect, one or more operations of various aspects of the present technology are controlled or implemented using computer-executable instructions, such as program modules, being executed by a computer. In one implementation, such program modules include routines, programs, objects, components and/or data structures that are configured to perform particular tasks or implement particular abstract data types. In addition, an aspect provides that one or more aspects of the present technology are implemented by utilizing one or more distributed computing environments, such as where tasks are performed by remote processing devices that are linked through a communications network, or such as where various program modules are located in both local and remote computer-storage media including memory-storage devices. An illustrative diagram of a computer program product (i.e., storage device) embodying the present invention is depicted inFIG.2. The computer program product is depicted as floppy disk200or an optical disk202such as a CD or DVD. However, as mentioned previously, the computer program product generally represents computer-readable instructions stored on any compatible non-transitory computer-readable medium. The term “instructions” as used with respect to this invention generally indicates a set of operations to be performed on a computer, and may represent pieces of a whole program or individual, separable, software modules. Non-limiting examples of “instruction” include computer program code (source or object code) and “hard-coded” electronics (i.e. computer operations coded into a computer chip). The “instruction” is stored on any non-transitory computer-readable medium, such as in the memory of a computer or on a floppy disk, a CD-ROM, and a flash drive. In either event, the instructions are encoded on a non-transitory computer-readable medium. (2) Introduction As noted above, the present disclosure is directed to a cognitive signal processor (CSP) formed as an efficient hardware implementation with a wide instantaneous bandwidth neuromorphic adaptive core (NeurACore). A unique aspect is the NeurACore architecture that is capable of processing complex I/Q (in-phase and quadrature) signals and online learning throughout the core as well as the output layers. The CSP includes a time-evolving neural network comprising the NeurACore, which allows rapid adaptation of the CSP to new circumstances. A very simple example of such adaptation is the continuous shifting of pole frequencies along with adjusting their Quality (Q)-factors to sense and track composite signals, in order to optimally de-noise and separate them. The CSP also includes a learning output layer that computes complex-valued weights for the complex-valued neural state vector. This enables output layer learning for complex-valued I/Q (in-phase and quadrature) signals that are common in communication and radar signal processing. Based on the above, the CSP and NeurACore of the present disclosure is significantly different than existing technologies, with multiple innovations such as: (1) the ability to embed physical model equations in the cores and/or output layer, (2) the neuromorphic adaptive cores (as referenced above), and (3) the ability to extend the learning output layer to a true complex-valued formulation that maintains the phase relationship between the I- and Q-channels instead of a simplistic system that de-noises the I- and Q-channels independently. 
As can be appreciated by those skilled in the art, the NeurACore CSP architecture enables real-time complex/real signal denoising/detection algorithms capable of ultra-wide bandwidth operation with signal processing units that are ultra-low Cost, Size, Weight, and Power (C-SWaP). The NeurACore denoiser can detect and de-noise complex (I/Q) signals, including Low Probability of Intercept/Detection (LPI/LPD) and frequency hopping signals, improving signal-to-noise (SNR) performance by over 20 dB for a variety of different waveforms. The application domain of the NeurACore CSP includes, but is not limited to, radar, communication, acoustic, audio, video and optical waveforms. The NeurACore I/Q cognitive signal processor as described in this disclosure can also be used to improve SNR of the various radar units included in an autonomous driving system. The very wide bandwidth, fast response to input changes, and low C-SWaP attributes of our CSP are enabled by a combination of (1) very rapid online learning and (2) the fast adaptability of the core weights, which enables tracking and adapting to rapid changes using a reduced complexity core. The NeurACore design can also be used as a basis to develop novel controllers, an example of which is provided in further detail below.
The present disclosure also provides an implementation of the NeurACore, used for the denoising of complex (I/Q) signals on a wide instantaneous bandwidth (IBW). Thus, in one aspect, the system itself is a wideband denoiser which would greatly improve SWaP over comparable systems with the same performance (e.g., a conventional channelizer). The system performs real time denoising with incorporated delay tolerant functions. The architecture consists of an adaptive core capable of embedding system physics/specifics with learning throughout the core and an online learning layer for rapid adaptation to novel situations. The two in conjunction allow the system to perform denoising on incoming signals without prior knowledge about the signal type. A purpose of our invention is a system for real-time complex/real signal denoising. The denoiser will provide detection and denoising of complex (I/Q) signals, including low probability of intercept (LPI)/low probability of detection (LPD) and frequency hopping signals, and improve the signal-to-noise ratio (SNR) performance by >20 dB for a variety of different waveforms. Some advantages of this implementation are its low latency and its utilization of system physics. Comparable systems, like a conventional channelizer, would operate over a smaller frequency spectrum and likely require larger latency. Current machine learning approaches, meanwhile, would require large quantities of online/offline training data, would not utilize our physics approach, and would incur larger latency and Size-Weight-and-Power (SWaP).
As can be appreciated by those skilled in the art, many commercial and military signal processing platforms require small size, ultra-wide bandwidth operation, ultra-low C-SWaP signal processing units, and artificial intelligence enhanced with real-time signal processing capability. For example, the system of the present disclosure can be implemented in signal processing platforms that process radar, communication, acoustic, audio, video and optical waveforms, etc. Specific details are provided below.
(3) Specific Details of Various Embodiments
As referenced above, the present disclosure is directed to a cognitive signal processor (CSP) having a Neuromorphic Adaptive Core (NeurACore) and an implementation of the NeurACore used for the denoising of complex (I/Q) signals on a wide instantaneous bandwidth (IBW). For clarity, the NeurACore and subsequent implementation used on IBW are described in turn below.
(3.1) Neuromorphic Adaptive Core (NeurACore)
The NeurACore CSP architecture comprises three primary functional modules and two optional ones that can be brought online independently of each other. The main architecture, along with a list of key innovations, is shown inFIG.3. The first primary block, referred to as the neuromorphic adaptive core (NeurACore)300, operates as a local learning layer block and receives as input a mixture of real and I/Q signals that it maps onto a neuromorphic core neural network with weights that are by default fixed. The local learning layers enable real-time optimization of a "feature extraction" process. The NeurACore300can be adapted in real time using various parameters including, but not limited to, the neural state vector of the cores and optional time-evolving embedded physical models. The second primary module, called "global learning layer"304, is a short-time online learning engine that adapts the complex-valued output weights (C) of the reservoir states to predict the most likely next value of the input signal. This module304uses layers to effectively learn system functions. The third primary module, referred to as the "neural combiner"306, then combines a set of delayed neural state vectors with the weights of the global learning layer module304to compute the output signal. The global learning layer module304can optionally embed308physical models into the NeurACore (e.g., such as a physics enhanced controller). Further details regarding these components are provided below.
(3.1.1) Concept
A neuromorphic core with fixed weights is a special type of Recurrent Neural Network (RNN) that can be represented in state-space form as follows:

$$\dot{X}(t) = A\,X(t) + B\,u(t)$$
$$y(t) = C(t)^{T} X(t) + D(t)\,u(t),$$

where A is the connectivity matrix with fixed weights, B is the vector mapping the input u(t) to the core, X(t) is the neural state vector, C(t) is the set of tunable output layer weights that map a time-delayed set of states to the output y(t), and D(t) is the seldom used direct mapping from input to output. It should be noted that, in one aspect, u(t) is an unknown RF signal that is being decomposed into its constituent signals. By adapting the connection weights of the core in real time, it is extended into a neuromorphic adaptive core (NeurACore)300and the CSP takes on the new generic form

$$\dot{X}(t) = A(X(t), u(t), \ldots)\,X(t) + B\,u(t)$$
$$y(t) = C(t)^{T} X(t) + D(t)\,u(t),$$

where the expression A(X(t), u(t), . . . ) explicitly indicates the time-dependence of the cores on the neural state vector, the input, and other possible parameters of the A matrix, such as an embedded physics model (that can be added) or any other suitable model.
(3.1.2) Complex-Valued Formulation
The NeurACore CSP is designed to handle both real and complex-valued In phase and Quadrature phase (I/Q) signals; therefore, any quantities in the matrices and vectors of the CSP equations can be complex-valued.
From here on, the real and imaginary parts of all quantities are explicitly written to (1) show how the phase-preserving relationship between the I- (real part) and Q- (imaginary part) signals works, and (2) to make the formulation compatible with embedded hardware that typically cannot process true complex-valued variables. The first equation of the CSP takes the form:

$$\begin{bmatrix}\dot{X}_I \\ \dot{X}_Q\end{bmatrix} = \bar{\bar{A}}\begin{bmatrix}X_I \\ X_Q\end{bmatrix} + \bar{B}_I\,u_I + \bar{B}_Q\,u_Q,$$

where the I and Q subscripts refer to the I- and Q-channels. For example, u_I is the I-channel of the input and u_Q is the Q-channel of the input.
(3.1.3) NeurACore Instantiation for Communication and Radar Systems
While the NeurACore architecture is very general, of particular practical interest is its instantiation for processing radio frequency (RF) and acoustic signals for communication and radar/sonar. For such applications, the nodes of the core are designed to be resonators/oscillators with tunable frequency ω and tunable quality factor Q. For this instantiation of NeurACore, the A matrix comprises the following 2×2 blocks and B consists of 2×1 blocks:

$$\bar{\bar{A}}_{2\times 2}=\begin{bmatrix} -\dfrac{|\omega_0|}{Q_0} & -\omega_0 \\[4pt] \omega_0 & -\dfrac{|\omega_0|}{Q_0}\end{bmatrix} \qquad \bar{B}_{I,\,2\times 1}=\dfrac{|\omega_0|}{Q_0}\begin{bmatrix}1\\0\end{bmatrix} \qquad \bar{B}_{Q,\,2\times 1}=\dfrac{|\omega_0|}{Q_0}\begin{bmatrix}0\\1\end{bmatrix}$$

These then constitute the complex conjugate pole pairs, as follows:

$$p_{01}=-\dfrac{|\omega_0|}{Q_0}+i\,\omega_0 \quad\text{and}\quad p_{02}=-\dfrac{|\omega_0|}{Q_0}-i\,\omega_0.$$

For this instantiation, the matrices and vectors of the first equation of NeurACore have the following general form (one 2×2 block per pole pair along the diagonal, with zeros elsewhere):

$$\bar{\bar{A}}=\begin{bmatrix} \begin{bmatrix} -\frac{|\omega_1|}{Q_1} & -\omega_1 \\ \omega_1 & -\frac{|\omega_1|}{Q_1}\end{bmatrix} & & & \\ & \begin{bmatrix} -\frac{|\omega_2|}{Q_2} & -\omega_2 \\ \omega_2 & -\frac{|\omega_2|}{Q_2}\end{bmatrix} & & \\ & & \ddots & \\ & & & \begin{bmatrix} -\frac{|\omega_N|}{Q_N} & -\omega_N \\ \omega_N & -\frac{|\omega_N|}{Q_N}\end{bmatrix}\end{bmatrix}$$

$$\bar{B}_I=\begin{bmatrix}\tfrac{|\omega_1|}{Q_1} & 0 & \tfrac{|\omega_2|}{Q_2} & 0 & \cdots & \tfrac{|\omega_N|}{Q_N} & 0\end{bmatrix}^{T} \qquad \bar{B}_Q=\begin{bmatrix}0 & \tfrac{|\omega_1|}{Q_1} & 0 & \tfrac{|\omega_2|}{Q_2} & \cdots & 0 & \tfrac{|\omega_N|}{Q_N}\end{bmatrix}^{T}$$

where N is the total number of pole pairs.
(3.1.4) NeurACore Adaptation Module
There are many possible optional adaptation strategies for the core weights of NeurACore. Provided below is an example of adaptation for the communication/radar instantiation of NeurACore described in the previous section. The neural state space vector X captures in real time the spectrum of the input signal mixture, which can be used to adapt the frequencies of the poles to detect and track optimally the various signals in the input.FIG.4is a graph depicting how the poles cluster around two signals revealed by the neural state space vector. More generally, both the frequency ω and tunable quality factor Q of each pole can be adapted based on the state space spectrum and other variables, as depicted inFIG.5. Thus, in one aspect, the adaptation module302allows a user to adapt the frequency ω and tunable quality factor Q of each pole.
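For concreteness, the 2×2 pole blocks above translate directly into a small construction routine. The following sketch is illustrative only (the function names are not from the description above); it builds the block-diagonal A matrix and the B_I, B_Q vectors for a given set of pole frequencies ω and quality factors Q, with the states of each pole interleaved as (I, Q) pairs.

```python
import numpy as np

def pole_block(omega: float, q: float) -> np.ndarray:
    """2x2 block for one resonator node; its eigenvalues are -|omega|/Q +/- i*omega."""
    d = abs(omega) / q
    return np.array([[-d, -omega],
                     [omega, -d]])

def build_core(omegas, qs):
    """Block-diagonal A (2N x 2N) plus input vectors B_I, B_Q (length 2N)."""
    n = len(omegas)
    A = np.zeros((2 * n, 2 * n))
    B_I = np.zeros(2 * n)
    B_Q = np.zeros(2 * n)
    for k, (w, q) in enumerate(zip(omegas, qs)):
        A[2 * k:2 * k + 2, 2 * k:2 * k + 2] = pole_block(w, q)
        B_I[2 * k] = abs(w) / q      # drives the I component of pole k
        B_Q[2 * k + 1] = abs(w) / q  # drives the Q component of pole k
    return A, B_I, B_Q

# Three poles spread across a normalized band, all with Q = 5.
A, B_I, B_Q = build_core(omegas=[1.0, 2.0, 3.0], qs=[5.0, 5.0, 5.0])
print(np.round(np.linalg.eigvals(A), 3))  # complex-conjugate pairs -|w|/Q +/- i*w
```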
To optimize the likelihood of the predicted input value, a gradient descent approach is used that is cast in differential form:

\dot{C}_I = -\mu_I \nabla_{C_I} E\{C_I, C_Q\}, \qquad \dot{C}_Q = -\mu_Q \nabla_{C_Q} E\{C_I, C_Q\},

where \nabla_{C_I} is the gradient with respect to C_I, and where the weights for the K delayed states have the form

C_I = \begin{bmatrix} C_I^{11}(t) & \cdots & C_I^{(K+1)1}(t) \\ C_I^{12}(t) & \cdots & C_I^{(K+1)2}(t) \\ \vdots & & \vdots \\ C_I^{1N}(t) & \cdots & C_I^{(K+1)N}(t) \end{bmatrix}, \qquad
C_Q = \begin{bmatrix} C_Q^{11}(t) & \cdots & C_Q^{(K+1)1}(t) \\ C_Q^{12}(t) & \cdots & C_Q^{(K+1)2}(t) \\ \vdots & & \vdots \\ C_Q^{1N}(t) & \cdots & C_Q^{(K+1)N}(t) \end{bmatrix},

and

E\{C_I, C_Q\} = \left( u_I(t) - \sum_{\text{rows}} \sum_{\text{columns}} C_I(t-\tau_{pred}) \otimes X_I(t-\tau_{pred}) + \sum_{\text{rows}} \sum_{\text{columns}} C_Q(t-\tau_{pred}) \otimes X_Q(t-\tau_{pred}) \right)^2
+ \left( u_Q(t) - \sum_{\text{rows}} \sum_{\text{columns}} C_I(t-\tau_{pred}) \otimes X_Q(t-\tau_{pred}) - \sum_{\text{rows}} \sum_{\text{columns}} C_Q(t-\tau_{pred}) \otimes X_I(t-\tau_{pred}) \right)^2
+ \lambda_I \sum_{\text{rows}} \sum_{\text{columns}} C_I(t) \otimes C_I(t) + \lambda_Q \sum_{\text{rows}} \sum_{\text{columns}} C_Q(t) \otimes C_Q(t),

where

X_I = \begin{bmatrix} x_I^1(t) & \cdots & x_I^1(t-K\tau) \\ x_I^2(t) & \cdots & x_I^2(t-K\tau) \\ \vdots & & \vdots \\ x_I^N(t) & \cdots & x_I^N(t-K\tau) \end{bmatrix}, \qquad
X_Q = \begin{bmatrix} x_Q^1(t) & \cdots & x_Q^1(t-K\tau) \\ x_Q^2(t) & \cdots & x_Q^2(t-K\tau) \\ \vdots & & \vdots \\ x_Q^N(t) & \cdots & x_Q^N(t-K\tau) \end{bmatrix}.

(3.1.6) Output Update (by the Neural Combiner 306)

The denoised output is determined by combining the K delayed states, weighted by the C matrix elements obtained from the online Global Learning Layer module 304 described in the previous section, using the following formulas:

y_I(t) = \sum_{\text{rows}} \sum_{\text{columns}} C_I \otimes X_I(t) - \sum_{\text{rows}} \sum_{\text{columns}} C_Q \otimes X_Q(t),
y_Q(t) = \sum_{\text{rows}} \sum_{\text{columns}} C_I \otimes X_Q(t) + \sum_{\text{rows}} \sum_{\text{columns}} C_Q \otimes X_I(t).

Thus, the neural combiner 306 combines the set of delayed neural state vectors with the weights of the Global Learning Layer module 304 to compute the output signal. The output signal is the resulting denoised complex (I/Q) signal (i.e., separate in-phase and quadrature signals).

(3.2) Hardware Implementation of a Wide Instantaneous Bandwidth NeurACore

As noted above, the present disclosure also provides a hardware implementation of a Wide Instantaneous Bandwidth (WIB) NeurACore. The WIB implementation of the invention can be summarized by three sets of equations as provided below: the adaptive core equation, the output layer update equations, and the weights update equations. Together these make up the system whose inputs are u_I and u_Q and whose denoised outputs are y_I and y_Q. A full overview of the system can be seen in FIG. 3.

Adaptive Core Equation:

\begin{bmatrix} \dot{x}_I \\ \dot{x}_Q \end{bmatrix} = A \begin{bmatrix} x_I \\ x_Q \end{bmatrix} + B_I u_I + B_Q u_Q

Output Layer Update Equations:

y_I(t) = \sum_{\text{rows}} \sum_{\text{columns}} C_I \otimes X_I(t) - \sum_{\text{rows}} \sum_{\text{columns}} C_Q \otimes X_Q(t)
y_Q(t) = \sum_{\text{rows}} \sum_{\text{columns}} C_I \otimes X_Q(t) + \sum_{\text{rows}} \sum_{\text{columns}} C_Q \otimes X_I(t)

Weights Update Equations, part 1, where C_{I/Q} represent the "weights":

\dot{C}_I = -\mu_{forget} C_I(t) + \mu_{learn}\,\varepsilon_I(t) X_I(t) + \mu_{learn}\,\varepsilon_Q(t) X_Q(t)
\dot{C}_Q = -\mu_{forget} C_Q(t) - \mu_{learn}\,\varepsilon_I(t) X_Q(t) + \mu_{learn}\,\varepsilon_Q(t) X_I(t)

Weights Update Equations, part 2, where ε_{I/Q} represent the error for I/Q:

\varepsilon_I(t) = u_I(t) - y_I(t-\tau_{pred})
\varepsilon_Q(t) = u_Q(t) - y_Q(t-\tau_{pred})

(3.2.1) Neuromorphic Adaptive Core

As shown in FIG. 3 and as referenced above, the Neuromorphic Adaptive Core 300 includes three matrices A, B_I, and B_Q (shown below), where A contains both I/Q information. The matrix values are first trained offline with specific connections made between set locations, but during operation the matrices will adapt to the incoming frequency. Typically, in a deep neural network (DNN) there would be random connectivity weights between matrices. However, in the present system, the A matrix is sparsely filled to reduce overall computations for hardware efficiency, as follows:

A_{S[N_s \times N_s]} = \begin{bmatrix}
\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{0} & \begin{bmatrix} A_{33} & A_{34} \\ A_{43} & A_{44} \end{bmatrix} & \cdots & \mathbf{0} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{0} & \mathbf{0} & \cdots & \begin{bmatrix} A_{(N_s-1),(N_s-1)} & A_{(N_s-1),N_s} \\ A_{N_s,(N_s-1)} & A_{N_s,N_s} \end{bmatrix}
\end{bmatrix}

The NeurACore matrix dimensions are determined by the system size and hardware latency.
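The Output Layer Update and the simplified Weights Update Equations summarized in Section (3.2) can be illustrated with a minimal discrete-time sketch. The snippet below is an assumption-laden toy, not the disclosed implementation: the array sizes, step size, learning and forgetting rates, and sample values are invented, and the continuous-time update is discretized with a simple Euler step.

```python
# Minimal discrete-time sketch of the Output Layer Update and the simplified
# Weights Update Equations (parts 1 and 2) from Section (3.2). The array sizes,
# step size, learning/forgetting rates, and sample values are assumptions.
import numpy as np

N, K = 100, 7                       # number of states and embedding length (assumed)
dt, mu_learn, mu_forget = 1.0, 1e-2, 1e-4

C_I = np.zeros((N, K + 1)); C_Q = np.zeros((N, K + 1))   # output-layer weights
X_I = np.zeros((N, K + 1)); X_Q = np.zeros((N, K + 1))   # current + K delayed states

def combiner(C_I, C_Q, X_I, X_Q):
    """Output Layer Update: elementwise products summed over rows and columns."""
    y_I = np.sum(C_I * X_I) - np.sum(C_Q * X_Q)
    y_Q = np.sum(C_I * X_Q) + np.sum(C_Q * X_I)
    return y_I, y_Q

def update_weights(C_I, C_Q, X_I, X_Q, eps_I, eps_Q):
    """Euler-discretized Weights Update, part 1."""
    C_I = C_I + dt * (-mu_forget * C_I + mu_learn * (eps_I * X_I + eps_Q * X_Q))
    C_Q = C_Q + dt * (-mu_forget * C_Q + mu_learn * (-eps_I * X_Q + eps_Q * X_I))
    return C_I, C_Q

# One step of the online loop with toy values:
u_I, u_Q = 0.3, -0.1                                  # current input sample
y_I_prev, y_Q_prev = combiner(C_I, C_Q, X_I, X_Q)     # prediction made tau_pred earlier (toy)
eps_I, eps_Q = u_I - y_I_prev, u_Q - y_Q_prev         # Weights Update, part 2
C_I, C_Q = update_weights(C_I, C_Q, X_I, X_Q, eps_I, eps_Q)
```

The same multiply-and-sum and forget/learn structure appears, with delay bookkeeping added, in the hardware-oriented equations discussed next.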
The Adaptive Core Equation (provided in Section 3.2) is a simplified version of the Adaptive Core equation; however, to account for hardware delays the equation would change to the Hardware Delayed Version provided below: x_n=A_Sx_[n]-Nτx+B_I,S[uI,[n+Nτx]-NτxuI,[n+Nτx-1]-Nτx…uI,[n+1]-NτxuI,[n]-Nτx]+B_Q,S[uQ,[n+Nτx]-NτxuQ,[n+Nτx-1]-Nτx…uQ,[n+1]-NτxuQ,[n]-Nτx], where Ntxis equal to the number of clock cycles needed to perform calculations to update X. The delays would be determined based on the hardware constraints such as multiplier result latency and addition latency. As the latency would have an effect on the calculation the system would eventually find a balance between delayed versions of the input and the amount of computation needed to perform the Adaptive Core Equation. As an example, let's examine a system with three clock cycle delays per multiplication. The ideal (no delay) multiplication would look like y(n)=c(n)*x(n) where n is the nth clock cycle. Typically values of digital variables are stored in digital registers that are available for readout at the end of each clock cycle. For ideal multiplication it would mean that the multiplication result (y) would be available at the same time when the input values (c and x) would be available to read in for the multiplication unit. It would mean that the multiplication result is generated in zero time since the multiplication output (y) is available at the same time when the inputs (c and x) arrive. In a real hardware there is always a delay to execute an operation, such as multiplication. Assuming 3 clock cycle delays for executing a multiplication, the actual equation that correctly describes this delay is y(n)=c(n−3)*x(n−3). It means that the multiplication result that is available for readout at the end of the nth clock cycle contains the multiplication result of the two input variables c and x with values of three clock cycles earlier. (3.2.2) Output Layer Update The output layer will produce the final denoised I/Q output of the system. As seen in the Output Layer Update Equation provided above, the output is created after an elementwise multiplication between C(weights) and X(states). The size of which is determined by the systems embedding factor (“K” or “Kemb”). This is a value set when designing the full system. Additionally, the system must account for the hardware delays in the system, thereby expanding the equation to what is seen below for the Hardware Delay Tolerant Version of the Output Layer Update Equation, as follows: y˜I,n=yI,n-Nτout=∑i=1(rows)Np∑j=1(columns)K+1{C__I,[n]-Nτout⊗X__I,[n]-Nτout+C__Q,[n]-Nτout⊗X__Q,[n]-Nτout}y˜Q,n=yQ,n-Nτout=∑i=1(rows)Np∑j=1(columns)K+1{C__I,[n]-Nτout⊗X__Q,[n]-Nτout+C__Q,[n]-Nτout⊗XI,[n]-Nτout}, where NS=2NP, Npis a number of poles, NSis a number of states, and Ntoutis the amount of hardware clock cycles needed to compute Y I/Q. (3.2.3) Weights Update The weights are updated through the equations found the Weights Update Equations, parts 1 and 2 above. Part 2 of the Weights Update Equations represents the calculation of the error between the input value and the output value. In the hardware system these will need to be delayed such that they match in time as seen in the Error Calculation equation below: ε˜I,n=εI,(n+Nτp-Nτout-1)=uI,([n+Nτp-Nτout]-1)-y˜I,([n]-1)ε˜Q,n=εQ,(n+Nτp-Nτout-1)=uQ,([n+Nτp-Nτout]-1)-y˜Q,([n]-1). As shown in the Error Calculation equation, error is calculated with balanced input/output taking into account hardware delays to calculate each error. 
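The hardware-delay bookkeeping above ultimately traces back to the per-operation latency example given earlier, y(n) = c(n-3)*x(n-3). A small delay-line model makes that behavior explicit in software; the latency value and test vectors below are arbitrary illustrations, not hardware parameters from the disclosure.

```python
# Toy behavioral model (illustrative only) of a multiplier with a 3-clock-cycle
# result latency: the product read out at cycle n corresponds to the operands
# presented at cycle n-3, i.e. y[n] = c[n-3] * x[n-3].
from collections import deque

LATENCY = 3                                    # clock cycles per multiplication (example)
pipe = deque([0.0] * LATENCY, maxlen=LATENCY)  # models the multiplier's register stages

def clocked_multiply(c_n, x_n):
    """At cycle n, return c[n-LATENCY] * x[n-LATENCY] while accepting this cycle's inputs."""
    result = pipe[0]          # product launched LATENCY cycles ago is now ready
    pipe.append(c_n * x_n)    # launch this cycle's product into the pipeline
    return result

c = [1, 2, 3, 4, 5, 6]
x = [1, 1, 1, 1, 1, 1]
outputs = [clocked_multiply(cn, xn) for cn, xn in zip(c, x)]   # [0, 0, 0, 1, 2, 3]
```

The same reasoning, applied to every multiply and add in the state, output, and weight updates, is what produces the N_tau terms in the delay-tolerant equations.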
The Weights Update Equation part 1 shows the full weight update, including the error calculation. The forgetting rate and learning rate are constants set at the beginning of the system design. Using the previous version of the weights with a combination of the state values, learning rate, forgetting rate, and error, the system can calculate the next set of weights such that the system is always learning. To map the equation to hardware, the process expands on the calculations from the Weights Update Equation part 1, to those found in the Delay Tolerant Expansion equations below. The system can be described as implementing an online learning algorithm by using these methods described in the weight update section. Delay Tolerant Expansion of weight update equation for CI: C__I,n=(1-μforget)NτcC__I,([n]-Nτc)+(1-μforget)(Nτc-1)μlearnε˜I,([n-NτC+1]-NτC)X__I,([n-Nτout-NτC]-NτC)+(1-μforget)(Nτc-1)μlearnε˜Q,([n-NτC+1]-NτC)X__Q,([n-Nτout-NτC]-NτC)+(1-μforget)(Nτc-2)μlearnε˜I,([n-NτC+2]-NτC)X__I,([n-Nτout-NτC+1]-NτC)+(1-μforget)(Nτc-2)μlearnε˜Q,([n-NτC+2]-NτC)X__Q,([n-Nτout-NτC+1]-NτC)+…+(1-μforget)μlearnε~I,([n-1]-NτC)X__I,([n-Nτout-2]-NτC)+(1-μforget)μlearnε~Q,([n-1]-NτC)X__Q,([n-Nτout-2]-NτC)+μlearnε~I,([n]-NτC)X__I,([n-Nτout-1]-NτC)+μlearnε~Q,([n]-NτC)X__Q,([n-Nτout-1]-NτC) Delay Tolerant Expansion of weigh update equation for CQ: C__Q,n=(1-μforget)NτcC__Q,([n]-Nτc)+-(1-μforget)(Nτc-1)μlearnε˜I,([n-NτC+1]-NτC)X__Q,([n-Nτout-NτC]-NτC)+(1-μforget)(Nτc-1)μlearnε˜Q,([n-NτC+1]-NτC)X__I,([n-Nτout-NτC]-NτC)-(1-μforget)(Nτc-2)μlearnε˜I,([n-NτC+2]-NτC)X__I,([n-Nτout-NτC+1]-NτC)+(1-μforget)(Nτc-2)μlearnε˜Q,([n-NτC+2]-NτC)X__I,([n-Nτout-NτC+1]-NτC)+…-(1-μforget)μlearnε~I,([n-1]-NτC)X__Q,([n-Nτout-2]-NτC)+(1-μforget)μlearnε~Q,([n-1]-NτC)X__I,([n-Nτout-2]-NτC)-μlearnε~I,([n]-NτC)X__Q,([n-Nτout-1]-NτC)+μlearnε~Q,([n]-NτC)X__I,([n-Nτout-1]-NτC) The system described in this disclosure has been simulated using Matlab. As an example, a fast frequency hopping simulation was designed to show the denoising capability over a wide range of unknown frequencies. The same type of input would be challenging for a common channelizer as the latency through the system would be tough to keep up with the frequency hopping signal. The hardware delays were taken into account for the simulation with an architecture that contains the following parameters: 50 poles, 100 states, 5 simulated clock delays to calculate state update, 9 simulated clock cycle delays to calculate output layer, 7 simulated clock cycle delays to calculate weight update, and an embedding length of 7.FIG.6depicts the FFT of the input and output frequencies, showing that the input is quite noisy containing a wide range of signals while the output correctly detects signals found in the system greatly reducing the noise floor. The reduction of noise in the time domain is shown betweenFIGS.7A and7B. The best results can be seen inFIGS.8A and8B, which show the reduction of noise throughout the frequency domain as a result of using the denoising algorithm. Thus and as can be appreciated by those skilled in the art, the hardware implementation for signal denoising is a low SWAP and efficient system for wide instantaneous bandwidth signal denoising. (3.3) Control of a Device As shown inFIG.9, the WIB NeurACore900in its hardware implementation has many applications. In one aspect, the system with the NeurACore900can be used for signal denoising to denoise noisy input signals901. 
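Before turning to device control, the denoising flow described above (adaptive core, neural combiner, and simplified weight update) can be exercised end to end in a toy software loop. Everything in the sketch below is an assumption chosen only to make the loop run: the pole placements, rates, sizes, and test waveform are invented, only the I path is exercised, and the hardware pipeline delays handled by the delay-tolerant expansions are ignored.

```python
# Toy end-to-end NeurACore-style denoising loop (illustrative only; all parameters
# and the test waveform are assumptions, and hardware pipeline delays are ignored).
import numpy as np

fs = 1e5                              # sample rate (assumed)
dt = 1.0 / fs
n_poles, K = 8, 4                     # small core and embedding length for the toy
omegas = 2 * np.pi * np.linspace(200.0, 2000.0, n_poles)
damp = np.abs(omegas) / 10.0          # |omega|/Q with Q = 10 for every pole (assumed)

# Block-diagonal A and the B_I vector of the resonator instantiation
A = np.zeros((2 * n_poles, 2 * n_poles))
B_I = np.zeros(2 * n_poles)
for n, (w, d) in enumerate(zip(omegas, damp)):
    A[2 * n:2 * n + 2, 2 * n:2 * n + 2] = [[-d, -w], [w, -d]]
    B_I[2 * n] = d

mu_learn, mu_forget = 5e-3, 1e-5
X = np.zeros(2 * n_poles)
X_hist = np.zeros((2 * n_poles, K + 1))    # current + K one-sample-delayed state vectors
C = np.zeros_like(X_hist)

t = np.arange(5000) * dt
rng = np.random.default_rng(1)
u = np.cos(2 * np.pi * 700.0 * t) + 0.3 * rng.standard_normal(t.size)   # tone + noise
y = np.zeros_like(u)

for n, u_n in enumerate(u):
    X = X + dt * (A @ X + B_I * u_n)       # adaptive core update (real input, I path only)
    X_hist = np.roll(X_hist, 1, axis=1)
    X_hist[:, 0] = X
    y[n] = np.sum(C * X_hist)              # neural combiner output (denoised estimate)
    eps = u_n - y[n]                       # prediction error (one-sample tau_pred)
    C += -mu_forget * C + mu_learn * eps * X_hist   # simplified weight update
```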
In some aspects, the NeurACore900can be used to control a device902based on the signal denoising (e.g., a mobile device display, a virtual reality display, an augmented reality display, a computer monitor, a motor, an autonomous vehicle, a machine, a drone, a camera, etc.). In some embodiments, the device902may be controlled to cause the device902to move or otherwise initiate a physical action based on the denoised signal. In some embodiments, a drone or other autonomous vehicle may be controlled to move to an area where an object is determined to be based on the imagery. In yet some other embodiments, a camera may be controlled to orient towards the identified object. In other words, actuators or motors are activated to cause the camera (or sensor) to move or zoom in on the location where the object is localized. In yet another aspect, if a system is seeking a particular object and if the object is not determined to be within the field-of-view of the camera, the camera can be caused to rotate or turn to view other areas within a scene until the sought after object is detected. In addition, in a non-limiting example of an autonomous vehicle having multiple sensors, such as cameras, which might include noisy signals that need denoising. The system can denoise the signal and then, based on the signal, cause the autonomous vehicle to perform a vehicle operation. For instance, if two vehicle sensors detect the same object, object detection and classification accuracy is increased and the system described herein can cause a precise vehicle maneuver for collision avoidance by controlling a vehicle component. For example, if the object is a stop sign, the system may denoise a noisy input signal to identify the stop sign and then may cause the autonomous vehicle to apply a functional response, such as a braking operation, to stop the vehicle. Other appropriate responses may include one or more of a steering operation, a throttle operation to increase speed or to decrease speed, or a decision to maintain course and speed without change. The responses may be appropriate for avoiding a collision, improving travel speed, or improving efficiency. Non-limiting examples of devices that can be controlled via the NeurACore include a vehicle or a vehicle component, such as a brake, a steering mechanism, suspension, or safety device (e.g., airbags, seatbelt tensioners, etc.). Further, the vehicle could be an unmanned aerial vehicle (UAV), an autonomous ground vehicle, or a human operated vehicle controlled either by a driver or by a remote operator. As can be appreciated by one skilled in the art, control of other device types is also possible. Finally, while this invention has been described in terms of several embodiments, one of ordinary skill in the art will readily recognize that the invention may have other applications in other environments. It should be noted that many embodiments and implementations are possible. Further, the following claims are in no way intended to limit the scope of the present invention to the specific embodiments described above. In addition, any recitation of “means for” is intended to evoke a means-plus-function reading of an element and a claim, whereas, any elements that do not specifically use the recitation “means for”, are not intended to be read as means-plus-function elements, even if the claim otherwise includes the word “means”. 
Further, while particular method steps have been recited in a particular order, the method steps may occur in any desired order and fall within the scope of the present invention.
11863222 | DESCRIPTION OF EXAMPLE EMBODIMENTS Overview Briefly, a receiver is provided that includes a plurality of sub-rate receiver lanes each of which is configured to receive an analog receive signal from an analog front-end and produce digital sub-rate receiver data. The receiver includes one or more first digital-to-analog converters (DACs) (also referred to herein as “average” DACs) shared across the plurality of sub-rate receiver lanes, and one or more second DACs (also referred to herein as “mismatch cancellation” DACs) for each sub-rate receiver lane of the plurality of sub-rate receiver lanes. The one or more second DACs of a respective sub-rate receiver lane are configured to provide output to be combined with an output of a corresponding one of the one or more first DACs during processing of the analog receive signal in the respective sub-rate receiver lane to account for a sub-rate receiver lane specific offset and mismatch with respect to a corresponding one of the one or more first DACs. A logic circuit is provided that is configured to generate a first DAC control for each of the one or more first DACs and a second DAC control for each second DAC of each of the plurality of sub-rate receiver lanes. Example Embodiments Presented herein are embodiments that allow a sub-rate receiver architecture to overcome lane dependent mismatch with an area efficient technique and optimize the signal-to-noise radio (SNR) for per sub-rate lane and achieve high sensitivity. A mismatch adaptive sub-rate receiver architecture is provided. This sub-rate architecture reduces the area for mismatch adaptation digital-to-analog converters (DACs) by sharing first larger size DACs across receiver lanes while utilizing second smaller per receiver lane DACs for mismatch/offset cancellation. An adaption engine is also disclosed to provide control signals to the larger average DACs and to the smaller mismatch cancellation DACs, simultaneously. The receiver architecture presented herein, with the adaptation engine, enables cancellation of the mismatch impact on inter-symbol interference (ISI) in an area efficient manner. FIGS.1A-1Dis a block diagram of a mismatch adaptive sub-rate receiver architecture, according to an example embodiment. The receiver100may be used for receive processing optical signals received over an optical fiber, such as optical fiber102, that is coupled to a photodiode104. The receiver100includes an analog front-end110coupled to the output of the photodiode104. The analog front-end110may include a transimpedance amplifier (TIA)112and a variable gain amplifier114. While the present disclosure refers to an optical receiver architecture, these techniques are applicable to electrical receiver architectures. In a quarter-rate receiver architecture, there are four sub-rate (quarter-rate) receiver lanes. Thus, the receiver100includes four sub-rate receiver lanes120-1,120-2,120-3, and120-4, also denoted quarter lanes 0-3 or sub-rate domains 000, 090, 180, and 270, respectively. In general, the number of sub-rate receiver lanes depends on the sub-rate degree employed. If the receiver100is configured for a ⅛ sub-rate architecture, then the receiver100would include eight sub-rate receiver lanes. The concepts presented herein are applicable to a receiver that employs any type of sub-rate receiver architecture (one-half, one-quarter, one-eighth, etc.) 
The receiver100also includes a block of first larger digital-to-analog converters (DACs)140, called “average” DACs because they are controlled to account for an averaging across the sub-rate receiver lanes. The receiver100also further includes a deserializer (e.g.,4:64deserializer)150, a tap weight/slicer threshold adaptation engine160and a clock and data recovery (CDR) block170. The block of average DACs140are shared across the sub-rate receiver lanes120-1to120-4, as described further below. The tap weight/slicer threshold adaptation engine160may be embodied by digital logic circuitry (i.e., digital logic gates in field programmable gate array or an application specific integrated circuit) or may be embodied by software instructions running on a microprocessor or microcontroller. As will become apparent from the following, the receiver100also includes a block of second smaller DACs, called mismatch cancellation DACs because they are controlled to account for the offset or mismatch cancellation from the average DACs on a sub-rate receiver lane specific basis. The receiver100involves the sharing of larger size coarse DACs across sub-rate receiver lanes and the use of smaller fine DACs specific to the individual sub-rate receiver lanes to account for small offsets/mismatches with respect to the larger size DACs. The analog front-end110outputs an analog receive signal. Each sub-rate receiver lane120-1to120-4obtains the analog receive signal from the analog front-end110and performs decision feedback equalizing and slicer threshold processing of the analog receive signal in its respective sub-rate domain. To this end, each of the sub-rate receiver lanes120-1to120-4includes a distributed decision feedback equalizer (DFE). For example, here a Pulse Amplitude Modulation-4 (PAM-4) modulation scheme is used, and each sub-rate receiver lane includes a 3-tap DFE to equalize inter symbol interference (ISI). However, it is to be understood that the techniques presented herein are applicable to other types of equalizations, such as that achieved with a Feed-Forward Equalizer (FFE), and may be used with a combination FFE+DFE equalization processing scheme. Furthermore, these techniques are applicable to any sub-rate electrical/optical receiver and any direct-detect modulation scheme such as Non-Return to Zero (NRZ), PAM-4, PAM-6 etc. Each sub-rate receiver lane120-1to120-4includes a transconductance (Gm) amplifier122. Sub-rate receiver lane120-1further includes a 3-tap DFE124-1that comprises a DFE summer node125-1, three mismatch tap cancellation DACs126-1A,126-1B, and126-1C, one for each DFE tap, a block of decision slicers128-1A,128-1B, and128-1C and two delay blocks132-1and134-1. Post-cursor cancellation data bits (DA, DB, and DC) control the ISI cancellation current going to the DFE summer node125-1. The outputs of each of the mismatch tap cancellation DACs126-1A,126-1B, and126-1C are combined with an output of a respective average DAC. That is, the output of mismatch tap cancellation DAC126-1A is combined with the output of average tap weight DAC142-1, the output of mismatch tap cancellation DAC126-1B is combined with the output of average tap weight DAC142-2, and the output of mismatch tap cancellation DAC126-1C is combined with the output of average tap weight DAC142-3. Each decision slicer128-1A,128-1B, and128-1C includes a threshold slicer129-1that receives the output of the DFE summer node125-1. 
The decision slicers128-1A,128-1B, and128-1C perform thresholding with respect to a corresponding one of three different thresholds: low (L), zero (Z), and high (H) in the middle of three eyes, for a PAM-4 modulation scheme, as an example. For example, decision slicer128-1A performs decision slicing with respect to a threshold referred to as DL, decision slicer128-1B performs decision slicing with respect to a threshold referred to as DZ, and decision slicer128-1C performs decision slicing with respect to a threshold referred to as DH. The decision thresholds DL, DZ, and DH, are generically denoted by DX. The delay blocks132-1and134-1form the delay functions—to store previous symbol values as employed for decision feedback equalization, as depicted inFIG.1B. Sub-rate receiver lane120-2includes a 3-tap DFE124-2that comprises a DFE summer node125-2, three mismatch tap cancellation DACs126-2A,126-2B and126-2C, a block of decision slicers128-2A,128-2B, and128-1C and two delay blocks132-2and134-2. Each decision slicer128-2A,128-2B, and128-2C includes a threshold slicer129-2that receives the output of the DFE124-2(at the DFE summer node125-2). The outputs of each of the mismatch tap cancellation DACs126-2A,126-2B, and126-2C are combined with an output of a respective average DAC. The output of mismatch tap cancellation DAC126-2A is combined with the output of average tap weight DAC142-1, the output of mismatch tap cancellation DAC126-2B is combined with the output of average tap weight DAC142-2, and the output of mismatch tap cancellation DAC126-2C is combined with the output of average tap weight DAC142-3. Sub-rate receiver lane120-3further includes a 3-tap DFE124-3that comprises a DFE summer node125-3, mismatch tap cancellation DACs126-3A,126-3B and126-3C, a block of decision slicers128-3A,128-3B, and128-3C and two delay blocks132-3and134-3. Each decision slicer128-3A,128-3B, and128-3C includes a threshold slicer129-3that receives the output of the DFE124-3(at the DFE summer node125-3). The outputs of each of the mismatch tap cancellation DACs126-3A,126-3B, and126-3C are combined with an output of a respective average DAC. The output of mismatch tap cancellation DAC126-3A is combined with the output of average tap weight DAC142-1, the output of mismatch tap cancellation DAC126-3B is combined with the output of average tap weight DAC142-2, and the output of mismatch tap cancellation DAC126-3C is combined with the output of average tap weight DAC142-3. Similarly, sub-rate receiver lane120-4further includes a 3-tap DFE124-4that comprises a DFE summer node125-4, three mismatch tap cancellation DACs126-4A,126-4B and126-4C, a block of decision slicers128-4A,128-4B, and128-4C and two delay blocks132-4and134-4. Each decision slicer128-4A,128-4B, and128-4C includes a threshold slicer129-4that receives the output of the DFE124-4(at the DFE summer node125-4). The outputs of each of the mismatch tap cancellation DACs126-4A,126-4B, and126-4C are combined with an output of a respective average DAC. The output of mismatch tap cancellation DAC126-4A is combined with the output of average tap weight DAC142-1, the output of mismatch tap cancellation DAC126-4B is combined with the output of average tap weight DAC142-2, and the output of mismatch tap cancellation DAC126-4C is combined with the output of average tap weight DAC142-3. 
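For reference, the decision slicing described above amounts to comparing each equalized sample against the three eye-center thresholds DL, DZ, and DH. The sketch below shows that comparison for one PAM-4 sample; the threshold values and the MSB/LSB (Gray) mapping are assumptions for illustration, not values taken from the disclosure.

```python
# Illustrative PAM-4 decision slicing against the L/Z/H eye-center thresholds.
# Threshold values and the MSB/LSB (Gray) mapping are assumptions for this sketch.
def pam4_slice(sample, d_l=-0.5, d_z=0.0, d_h=0.5):
    """Return (msb, lsb) decisions for one equalized sample."""
    if sample < d_l:
        return 0, 0        # lowest level
    elif sample < d_z:
        return 0, 1
    elif sample < d_h:
        return 1, 1
    else:
        return 1, 0        # highest level

samples = [-0.8, -0.2, 0.3, 0.9]
decisions = [pam4_slice(s) for s in samples]   # [(0, 0), (0, 1), (1, 1), (1, 0)]
```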
Each sub-rate receiver lane120-1to120-4further includes a mismatch slicer threshold DAC for each of the thresholds L, Z, and H, but for simplicity only one mismatch slicer threshold DAC is shown in each sub-rate receiver lane inFIGS.1B and1C. Specifically, sub-rate receiver lane120-1includes a mismatch slicer threshold DAC136-1and a slicer threshold summer node138-1(for each of the thresholds L, Z, and H), sub-rate receiver lane120-1includes a mismatch slicer threshold DAC136-2and a slicer threshold summer node138-2(for each of the thresholds L, Z and H), sub-rate receiver lane120-3includes a mismatch slicer threshold DAC136-3and a slicer threshold summer node138-3(for each of the thresholds L, Z and H) and sub-rate receiver lane120-4includes a mismatch slicer threshold DAC136-4and a slicer threshold summer node138-4(for each of the thresholds L, Z, and H). In order to minimize the area in an integrated circuit, the block of shared DACs140includes three average (main) tap weight DACs142-1,142-2, and142-3, each associated a corresponding one of the three taps of the 3-tap DFEs124-1to124-4, and shared across the sub-rate receiver lanes120-1to120-4. Specifically, the average tap weight DAC142-1provides an average tap weight DAC value h1d-AVGfor the first post cursor ISI cancellation that is combined with the mismatch tap cancellation value h1d(specific to sub-rate domain 000) provided by mismatch tap cancellation DAC126-1A for the first tap of the 3-tap DFE124-1in sub-rate receiver lane120-1. Similarly, the average tap weight DAC142-1provides the average tap weight DAC value h1d-AVGthat is combined with the mismatch tap cancellation value h1d(specific to sub-rate domain 090) provided by mismatch tap cancellation DAC126-2A for the first tap of the 3-tap DFE124-2in sub-rate receiver lane120-2, and so on for sub-rate lanes120-3and120-4. The average tap weight DAC142-2provides an average tap weight DAC value h2d-AVGthat is combined with the mismatch tap cancellation value h2d. (specific to sub-rate domain 000) provided by mismatch tap cancellation DAC126-1B for the second tap of the 3-tap DFE124-1in sub-rate receiver lane120-1. Similarly, the average tap weight DAC142-2provides the average tap weight DAC value h2d-AVGfor the second post cursor ISI cancellation that is combined with the mismatch tap cancellation value h2d. (specific to sub-rate domain 090) provided by mismatch tap cancellation DAC126-2B for the second tap of the 3-tap DFE124-2in sub-rate receiver lane120-2, and so on for sub-rate receiver lanes120-3and120-4. Further, the average tap weight DAC142-3provides an average tap weight DAC value h3d-AVGfor the third post cursor ISI cancellation that is combined with the mismatch tap cancellation value h3d. (specific to sub-rate domain 000) provided by mismatch tap cancellation DAC126-1C for the third tap of the 3-tap DFE124-1in sub-rate receiver lane120-1. Similarly, the average tap weight DAC142-3provides the average tap weight DAC value h3-AVG that is combined with the mismatch tap cancellation value h3d. (specific to sub-rate domain 090) provided by mismatch tap cancellation DAC126-2C for the third tap of the 3-tap DFE124-2in sub-rate receiver lane120-2, and so on for sub-rate receiver lanes120-3and120-4. Similarly, the block of average DACs140further includes average slicer threshold DACs144-1,144-2, and144-3, one for each of the thresholds used by the decision slicers in each sub-rate receiver lane. 
Specifically, the average slicer DAC144-1provides an average slicer threshold value DLAVGfor the lower eye in the PAM-4 signal level that is combined with a mismatch slicer threshold DAC value for the decision slicers128-1A,128-2A,128-3A, and128-4A (for the L threshold) in the sub-rate receiver lanes120-1-120-4, respectively. The average slicer DAC144-2provides an average slicer threshold value DZAVGfor the middle eye in the PAM-4 signal level that is combined with a mismatch slicer threshold DAC value for the decision slicers128-1B,128-2B,128-3B, and128-4B (for the Z threshold) in the sub-rate receiver lanes120-1-120-4, respectively. The average slicer DAC144-3provides an average slicer threshold value DHAVGfor the upper eye in the PAM-4 signal level that is combined with a mismatch slicer threshold DAC value for the decision slicers128-1C,128-2C,128-3C, and128-4C (for the H threshold) in the sub-rate receiver lanes120-1-120-4, respectively. In a convention receiver architecture, for each tap, a DAC is used in each of the quarter rate lanes. Thus, the number of DACs is equal to the number of DFE taps. However, different quarter rate lanes can have mismatch between them. This mismatch results in different bandwidths, ISI, optimum sampling position, etc. With a limited number of DACs, this results in sub-optimal bit error rate (BER) performance. Lane mismatch can be addressed by using independent DACs for each receiver lane. In the case of a quarter-rate receiver architecture, the number of DACs is equal to 4 times the number of DFE taps. Independent DACs can take care of ISI and bandwidth of each quarter rate lane, resulting in the optimum sampling of each sub-rate lane. However, four relatively large size (in terms of semiconductor die area) DACs are needed for each tap weight implementation, taking up a relatively large area. Thus, using conventional techniques, mismatch adaptation comes at a significant area penalty as the number of full-sized DACs needed are four times more than a conventional receiver. The architecture depicted inFIGS.1A-1Dis a very realistic and practical compromise. For each tap cancellation, one average tap weight DAC (h1d-AVG) is used with current mirroring across all the sub-rate receiver lanes, along with four independent small DACs in each sub-rate receiver lane, i.e., h1d-off-000, h1d-off-090, etc. The mismatch cancellation DAC range needs to be approximately less than 10% of the main or average tap weight DAC as it only has to account for the mismatch in the individual sub-rate receiver lanes. Said another way, a semiconductor area size of each respective mismatch DAC may be approximately 5-10 percent of a semiconductor area size of each average DAC. Thus, the number of larger average or main DACs is equal to the number of DFE taps. The number of smaller mismatch cancellation DACs is equal to 4 times the number of DFE taps. As shown inFIG.1D, the tap weight/slicer threshold adaptation engine160generates average tap weight DAC controls (values) h1d-AVG, h2d-AVG, and h3d-AVGfor the average tap weight DACs142-1,142-2, and142-3, respectively. The tap weight/slicer threshold adaptation engine160generates the mismatch tap cancellation DAC controls (values) for each mismatch tap cancellation DAC in each of the plurality of sub-rate receiver lanes. 
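The sharing arithmetic can be illustrated numerically. In the sketch below, each lane's effective tap weight is the shared average-DAC value plus a small per-lane mismatch-DAC value, and the DAC counts follow directly from the number of DFE taps and lanes; the per-lane weight values are made-up examples, not measured data.

```python
# Illustrative DAC budget and per-lane tap-weight composition for a quarter-rate
# receiver with a 3-tap DFE (the per-lane weight values are made-up examples).
NUM_LANES, NUM_TAPS = 4, 3

num_average_dacs = NUM_TAPS                 # one large shared DAC per DFE tap
num_mismatch_dacs = NUM_LANES * NUM_TAPS    # one small DAC per tap per lane

# Per-lane adapted weights for tap h1d (arbitrary example values):
h1d_per_lane = {"000": 0.52, "090": 0.50, "180": 0.47, "270": 0.51}
h1d_avg = sum(h1d_per_lane.values()) / NUM_LANES                        # drives the shared DAC
h1d_offsets = {lane: w - h1d_avg for lane, w in h1d_per_lane.items()}   # per-lane mismatch DACs

# Each offset is a small fraction (roughly <10%) of the average, which is why the
# per-lane mismatch DACs can be far smaller than the shared average-weight DAC.
```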
In addition, the tap weight/slicer threshold adaptation engine160generates the average slicer threshold DAC values DXAVGfor the average slicer threshold DACs144-1-144-3(for each of the L, Z, and H thresholds), and the mismatch slicer threshold values for each mismatch slicer threshold DAC136-1,136-2,136-3and136-4in the sub-rate receiver lanes120-1-120-4. Thus, for sub-rate receiver lane120-1(sub-rate domain 000), the tap weight/slicer threshold adaptation engine160generates h1d-off-000, h2d-off-000, h3d-off-000 and DX-off-000 (for each of the L, Z, and H thresholds). For sub-rate receiver lane120-2(sub-rate domain 090), the tap weight/slicer threshold adaptation engine160generates h1d-off-090, h2d-off-090, h3d-off-090 and DX-off-090 (for each of the L, Z and H thresholds). For sub-rate receiver lane120-3(sub-rate domain 180), the tap weight/slicer threshold adaptation engine160generates h1d-off-180, h2d-off-180, h3d-off-180 and DX-off-180 (for each of the L, Z and H thresholds). Finally, for sub-rate receiver lane120-4(sub-rate domain 270), the tap weight/slicer threshold adaptation engine160generates h1d-off-270, h2d-off-270, h3d-off-270, and DX-off-270 (for each of the L, Z, and H thresholds). Again, in most cases, the mismatch or offset between the average DACs that are shared across the sub-rate receiver lanes and the mismatch or offset for a given sub-rate receiver lane, may be 5-10%. Thus, smaller DACs can be used to account for the mismatch/offset cancellation. The larger average DACs are shared for each of the sub-rate receiver lanes, and then smaller “mismatch” DACs unique to each respective lane are provided to account for mismatch or offset from the average. The clock and data recovery block170generates clock signals that are used by the sub-rate receiver lanes120-1-120-4as shown inFIGS.1B,1C and1D. Specifically, the clock and data recovery block170generates a first pair of in-phase and quadrature clocks CKIand CKQ, and a second pair of in-phase and quadrature clocks CKIBand CKQB. The deserializer150receives the outputs of the sub-rate receiver lanes120-1-120-4and generates most significant bit data (DataMsB), least significant bit data (DataLsB) and Error bits. The Error bits are derived from a slicer in addition to DH, DZ, and DL and are used in the tap weight/slicer threshold adaptation engine160. For simplicity, this additional slicer is not shown in the figures. Reference is now made toFIG.2A, with continued reference toFIGS.1A-1D.FIG.2Ashows a tap weight computation section200of the tap weight/slicer threshold adaptation engine160. The arrangement shown inFIG.2Ais just for h1d (for the DACs associated with the first tap of the DFE in each sub-rate receiver lane). The same arrangement is used for h2dand h3d, not shown to avoid redundancy and to simplify the figure. The tap weight computation section200of the tap weight/slicer threshold adaptation engine160is run on a per quarter-rate receiver lane basis to determine the tap weights for a particular receiver lane. For each sub-rate domain, the tap weight computation section200includes a tap weight correlator and an accumulator. For sub-rate domain 000, there is a tap weight correlator202-1and an accumulator206-1. For sub-rate domain 090, there is a tap weight correlator202-2and an accumulator206-2, for sub-rate domain 180, there is a tap weight correlator202-3and an accumulator206-3, and for sub-rate domain 270, there is a tap weight correlator202-4and an accumulator206-4. 
The tap weight correlators202-1-202-4receive as input the Data and Error output by the deserializer150(FIG.1D). The tap weight correlators try to determine the correlation factor between incoming deserialized DataMsBand DataLsBand the Error on a bit-by-bit basis for each of the lanes to produce an output that is provided to the accumulators206-1-206-4. For different tap weights, a different sequence of correlation is calculated. For example, for the first post cursor ISI cancellation tap h1d, the current error bit is correlated with the one unit interval (UI) delayed DataMsBand DataLsBbits. Similarly, the second post cursor ISI cancellation tap h2d, the current error bit is correlated with the 2 UI delayed DataMsBand DataLsBbits, and the third post cursor ISI cancellation tap h3d, the current error bit is correlated with the 3 UI delayed DataMsBand DataLsBbits. This correlator output along with an accumulator forms a Least Mean Square (LMS) estimator that helps estimate the magnitude and the sign of ISI present on each post-cursor tap. The output of the accumulators206-1-206-4contain the final value of the post-cursor ISI on a per-lane basis. It is noted that in place of an LMS correlator/estimator as shown, other types of correlators/optimizers, such as gradient descent, etc., can also be used depending upon the bit error rate and other application requirements. The outputs for each sub-rate domain, e.g., h1d-000ouTfrom accumulator206-1, h1d-090ouTfrom accumulator206-2, h1d-180ouTfrom accumulator206-3, and h1 d-270OuTfrom accumulator206-4are coupled to an average and residue calculation circuit210. The average and residue calculation circuit210includes a tap weight code averaging block212and summers214-1,214-2,214-3, and214-4. The tap weight code averaging block212receives as input the outputs h1d-000ouT, h1d-090ouT, h1d-180ouT, and h1d-270ouTfrom the accumulators206-1to206-4, respectively. The tap weight code averaging block212computes the average tap weight DAC code that is fed to main DAC142-1, e.g., h1d-AVG, for the first DFE tap of the DFE's124-1to124-1, shared across the sub-rate receiver lanes. The summers214-1,214-2,214-3, and214-4each compute a sub-rate domain specific (sub-rate receiver lane specific) difference from the average tap weight DAC code. Specifically, summer214-1computes the difference between h1d-AVGand h1d-000ouTand the result is the offset DAC code h1d-off-000 that is used for the mismatch cancellation DAC126-1A in sub-rate receiver lane120-1. The summer214-2computes the difference between the h1d-AVGand h1d-090ouTand the result is the offset DAC code h1d-off-090 that is used for the mismatch cancellation DAC126-2A in sub-rate receiver lane120-2. The summer214-3computes the difference between the h1d-AVGand h1d-180ouTand the result is the offset DAC code h1d-off-180 that is used for the mismatch cancellation DAC126-3A in sub-rate receiver lane120-3. Lastly, the summer214-4computes the difference between the h1d-AVGand h1d-270ouTand the result is the offset DAC code h1d-off-270 that is used for the mismatch cancellation DAC126-4A in sub-rate receiver lane120-4. Thus, the tap weight computation section200of the tap weight/slicer threshold adaptation engine160determines the average tap weight values that can be used for the larger DACs that are shared across the four sub-rate receiver lanes, and smaller offset values from that average that can be handled by smaller DACs in each of the respective sub-rate receiver lanes. 
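The per-lane adaptation followed by the average-and-residue split can be sketched in a few lines. The correlator arithmetic below is one plausible sign-sign LMS reading of the correlator/accumulator description above; the step size, data encoding, and random test vectors are assumptions, and only the averaging and residue step mirrors the structure described for the engine.

```python
# Sketch of per-lane tap-weight adaptation followed by the average-and-residue split.
# The sign-sign LMS accumulation is one plausible reading of the correlator/accumulator
# description; step size, data encoding, and test vectors are assumptions.
import numpy as np

def adapt_lane_tap(errors, data_symbols, delay_ui, step=1.0 / 256):
    """Accumulate the correlation of the current error with the delay_ui-delayed data."""
    acc = 0.0
    for n in range(delay_ui, len(errors)):
        acc += step * np.sign(errors[n]) * np.sign(data_symbols[n - delay_ui])
    return acc

rng = np.random.default_rng(7)
lanes = ["000", "090", "180", "270"]
h1d_out = {}
for lane in lanes:
    err = rng.standard_normal(4096)              # per-lane error samples (toy)
    dat = rng.choice([-3, -1, 1, 3], size=4096)  # per-lane PAM-4 decisions (toy)
    h1d_out[lane] = adapt_lane_tap(err, dat, delay_ui=1)   # first post-cursor tap

h1d_avg = sum(h1d_out.values()) / len(lanes)                    # shared average DAC code
h1d_off = {lane: v - h1d_avg for lane, v in h1d_out.items()}    # per-lane mismatch DAC codes
```

The same pattern, with 2 UI and 3 UI delays, yields the second and third post-cursor codes, and the slicer-threshold section repeats it for DL, DZ, and DH.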
Normally, the mismatch to be accounted for is 5-10%, again, for which much smaller DACs can be used. The larger average DACs are shared across each of the sub-rate receiver lanes, and the smaller “mismatch” DACs unique to each respective lane are used to account for the mismatch or offset from the average. As explained above, the circuitry shown inFIG.2Ais replicated to compute h2d-AVGand h3d-AVGfor the main or average DACs142-2and142-3shared across the respective DFE taps in the sub-rate receiver lanes, and the sub-rate receiver lane specific smaller offset values used for the mismatch DACs in the respective sub-rate receiver lanes. Reference is now made toFIG.2B, which shows a slicer threshold computation section220of the tap weight/slicer threshold adaptation engine160. The slicer threshold computation section220computes the average slicer threshold DAC values used for the larger DACs that are shared across the sub-rate receiver lanes, and smaller offset values from that average that can be handled by smaller DACs in each of the respective sub-rate receiver lanes. The slicer threshold computation section220has a similar arrangement as the tap weight computation section200shown inFIG.2A. The slicer threshold computation section220includes a slicer threshold correlator and an accumulator for each sub-rate domain. For sub-rate domain 000, there is a slicer threshold correlator222-1and an accumulator226-1. For sub-rate domain 090, there is a slicer threshold correlator222-2and an accumulator226-2, for sub-rate domain 180, there is a slicer threshold correlator222-3and an accumulator226-3, and for sub-rate domain 270, there is a slicer threshold correlator222-4and an accumulator226-4. The outputs for each sub-rate domain, e.g., DX-000ouTfrom accumulator226-1, DX-090ouTfrom accumulator226-2, DX-180ouTfrom accumulator226-3, and DX-270OuTfrom accumulator226-4are coupled to an average and residue calculation circuit230. The average and residue calculation circuit230includes a slicer threshold averaging block232and summers234-1,234-2,234-3, and234-4. The slicer threshold averaging block232receives as input the outputs DX-000ouT, DX-090ouT, DX-180ouT, and DX-270ouTfrom the accumulators226-1to226-4, respectively. The slicer threshold averaging block232computes the average slicer DAC control that is fed to the average DAC, for each of the thresholds L, Z, and H. For example, for L threshold slicing, the slicer threshold averaging block232computes the average slicer DAC control DLAVGthat is provided to the DAC144-1, and similarly, for the Z threshold, computes the average slicer DAC control DZAVGthat is provided to the DAC144-2, and computes the average slicer DAC control DHAVGthat is provided to the DAC144-3. Again, each of DACs144-1,144-2, and144-3are shared across the sub-rate receiver lanes. The summers234-1,234-2,234-3and234-4each compute the difference from the average tap weight DAC control. Specifically, summer234-1computes the difference between DXAVGand DX-000ouTand the result is the offset DAC control DX-off-000 that is used for the mismatch DAC136-1in sub-rate receiver lane120-1. The summer234-2computes the difference between the DXAVGand DX-090ouTand the result is the offset DAC control DX-off-090 that is used for the mismatch DAC136-2in sub-rate receiver lane120-2. The summer234-3computes the difference between the DXAVGand DX-180ouTand the result is the offset DAC control DX-off-180 that is used for the mismatch DAC136-3in sub-rate receiver lane120-3. 
Lastly, the summer234-4computes the difference between the DXAVGand DX-270ouTand the result is the offset DAC control DX-off-270 that is used for the mismatch DAC136-4in sub-rate receiver lane120-4. Again, this arrangement is replicated for each of the L, Z, and H slicer thresholds. Reference is now made toFIG.3.FIG.3illustrates a DFE tap current circuit300used in each sub-rate receiver lane to apply the output of the larger average DAC with the output of the smaller mismatch cancellation DAC, for a given DFE tap. The same DFE tap circuit300is replicated for each DFE tap, but for simplicityFIG.3shows the DFE tap current circuit300only for one DFE tap. Moreover, the DFE tap current circuit300is replicated in each of the sub-rate receiver lanes120-1to120-4, but for simplicity is shown only for sub-rate receiver lane120-1. WhileFIG.3shows an example using current DACs it is to be understood that similar techniques may be performed using voltage DACs, and the use of current DACs is not to be limiting. The output of the average tap weight DAC142-1is converted to a current, via N-channel metal-oxide semiconductor (nMOS) transistor302, and this average tap weight current is mirrored across all four sub-rate receiver lanes120-1to120-4and a mismatch cancellation current is added independently for each of the sub-rate receiver lanes. Specifically, the DFE tap current circuit300includes a current mirror310comprising two nMOS transistors312A and312B. The DFE tap current circuit300further includes another nMOS transistor314that receives the output of the mismatch cancellation DAC126-1A. There is a switch316at the output of the current mirror310that controls when the output of the current mirror310is coupled to the DFE summer node125-1(not shown inFIG.3). The switch316is responsive to a data control signal DA, shown inFIGS.1B and1C. The DA control signal represents the previous decision symbol. The average tap weight DAC142-1receives as input a tap weight control (Ctrl) that provides the appropriate value h1d-AVGcomputed by the tap weight/slicer threshold adaptation engine160. Similarly, the mismatch cancellation DAC126-1A receives as input a mismatch tap weight control that provides the appropriate value h1d-off-000computed by the tap weight/slicer threshold adaptation engine160. The current mirror310combines an average tap weight current supplied from transistor302to transistor312A, with a mismatch tap weight current supplied from transistor314to transistor312B, and a resulting tap weight current, when switch316is closed, is coupled to the DFE summer node for the DFE summer node125-1for sub-rate receiver lane120-1. As an alternate implementation, if the DAC current direction is switched from a source to a sync type, all the current mirror and summing devices would be changed from nMOS to pMOS. Similarly, in place of a current DAC, a voltage DAC and a voltage summer can also be used for implementation. The DAC and the tap weight cancellation can also be implemented in a differential manner. Reference is now made toFIG.4.FIG.4shows a slicer threshold circuit400that is employed for each decision slicer in each of the sub-rate receiver lanes. For simplicity,FIG.4shows the slicer threshold circuit400for the decision slicer for one of the L, Z, and H thresholds, using the “X” nomenclature also used inFIGS.1A-1D. The average slicer DAC (DXAVG) is shown generically at144using the “X” nomenclature to represent each of the DACs144-1,144-2, and144-3shown inFIGS.1B and1C. 
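Before continuing with the slicer threshold circuit, the tap-current branch of FIG. 3 can be abstracted behaviorally. The sketch below is not a circuit simulation: the current scale factor and example codes are arbitrary, and it simply models the mirrored average-DAC current plus the per-lane mismatch-DAC current, gated onto the summer node by the previous decision DA.

```python
# Behavioral abstraction (not a circuit simulation) of one DFE tap-current branch:
# the shared average-DAC current is mirrored into the lane, the per-lane mismatch
# current is added, and the previous decision DA switches the sum onto the summer node.
def dfe_tap_current(avg_dac_code, mismatch_dac_code, d_a, i_lsb=1e-6):
    """Return the ISI-cancellation current (amps) injected at the DFE summer node."""
    i_avg = avg_dac_code * i_lsb          # mirrored from the shared average tap-weight DAC
    i_off = mismatch_dac_code * i_lsb     # per-lane mismatch cancellation DAC (can be +/-)
    return (i_avg + i_off) if d_a else 0.0   # DA gates the branch per the previous symbol

# Example: lane 000 with average code 40, lane offset -3, previous decision DA = 1
i_tap = dfe_tap_current(40, -3, d_a=1)   # 3.7e-05 A
```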
Similarly, the mismatch slicer threshold DAC (DXoff-000) is shown generically at136-1to represent the mismatch slicer threshold DACs for each of the L, Z, and H thresholds. The output of the average slicer DAC144is converted to a current by an nMOS transistor402. Again, this is done for each of the L, Z and H decision slicers, and the average slicer threshold current (for each of the L, Z, and H thresholds) is mirrored across all four sub-rate receiver lanes120-1to120-4, and a mismatch slicer current is added independently for each of the sub-rate receiver lanes. Specifically, the slicer threshold circuit400includes a current mirror410comprising two nMOS transistors412A and412B. The slicer threshold circuit400further includes another nMOS transistor414that receives the output of the mismatch slicer DAC136-1. There is a resistor416at the output of the current mirror410that controls converts the current output by the current mirror410to a voltage that is coupled to the slicer threshold summer node138-1(not shown inFIG.4). The average slicer DAC144receives as input a slicer control that provides the appropriate average slicer threshold value computed by the tap weight/slicer threshold adaptation engine160. Similarly, the mismatch slicer DAC threshold136-1receives as input a mismatch slicer threshold control that provides the appropriate value DXoff-000computed by the tap weight/slicer threshold adaptation engine160. The current mirror410combines an average slicer threshold current supplied from transistor402to transistor412A, with a mismatch slicer threshold current supplied from transistor414to transistor412B, and a resulting slicer threshold voltage across resistor416is coupled to the slicer threshold summer node138-1for sub-rate receiver lane120-1. As an alternate implementation, if the DAC current direction is switched from a source to a sync type, all the current mirror and summing devices would be changed from nMOS to pMOS. Similarly, in place of a current DAC, a voltage DAC and a voltage summer can also be used for implementation. The DACs and the slicer decision control can also be implemented in a differential manner. In summary, with higher data rates, sub-rate receiver architectures, such as half-rate or quarter-rate are becoming more prevalent. The embodiments presented herein introduce a mismatch adaptive sub-rate receiver architecture that enables canceling mismatch impact on ISI in an area efficient manner. The semiconductor area for mismatch adaptation DACs is reduced by sharing average DACs across sub-rate receiver lanes, while utilizing small per receiver lane, DACs for mismatch effect cancellation. In addition, a computation engine provides control signals to the average DAC and mismatch cancellation DACs, simultaneously. Reference is now made toFIG.5.FIG.5depicts a flow chart for a method500that employs the concepts presented herein. At step510, the method500involves providing to a plurality of sub-rate receiver lanes, an analog receive signal. The analog receive signal may be derived from a received wired signal, received wireless signal, or receive optical signal (free-space or within an optical fiber). 
At step520, the method500involves, in each of the plurality of sub-rate receiver lanes, processing the analog receive signal with one or more first DACs (so-called “average” DACs herein) shared across the plurality of sub-rate receiver lanes and with one or more second DACs (so-called “mismatch” DACs herein) for each sub-rate receiver lane of the plurality of sub-rate receiver lanes. The one or more second DACs of a respective sub-rate receiver lane are configured to provide output to be combined with an output of a corresponding one of the one or more first DACs to account for a sub-rate receiver lane specific offset with respect to a corresponding one of the one or more first DACs. Referring toFIG.6,FIG.6illustrates a hardware block diagram of a computing device600that may be representative of the tap weight/slicer threshold computation engine160described herein. In at least one embodiment, the computing device600may include one or more processor(s)602, one or more memory element(s)604, storage606, a bus608, one or more I/O interface(s)610, and control logic620. In various embodiments, instructions associated with logic for computing device600can overlap in any manner and are not limited to the specific allocation of instructions and/or operations described herein. In at least one embodiment, processor(s)602is/are at least one hardware processor configured to execute various tasks, operations, and/or functions for computing device600as described herein according to software and/or instructions configured for computing device600. Processor(s)602(e.g., a hardware processor) can execute any type of instructions associated with data to achieve the operations detailed herein. In one example, processor(s)602can transform an element or an article (e.g., data, information) from one state or thing to another state or thing. Any of potential processing elements, microprocessors, digital signal processor, logic, and/or machines described herein can be construed as being encompassed within the broad term ‘processor’. In at least one embodiment, memory element(s)604and/or storage606is/are configured to store data, information, software, and/or instructions associated with computing device600, and/or logic configured for memory element(s)604and/or storage606. For example, any logic described herein (e.g., control logic620) can, in various embodiments, be stored for computing device600using any combination of memory element(s)604and/or storage606. Note that in some embodiments, storage606can be consolidated with memory element(s)604(or vice versa), or can overlap/exist in any other suitable manner. In at least one embodiment, bus608can be configured as an interface that enables one or more elements of computing device600to communicate in order to exchange information and/or data. Bus608can be implemented with any architecture designed for passing control, data, and/or information between processors, memory elements/storage, peripheral devices, and/or any other hardware and/or software components that may be configured for computing device600. In at least one embodiment, bus608may be implemented as a fast kernel-hosted interconnect, potentially using shared memory between processes (e.g., logic), which can enable efficient communication paths between the processes. In various embodiments, I/O interface(s)610allow for input and output of data and/or information with other entities that may be connected to computing device600. 
In various embodiments, control logic620can include instructions that, when executed, cause processor(s)602to perform operations, which can include, but not be limited to, providing overall control operations of computing device; interacting with other entities, systems, etc. described herein; maintaining and/or interacting with stored data, information, parameters, etc. (e.g., memory element(s), storage, data structures, databases, tables, etc.); combinations thereof; and/or the like to facilitate various operations for embodiments described herein. The programs described herein (e.g., control logic620) may be identified based upon application(s) for which they are implemented in a specific embodiment. However, it should be appreciated that any particular program nomenclature herein is used merely for convenience; thus, embodiments herein should not be limited to use(s) solely described in any specific application(s) identified and/or implied by such nomenclature. In various embodiments, entities as described herein may store data/information in any suitable volatile and/or non-volatile memory item (e.g., magnetic hard disk drive, solid state hard drive, semiconductor storage device, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM), application specific integrated circuit (ASIC), etc.), software, logic (fixed logic, hardware logic, programmable logic, analog logic, digital logic), hardware, and/or in any other suitable component, device, element, and/or object as may be appropriate. Any of the memory items discussed herein should be construed as being encompassed within the broad term ‘memory element’. Data/information being tracked and/or sent to one or more entities as discussed herein could be provided in any database, table, register, list, cache, storage, and/or storage structure: all of which can be referenced at any suitable timeframe. Any such storage options may also be included within the broad term ‘memory element’ as used herein. Note that in certain example implementations, operations as set forth herein may be implemented by logic encoded in one or more tangible media that is capable of storing instructions and/or digital information and may be inclusive of non-transitory tangible media and/or non-transitory computer readable storage media (e.g., embedded logic provided in: an ASIC, digital signal processing (DSP) instructions, software [potentially inclusive of object code and source code], etc.) for execution by one or more processor(s), and/or other similar machine, etc. Generally, memory element(s)604and/or storage606can store data, software, code, instructions (e.g., processor instructions), logic, parameters, combinations thereof, and/or the like used for operations described herein. This includes memory element(s)604and/or storage606being able to store data, software, code, instructions (e.g., processor instructions), logic, parameters, combinations thereof, or the like that are executed to carry out operations in accordance with teachings of the present disclosure. In some instances, software of the present embodiments may be available via a non-transitory computer useable medium (e.g., magnetic or optical mediums, magneto-optic mediums, CD-ROM, DVD, memory devices, etc.) of a stationary or portable program product apparatus, downloadable file(s), file wrapper(s), object(s), package(s), container(s), and/or the like. In some instances, non-transitory computer readable storage media may also be removable. 
For example, a removable hard drive may be used for memory/storage in some implementations. Other examples may include optical and magnetic disks, thumb drives, and smart cards that can be inserted and/or otherwise connected to a computing device for transfer onto another computer readable storage medium. In summary, in some aspects, the techniques described herein relate to an apparatus including: a plurality of sub-rate receiver lanes each of which is configured to receive an analog receive signal from an analog front-end and produce digital sub-rate receiver data; one or more first digital-to-analog converters (DACs) shared across the plurality of sub-rate receiver lanes; one or more second DACs for each sub-rate receiver lane of the plurality of sub-rate receiver lanes, wherein the one or more second DACs of a respective sub-rate receiver lane are configured to provide output to be combined with an output of a corresponding one of the one or more first DACs during processing of the analog receive signal in a respective sub-rate receiver lane to account for a sub-rate receiver lane specific offset with respect to a corresponding one of the one or more first DACs; and a logic circuit configured to generate a first DAC control for each of the one or more first DACs and a second DAC control for each second DAC of each of the plurality of sub-rate receiver lanes. In some aspects, a semiconductor area size of each respective second DAC is approximately 5-10 percent of a semiconductor area size of each first DAC. In some aspects, each of the plurality of sub-rate receiver lanes includes an equalizer having a number of taps, and wherein a number of the one or more first DACs is equal to the number of taps of the equalizer, and a number of the one or more second DACs for each of the plurality of sub-rate receiver lanes is equal to the number of taps of the equalizer. In some aspects, the logic circuit is configured to generate an average tap weight DAC control for each of the one or more first DACs and a mismatch tap cancellation DAC control for each second DAC in each of the plurality of sub-rate receiver lanes. In some aspects, the logic circuit is configured to generate the mismatch tap cancellation DAC control for a respective tap of the equalizer of a corresponding sub-rate receiver lane of the plurality of sub-rate receiver lanes based on a difference from the average tap DAC control for the respective tap in the corresponding sub-rate receiver lane. In some aspects, the logic circuit generates the average tap DAC control for each of the one or more first DACs based on an averaging of weights for a tap of the equalizer, across the plurality of sub-rate receiver lanes. In some aspects, each of the one or more first DACs outputs an average tap weight current that is shared across the plurality of sub-rate receiver lanes, and each of the one or more second DAC, for a corresponding tap, outputs a mismatch tap weight current that is added to the average tap weight current, independently for each of the plurality of sub-rate receiver lanes. In some aspects, outputs of the one or more first DACs and outputs of the one or more second DACs are combined and applied to equalizer or decision slicer circuits. 
In some aspects, each of the plurality of sub-rate receiver lanes includes one or more decision slicer circuits configured to apply a corresponding slicer threshold of one or more slicer thresholds in generating the digital sub-rate receiver data, wherein a number of first DACs is equal to a number of the one or more slicer thresholds, and a number of the one or more second DACs for each of the plurality of sub-rate receiver lanes is equal to a number of the one or more slicer thresholds, wherein the logic circuit is further configured to generate an average slicer threshold control for each of the one or more first DACs and a mismatch slicer threshold control for each of the one or more second DACs in each of the plurality of sub-rate receiver lanes. In some aspects, the logic circuit further generates the mismatch slicer threshold control for a respective second DAC of a corresponding sub-rate receiver lane of the plurality of sub-rate receiver lanes based on a difference from the average slicer threshold control for the respective second DAC in the corresponding sub-rate receiver lane. In some aspects, each of the plurality of sub-rate receiver lanes includes: an equalizer having a number of taps and one or more decision slicer circuits configured to apply a corresponding slicer threshold of one or more slicer thresholds in generating the digital sub-rate receiver data; wherein the one or more first DACs include a number of average tap weight DACs equal to the number of taps, and a number of average slicer threshold DACs equal to a number of the one or more slicer thresholds; wherein the one or more second DACs include a number of mismatch tap cancellation DACs equal to the number of taps of the equalizer, and a number of mismatch slicer threshold DACs equal to the number of the one or more slicer thresholds for each of the plurality of sub-rate receiver lanes. In some aspects, the logic circuit is configured to: generate an average tap weight DAC control for each of the one or more average tap weight DACs and a mismatch tap cancellation DAC control for each of the one or more mismatch tap cancellation DACs of each of the plurality of sub-rate receiver lanes; and generate an average slicer threshold control for each of the one or more slicer threshold DACs and a mismatch slicer threshold control for each of the one or more mismatch slicer threshold DACs in each of the plurality of sub-rate receiver lanes. In some aspects, the techniques described herein relate to a method including: providing to a plurality of sub-rate receiver lanes, an analog receive signal; and in each of the plurality of sub-rate receiver lanes, processing the analog receive signal with one or more first digital-to-analog converters (DACs) shared across the plurality of sub-rate receiver lanes and with one or more second DACs for each sub-rate receiver lane of the plurality of sub-rate receiver lanes, wherein the one or more second DACs of a respective sub-rate receiver lane are configured to provide output to be combined with an output of a corresponding one of the one or more first DACs to account for a sub-rate receiver lane specific offset with respect to a corresponding one of the one or more first DACs. In some aspects, the method further includes: generating an average DAC control for each of the one or more first DACs and a mismatch tap cancellation DAC control for each second DAC of each of the plurality of sub-rate receiver lanes. 
In some aspects, generating includes generating the mismatch tap cancellation DAC control for a respective tap of an equalizer of a corresponding sub-rate receiver lane of the plurality of sub-rate receiver lanes based on a difference from the average tap DAC control for the respective tap in the corresponding sub-rate receiver lane. In some aspects, generating includes generating the average tap DAC control for each of the one or more first DACs based on an averaging of weights for a tap of the equalizer, across the plurality of sub-rate receiver lanes. In some aspects, the techniques described herein relate to an apparatus including: a plurality of sub-rate receiver lanes each of which is configured to receive an analog receive signal from an analog front-end and produce digital sub-rate receiver data, wherein each of the plurality of sub-rate receiver lanes includes an equalizer having a number of taps; one or more first digital-to-analog converters (DACs) shared across the plurality of sub-rate receiver lanes; one or more second DACs for each sub-rate receiver lane of the plurality of sub-rate receiver lanes, wherein the one or more second DACs of a respective sub-rate receiver lane are configured to provide output to be combined with an output of a corresponding one of the one or more first DACs during processing of the analog receive signal in the respective sub-rate receiver lane to account for a sub-rate receiver lane specific offset with respect to a corresponding one of the one or more first DACs; and a logic circuit configured to generate an average tap weight DAC control for each of the one or more first DACs and a mismatch tap cancellation DAC control for each second DAC in each of the plurality of sub-rate receiver lanes. In some aspects, a number of the one or more first DACs is equal to the number of taps of the equalizer, and a number of the one or more second DACs for each of the plurality of sub-rate receiver lanes is equal to the number of taps of the equalizer. In some aspects, each of the plurality of sub-rate receiver lanes includes one or more decision slicer circuits configured to apply a corresponding slicer threshold of one or more slicer thresholds in generating the digital sub-rate receiver data, wherein a number of first DACs is equal to a number of the one or more slicer thresholds, and a number of the one or more second DACs for each of the plurality of sub-rate receiver lanes is equal to a number of the one or more slicer thresholds, wherein the logic circuit is further configured to generate an average slicer threshold control for each of the one or more first DACs and a mismatch slicer threshold control for each of the one or more second DACs in each of the plurality of sub-rate receiver lanes. In some aspects, the logic circuit further generates the mismatch slicer threshold control for a respective second DAC of a corresponding sub-rate receiver lane of the plurality of sub-rate receiver lanes based on a difference from the average slicer threshold control for the respective second DAC in the corresponding sub-rate receiver lane. Variations and Implementations Embodiments described herein may include one or more networks, which can represent a series of points and/or network elements of interconnected communication paths for receiving and/or transmitting messages (e.g., packets of information) that propagate through the one or more networks. 
These network elements offer communicative interfaces that facilitate communications between the network elements. A network can include any number of hardware and/or software elements coupled to (and in communication with) each other through a communication medium. Such networks can include, but are not limited to, any local area network (LAN), virtual LAN (VLAN), wide area network (WAN) (e.g., the Internet), software defined WAN (SD-WAN), wireless local area (WLA) access network, wireless wide area (WWA) access network, metropolitan area network (MAN), Intranet, Extranet, virtual private network (VPN), Low Power Network (LPN), Low Power Wide Area Network (LPWAN), Machine to Machine (M2M) network, Internet of Things (IoT) network, Ethernet network/switching system, any other appropriate architecture and/or system that facilitates communications in a network environment, and/or any suitable combination thereof. Networks through which communications propagate can use any suitable technologies for communications including wireless communications (e.g., 4G/5G/nG, IEEE 802.11 (e.g., Wi-Fi®/Wi-Fi6®), IEEE 802.16 (e.g., Worldwide Interoperability for Microwave Access (WiMAX)), Radio-Frequency Identification (RFID), Near Field Communication (NFC), Bluetooth™, mm.wave, Ultra-Wideband (UWB), etc.), and/or wired communications (e.g., T1 lines, T3 lines, digital subscriber lines (DSL), Ethernet, Fibre Channel, etc.). Generally, any suitable means of communications may be used such as electric, sound, light, infrared, and/or radio to facilitate communications through one or more networks in accordance with embodiments herein. Communications, interactions, operations, etc. as discussed for various embodiments described herein may be performed among entities that may be directly or indirectly connected utilizing any algorithms, communication protocols, interfaces, etc. (proprietary and/or non-proprietary) that allow for the exchange of data and/or information. Communications in a network environment can be referred to herein as ‘messages’, ‘messaging’, ‘signaling’, ‘data’, ‘content’, ‘objects’, ‘requests’, ‘queries’, ‘responses’, ‘replies’, etc. which may be inclusive of packets. As referred to herein and in the claims, the term ‘packet’ may be used in a generic sense to include packets, frames, segments, datagrams, and/or any other generic units that may be used to transmit communications in a network environment. Generally, a packet is a formatted unit of data that can contain control or routing information (e.g., source and destination address, source and destination port, etc.) and data, which is also sometimes referred to as a ‘payload’, ‘data payload’, and variations thereof. In some embodiments, control or routing information, management information, or the like can be included in packet fields, such as within header(s) and/or trailer(s) of packets. Internet Protocol (IP) addresses discussed herein and in the claims can include any IP version 4 (IPv4) and/or IP version 6 (IPv6) addresses. To the extent that embodiments presented herein relate to the storage of data, the embodiments may employ any number of any conventional or other databases, data stores or storage structures (e.g., files, databases, data structures, data or other repositories, etc.) to store information. Note that in this Specification, references to various features (e.g., elements, structures, nodes, modules, components, engines, logic, steps, operations, functions, characteristics, etc.) 
included in ‘one embodiment’, ‘example embodiment’, ‘an embodiment’, ‘another embodiment’, ‘certain embodiments’, ‘some embodiments’, ‘various embodiments’, ‘other embodiments’, ‘alternative embodiment’, and the like are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments. Note also that a module, engine, client, controller, function, logic or the like as used herein in this Specification, can be inclusive of an executable file comprising instructions that can be understood and processed on a server, computer, processor, machine, compute node, combinations thereof, or the like and may further include library modules loaded during execution, object files, system files, hardware logic, software logic, or any other executable modules. It is also noted that the operations and steps described with reference to the preceding figures illustrate only some of the possible scenarios that may be executed by one or more entities discussed herein. Some of these operations may be deleted or removed where appropriate, or these steps may be modified or changed considerably without departing from the scope of the presented concepts. In addition, the timing and sequence of these operations may be altered considerably and still achieve the results taught in this disclosure. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by the embodiments in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the discussed concepts. As used herein, unless expressly stated to the contrary, use of the phrase ‘at least one of’, ‘one or more of’, ‘and/or’, variations thereof, or the like are open-ended expressions that are both conjunctive and disjunctive in operation for any and all possible combination of the associated listed items. For example, each of the expressions ‘at least one of X, Y and Z’, ‘at least one of X, Y or Z’, ‘one or more of X, Y and Z’, ‘one or more of X, Y or Z’ and ‘X, Y and/or Z’ can mean any of the following: 1) X, but not Y and not Z; 2) Y, but not X and not Z; 3) Z, but not X and not Y; 4) X and Y, but not Z; 5) X and Z, but not Y; 6) Y and Z, but not X; or 7) X, Y, and Z. Additionally, unless expressly stated to the contrary, the terms ‘first’, ‘second’, ‘third’, etc., are intended to distinguish the particular nouns they modify (e.g., element, condition, node, module, activity, operation, etc.). Unless expressly stated to the contrary, the use of these terms is not intended to indicate any type of order, rank, importance, temporal sequence, or hierarchy of the modified noun. For example, ‘first X’ and ‘second X’ are intended to designate two ‘X’ elements that are not necessarily limited by any order, rank, importance, temporal sequence, or hierarchy of the two elements. Further as referred to herein, ‘at least one of’ and ‘one or more of can be represented using the’(s)′ nomenclature (e.g., one or more element(s)). One or more advantages described herein are not meant to suggest that any one of the embodiments described herein necessarily provides all of the described advantages or that all the embodiments of the present disclosure necessarily provide any one of the described advantages. 
Numerous other changes, substitutions, variations, alterations, and/or modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and/or modifications as falling within the scope of the appended claims. | 58,775 |
11863223 | DETAILED DESCRIPTION EMBODIMENTS OF THE INVENTION Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings, which form a part of this disclosure. It is to be understood that this invention is not limited to the specific devices, methods, conditions or parameters described and/or shown herein, and that the terminology used herein is for the purpose of describing particular embodiments by way of example only and is not intended to be limiting of the claimed invention. Also, as used in the specification including the appended claims, the singular forms “a”, “an”, and “the” include the plural, and reference to a particular numerical value includes at least that particular value, unless the context clearly dictates otherwise. Ranges may be expressed herein as from “about” or “approximately” one particular value and/or to “about” or “approximately” another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent “about”, it will be understood that the particular value forms another embodiment. FIGS.1A-Dare front view, rear view, front perspective view, and rear perspective view of the metal plate (30) according to the present invention.FIG.2Ais the view before the metal plate (30) is attached to a case (100), andFIG.2Bis the view where the metal plate (30) is attached to the case (100). The case (100) for an electronic device (300) wherein the electronic device (300) has a receiver coil (310) for wireless charging and the receiver coil (310) is formed in between an inner boundary (320) and an outer boundary (330), comprises: a hard protective frame (20) constructed to receive the electronic device (300) therein wherein the hard protective frame (20) has a recess (26) which faces the electronic device (300); and a metal plate (30) constructed to be received in the recess (26) of the hard protective frame (20). The metal plate (30) is constructed to enable magnetic retention or attachment of the case (100) to a support (40) having a magnet (50). The metal plate (30) is made of ferromagnetic material. The metal plate (30) has a rounded concave edge (35), and the metal plate (30) does not overlap with the inner boundary (320). Here, “overlap” means when the metal plate (30) and the inner boundary (320) are viewed from the front, they do not overlap as illustrated inFIG.6B. The rounded concave edge (35) may be circular, elliptical, substantially circular, or substantially elliptical, including the rounded concave edge (35) formed by a smoothly curved line. Both ends of the rounded concave edge (35) may be rounded the other way as shown inFIGS.3A through3C. The case (100) may be made of a hard protective frame (20) alone, a soft protective cover (10) alone, or combination thereof. In any structure, the metal plate (30) is attached to the case (100). Preferably, the soft protective cover (10) is made of thermoplastic polyurethane and the hard protective frame (20) is made of polycarbonate. Preferably, the inner boundary (320) and the outer boundary (330) are circular as inFIG.5B. Alternatively, the inner boundary (320) and the outer boundary (330) may be rectangular with rounded corners as inFIG.5B. 
As illustrated inFIG.6B, when the metal plate (30) is attached to the case (100) and the electronic device (300) is installed in the case (100), the rounded concave edge (35), the inner boundary (320), and the outer boundary (330) are substantially symmetrical with respect to the same line. The inner boundary (320) and the outer boundary (330) are generally concentric, and the rounded concave edge (35) may or may not be concentric as well. FIG.6Bshows various relative locations between the metal plate (30) and the boundaries (320,330). The rounded concave edge (35) may be placed in between the inner boundary (320) and the outer boundary (330). Alternatively, the rounded concave edge (35) may overlap with the outer boundary (330). Or, the rounded concave edge (35) may be placed outside the outer boundary (330). As inFIGS.1A-D, the metal plate (30) may be substantially rectangular with one side (34) having the rounded concave edge (35). In addition, the metal plate (30) may have rounded corners, preferably, all four corners are rounded. InFIG.1A, sides (31,32) are substantially parallel to each other and sides (33,34) are substantially parallel to each other as well. In the alternative embodiment as inFIG.3B, the metal plate (30) may be substantially trapezoidal wherein the rounded concave edge (35) is formed on a shorter side (34) of two parallel sides (33,34). In addition, the metal plate (30) may have rounded corners, preferably, all four corners are rounded. FIGS.3A,3B and3Cshow alternative designs of the metal plate (30), andFIGS.4A,4B and4Cshow them attached to the case (100). The metal plate (30) is attached to the hard protective frame (20) by an adhesive (37). Preferably, the adhesive (37) is a double-sided adhesive. As inFIG.1B, the metal plate (30) may have an adhesive layer covered with transparent film. A user may remove the transparent film and attach the adhesive layer to the case (100). The hard protective frame (20) may further comprise a surrounding recess (28) which surrounds the recess (26) and gradually recesses from a flat surface (23) of the hard protective frame (20) to the recess (26). The case, having the hard protective frame (20), may further comprise a soft protective cover (10) which comprises a back panel (12) to cover a back portion of the electronic device (300), and a side wall (14) extending from a top surface (11) of the back panel (12) along edges (13) of the back panel (12). In the alternative embodiment, a metal plate (30) for magnetically mounting an electronic device (300) or a case (100) with an electronic device (300) installed therein wherein the electronic device (300) has a receiver coil (310) for wireless charging and the receiver coil (310) is formed in between an inner boundary (320) and an outer boundary (330), the metal plate (30) comprising: a rounded concave edge (35). The metal plate (30) is constructed to be attachable to the electronic device (300) or the case (100). The metal plate (30) is constructed to enable magnetic retention or attachment of the electronic device (300) to a support (40) having a magnet (50), and the metal plate (30) attached to the electronic device (300) or the case (100) is constructed not to substantially block magnetic waves passing through the area formed by the inner boundary. The metal plate (30) may be directly attached to the electronic device (300) or to the case (100) for magnetically mounting the electronic device (300) to a support (40). 
Alternatively, the metal plate (30) may be attached to the electronic device (300) which is then installed in the case (100). As illustrated inFIG.5B, the inner boundary (320) and the outer boundary (330) are substantially circular, or substantially rectangular with rounded corners. The rounded concave edge (35) substantially symmetrically aligns to respective orthographic parallel projections of the inner boundary (320) and the outer boundary (330) onto a projection plane of the metal plate (30). The rounded concave edge (35) may be placed in between the inner boundary (320) and the outer boundary (330). Alternatively, the rounded concave edge (35) at least partially overlaps with an orthographic parallel projection of the outer boundary (330) onto a projection plane of the metal plate (30). Or, the rounded concave edge (35) may be placed outside an orthographic parallel projection of the outer boundary (330) onto a projection plane of the metal plate (30). The metal plate (30) is substantially rectangular with a first side (34) having the rounded concave edge (35), and the metal plate (30) has rounded corners. The metal plate (30) has third and fourth sides (31,32) that are substantially parallel to each other and has a second side (33) that is substantially parallel to the first side (34). Alternatively, the metal plate (30) may be substantially trapezoidal, wherein the rounded concave edge (35) is formed on a shorter side (34) of two parallel sides (33,34), and wherein the metal plate (30) has rounded corners. The metal plate (30) may further comprise an adhesive layer (37). The adhesive may be glue, bond, paste, tape, double-sided adhesive, or the like known in the art, but preferably, the adhesive is a double-sided adhesive. The double-sided adhesive may be a double-sided adhesive tape or sheet where one side of the adhesive tape can affix onto the metal plate (30) and the other side can be used to affix onto the case or electronic device thereby coupling the metal plate (30) with case or electronic device. Typically, the side of the double-sided adhesive tape used to affix the metal plate (30) onto the case or electronic device is initially covered by a release tape that is removable by the user when ready to affix or attach the metal plate (30) to either the electronic device (300) or the case (100) for magnetically mounting the electronic device (300) to a support (40). The metal plate (30) may be attached to the recess of the case, or the metal plate (30) may be attached to outer side of the case or directly to the electronic device. Table 1 shows data from the wireless charging of an electronic device having a receiver circuit already installed therein, as also shown inFIG.6A. The current driven by the wireless charging (transmitter coil as shown inFIG.5A) is reported by the electronic device. The first column of table 1 contains data on the wireless charging of an electronic device having a receiver circuit installed therein without a metal plate (30) and without being attached to a case. The current shown by the electronic device during wireless charging ranged from 170 mA to 250 mA with an average charging current of 210 mA. The second and third columns contain data on the wireless charging of an electronic device having a receiver circuit installed therein with the electronic device being reversibly attached to cases having a metal plate (30) affixed in the manner as shown inFIGS.4B and4Crespectively. 
For the case ofFIG.4Bwith the affixed metal plate (30), the charging current as displayed on the electronic device is 90 mA to 230 mA with an average charging current of 190 mA. For the case ofFIG.4Cwith the affixed metal plate (30), the charging current as displayed on the electronic device is 40 mA to 180 mA with an average charging current of 80 mA. Thus, even with the case having the metal plate (30), the wireless charging for the electronic device affixed therein is still very effective.

TABLE 1
                          Wireless Charging
                        No Case        FIG. 4B        FIG. 4C
Charging Current Range  170-250 mA     90-230 mA      40-180 mA
Avg Charging Current    210 mA         190 mA         80 mA

The metal plate (30) of the present invention helps a case with an electronic device installed therein to be magnetically attracted and retained by the support having a magnet, and at the same time, the case having the metal plate (30) does not substantially impede or prevent wireless charging of the electronic device. In addition, the metal plate does not substantially impede or prevent wireless charging of the electronic device, and thus a user does not have to remove the metal plate for wireless charging. Below is the description of the invention disclosed in the U.S. patent application Ser. No. 15,359,465, filed on Nov. 22, 2016, the disclosure of which is incorporated herein by reference in its entirety. FIGS.7and8respectively show front and rear perspective views of a case (100) according to the present invention.FIG.9shows an exploded view of the case (100) having a soft protective cover (10), a hard protective frame (20) and a metal plate (30). A magnetic mount (200) of the present invention for an electronic device (300) comprises: a case (100) and a support (40). The case (100) comprises a soft protective cover (10) which comprises a back panel (12) to cover a back portion of the electronic device (300), and a side wall (14) extending from a top surface (11) of the back panel (12) along edges (13) of the back panel (12); a hard protective frame (20) constructed to removably mount over the soft protective cover (10); and a metal plate (30) placed between the soft protective cover (10) and the hard protective frame (20). The support (40) has a magnet (50). In addition, the metal plate (30) and magnet (50) are magnetically attractable to each other for magnetically attracting and retaining the case (100) to the support (40). FIG.10shows a rear view of the case (100) andFIG.11shows a cross sectional view ofFIG.10. The soft protective cover (10) may have a recess (not shown) formed on the back panel (12) to receive the metal plate (30). In the alternative as shown inFIG.11, the hard protective frame (20) may have a recess to receive the metal plate (30). Or, the soft protective cover (10) may have a first recess formed on the back panel (12) and the hard protective frame (20) may have a second recess such that the first and second recesses form a housing to receive the metal plate (30) therein. FIG.12shows a schematic perspective view of the magnetic mount (200) illustrating magnetic attraction between the support (40) and the metal plate (30) to support or retain the electronic device (300). The support (40) comprises a body (42) and a plurality of legs (46), wherein the body (42) is substantially in a form of a geometric prism or cylinder which has two bases (43,44) facing each other wherein the legs (46) are attached to one (44) of the two bases (43,44), wherein the magnet (50) is placed in the body (42). 
One base (43) of the support (40) has a flat surface to be magnetically attached to the case (100). The body (42) is substantially in a shape of a prism, a right prism, a uniform prism, or cylinder. Preferably, the body is substantially cylindrical or substantially in a shape of a right prism having two bases of regular convex and rectangular sides, for example, regular hexagon right prism, regular octagon right prism, or the like. When the case (100) is magnetically retained by the support (40), the base (43) of the body (42) and an outer surface (22) of the hard protective frame (20) create enough friction to prevent the case (100) from sliding on the base (43). To create such friction, the surface of the base (43) may be rough. FIG.13shows a perspective view of the magnetic mount (200) such that the support (40) magnetically attracts and retains the case (100) so that the case (100) can stand on a flat surface.FIG.14shows a perspective view of the magnetic mount (200) such that the support (40) magnetically attracts and retains the case (100) and the support (40) is inserted into and securely retained by an air vent of a vehicle which is comprised of parallel vanes. The plurality of legs (46) of the support (40) are configured to support the case (100) magnetically retained by the support (40) so that the case (100) stands on a flat surface. When the case (100) is magnetically retained by the support (40), the case (100) can stand on a flat surface and the angle between the case (100) and the flat surface can be adjusted by adjusting the location of the support (40) with respect to the case (100). The adjusted location can be maintained. The friction between the case (100) and the support (40) should be weak enough to allow such location adjustment, but strong enough to prevent slipping of the case (100) away from the support (40). In addition, the plurality of legs (46) of the support (40) is constructed to be received and retained by an air vent of a vehicle.FIG.14shows the support (40) inserted into and retained by the air vent of a vehicle. By the magnetic attraction between the support (40) and the case (100), the case (100) can be mounted onto the air vent of a vehicle. The support (40) is detachably fixed to the air vent and the case (100) can rotate or slide a little with respect to the support (40). Because of friction between the support (40) and the case (100), adjusted rotation or sliding of the case (100) can be maintained so that a user can adjust the angle of the case (100) suitable and convenient for him. Preferably, the metal plate (30) is made of ferromagnetic material and the magnet (50) produces magnetic flux. More specifically, the metal plate (30) may be made of steel, stainless steel, or iron. As inFIGS.15and16, the metal plate (30) may be made of ferromagnetic metal plate. The metal plate (30) is close enough to an outer surface (22) of the hard protective frame (20) and the magnet produces enough magnetic flux so that the support (40) attracts and retains the case (100) with the electronic device (300) installed therein in place. Additionally, the support (40) and the case (100) create enough friction to prevent the case (100) from sliding on or slipping from the support (40) when the case (100) is magnetically retained by the support (40). Alternatively, the magnet (50) may be made of ferromagnetic material and the metal plate (30) may be made of a magnet which produces magnetic flux. The body (42) of the support (40) may be geometrically a prism or cylinder. 
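As a rough back-of-the-envelope illustration of the friction requirement mentioned above (general statics only, with made-up numbers; nothing here is taken from the disclosure), the case stays put on a roughly vertical mounting surface when the friction generated by the magnetic pull exceeds the weight of the case and phone:

import math

# Back-of-the-envelope sketch (general statics, not from the disclosure):
# the magnet's pull provides the normal force pressing the case against the
# support's base; rough-surface friction must then resist the weight of the
# case plus phone so the case does not slide. All numbers are made up.

G = 9.81  # gravitational acceleration, m/s^2

def holds_without_sliding(magnet_pull_n, mu_static, total_mass_kg,
                          surface_angle_deg=90.0):
    """Simplified check: compares available static friction against the weight
    component along the contact surface (90 degrees = vertical, the worst case
    for sliding). Ignores any gravity component normal to the surface."""
    weight_along_surface = total_mass_kg * G * math.sin(math.radians(surface_angle_deg))
    return mu_static * magnet_pull_n >= weight_along_surface

# Example: ~5 N of magnetic pull, friction coefficient 0.6, 0.25 kg phone + case.
# Available friction 0.6 * 5.0 = 3.0 N vs. 0.25 * 9.81 = ~2.45 N of weight.
print(holds_without_sliding(magnet_pull_n=5.0, mu_static=0.6, total_mass_kg=0.25))  # True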
More preferably, the body (42) is substantially cylindrical asFIG.12orFIG.13. Cylindrical shape is preferable in rotating the support (40) or case (100) with respect to each other. The plurality of legs (46) is attached about a center of the base (44) and preferably, there are four legs (46). Four legs (46) may form a square layout so that they can be easily inserted into an air vent of a vehicle. The metal plate (30) may be a circular metal plate or a rectangular metal plate, as shown inFIGS.15and16, or a substantially rectangular or substantially trapezoidal plate as shown inFIGS.1and3. The soft protective cover (10) is made of soft material and the hard protective frame (20) is made of hard material. Preferably, the soft protective cover (10) is made of thermoplastic polyurethane and the hard protective frame (20) is made of polycarbonate. As shown inFIG.11, the soft protective cover (10) comprises a longitudinal recess (15) and the hard protective frame (20) comprises a longitudinal protrusion (25) such that the longitudinal recess (15) of the soft protective cover (10) receives the longitudinal protrusion (25) of the hard protective frame (20) therein for secure coupling between the soft protective cover (10) and the hard protective frame (20). The soft protective cover (10) is sufficiently flexible to accept insertion of the electronic device (300) therein and sufficiently rigid to securely retain the inserted electronic device (300). The magnetic mount (200) may further comprise a double-sided adhesive for attaching the metal plate (30) either to the soft protective cover (10) or to the hard protective frame (20). The double-sided adhesive may be a double-sided adhesive tape or sheet. If the soft protective cover (10) has a recess (not shown) formed on the back panel (12) to receive the metal plate (30), the double-sided adhesive may be located in the recess to attach the metal plate (30) to the soft protective cover (10). The double-sided adhesive may be a double-sided adhesive tape or sheet. If the hard protective frame (20) has a recess to receive the metal plate (30), the double-sided adhesive may be located in the recess to attach the metal plate (30) to the hard protective frame (20). Alternatively, the magnetic mount (200) may further comprise two double-sided adhesives one of which is for attaching the metal plate (30) to the soft protective cover (10) and the other of which is for attaching the first metal plate (30) to the hard protective frame (20). In case that the soft protective cover (10) has a first recess formed on the back panel (12) and the hard protective frame (20) has a second recess, and the first and second recesses form a housing to receive the metal plate (30) therein, two double-sided adhesives may be located in both of the recesses for attaching the metal plate (30) to the soft protective cover (10) and the hard protective frame (20). The metal plate (30) may be attached either to the soft protective cover (10) or to the hard protective frame (20) by an adhesive such as glue, bond, paste, tape, double-sided adhesive, etc. The double-sided adhesive may be a double-sided adhesive tape or sheet. In the alternative embodiment, a magnetic mount (200) for an electronic device (300) may comprise: a case (100) for receiving an electronic device therein; a metal plate (30) attached to a back of the case (100) by an adhesive; and a support (40) having a magnet (50). 
The metal plate (30) and magnet (50) are magnetically attractable to each other for magnetically attracting and retaining the case (100) to the support (40). The adhesive may be glue, bond, paste, tape, double-sided adhesive, etc. The metal plate (30) may be circular, rectangular, substantially circular, substantially rectangular or substantially trapezoidal. The metal plate (30) may be made of ferromagnetic material and the magnet (50) may be made of a magnet which produces magnetic flux. More specifically, the metal plate (30) may be made of steel, stainless steel, or iron. Preferably, the metal plate (30) may be made of ferromagnetic metal plate. Alternatively, the magnet (50) may be made of ferromagnetic material and the metal plate (30) may be made of a magnet which produces magnetic flux. Still in the alternative embodiment of the present invention, a magnetic mount (200) for an electronic device (300) comprises: a metal plate (30) to be attachable to a back of the electronic device (300) by an adhesive; and a support (40) having a magnet (50). The metal plate (30) and the magnet (50) are magnetically attractable to each other for magnetically attracting and retaining the case (100) to the support (40). In addition, one side of the adhesive is attached to the metal plate (30). The metal plate (30) may be made of ferromagnetic material and the magnet (50) may be made of a magnet which produces magnetic flux. The magnet produces enough magnetic flux so that the support (40) attracts and retains the electronic device (300). The support (40) and the metal plate (30) create enough friction to prevent the metal plate (30) from sliding on the support (40) when the electronic device (300) is magnetically retained by the support (40). Another side of the adhesive may be covered with a release tape so that after removing the release tape, the another side of the adhesive can be attached to the back of the case. The adhesive may be glue, bond, paste, tape, double-sided adhesive, or the like known in the art, but preferably, the adhesive is a double-sided adhesive. The double-sided adhesive may be a double-sided adhesive tape or sheet. FIGS.11and17show cross-sectional views of the present invention. The embodiment ofFIG.11is explained above and the embodiment ofFIG.17further comprises a surrounding recess (28). In this embodiment, the hard protective frame (20) has a recess (26) to receive the metal plate (30) therein, and the hard protective frame (20) may further comprise a surrounding recess (28) which surrounds the recess (26) and slopes away from a flat surface (23) of the hard protective frame (20) to the recess (26) creating a cavity. In the alternative, the case (100) may comprise a recess (26), formed on its back (23) facing the electronic device (300), for receiving the metal plate (30) therein, and the case (100) may further comprise a surrounding recess (28) which surrounds the recess (26) and slopes away from a flat surface (23) of the case (100) to the recess (26) thereby creating a cavity.FIG.18shows a drawing of the case (100) according to this embodiment. InFIG.18, the recess (26) is formed on the inner surface (23) of the case (100) and the surrounding recess (28) is formed along the outer boundary of the recess (26) to surround the recess (26). The surrounding recess (28) may gradually recess from a flat surface (23) of the case (100) (or the hard protective frame (20)) to the recess (26). 
As shown inFIG.17, the surrounding recess (28) gradually recesses and becomes flat toward the recess (26). The metal plate (30) may be flush with the recess (26) or slightly protrude out of the recess (26). In other words, the height of the metal plate (30) may be about equal to or slightly greater than the depth of the recess (26). However, the metal plate (30) does not protrude beyond the flat surface (23) of the case (100) (or the hard protective frame (20)). Accordingly, the surrounding recess (28) forms space between the metal plate (30) and the electronic device (300) in order to prevent scratches or damage by the metal plate (30) to the electronic device (300). In addition, without the surrounding recess (28), a boundary line of the recess (26) may be formed on the outer surface (22) of the case (100) or the hard protective frame (20) which is visible from outside. The surrounding recess (28) prevents such boundary line of the recess from being formed on the outer surface (22) and makes the part of the case (100) or the hard protective frame (20) in contact with the metal plate (30) less vulnerable to damage. While the invention has been shown and described with reference to different embodiments thereof, it will be appreciated by those skilled in the art that variations in form, detail, compositions and operation may be made without departing from the spirit and scope of the invention as defined by the accompanying claims. 
11863224 | DETAILED DESCRIPTION An electronic device such as electronic device10ofFIG.1may be provided with wireless circuitry that includes antennas. The antennas may be used to transmit and/or receive wireless radio-frequency signals. The antennas may include phased antenna arrays that are used for performing wireless communications and/or spatial ranging operations using millimeter and centimeter wave signals. Millimeter wave signals, which are sometimes referred to as extremely high frequency (EHF) signals, propagate at frequencies above about 30 GHz (e.g., at 60 GHz or other frequencies between about 30 GHz and 300 GHz). Centimeter wave signals propagate at frequencies between about 10 GHz and 30 GHz. If desired, device10may also contain antennas for handling satellite navigation system signals, cellular telephone signals, local wireless area network signals, near-field communications, light-based wireless communications, or other wireless communications. Device10may be a portable electronic device or other suitable electronic device. For example, device10may be a laptop computer, a tablet computer, a somewhat smaller device such as a wrist-watch device, pendant device, headphone device, earpiece device, headset device, or other wearable or miniature device, a handheld device such as a cellular telephone, a media player, or other small portable device. Device10may also be a set-top box, a desktop computer, a display into which a computer or other processing circuitry has been integrated, a display without an integrated computer, a wireless access point, a wireless base station, an electronic device incorporated into a kiosk, building, or vehicle, or other suitable electronic equipment. Device10may include a housing such as housing12. Housing12, which may sometimes be referred to as a case, may be formed of plastic, glass, ceramics, fiber composites, metal (e.g., stainless steel, aluminum, etc.), other suitable materials, or a combination of these materials. In some situations, parts of housing12may be formed from dielectric or other low-conductivity material (e.g., glass, ceramic, plastic, sapphire, etc.). In other situations, housing12or at least some of the structures that make up housing12may be formed from metal elements. Device10may, if desired, have a display such as display14. Display14may be mounted on the front face of device10. Display14may be a touch screen that incorporates capacitive touch electrodes or may be insensitive to touch. The rear face of housing12(i.e., the face of device10opposing the front face of device10) may have a substantially planar housing wall such as rear housing wall12R (e.g., a planar housing wall). Rear housing wall12R may have slots that pass entirely through the rear housing wall and that therefore separate portions of housing12from each other. Rear housing wall12R may include conductive portions and/or dielectric portions. If desired, rear housing wall12R may include a planar metal layer covered by a thin layer or coating of dielectric such as glass, plastic, sapphire, or ceramic (e.g., a dielectric cover layer). Housing12may also have shallow grooves that do not pass entirely through housing12. The slots and grooves may be filled with plastic or other dielectric materials. If desired, portions of housing12that have been separated from each other (e.g., by a through slot) may be joined by internal conductive structures (e.g., sheet metal or other metal members that bridge the slot). 
Housing12may include peripheral housing structures such as peripheral structures12W. Conductive portions of peripheral structures12W and conductive portions of rear housing wall12R may sometimes be referred to herein collectively as conductive structures of housing12. Peripheral structures12W may run around the periphery of device10and display14. In configurations in which device10and display14have a rectangular shape with four edges, peripheral structures12W may be implemented using peripheral housing structures that have a rectangular ring shape with four corresponding edges and that extend from rear housing wall12R to the front face of device10(as an example). In other words, device10may have a length (e.g., measured parallel to the Y-axis), a width that is less than the length (e.g., measured parallel to the X-axis), and a height (e.g., measured parallel to the Z-axis) that is less than the width. Peripheral structures12W or part of peripheral structures12W may serve as a bezel for display14(e.g., a cosmetic trim that surrounds all four sides of display14and/or that helps hold display14to device10) if desired. Peripheral structures12W may, if desired, form sidewall structures for device10(e.g., by forming a metal band with vertical sidewalls, curved sidewalls, etc.). Peripheral structures12W may be formed of a conductive material such as metal and may therefore sometimes be referred to as peripheral conductive housing structures, conductive housing structures, peripheral metal structures, peripheral conductive sidewalls, peripheral conductive sidewall structures, conductive housing sidewalls, peripheral conductive housing sidewalls, sidewalls, sidewall structures, or a peripheral conductive housing member (as examples). Peripheral conductive housing structures12W may be formed from a metal such as stainless steel, aluminum, alloys, or other suitable materials. One, two, or more than two separate structures may be used in forming peripheral conductive housing structures12W. It is not necessary for peripheral conductive housing structures12W to have a uniform cross-section. For example, the top portion of peripheral conductive housing structures12W may, if desired, have an inwardly protruding ledge that helps hold display14in place. The bottom portion of peripheral conductive housing structures12W may also have an enlarged lip (e.g., in the plane of the rear surface of device10). Peripheral conductive housing structures12W may have substantially straight vertical sidewalls, may have sidewalls that are curved, or may have other suitable shapes. In some configurations (e.g., when peripheral conductive housing structures12W serve as a bezel for display14), peripheral conductive housing structures12W may run around the lip of housing12(i.e., peripheral conductive housing structures12W may cover only the edge of housing12that surrounds display14and not the rest of the sidewalls of housing12). Rear housing wall12R may lie in a plane that is parallel to display14. In configurations for device10in which some or all of rear housing wall12R is formed from metal, it may be desirable to form parts of peripheral conductive housing structures12W as integral portions of the housing structures forming rear housing wall12R. 
For example, rear housing wall12R of device10may include a planar metal structure and portions of peripheral conductive housing structures12W on the sides of housing12may be formed as flat or curved vertically extending integral metal portions of the planar metal structure (e.g., housing structures12R and12W may be formed from a continuous piece of metal in a unibody configuration). Housing structures such as these may, if desired, be machined from a block of metal and/or may include multiple metal pieces that are assembled together to form housing12. Rear housing wall12R may have one or more, two or more, or three or more portions. Peripheral conductive housing structures12W and/or conductive portions of rear housing wall12R may form one or more exterior surfaces of device10(e.g., surfaces that are visible to a user of device10) and/or may be implemented using internal structures that do not form exterior surfaces of device10(e.g., conductive housing structures that are not visible to a user of device10such as conductive structures that are covered with layers such as thin cosmetic layers, protective coatings, and/or other coating/cover layers that may include dielectric materials such as glass, ceramic, plastic, or other structures that form the exterior surfaces of device10and/or serve to hide peripheral conductive housing structures12W and/or conductive portions of rear housing wall12R from view of the user). Display14may have an array of pixels that form an active area AA that displays images for a user of device10. For example, active area AA may include an array of display pixels. The array of pixels may be formed from liquid crystal display (LCD) components, an array of electrophoretic pixels, an array of plasma display pixels, an array of organic light-emitting diode display pixels or other light-emitting diode pixels, an array of electrowetting display pixels, or display pixels based on other display technologies. If desired, active area AA may include touch sensors such as touch sensor capacitive electrodes, force sensors, or other sensors for gathering a user input. Display14may have an inactive border region that runs along one or more of the edges of active area AA. Inactive area IA of display14may be free of pixels for displaying images and may overlap circuitry and other internal device structures in housing12. To block these structures from view by a user of device10, the underside of the display cover layer or other layers in display14that overlap inactive area IA may be coated with an opaque masking layer in inactive area IA. The opaque masking layer may have any suitable color. Inactive area IA may include a recessed region or notch8that extends into active area AA (e.g., at speaker port16). Active area AA may, for example, be defined by the lateral area of a display module for display14(e.g., a display module that includes pixel circuitry, touch sensor circuitry, etc.). Display14may be protected using a display cover layer such as a layer of transparent glass, clear plastic, transparent ceramic, sapphire, or other transparent crystalline material, or other transparent layer(s). The display cover layer may have a planar shape, a convex curved profile, a shape with planar and curved portions, a layout that includes a planar main area surrounded on one or more edges with a portion that is bent out of the plane of the planar main area, or other suitable shapes. The display cover layer may cover the entire front face of device10. 
In another suitable arrangement, the display cover layer may cover substantially all of the front face of device10or only a portion of the front face of device10. Openings may be formed in the display cover layer. For example, an opening may be formed in the display cover layer to accommodate a button. An opening may also be formed in the display cover layer to accommodate ports such as speaker port16or a microphone port. Openings may be formed in housing12to form communications ports (e.g., an audio jack port, a digital data port, etc.) and/or audio ports for audio components such as a speaker and/or a microphone if desired. Display14may include conductive structures such as an array of capacitive electrodes for a touch sensor, conductive lines for addressing pixels, driver circuits, etc. Housing12may include internal conductive structures such as metal frame members and a planar conductive housing member (sometimes referred to as a conductive support plate or backplate) that spans the walls of housing12(e.g., a substantially rectangular sheet formed from one or more metal parts that is welded or otherwise connected between opposing sides of peripheral conductive housing structures12W). The conductive support plate may form an exterior rear surface of device10or may be covered by a dielectric cover layer such as a thin cosmetic layer, protective coating, and/or other coatings that may include dielectric materials such as glass, ceramic, plastic, or other structures that form the exterior surfaces of device10and/or serve to hide the conductive support plate from view of the user (e.g., the conductive support plate may form part of rear housing wall12R). Device10may also include conductive structures such as printed circuit boards, components mounted on printed circuit boards, and other internal conductive structures. These conductive structures, which may be used in forming a ground plane in device10, may extend under active area AA of display14, for example. In regions22and20, openings may be formed within the conductive structures of device10(e.g., between peripheral conductive housing structures12W and opposing conductive ground structures such as conductive portions of rear housing wall12R, conductive traces on a printed circuit board, conductive electrical components in display14, etc.). These openings, which may sometimes be referred to as gaps, may be filled with air, plastic, and/or other dielectrics and may be used in forming slot antenna resonating elements for one or more antennas in device10, if desired. Conductive housing structures and other conductive structures in device10may serve as a ground plane for the antennas in device10. The openings in regions22and20may serve as slots in open or closed slot antennas, may serve as a central dielectric region that is surrounded by a conductive path of materials in a loop antenna, may serve as a space that separates an antenna resonating element such as a strip antenna resonating element or an inverted-F antenna resonating element from the ground plane, may contribute to the performance of a parasitic antenna resonating element, or may otherwise serve as part of antenna structures formed in regions22and20. If desired, the ground plane that is under active area AA of display14and/or other metal structures in device10may have portions that extend into parts of the ends of device10(e.g., the ground may extend towards the dielectric-filled openings in regions22and20), thereby narrowing the slots in regions22and20. 
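For a sense of the slot dimensions involved, here is a rule-of-thumb sketch (standard antenna-textbook approximations, not figures taken from this description): a closed slot resonates when its length is roughly half the effective wavelength, an open-ended slot at roughly a quarter wavelength, and dielectric fill such as plastic or glass shortens the effective wavelength. The effective permittivity value and the 1.8 GHz example frequency are illustrative assumptions only.

# Rule-of-thumb sketch (standard antenna approximations, not taken from this
# document): estimate the resonant length of a dielectric-filled slot formed
# between conductive housing structures.

C = 3.0e8  # speed of light, m/s

def slot_resonant_length_m(freq_hz, eps_eff=1.0, closed_slot=True):
    """Half-wave length for a closed slot, quarter-wave for an open slot,
    using a crude effective permittivity to model plastic/glass fill."""
    wavelength = C / (freq_hz * eps_eff ** 0.5)
    return wavelength / 2 if closed_slot else wavelength / 4

# Example: an air-filled closed slot near 1.8 GHz is on the order of 8 cm;
# plastic fill (eps_eff ~ 2.5) shrinks that to roughly 5 cm.
print(round(slot_resonant_length_m(1.8e9) * 100, 1), "cm")                 # ~8.3 cm
print(round(slot_resonant_length_m(1.8e9, eps_eff=2.5) * 100, 1), "cm")    # ~5.3 cm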
Region22may sometimes be referred to herein as lower region22or lower end22of device10. Region20may sometimes be referred to herein as upper region20or upper end20of device10. In general, device10may include any suitable number of antennas (e.g., one or more, two or more, three or more, four or more, etc.). The antennas in device10may be located at opposing first and second ends of an elongated device housing (e.g., at lower region22and/or upper region20of device10ofFIG.1), along one or more edges of a device housing, in the center of a device housing, in other suitable locations, or in one or more of these locations. The arrangement ofFIG.1is merely illustrative. Portions of peripheral conductive housing structures12W may be provided with peripheral gap structures. For example, peripheral conductive housing structures12W may be provided with one or more dielectric-filled gaps such as gaps18, as shown inFIG.1. The gaps in peripheral conductive housing structures12W may be filled with dielectric such as polymer, ceramic, glass, air, other dielectric materials, or combinations of these materials. Gaps18may divide peripheral conductive housing structures12W into one or more peripheral conductive segments. The conductive segments that are formed in this way may form parts of antennas in device10if desired. Other dielectric openings may be formed in peripheral conductive housing structures12W (e.g., dielectric openings other than gaps18) and may serve as dielectric antenna windows for antennas mounted within the interior of device10. Antennas within device10may be aligned with the dielectric antenna windows for conveying radio-frequency signals through peripheral conductive housing structures12W. Antennas within device10may also be aligned with inactive area IA of display14for conveying radio-frequency signals through display14. In order to provide an end user of device10with as large of a display as possible (e.g., to maximize an area of the device used for displaying media, running applications, etc.), it may be desirable to increase the amount of area at the front face of device10that is covered by active area AA of display14. Increasing the size of active area AA may reduce the size of inactive area IA within device10. This may reduce the area behind display14that is available for antennas within device10. For example, active area AA of display14may include conductive structures that serve to block radio-frequency signals handled by antennas mounted behind active area AA from radiating through the front face of device10. It would therefore be desirable to be able to provide antennas that occupy a small amount of space within device10(e.g., to allow for as large of a display active area AA as possible) while still allowing the antennas to communicate with wireless equipment external to device10with satisfactory efficiency bandwidth. In a typical scenario, device10may have one or more upper antennas and one or more lower antennas. An upper antenna may, for example, be formed in upper region20of device10. A lower antenna may, for example, be formed in lower region22of device10. Additional antennas may be formed along the edges of housing12extending between regions20and22if desired. An example in which device10includes three or four upper antennas and five lower antennas is described herein as an example. The antennas may be used separately to cover identical communications bands, overlapping communications bands, or separate communications bands. 
The antennas may be used to implement an antenna diversity scheme or a multiple-input-multiple-output (MIMO) antenna scheme. Other antennas for covering any other desired frequencies may also be mounted at any desired locations within the interior of device10. The example ofFIG.1is merely illustrative. If desired, housing12may have other shapes (e.g., a square shape, cylindrical shape, spherical shape, combinations of these and/or different shapes, etc.). A schematic diagram of illustrative components that may be used in device10is shown inFIG.2. As shown inFIG.2, device10may include control circuitry28. Control circuitry28may include storage such as storage circuitry30. Storage circuitry30may include hard disk drive storage, nonvolatile memory (e.g., flash memory or other electrically-programmable-read-only memory configured to form a solid-state drive), volatile memory (e.g., static or dynamic random-access-memory), etc. Control circuitry28may include processing circuitry such as processing circuitry32. Processing circuitry32may be used to control the operation of device10. Processing circuitry32may include on one or more microprocessors, microcontrollers, digital signal processors, host processors, baseband processor integrated circuits, application specific integrated circuits, central processing units (CPUs), etc. Control circuitry28may be configured to perform operations in device10using hardware (e.g., dedicated hardware or circuitry), firmware, and/or software. Software code for performing operations in device10may be stored on storage circuitry30(e.g., storage circuitry30may include non-transitory (tangible) computer readable storage media that stores the software code). The software code may sometimes be referred to as program instructions, software, data, instructions, or code. Software code stored on storage circuitry30may be executed by processing circuitry32. Control circuitry28may be used to run software on device10such as internet browsing applications, voice-over-internet-protocol (VOIP) telephone call applications, email applications, media playback applications, operating system functions, etc. To support interactions with external equipment, control circuitry28may be used in implementing communications protocols. Communications protocols that may be implemented using control circuitry28include internet protocols, wireless local area network protocols (e.g., IEEE 802.11 protocols—sometimes referred to as WiFi®), protocols for other short-range wireless communications links such as the Bluetooth® protocol or other WPAN protocols, IEEE 802.11ad protocols, cellular telephone protocols, MIMO protocols, antenna diversity protocols, satellite navigation system protocols, antenna-based spatial ranging protocols (e.g., radio detection and ranging (RADAR) protocols or other desired range detection protocols for signals conveyed at millimeter and centimeter wave frequencies), etc. Each communication protocol may be associated with a corresponding radio access technology (RAT) that specifies the physical connection methodology used in implementing the protocol. Device10may include input-output circuitry24. Input-output circuitry24may include input-output devices26. Input-output devices26may be used to allow data to be supplied to device10and to allow data to be provided from device10to external devices. Input-output devices26may include user interface devices, data port devices, sensors, and other input-output components. 
For example, input-output devices may include touch screens, displays without touch sensor capabilities, buttons, joysticks, scrolling wheels, touch pads, key pads, keyboards, microphones, cameras, speakers, status indicators, light sources, audio jacks and other audio port components, digital data port devices, light sensors, gyroscopes, accelerometers or other components that can detect motion and device orientation relative to the Earth, capacitance sensors, proximity sensors (e.g., a capacitive proximity sensor and/or an infrared proximity sensor), magnetic sensors, and other sensors and input-output components. Input-output circuitry24may include wireless circuitry such as wireless circuitry34for wirelessly conveying radio-frequency signals. While control circuitry28is shown separately from wireless circuitry34in the example ofFIG.2for the sake of clarity, wireless circuitry34may include processing circuitry that forms a part of processing circuitry32and/or storage circuitry that forms a part of storage circuitry30of control circuitry28(e.g., portions of control circuitry28may be implemented on wireless circuitry34). As an example, control circuitry28may include baseband processor circuitry or other control components that form a part of wireless circuitry34. Wireless circuitry34may include millimeter and centimeter wave transceiver circuitry such as millimeter/centimeter wave transceiver circuitry38. Millimeter/centimeter wave transceiver circuitry38may support communications at frequencies between about 10 GHz and 300 GHz. For example, millimeter/centimeter wave transceiver circuitry38may support communications in Extremely High Frequency (EHF) or millimeter wave communications bands between about 30 GHz and 300 GHz and/or in centimeter wave communications bands between about 10 GHz and 30 GHz (sometimes referred to as Super High Frequency (SHF) bands). As examples, millimeter/centimeter wave transceiver circuitry38may support communications in an IEEE K communications band between about 18 GHz and 27 GHz, a Kacommunications band between about 26.5 GHz and 40 GHz, a Kucommunications band between about 12 GHz and 18 GHz, a V communications band between about 40 GHz and 75 GHz, a W communications band between about 75 GHz and 110 GHz, or any other desired frequency band between approximately 10 GHz and 300 GHz. If desired, millimeter/centimeter wave transceiver circuitry38may support IEEE 802.11ad communications at 60 GHz (e.g., WiGig or 60 GHz Wi-Fi bands around 57-61 GHz), and/or 5thgeneration mobile networks or 5thgeneration wireless systems (5G) New Radio (NR) Frequency Range 2 (FR2) communications bands between about 24 GHz and 90 GHz. Millimeter/centimeter wave transceiver circuitry38may be formed from one or more integrated circuits (e.g., multiple integrated circuits mounted on a common printed circuit in a system-in-package device, one or more integrated circuits mounted on different substrates, etc.). Millimeter/centimeter wave transceiver circuitry38(sometimes referred to herein simply as transceiver circuitry38or millimeter/centimeter wave circuitry38) may perform spatial ranging operations using radio-frequency signals at millimeter and/or centimeter wave frequencies that are transmitted and received by millimeter/centimeter wave transceiver circuitry38. The received signals may be a version of the transmitted signals that have been reflected off of external objects and back towards device10. 
Control circuitry28may process the transmitted and received signals to detect or estimate a range between device10and one or more external objects in the surroundings of device10(e.g., objects external to device10such as the body of a user or other persons, other devices, animals, furniture, walls, or other objects or obstacles in the vicinity of device10). If desired, control circuitry28may also process the transmitted and received signals to identify a two or three-dimensional spatial location of the external objects relative to device10. Spatial ranging operations performed by millimeter/centimeter wave transceiver circuitry38are unidirectional. If desired, millimeter/centimeter wave transceiver circuitry38may also perform bidirectional communications with external wireless equipment such as external wireless equipment10(e.g., over a bi-directional millimeter/centimeter wave wireless communications link). The external wireless equipment may include other electronic devices such as electronic device10, a wireless base station, wireless access point, a wireless accessory, or any other desired equipment that transmits and receives millimeter/centimeter wave signals. Bidirectional communications involve both the transmission of wireless data by millimeter/centimeter wave transceiver circuitry38and the reception of wireless data that has been transmitted by external wireless equipment. The wireless data may, for example, include data that has been encoded into corresponding data packets such as wireless data associated with a telephone call, streaming media content, internet browsing, wireless data associated with software applications running on device10, email messages, etc. If desired, wireless circuitry34may include transceiver circuitry for handling communications at frequencies below 10 GHz such as non-millimeter/centimeter wave transceiver circuitry36. For example, non-millimeter/centimeter wave transceiver circuitry36may handle wireless local area network (WLAN) communications bands such as the 2.4 GHz and 5 GHz Wi-Fi® (IEEE 802.11) bands, wireless personal area network (WPAN) communications bands such as the 2.4 GHz Bluetooth® communications band, cellular telephone communications bands such as a cellular low band (LB) (e.g., 600 to 960 MHz), a cellular low-midband (LMB) (e.g., 1400 to 1550 MHz), a cellular midband (MB) (e.g., from 1700 to 2200 MHz), a cellular high band (HB) (e.g., from 2300 to 2700 MHz), a cellular ultra-high band (UHB) (e.g., from 3300 to 5000 MHz, or other cellular communications bands between about 600 MHz and about 5000 MHz (e.g., 3G bands, 4G LTE bands, 5G New Radio Frequency Range 1 (FR1) bands below 10 GHz, etc.), a near-field communications (NFC) band (e.g., at 13.56 MHz), satellite navigations bands (e.g., an L1 global positioning system (GPS) band at 1575 MHz, an L5 GPS band at 1176 MHz, a Global Navigation Satellite System (GLONASS) band, a BeiDou Navigation Satellite System (BDS) band, etc.), ultra-wideband (UWB) communications band(s) supported by the IEEE 802.15.4 protocol and/or other UWB communications protocols (e.g., a first UWB communications band at 6.5 GHz and/or a second UWB communications band at 8.0 GHz), and/or any other desired communications bands. The communications bands handled by the radio-frequency transceiver circuitry may sometimes be referred to herein as frequency bands or simply as “bands,” and may span corresponding ranges of frequencies. 
Non-millimeter/centimeter wave transceiver circuitry36and millimeter/centimeter wave transceiver circuitry38may each include one or more integrated circuits, power amplifier circuitry, low-noise input amplifiers, passive radio-frequency components, switching circuitry, transmission line structures, and other circuitry for handling radio-frequency signals. In general, the transceiver circuitry in wireless circuitry34may cover (handle) any desired frequency bands of interest. As shown inFIG.2, wireless circuitry34may include antennas40. The transceiver circuitry may convey radio-frequency signals using one or more antennas40(e.g., antennas40may convey the radio-frequency signals for the transceiver circuitry). The term “convey radio-frequency signals” as used herein means the transmission and/or reception of the radio-frequency signals (e.g., for performing unidirectional and/or bidirectional wireless communications with external wireless communications equipment). Antennas40may transmit the radio-frequency signals by radiating the radio-frequency signals into free space (or to freespace through intervening device structures such as a dielectric cover layer). Antennas40may additionally or alternatively receive the radio-frequency signals from free space (e.g., through intervening devices structures such as a dielectric cover layer). The transmission and reception of radio-frequency signals by antennas40each involve the excitation or resonance of antenna currents on an antenna resonating element in the antenna by the radio-frequency signals within the frequency band(s) of operation of the antenna. In satellite navigation system links, cellular telephone links, and other long-range links, radio-frequency signals are typically used to convey data over thousands of feet or miles. In Wi-Fi® and Bluetooth® links at 2.4 and 5 GHz and other short-range wireless links, radio-frequency signals are typically used to convey data over tens or hundreds of feet. Millimeter/centimeter wave transceiver circuitry38may convey radio-frequency signals over short distances that travel over a line-of-sight path. To enhance signal reception for millimeter and centimeter wave communications, phased antenna arrays and beam forming (steering) techniques may be used (e.g., schemes in which antenna signal phase and/or magnitude for each antenna in an array are adjusted to perform beam steering). Antenna diversity schemes may also be used to ensure that the antennas that have become blocked or that are otherwise degraded due to the operating environment of device10can be switched out of use and higher-performing antennas used in their place. Antennas40in wireless circuitry34may be formed using any suitable antenna types. For example, antennas40may include antennas with resonating elements that are formed from stacked patch antenna structures, loop antenna structures, patch antenna structures, inverted-F antenna structures, slot antenna structures, planar inverted-F antenna structures, monopole antenna structures, dipole antenna structures, helical antenna structures, Yagi (Yagi-Uda) antenna structures, hybrids of these designs, etc. In another suitable arrangement, antennas40may include antennas with dielectric resonating elements such as dielectric resonator antennas. If desired, one or more of antennas40may be cavity-backed antennas. Different types of antennas may be used for different bands and combinations of bands. 
For example, one type of antenna may be used in forming a non-millimeter/centimeter wave wireless link for non-millimeter/centimeter wave transceiver circuitry36and another type of antenna may be used in conveying radio-frequency signals at millimeter and/or centimeter wave frequencies for millimeter/centimeter wave transceiver circuitry38. Antennas40that are used to convey radio-frequency signals at millimeter and centimeter wave frequencies may be arranged in one or more phased antenna arrays. The phased antenna arrays may convey radio-frequency signals using signal beam that is steered (e.g., by adjusting the phase and magnitude of each antenna) to point in a desired beam direction (e.g., towards external communications equipment). A schematic diagram of an antenna40that may be formed in a phased antenna array for conveying radio-frequency signals at millimeter and centimeter wave frequencies is shown inFIG.3. As shown inFIG.3, antenna40may be coupled to millimeter/centimeter (MM/CM) wave transceiver circuitry38. Millimeter/centimeter wave transceiver circuitry38may be coupled to antenna feed44of antenna40using a radio-frequency transmission line path such as transmission line path42. Transmission line path42may include a positive signal conductor such as signal conductor46and may include a ground conductor such as ground conductor48. Ground conductor48may be coupled to the antenna ground for antenna40(e.g., over a ground antenna feed terminal of antenna feed44located at the antenna ground). Signal conductor46may be coupled to the antenna resonating element for antenna40. For example, signal conductor46may be coupled to a positive antenna feed terminal of antenna feed44located at the antenna resonating element. In another suitable arrangement, antenna40may be a probe-fed antenna that is fed using a feed probe. In this arrangement, antenna feed44may be implemented as a feed probe. Signal conductor46may be coupled to the feed probe. Transmission line path42may convey radio-frequency signals to and from the feed probe. When radio-frequency signals are being transmitted over the feed probe and the antenna, the feed probe may excite the resonating element for the antenna (e.g., may excite electromagnetic resonant modes of a dielectric antenna resonating element for antenna40). The resonating element may radiate the radio-frequency signals in response to excitation by the feed probe. Similarly, when radio-frequency signals are received by the antenna (e.g., from free space), the radio-frequency signals may excite the resonating element for the antenna (e.g., may excite electromagnetic resonant modes of the dielectric antenna resonating element for antenna40). This may produce antenna currents on the feed probe and the corresponding radio-frequency signals may be passed to the transceiver circuitry over the radio-frequency transmission line. Transmission line path42may include a stripline transmission line (sometimes referred to herein simply as a stripline), a coaxial cable, a coaxial probe realized by metalized vias, a microstrip transmission line, an edge-coupled microstrip transmission line, an edge-coupled stripline transmission lines, a waveguide structure, combinations of these, etc. Multiple types of transmission lines may be used to form the transmission line path that couples millimeter/centimeter wave transceiver circuitry38to antenna feed44. 
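The per-element phase adjustment used for the beam steering described above can be made concrete with a short numerical sketch. The following Python snippet is not taken from this document; it assumes a uniform linear array with half-wavelength element spacing and an example 28 GHz carrier (all invented values), and computes the progressive phase that points the main beam in a chosen direction.

```python
# Illustrative sketch (not part of this document): progressive per-element
# phase shifts for steering the beam of a uniform linear phased antenna
# array.  The element count, spacing, carrier frequency, and steering angle
# below are assumed example values.
import numpy as np

C = 3e8  # speed of light (m/s)

def steering_phases(num_elements, spacing_m, freq_hz, steer_angle_deg):
    """Phase (radians) applied to each element so that the contributions add
    coherently in the direction steer_angle_deg from array broadside."""
    k = 2 * np.pi * freq_hz / C                  # free-space wavenumber
    theta = np.radians(steer_angle_deg)
    n = np.arange(num_elements)
    # Cancel the extra path length n * spacing * sin(theta) seen by element n.
    return -k * spacing_m * n * np.sin(theta)

freq = 28e9                                      # assumed mmWave carrier
spacing = 0.5 * C / freq                         # half-wavelength spacing
phases = steering_phases(8, spacing, freq, 30.0)
print(np.round(np.degrees(phases) % 360, 1))     # degrees applied per element
```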
Filter circuitry, switching circuitry, impedance matching circuitry, phase shifter circuitry, amplifier circuitry, and/or other circuitry may be interposed on transmission line path42, if desired. Radio-frequency transmission lines such as transmission line path42may be integrated into ceramic substrates, rigid printed circuit boards, and/or flexible printed circuits. In one suitable arrangement, radio-frequency transmission lines in device10may be integrated within multilayer laminated structures (e.g., layers of a conductive material such as copper and a dielectric material such as a resin that are laminated together without intervening adhesive) that may be folded or bent in multiple dimensions (e.g., two or three dimensions) and that maintain a bent or folded shape after bending (e.g., the multilayer laminated structures may be folded into a particular three-dimensional shape to route around other device components and may be rigid enough to hold its shape after folding without being held in place by stiffeners or other structures). All of the multiple layers of the laminated structures may be batch laminated together (e.g., in a single pressing process) without adhesive (e.g., as opposed to performing multiple pressing processes to laminate multiple layers together with adhesive). In general, it may be desirable to perform impedance matching along transmission line42to minimize signal reflections along the transmission line. This may in turn serve to maximize the antenna efficiency of antenna40. However, contact pads along transmission line path42are not naturally matched. In addition, it can be difficult to perform impedance matching at relatively high frequencies such as frequencies greater than 10 GHz. For example, packaged impedance matching components for performing impedance matching at these frequencies and any associated switching circuitry may be undesirably bulky and may not fit within the small form factor of device10. In order to mitigate these issues, device10may include multi-layer impedance matching structures interposed along transmission line path42.FIG.4is a cross sectional side view showing how transmission line path42may include multi-layer impedance matching structures. As shown inFIG.4, transmission line path42may include multi-layer impedance matching structures74. Multi-layer impedance matching structures74may be integrated (embedded) within a dielectric substrate such as substrate50. Substrate50may be, for example, a rigid printed circuit board, a flexible printed circuit, or another dielectric substrate. Substrate50may include multiple stacked dielectric layers52(e.g., layers of printed circuit board substrate, layers of fiberglass-filled epoxy, layers of polyimide, layers of ceramic substrate, or layers of other dielectric materials). If desired, a radio-frequency integrated circuit (RFIC) may be mounted to substrate50to form an integrated antenna module. Substrate50may be used to route transmission lines for each of the antennas40in a given phased antenna array, if desired. Substrate50may include ground traces such as ground traces62. Ground traces62may, for example, be patterned onto a first layer52of substrate50. Ground traces62may form part of ground conductor48(FIG.3) for transmission line path42. The signal conductor46of transmission line path42may include signal traces66and68patterned onto a second layer52of substrate50(e.g., where the second layer52is layered over the first layer52of substrate50).
Multi-layer impedance matching structures74may couple signal traces66to contact pad60at upper-most surface54of substrate50. Contact pad60may be, for example, a surface-mount technology (SMT) contact pad patterned onto surface54of substrate50(e.g., on the upper-most layer52of substrate50). If desired, an optional hole or opening such as opening64may be formed in ground traces62. Contact pad60may completely or partially overlap opening64. Multi-layer impedance matching structures74may include signal trace68, opening64, and a set of N conductive via pads70and N conductive vias72coupled in series between signal trace68and contact pad60. Each conductive via pad70may be patterned onto a respective layer52of substrate50. Each conductive via72may extend through a respective layer52of substrate50. Conductive via pads70may include, for example, a first via pad70-0coupled to contact pad60by a first conductive via72-0, a second via pad70-1coupled to via pad70-0by a second conductive via72-1, an Nth via pad70-N coupled to signal trace68(e.g., via pad70-N, signal trace68, and signal trace66may be formed from the same layer of conductive traces on the same layer52of substrate50), an (N−1)th via pad70-(N−1) coupled to via pad70-N by an Nth conductive via72-N, and N−4 via pads70coupled between via pads70-1and70-2by N−4 conductive vias72(e.g., including conductive vias72-2and72-(N−1)). N may be any desired integer (e.g., two, three, four, five, six, seven, eight, nine, ten, eleven, twelve, thirteen, greater than thirteen, etc.). Each via pad70may have a corresponding lateral area A (e.g., as measured into and out of the plane of the page ofFIG.4). Each conductive via may have a corresponding width W (e.g., as measured from the left to the right of the page ofFIG.4). Each conductive via may also have a corresponding aspect ratio given by the ratio of the height of the conductive via (e.g., as measured from the top to the bottom of the page ofFIG.4) to one-half of the width W (e.g., the radius) of the conductive via. The aspect ratios of conductive vias72and/or the areas A of via pads70may be varied across the N conductive vias72and the N via pads70to perform impedance matching between contact pad60and signal trace66. The aspect ratio of conductive vias72may vary between about 0.75 and 0.85, as one example. For example, as shown inFIG.4, conductive via72-0may have a first width W0and thus a first aspect ratio, conductive via72-1may have a second width W1that is less than width W0and thus may have a second aspect ratio that is greater than the first aspect ratio, conductive via72-N may have an Nth width WN that is different from widths W0and W1and thus may have an Nth aspect ratio that is different from the first and second aspect ratios, etc. In addition, as shown inFIG.4, via pad70-0may have a first area A0, via pad70-1may have a second area A1that is less than area A0, via pad70-(N−1) may have an (N−1)th area A(N−1) that is greater than area A2and less than area A1, etc. These examples are merely illustrative and, in general, each via pad70may have any desired area A and each conductive via72may have any desired aspect ratio. Each via pad70may have a different respective area A or two or more of the via pads70may have the same area A. Similarly, each conductive via72may have a different respective width W and thus a different respective aspect ratio or two or more of the conductive vias72may have the same aspect ratio. 
In other words, the aspect ratio of conductive vias72may vary across the set of N conductive vias in multi-layer impedance matching structures74and/or the lateral area of the via pads70may vary across the set of N via pads in multi-layer impedance matching structures74. The aspect ratios (e.g., widths W) and areas A in multi-layer impedance matching structures74may be selected to provide suitable impedance matching between signal trace66and contact pad60at the frequencies of the radio-frequency signals conveyed by transmission line path42(e.g., frequencies greater than 10 GHz). For example, the width W of each conductive via72may be selected to interpose a desired inductance L between the conductive pads coupled to that conductive via72. In general, increasing width W (e.g., decreasing the aspect ratio) reduces the inductance L produced by a given conductive via72whereas decreasing width W (e.g., increasing the aspect ratio) increases the inductance L produced by the conductive via. At the same time, the area A of each via pad70may be selected to interpose a desired capacitance C between signal trace68and contact pad60. In general, increasing area A increases the capacitance produced by a given via pad70whereas decreasing area A decreases the capacitance produced by the via pad. The capacitances (e.g., areas A) and the inductances (e.g., widths W) may be selected to ensure that there is a smooth impedance transition between signal trace66and contact pad60at millimeter/centimeter wave frequencies. In some scenarios, increasing the area A of via pads70may undesirably increase the capacitance between the via pads and ground traces62. In these scenarios, opening64in ground traces62may help to counteract this increased capacitance. Multi-layer impedance matching structures74may also include signal trace68. Signal trace68may have a width that is different from the width of signal trace66. This may configure signal trace68to help perform impedance matching between signal trace66and contact pad60. A radio-frequency structure such as radio-frequency component56may be mounted to contact pad60(e.g., using solder58). Radio-frequency component56may, for example, be surface-mounted to contact pad60(e.g., using an SMT process or hot-bar process). Radio-frequency component56may include any desired radio-frequency structures for conveying radio-frequency signals at frequencies greater than 10 GHz such as a board-to-board connector (e.g., for coupling contact pad60to other portions of transmission line path42that are located on another substrate and that are coupled to one or more antennas or to the millimeter/centimeter wave transceiver), an interposer (e.g., an interposer having conductive structures for coupling contact pad60to one or more antennas or to the millimeter/centimeter wave transceiver), or the antenna feed of a given antenna (e.g., antenna feed44of antenna40ofFIG.3), as examples. FIG.5is a top view showing how signal trace68may have a width that is different from the width of signal trace66(e.g., as taken in the direction of arrow75ofFIG.4). As shown inFIG.5, substrate50may be used to route multiple transmission line paths for multiple antennas (e.g., multiple antennas in a phased antenna array). In the example ofFIG.5, substrate50includes eight contact pads60for eight different antennas in a given eight-element phased antenna array (e.g., a first contact pad60-0for a first antenna in the phased antenna array, a second contact pad60-1for a second antenna in the phased antenna array, etc.). 
In one suitable arrangement that is described herein as an example, radio-frequency component56ofFIG.4includes a probe feed for a dielectric resonator antenna. Each contact pad60as shown inFIG.5may be coupled to a respective probe feed for a respective dielectric resonator antenna in the phased antenna array. The phased antenna array may, if desired, be mounted in alignment with notch8in inactive area IA of display14(FIG.1) for radiating through a display cover in display14(e.g., for radiating through the front face of device10at notch8). As shown inFIG.5, each contact pad60may be coupled to a respective signal trace68(e.g., by an underlying stack of conductive vias and via pads). Each signal trace68may, if desired, be thicker than the corresponding signal trace66. This may configure signal trace68to help perform impedance matching between the corresponding signal trace66and contact pad60. Fences of grounded vias may be interposed between signal traces66for isolation if desired. The example ofFIG.5is merely illustrative. The signal traces may have other shapes. The phased antenna array may include any desired number of antennas. FIG.6shows a transmission line model76for multi-layer impedance matching structures74ofFIG.4. As shown in transmission line model76ofFIG.6, multi-layer impedance matching structures74may have a first terminal78(e.g., at signal trace66ofFIG.4) and a second terminal80(e.g., at contact pad60ofFIG.6). Multi-layer impedance matching structures74may have a transmission line transition82formed from signal trace68(FIGS.4and5). The width and length of signal trace68may be selected to transform real impedance to standard impedance (e.g., the impedance of signal trace66ofFIG.4such as 50 Ohm impedance). A capacitance CGND may be coupled between the signal trace and ground84. Capacitance CGND may be established between via pad70-N and ground traces62ofFIG.4. Multi-layer impedance matching structures74may include N resonant circuits86coupled in series between transmission line transition82and terminal80(e.g., a first resonant circuit86-0, an Nth resonant circuit86-N, etc.). Each resonant circuit86may include a corresponding parallel-coupled capacitance C and inductance L (e.g., resonant circuit86-N may have a capacitance CN coupled in parallel with inductance LN, resonant circuit86-0may have a capacitance C0coupled in parallel with inductance L0, etc.). Each capacitance C is determined by the areas A of a respective pair of via pads70in multi-layer transmission line structures74. Each inductance L is determined by the width W and thus the aspect ratio of a respective conductive via72in multi-layer transmission line structures74. For example, the areas A of via pads70-N and70-(N−1) ofFIG.4may be selected to produce capacitance CN of resonant circuit86-N, the width WN of conductive via72-N may be selected to produce inductance LN of resonant circuit86-N, the areas A0and A1of via pads70-0and70-1may be selected to produce capacitance C1of resonant circuit86-1, the width W1of conductive via72-1may be selected to produce inductance L1of resonant circuit86-1, etc. A capacitance CPAD may also be coupled between the output of resonant circuit86-0and ground84. Capacitance CPAD may be established between contact pad60and ground traces62ofFIG.4. Opening64in ground traces62may be used to counteract an increase in the capacitances between the via pads and the ground traces. 
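The transmission line model of FIG. 6 lends itself to a simple numerical evaluation. The Python sketch below is illustrative only and is not part of the patent: it cascades ABCD matrices for an assumed shunt capacitance CGND, a set of series parallel-L-C sections (the resonant circuits 86), and a shunt capacitance CPAD, and reports the impedance seen at terminal 78 for an assumed 50-ohm termination at terminal 80. The transmission line transition 82 is omitted for brevity, and every component value is an invented example number, not a value taken from the document.

```python
# Minimal sketch (not from the patent) of the ladder model of FIG. 6: N series
# elements, each a parallel L-C resonant circuit, with shunt capacitances CGND
# and CPAD at the two ends.  All component values below are assumed example
# values chosen only to show how the input impedance could be evaluated.
import numpy as np

def abcd_series(z):
    return np.array([[1, z], [0, 1]], dtype=complex)

def abcd_shunt(y):
    return np.array([[1, 0], [y, 1]], dtype=complex)

def input_impedance(freq_hz, series_lc, c_gnd, c_pad, z_load=50.0):
    """Impedance seen at terminal 78 with z_load terminating terminal 80.
    series_lc is a list of (L, C) pairs, one per resonant circuit 86."""
    w = 2 * np.pi * freq_hz
    m = abcd_shunt(1j * w * c_gnd)                     # CGND to ground
    for L, C in series_lc:
        z_lc = 1 / (1 / (1j * w * L) + 1j * w * C)     # parallel L-C branch
        m = m @ abcd_series(z_lc)
    m = m @ abcd_shunt(1j * w * c_pad)                 # CPAD to ground
    a, b, c, d = m.ravel()
    return (a * z_load + b) / (c * z_load + d)

# Example: three resonant sections with assumed values, evaluated at 28 GHz.
sections = [(60e-12, 30e-15), (80e-12, 25e-15), (70e-12, 35e-15)]  # (H, F)
zin = input_impedance(28e9, sections, c_gnd=20e-15, c_pad=15e-15)
print(f"Zin at 28 GHz: {zin.real:.1f} + {zin.imag:.1f}j ohms")
```

Sweeping the assumed inductances (via widths) and capacitances (pad areas) in such a model is one way to see how the geometry choices described above trade off against each other before committing to a layout.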
The widths W of conductive vias72and thus the inductances L in transmission line model76, the areas A of via pads70and thus the capacitances C in transmission line model76, and optionally the dimensions of opening64may be selected to have a real impedance at reference point R, which serves to match the impedance of terminal78to the impedance of terminal80. FIG.7is a perspective view of an illustrative dielectric resonator antenna having a probe feed that may form radio-frequency component56ofFIG.4. As shown inFIG.7, antenna40may include a dielectric resonating element such as dielectric resonating element92. Dielectric resonating element92may be mounted to an underlying substrate such as substrate50. Contact pad60may be patterned onto surface54of substrate50. Multi-layer impedance matching structures74(FIG.4) may underly contact pad60and couple contact pad60to a corresponding signal trace66in substrate50(not shown inFIG.7for the sake of clarity). Dielectric resonating element92of antenna40may be formed from a column (pillar) of dielectric material mounted or otherwise coupled to surface54of substrate50. If desired, dielectric resonating element92may be embedded within (e.g., laterally surrounded by) an additional dielectric substrate mounted to surface54of substrate50(not shown inFIG.7for the sake of clarity). The additional dielectric substrate may be an injection-molded plastic substrate in one suitable arrangement. Antenna40ofFIG.7may be formed in a phased antenna array of antennas having dielectric resonating elements such as dielectric resonating element92. Each dielectric resonating element in the phased antenna array may, if desired, be embedded within the same injection-molded plastic substrate. The operating (resonant) frequency of antenna40may be selected by adjusting the dimensions of dielectric resonating element92. Dielectric resonating element92may be formed from a column of dielectric material having a first dielectric constant. The first dielectric constant may be relatively high (e.g., greater than 10.0, greater than 12.0, greater than 15.0, greater than 20.0, between 15.0 and 40.0, between 10.0 and 50.0, between 18.0 and 30.0, greater than 30.0, between 12.0 and 45.0, etc.). In one suitable arrangement, dielectric resonating element92may be formed from zirconia or a ceramic material. Other dielectric materials may be used to form dielectric resonating element92if desired. The additional dielectric substrate surrounding dielectric resonating element92may have a dielectric constant that differs from the dielectric constant of dielectric resonating element92by at least a predetermined margin. The difference in dielectric constant between dielectric resonating element92and the surrounding additional dielectric substrate may establish a strong radio-frequency boundary condition that configures dielectric resonating element92to serve as a waveguide for propagating radio-frequency signals at millimeter and centimeter wave frequencies. Dielectric resonating element92may radiate radio-frequency signals90when excited by the transmission line path coupled to contact pad60. Antenna40may be fed using one or more radio-frequency feed probes such as feed probe94. Feed probe94may form part of the antenna feed for antenna40(e.g., antenna feed44ofFIG.3). As shown inFIG.7, feed probe94may include feed conductor96. 
In one suitable arrangement that is described herein as an example, feed conductors96may be formed from stamped sheet metal that has been folded into a desired shape and that is pressed against a given sidewall102of dielectric resonating element92. If desired, biasing structures (not shown inFIG.7for the sake of clarity) may hold or press feed conductor96against sidewall102to help ensure a reliable coupling between the feed conductor and the dielectric resonating element. In another suitable arrangement, feed conductor96may be formed from a conductive trace that is patterned directly onto sidewall102(e.g., using a laser direct structuring (LDS) process, a sputtering process, or other conductive metallization techniques). Feed conductor96may have a first portion on a first sidewall102of dielectric resonating element92. Feed conductor96may have a second portion coupled to contact pad60using solder58(e.g., feed probe94may form radio-frequency component56ofFIG.4). The transmission line path coupled to contact pad60may convey radio-frequency signals to and from feed probe94. Feed probe94may electromagnetically couple the radio-frequency signals into dielectric resonating element92. This may serve to excite one or more electromagnetic modes (e.g., radio-frequency cavity or waveguide modes) of dielectric resonating element92. When excited by feed probe94, the electromagnetic modes of dielectric resonating element92may configure the dielectric resonating element to serve as a waveguide that propagates the wavefronts of radio-frequency signals90along the length of dielectric resonating element92and through the top surface of dielectric resonating element92(e.g., in the direction of the central/longitudinal axis104of dielectric resonating element92). For example, during signal transmission, the transmission line path coupled to contact pad60may supply radio-frequency signals from the millimeter/centimeter wave transceiver circuitry to antenna40. Feed probes94may couple the radio-frequency signals into dielectric resonating element92. This may serve to excite one or more electromagnetic modes of dielectric resonating element92, resulting in the propagation of radio-frequency signals90up the length of dielectric resonating element92. Similarly, during signal reception, radio-frequency signals90may be received by dielectric resonating element92. The received radio-frequency signals may excite the electromagnetic modes of dielectric resonating element92, resulting in the propagation of the radio-frequency signals down the length of dielectric resonating element92. Feed probes94may couple the received radio-frequency signals onto the underlying transmission line path, which passes the radio-frequency signals to the millimeter/centimeter wave transceiver circuitry. The multi-layer impedance matching structures74(FIG.4) coupled to contact pad60may ensure that there is a smooth impedance transition between feed probe94and the rest of the transmission line path. This may serve to minimize signal reflections along the transmission line path, thereby maximizing the antenna efficiency of antenna40. Dielectric resonating element92may have a length98, a width100(e.g., measured orthogonal to length98), and a height88(e.g., measured parallel to central/longitudinal axis104and orthogonal to length98and width100).
Length98, width100, and height88may be selected to provide dielectric resonating element92with a corresponding mix of electromagnetic cavity/waveguide modes that, when excited by feed probe94and/or the additional feed probe, configure antenna40to radiate at desired frequencies. For example, height88may be 2-10 mm, 4-6 mm, 3-7 mm, 4.5-5.5 mm, or greater than 2 mm. Width100and length98may each be 0.5-1.0 mm, 0.4-1.2 mm, 0.7-0.9 mm, 0.5-2.0 mm, 1.5 mm-2.5 mm, 1.7 mm-1.9 mm, 1.0 mm-3.0 mm, etc. Width100may be equal to length98or, in other arrangements, may be different than length98. The example ofFIG.7is merely illustrative. If desired, dielectric resonating element92may also be fed by an additional feed probe coupled to a sidewall102orthogonal to that of feed probe94. The additional feed probe may be coupled to an additional transmission line path and additional multi-layer impedance matching structures. Feed probe94and the additional feed probe may allow dielectric resonating element92to cover orthogonal linear polarizations or other polarizations, for example. Feed probe94may sometimes be referred to herein as a feed conductor, feed patch, or probe feed. Dielectric resonating element92may sometimes be referred to herein as a dielectric radiating element, dielectric radiator, dielectric resonator, dielectric antenna resonating element, dielectric column, dielectric pillar, radiating element, or resonating element. When fed by one or more feed probes such as feed probe94, dielectric resonator antennas such as antenna40ofFIG.7may sometimes be referred to herein as probe-fed dielectric resonator antennas. Dielectric resonating element92may have other shapes. In general, any desired radio-frequency structures may form radio-frequency component56ofFIG.4. Device10may gather and/or use personally identifiable information. It is well understood that the use of personally identifiable information should follow privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. In particular, personally identifiable information data should be managed and handled so as to minimize risks of unintentional or unauthorized access or use, and the nature of authorized use should be clearly indicated to users. The foregoing is merely illustrative and various modifications can be made by those skilled in the art without departing from the scope and spirit of the described embodiments. The foregoing embodiments may be implemented individually or in any combination. | 57,900 |
11863225 | DETAILED DESCRIPTION Electronic device10ofFIG.1may be a computing device such as a laptop computer, a desktop computer, a computer monitor containing an embedded computer, a tablet computer, a cellular telephone, a media player, or other handheld or portable electronic device, a smaller device such as a wristwatch device, a pendant device, a headphone or earpiece device, a device embedded in eyeglasses or other equipment worn on a user's head, or other wearable or miniature device, a television, a computer display that does not contain an embedded computer, a gaming device, a navigation device, an embedded system such as a system in which electronic equipment with a display is mounted in a kiosk or automobile, a wireless internet-connected voice-controlled speaker, a home entertainment device, a remote control device, a gaming controller, a peripheral user input device, a wireless base station or access point, equipment that implements the functionality of two or more of these devices, or other electronic equipment. As shown in the functional block diagram ofFIG.1, device10may include components located on or within an electronic device housing such as housing12. Housing12, which may sometimes be referred to as an outer casing, may be formed of plastic, glass, ceramics, fiber composites, metal (e.g., stainless steel, aluminum, metal alloys, etc.), other suitable materials, or a combination of these materials. In some situations, parts or all of housing12may be formed from dielectric or other low-conductivity material (e.g., glass, ceramic, plastic, sapphire, etc.). In other situations, housing12or at least some of the structures that make up housing12may be formed from metal elements. Device10may include control circuitry14. Control circuitry14may include storage such as storage circuitry16. Storage circuitry16may include hard disk drive storage, nonvolatile memory (e.g., flash memory or other electrically-programmable-read-only memory configured to form a solid-state drive), volatile memory (e.g., static or dynamic random-access-memory), etc. Storage circuitry16may include storage that is integrated within device10and/or removable storage media. Control circuitry14may include processing circuitry such as processing circuitry18. Processing circuitry18may be used to control the operation of device10. Processing circuitry18may include on one or more microprocessors, microcontrollers, digital signal processors, host processors, baseband processor integrated circuits, application specific integrated circuits, central processing units (CPUs), etc. Control circuitry14may be configured to perform operations in device10using hardware (e.g., dedicated hardware or circuitry), firmware, and/or software. Software code for performing operations in device10may be stored on storage circuitry16(e.g., storage circuitry16may include non-transitory (tangible) computer readable storage media that stores the software code). The software code may sometimes be referred to as program instructions, software, data, instructions, or code. Software code stored on storage circuitry16may be executed by processing circuitry18. Control circuitry14may be used to run software on device10such as satellite navigation applications, internet browsing applications, voice-over-internet-protocol (VOIP) telephone call applications, email applications, media playback applications, operating system functions, etc. 
To support interactions with external equipment, control circuitry14may be used in implementing communications protocols. Communications protocols that may be implemented using control circuitry14include internet protocols, wireless local area network (WLAN) protocols (e.g., IEEE 802.11 protocols—sometimes referred to as Wi-Fi®), protocols for other short-range wireless communications links such as the Bluetooth® protocol or other wireless personal area network (WPAN) protocols, IEEE 802.11ad protocols (e.g., ultra-wideband protocols), cellular telephone protocols (e.g., 3G protocols, 4G (LTE) protocols, 5G protocols, etc.), antenna diversity protocols, satellite navigation system protocols (e.g., global positioning system (GPS) protocols, global navigation satellite system (GLONASS) protocols, etc.), antenna-based spatial ranging protocols (e.g., radio detection and ranging (RADAR) protocols or other desired range detection protocols for signals conveyed at millimeter and centimeter wave frequencies), or any other desired communications protocols. Each communications protocol may be associated with a corresponding radio access technology (RAT) that specifies the physical connection methodology used in implementing the protocol. Device10may include input-output circuitry20. Input-output circuitry20may include input-output devices22. Input-output devices22may be used to allow data to be supplied to device10and to allow data to be provided from device10to external devices. Input-output devices22may include user interface devices, data port devices, and other input-output components. For example, input-output devices22may include touch sensors, displays (e.g., touch-sensitive and/or force-sensitive displays), light-emitting components such as displays without touch sensor capabilities, buttons (mechanical, capacitive, optical, etc.), scrolling wheels, touch pads, key pads, keyboards, microphones, cameras, buttons, speakers, status indicators, audio jacks and other audio port components, digital data port devices, motion sensors (accelerometers, gyroscopes, and/or compasses that detect motion), capacitance sensors, proximity sensors, magnetic sensors, force sensors (e.g., force sensors coupled to a display to detect pressure applied to the display), etc. In some configurations, keyboards, headphones, displays, pointing devices such as trackpads, mice, and joysticks, and other input-output devices may be coupled to device10using wired or wireless connections (e.g., some of input-output devices22may be peripherals that are coupled to a main processing unit or other portion of device10via a wired or wireless link). Input-output circuitry20may include wireless circuitry24to support wireless communications. Wireless circuitry24(sometimes referred to herein as wireless communications circuitry24) may include two or more antennas40. Wireless circuitry24may also include baseband processor circuitry, transceiver circuitry, amplifier circuitry, filter circuitry, switching circuitry, radio-frequency transmission lines, and/or any other circuitry for transmitting and/or receiving radio-frequency signals using antennas40. Wireless circuitry24may transmit and/or receive radio-frequency signals within a corresponding frequency band at radio frequencies (sometimes referred to herein as a communications band or simply as a “band”). 
The frequency bands handled by wireless circuitry24may include wireless local area network (WLAN) frequency bands (e.g., Wi-Fi® (IEEE 802.11) or other WLAN communications bands) such as a 2.4 GHz WLAN band (e.g., from 2400 to 2480 MHz), a 5 GHz WLAN band (e.g., from 5180 to 5825 MHz), a Wi-Fi® 6E band (e.g., from 5925-7125 MHz), and/or other Wi-Fi® bands (e.g., from 1875-5160 MHz), wireless personal area network (WPAN) frequency bands such as the 2.4 GHz Bluetooth® band or other WPAN communications bands, cellular telephone frequency bands (e.g., bands from about 600 MHz to about 5 GHz, 3G bands, 4G LTE bands, 5G New Radio Frequency Range 1 (FR1) bands below 10 GHz, 5G New Radio Frequency Range 2 (FR2) bands between 20 and 60 GHz, etc.), other centimeter or millimeter wave frequency bands between 10-300 GHz, near-field communications frequency bands (e.g., at 13.56 MHz), satellite navigation frequency bands (e.g., a GPS band from 1565 to 1610 MHz, a Global Navigation Satellite System (GLONASS) band, a BeiDou Navigation Satellite System (BDS) band, etc.), ultra-wideband (UWB) frequency bands that operate under the IEEE 802.15.4 protocol and/or other ultra-wideband communications protocols, communications bands under the family of 3GPP wireless communications standards, communications bands under the IEEE 802.XX family of standards, and/or any other desired frequency bands of interest. Antennas40may be formed using any desired antenna structures. For example, antennas40may include antennas with resonating elements that are formed from loop antenna structures, patch antenna structures, inverted-F antenna structures, slot antenna structures, planar inverted-F antenna structures, helical antenna structures, monopole antennas, dipoles, hybrids of these designs, etc. Filter circuitry, switching circuitry, impedance matching circuitry, and/or other antenna tuning components may be adjusted to adjust the frequency response and wireless performance of antennas40over time. The radio-frequency signals handled by antennas40may be used to convey wireless communications data between device10and external wireless communications equipment (e.g., one or more other devices such as device10). Wireless communications data may be conveyed by wireless circuitry24bidirectionally or unidirectionally. The wireless communications data may, for example, include data that has been encoded into corresponding data packets such as wireless data associated with a telephone call, streaming media content, internet browsing, wireless data associated with software applications running on device10, email messages, etc. Wireless circuitry24may additionally or alternatively perform spatial ranging operations using antennas40. In scenarios where wireless circuitry24both conveys wireless communications data and performs spatial ranging operations, one or more of the same antennas40may be used to both convey wireless communications data and perform spatial ranging operations. In another implementation, wireless circuitry24may include a set of antennas40that only conveys wireless communications data and a set of antennas40that is only used to perform spatial ranging operations. When performing spatial ranging operations, antennas40may transmit radio-frequency signals36. Wireless circuitry24may transmit radio-frequency signals36in a corresponding radio-frequency band (e.g., a frequency band that includes frequencies greater than around 10 GHz, greater than around 20 GHz, less than 10 GHz, etc.). 
Radio-frequency signals36may reflect off of objects external to device10such as external object34. External object34may be, for example, the ground, a building, a wall, furniture, a ceiling, a person, a body part, an animal, a vehicle, a landscape or geographic feature, an obstacle, or any other object or entity that is external to device10. Antennas40may receive reflected radio-frequency signals38. Reflected signals38may be a reflected version of the transmitted radio-frequency signals36that have reflected off of external object34and travel back towards device10. Control circuitry14may process the transmitted radio-frequency signals36and the received reflected signals38to detect or estimate the range (distance) R between device10and external object34. If desired, control circuitry14may also process the transmitted and received signals to identify a two or three-dimensional spatial location (position) of external object34, a velocity of external object34, and/or an angle of arrival of reflected signals38. In one implementation that is described herein as an example, wireless circuitry24performs spatial ranging operations using a frequency-modulated continuous-wave (FMCW) radar scheme. This is merely illustrative and, in general, other radar schemes or spatial ranging schemes may be used (e.g., an OFDM radar scheme, an FSCW radar scheme, a phase coded radar scheme, etc.). As shown inFIG.1, wireless circuitry24may include transmit (TX) circuitry26. Transmit circuitry26may include a transmit signal generator such as signal generator50. Transmit signal generator50may generate signals for transmission over antenna(s)40. In some implementations that are described herein as an example, transmit signal generator50includes a chirp generator that generates chirp signals for transmission over antenna(s)40(e.g., in embodiments where wireless circuitry24uses an FMCW radar scheme). Transmit signal generator50may therefore sometimes be referred to herein as a chirp generator. Transmit circuitry26may also include a digital-to-analog converter (DAC) circuit such as DAC30. Digital-to-analog converter30may convert the transmit signals (e.g., the chirp signals) from the digital domain to the analog domain prior to transmission by antennas40. Wireless circuitry24may also include receive (RX) circuitry28. Receive circuitry28may include an analog-to-digital converter (ADC) circuit such as ADC32. Analog-to-digital converter32may convert radio-frequency signals received from antennas40from the analog domain to the digital domain for subsequent processing by control circuitry14. While control circuitry14is shown separately from wireless circuitry24in the example ofFIG.1for the sake of clarity, wireless circuitry24may include processing circuitry that forms a part of processing circuitry18and/or storage circuitry that forms a part of storage circuitry16of control circuitry14(e.g., portions of control circuitry14may be implemented on wireless circuitry24). For example, wireless circuitry24includes a baseband processor that can be considered part of processing circuitry18. Transmit circuitry26can be implemented using a variety of different radio-frequency transmitter architectures.FIG.2illustrates circuitry26implemented as a digital polar transmitter. 
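The FMCW ranging scheme described above can be summarized with a small simulation. The snippet below is a sketch rather than the patent's implementation: a baseband chirp is mixed with a delayed copy of itself, and the resulting beat frequency is converted back to range using R = c*f_beat*T/(2*B). The bandwidth, chirp duration, sample rate, and target range are all assumed example values.

```python
# Illustrative FMCW sketch (not the patent's implementation): a transmitted
# chirp is mixed with its echo from an object at range R, and the resulting
# beat frequency gives back the range.  Bandwidth, chirp duration, sample
# rate, and the target range below are assumed example values.
import numpy as np

C = 3e8            # speed of light, m/s
B = 1e9            # chirp bandwidth, Hz (assumed)
T = 100e-6         # chirp duration, s (assumed)
FS = 20e6          # baseband sample rate, Hz (assumed)
R_TRUE = 3.0       # target range, m (assumed)

t = np.arange(0, T, 1 / FS)
slope = B / T                              # chirp slope, Hz/s
tau = 2 * R_TRUE / C                       # round-trip delay
tx_phase = np.pi * slope * t ** 2          # baseband chirp phase
rx_phase = np.pi * slope * (t - tau) ** 2  # delayed echo (amplitude ignored)
beat = np.exp(1j * (tx_phase - rx_phase))  # mixer output: a single tone

spec = np.abs(np.fft.fft(beat * np.hanning(len(beat))))
freqs = np.fft.fftfreq(len(beat), 1 / FS)
f_beat = freqs[np.argmax(spec)]            # beat frequency = slope * tau
print(f"estimated range: {C * f_beat * T / (2 * B):.2f} m (true {R_TRUE} m)")
```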
As shown inFIG.2, digital polar transmitter circuitry26may include a transmit signal generator50configured to output transmit signals, a differentiator52configured to differentiate or compute the derivative of the transmit signal with respect to time, upsampling circuits54and58, an oscillation circuit such as digitally controlled oscillator56, a data converter such as digital-to-analog converter (DAC)30, a filtering circuit such as bandpass filter60, and one or more power amplifiers62. In accordance with some embodiments, transmit signal generator50may be configured to generate constant amplitude zero autocorrelation (CAZAC) sequences. As its name suggests, a CAZAC sequence has two identifying properties. The first property of any CAZAC sequence is that the sequence has constant amplitude. In other words, the numbers in a CAZAC sequence will all lie along a circle when plotted on a complex plane. The second property of any CAZAC sequence is that the correlation of a given sequence and a shifted version of that given sequence will be approximately equal to zero. In other words, the two sequences are orthogonal. Generating orthogonal sequences having constant amplitude can be useful in radar and many wireless applications. In order to enable high quality transmission, it may be preferable to generate an interpolated CAZAC sequence using the transmit signal generator50. One way of generating a CAZAC sequence is to use a signal generator that can output a signal with quadratic phase. This type of CAZAC sequence generator is sometimes referred to as a quadratic phase generator. Transmit signal generator50ofFIG.2may be a quadratic phase generator capable of outputting one or more CAZAC sequences. Signals output from quadratic phase generator50may be fed to digital-to-analog converter30via upsampling circuit58. Upsampling circuit58may perform up-sampling or up-conversion operations. Signals output from quadratic phase generator50may also be fed to digitally controlled oscillator56via differentiator52and upsampling circuit54. Differentiator52may be configured to compute the derivative of a signal with a quadratic phase waveform with respect to time, which yields an instantaneous frequency that is linear as a function of time. As a result, signal generator50is also sometimes referred to as a linear frequency generator or linear instantaneous frequency generator. This linear instantaneous frequency can be up-sampled or up-converted by upsampling circuit54before being received by digitally controlled oscillator56. Digitally controlled oscillator56may have an output that is coupled to a control input of digital-to-analog converter30. Digital-to-analog converter30may output a corresponding analog signal that can be filtered using bandpass filter60. The filtered signals can then be amplified by one or more power amplifiers62before being radiated by antenna(s)40. This example in which at least an upsampling circuit, a digital-to-analog converter, a bandpass filter, and power amplifiers are included in the transmit path is merely illustrative. If desired, transmitter circuitry26may include additional digital components coupled to or inserted before DAC30, fewer digital components, additional analog front-end components coupled to or inserted after DAC30, fewer analog front-end components, and/or additional filter, switching, or coupling circuitry.
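The two defining CAZAC properties described above (constant amplitude, and near-zero correlation with any cyclic shift of itself) can be verified numerically. The snippet below uses a Zadoff-Chu sequence, one well-known CAZAC family built from a quadratic phase; it is an illustrative check rather than the generator of FIG. 2, and the root q and length M are assumed coprime example values.

```python
# Illustrative check (not the generator of FIG. 2): a Zadoff-Chu sequence is
# a well-known CAZAC family produced by a quadratic phase.  The assumed root
# q and length M below are coprime, and M is odd (even lengths use m*m in
# place of m*(m+1)).
import numpy as np

def zadoff_chu(q, M):
    m = np.arange(M)
    return np.exp(-1j * np.pi * q * m * (m + 1) / M)

q, M = 7, 139                       # assumed example values, gcd(q, M) = 1
s = zadoff_chu(q, M)

# Property 1: constant amplitude (all samples lie on the unit circle).
print("amplitude spread:", np.ptp(np.abs(s)))

# Property 2: correlation with any nonzero cyclic shift is (nearly) zero.
worst = max(abs(np.vdot(s, np.roll(s, k))) for k in range(1, M))
print("worst off-peak autocorrelation (normalized):", worst / M)
```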
Since the DAC has a finite bandwidth and the band edges are almost always disturbed (e.g., by associated anti-aliasing circuitry), the CAZAC sequence generated by quadratic phase generator50should be perfectly interpolated in almost all practical applications. The digital polar transmitter architecture ofFIG.2is merely illustrative.FIG.3illustrates circuitry26implemented as an IQ transmitter. As shown inFIG.3, IQ transmitter circuitry26may include a quadratic phase generator50configured to output signals having a quadratic phase, an IQ conversion circuit such as IQ converter70configured to output in-phase (I) signals and quadrature (Q) signals, a first digital-to-analog converter (DAC)74-1configured to convert the in-phase signals to the analog domain, a second digital-to-analog converter (DAC)74-2configured to convert the quadrature signals to the analog domain, a digitally controlled oscillator72for controlling the two DACs74-1and74-2, a summing circuit such as combiner76, a filtering circuit such as bandpass filter78, and one or more power amplifiers80. DACs74-1and74-2may be represented collectively by block30inFIG.1. In some embodiments, upsampling circuits may be interposed between IQ converter70and DACs74-1and74-2. In such scenarios, the additional upsampling circuits may be configured to up-sample or up-convert the IQ signals prior to the digital-to-analog conversion step. In yet other embodiments, filtering circuits such as low pass filters can be interposed between the DACs and combiner76. If desired, additional mixers may be interposed between the DACs and combiner76. These additional mixers may, for example, be used to modulate the analog signals to an intermediate frequency range that is between the baseband frequency and the transmitting radio frequency. In such scenarios, yet another set of mixers may be interposed between bandpass filter78and the power amplifiers and may be used to further modulate the analog signals from the intermediate frequency range to the transmitting radio frequency. The example ofFIG.3in which at least an IQ converter, multiple DACs, a DCO, a summing circuit, a bandpass filter, and power amplifiers are included in the transmit path is merely illustrative. If desired, IQ transmitter circuitry26may include additional digital components coupled to or inserted before DACs74, fewer digital components, additional analog front-end components coupled to or inserted after DACs74, fewer analog front-end components, and/or additional filter, switching, or radio-frequency coupling circuitry. In general, quadratic phase generator50can be incorporated into any radar, analog front-end, or wireless communications architecture. FIG.4is a block diagram showing one implementation of quadratic phase generator50. As shown inFIG.4, quadratic phase generator50may include a switch such as switch98, a control circuit such as numerically controlled oscillator (NCO)100, adder circuits such as adders102and104, and delay circuits such as delay circuits106and108. Quadratic phase generator50is generally a function of two inputs: (1) a chirp count q and (2) word length M. The chirp count q represents the total number of chirps in each sequence. Word length M represents the number of elements being sampled as a function of time for each CAZAC sequence (e.g., M is equal to the number of samples per sequence). In general, chirp count q can be any positive or negative integer such as ±3, ±4, ±5, ±6, ±7, ±8, ±9, ±(10 to 100), or more.
The value of word length M should be an integer such that the greatest common divisor of q and M is equal to one. In general, word length M is at least greater than 10, 10-100, at least 100 or more, 100-200, at least 200 or more, etc. These values are merely exemplary and are not intended to limit the scope of the present embodiments. Numerically controlled oscillator (NCO)100may have a first input configured to receive an absolute value of chirp count q, a second input configured to receive word length M, and an output coupled to switch98. Oscillator100may output an integrator value on its output that determines the state of switch98. Switch98may have a first switch input port configured to receive two times chirp count q, a second switch input port configured to receive two times the quantity of chirp count q minus the product of the sign of chirp count q and word length M, and a switch output port. When numerically controlled oscillator100outputs an integrator value that is less than or equal to a threshold value, then switch98may connect the first switch input port to the switch output port (e.g., such that adder102receives 2*q). When numerically controlled oscillator100outputs an integrator value that is greater than the threshold value, then switch98may connect the second switch input port to the switch output port (e.g., such that adder102receives 2*(q−sign(q)*M)). The predetermined threshold value can be equal to M (as an example). The threshold value can be a fixed number or can be programmable. Adder102may have a first adder input coupled to the switch output port, a second adder input, and an adder output. Delay circuit106may have a first input coupled to the adder output of adder102, a second input configured to receive a preset signal, and an output that is fed back to the second adder input of adder102via feedback path110. The preset signal can help reset (initialize) delay circuit106to some predetermined (preset) value. Coupled in this way, adder102, delay circuit106, and feedback path110can operate as a first integrator (accumulator) stage. Adder104may have a first adder input coupled to the output of delay circuit106, a second adder input, and an adder output. Delay circuit108may have a first input coupled to the adder output of adder104, a second input configured to receive a start signal, and an output that is fed back to the second adder input of adder104via feedback path112. The start signal can help reset (initialize) delay circuit108to zero (as an example). Coupled in this way, adder104, delay circuit108, and feedback path112can operate as a second integrator (accumulator) stage. The output of delay circuit108is coupled to the final output port Out of quadratic phase generator50, on which a signal with quadratic phase is generated. In general, the preset and start signals can be set to any suitable value for initializing or resetting quadratic phase generator50. Configured as such, quadratic phase generator50ofFIG.4is thus able to sum, over the M samples of one symbol, the two instantaneous frequency increments as shown in equation 1 below: Σ_{m=1}^{|q|} (|q|−M) + Σ_{m=|q|+1}^{M} |q| = 0 (1) A weighted sum of zero ensures that quadratic phase generator50produces minimal quantization error.
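A behavioral sketch of this dual-accumulator structure is given below in Python. It is not a bit-accurate model ofFIG.4: the NCO integrator update rule is simplified to a plain modulo-M counter, the initialization values and the scaling factor b (introduced later for the band-limited variant) are illustrative assumptions, and the increments are expressed directly in radians per sample rather than as fixed-point words:

    import numpy as np

    def quadratic_phase_sequence(q, M, b=1.0, phase0=0.0):
        # q: chirp count (positive or negative integer); gcd(|q|, M) is assumed to be 1
        # M: word length (number of samples per sequence)
        # b: bandwidth scaling factor (b = 1.0 models the full-bandwidth generator)
        freq = -b * np.pi * np.sign(q)                    # instantaneous frequency at the start of the first chirp
        high = 2.0 * np.pi * b * q / M                    # usual increment (first switch input, 2*b*q)
        low = 2.0 * np.pi * b * (q - np.sign(q) * M) / M  # wraparound increment (second switch input)
        nco, phase = 0, phase0
        phases = np.empty(M)
        for n in range(M):
            nco += abs(q)                                 # simplified stand-in for the NCO integrator update
            if nco < M:
                freq += high                              # first accumulator adds the usual increment
            else:
                freq += low                               # lower value selected only at a chirp wraparound
                nco -= M                                  # wrap the stand-in NCO integrator
            phase += freq                                 # second accumulator integrates frequency into phase
            phases[n] = phase
        return phases

    phases = quadratic_phase_sequence(q=3, M=100)         # q = 3 and M = 100, as in FIG.5A
    sequence = np.exp(1j * phases)                        # constant-amplitude samples with piecewise-quadratic phase

Over one sequence the |q| wraparound increments and the M−|q| regular increments sum to zero, as required by equation 1, so the instantaneous frequency returns to its starting value and consecutive sequences can be concatenated without frequency outliers.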
FIG.5Ais a diagram plotting the output phase for a quadratic phase generator50of the type described in connection withFIG.4for one sequence, where chirp count q is equal to 3 and where word length M is equal to 100. The output phase is normalized (i.e., divided) by a factor of π. As shown inFIG.5A, the sequence can be divided into three chirps (since q=3), each having a quadratic phase profile. The phase of the sequence wraps around three times and starts and ends at the same phase value. Unlike conventional CAZAC sequence generators that output a number of perfectly identical chirps, the different chirps generated by quadratic phase generator50are slightly distinct (e.g., each chirp exhibits a different respective magnitude response). A quadratic phase generator is thus defined as a circuit that is configured to output a signal having one or more chirps each having a quadratic phase response as shown in the example ofFIG.5A. FIG.5Bis a diagram plotting the instantaneous frequency corresponding to the quadratic phase profile ofFIG.5A. In other words, the plot ofFIG.5Bis obtained by differentiating or computing the derivative of the samples inFIG.5A. As shown inFIG.5B, the sequence is again divided into three chirps (since q=3), each chirp exhibiting a linear instantaneous frequency extending from around −π to approximately +π before wrapping around to −π. There are no frequency outliers between chirps. FIG.5Cis a diagram plotting instantaneous frequency differences corresponding to the linear instantaneous frequency profile ofFIG.5B. Similarly, the plot ofFIG.5Cis obtained by differentiating or computing the derivative of the samples inFIG.5B. As shown inFIG.5C, the instantaneous frequency differences can be two constant values: a higher value that is equal to the absolute value of q and a lower value that is equal to the absolute value of q minus M. Thus, the difference between the higher value at the first input of the switch and the second value at the second input of the switch is proportional to word length M. The instantaneous frequency difference only switches to the lower value when the sequence is wrapping around in between chirps (e.g., when wrapping around from the first chirp to the second chirp and from the second chirp to the third chirp). These two constant values may correspond to the values at the inputs of switch98(seeFIG.4). The delta between the higher value and the lower value is equal to 2π. The quadratic phase generator50described in connection withFIGS.4,5A,5B, and5Cis therefore sometimes referred to as a "full-bandwidth" quadratic phase generator. A full-bandwidth quadratic phase generator that only needs to switch between two constant instantaneous frequency difference values is fairly straightforward to implement in hardware (see, e.g., the topology ofFIG.4) and therefore consumes a small amount of circuit area. The quadratic phase generator implementation ofFIG.4also consumes less power and provides enhanced performance compared to conventional CAZAC sequence generators. The embodiment ofFIGS.4-5relating to a full-bandwidth quadratic phase generator is merely illustrative. In other embodiments, it may be desirable to limit or reduce the operating bandwidth of quadratic phase generator50.FIG.6illustrates another implementation of quadratic phase generator50having a limited bandwidth while also delivering an interpolated CAZAC sequence. To properly scale the bandwidth, a scaling (weighting) factor b may be applied to the two inputs of the NCO switch.
In particular, the scaling factor b may be a fraction with a numerator value that weights both of the switch input values and a denominator value that scales the bit width of the summing circuits in the integrator (accumulator) stages. As shown inFIG.6, switch98may have a first switch input port configured to receive two times scaling factor b times chirp count q, and a second switch input port configured to receive two times scaling factor b times the quantity of chirp count q minus the product of the sign of chirp count q and word length M. When numerically controlled oscillator100outputs an integrator value that is less than or equal to a threshold value, then switch98may connect the first switch input port to the switch output port (e.g., such that adder102receives 2*b*q). When numerically controlled oscillator100outputs an integrator value that is greater than the threshold value, then switch98may connect the second switch input port to the switch output port (e.g., such that adder102receives 2*b*(q−sign(q)*M)). The predetermined threshold value can be equal to M (as an example). The threshold value can be a fixed number or can be programmable. The remaining structure of quadratic phase generator50ofFIG.6may be similar to that already described with respect toFIG.4and need not be reiterated in detail to avoid obscuring the present embodiment. Configured as such, quadratic phase generator50ofFIG.6is thus able to sum, over the M samples of one symbol, the two instantaneous frequency increments as shown in equations 2 and 3 below: Σ_{m=1}^{q} b*(q−M) + Σ_{m=q+1}^{M} b*q = 0 (2) Σ_{m=1}^{|q|} b*(q+M) + Σ_{m=|q|+1}^{M} b*q = 0 (3) Equation 2 represents an expression for the sum when q is positive, whereas equation 3 represents an expression of the sum when q is negative. A weighted sum of zero in either scenario ensures that quadratic phase generator50produces minimal quantization error even with the band-limiting scaling factor b. FIG.7Ais a diagram plotting the output phase for a quadratic phase generator50of the type described in connection withFIG.6for one sequence, where chirp count q is equal to 3 and where word length M is equal to 100. The output phase is normalized (i.e., divided) by a factor of π. As shown inFIG.7A, the sequence can be divided into three chirps (since q=3), each having a quadratic phase profile. The phase of the sequence wraps around three times and starts and ends at the same phase value. Unlike conventional CAZAC sequence generators that output a number of perfectly identical chirps, the different chirps generated by quadratic phase generator50are again slightly distinct (e.g., each chirp is slightly shifted in its phase representation). FIG.7Bis a diagram plotting the instantaneous frequency corresponding to the quadratic phase profile ofFIG.7A. In other words, the plot ofFIG.7Bis obtained by differentiating the samples inFIG.7A. As shown inFIG.7B, the sequence is again divided into three chirps (since q=3), each chirp exhibiting a linear instantaneous frequency extending from around −0.625π to approximately +0.625π before wrapping back around. There are no frequency outliers between chirps. FIG.7Cis a diagram plotting instantaneous frequency differences corresponding to the linear instantaneous frequency profile ofFIG.7B. Similarly, the plot ofFIG.7Cis obtained by differentiating the samples inFIG.7B.
As shown inFIG.7C, the instantaneous frequency differences can be two constant values: a higher value that is equal to the absolute value of q scaled by factor b and a lower value that is equal to the absolute value of q minus M, also scaled by factor b. The instantaneous frequency difference only switches to the lower value when the sequence is wrapping around in between chirps (e.g., when wrapping around from the first chirp to the second chirp and from the second chirp to the third chirp, etc.). These two constant values may correspond to the values at the inputs of switch98(seeFIG.6). The delta between the higher value and the lower value may be equal to 1.25π in this particular example, which corresponds to a scaling factor b=1.25/2. In general, scaling factor b can be any fraction or value that is less than one. As examples, scaling factor b may be equal to 0.6, 0.7, 0.8, 0.6-0.7, 0.5-0.8, less than 0.99, less than 0.9, less than 0.8, less than 0.7, less than 0.6, less than 0.5, etc. The quadratic phase generator50described in connection withFIGS.6,7A,7B, and7Cis therefore sometimes referred to as a "band-limited" or "reduced-bandwidth" quadratic phase generator. Such a band-limited quadratic phase generator, which only needs to switch between two constant instantaneous frequency difference values, is fairly straightforward to implement in hardware (see, e.g., the topology ofFIG.6) and therefore consumes a small amount of circuit area. The reduced-bandwidth quadratic phase generator implementation ofFIG.6generates an exact band-limited version of the full-bandwidth CAZAC sequence without any approximation. The embodiment ofFIG.6showing a reduced-bandwidth quadratic phase generator having a numerically controlled oscillator and an associated switch is merely illustrative.FIG.8shows another embodiment of a reduced-bandwidth quadratic phase generator50that does not include switch98and numerically controlled oscillator100. Such type of quadratic phase generator50can be used when the following conditions are met: |b*q−b*M| ∈ 2^M (4) |b*q+b*M| ∈ 2^M (5) Condition 4 is used for positive values of chirp count q, whereas condition 5 is used for negative values of chirp count q. If condition 4 or 5 is satisfied, then summing circuit102can always receive the product b*q, thereby obviating the need for a separate NCO integrator switch. This can help further minimize the circuit area of quadratic phase generator50. Operated in this way, the wraparound of the integrator stages performs the desired subtraction function. The remaining structure of quadratic phase generator50ofFIG.8may be similar to that already described with respect toFIG.4and need not be reiterated in detail to avoid obscuring the present embodiment. As described above, the chirp count q can be any positive or negative integer. The only restriction on the value of word length M is that the greatest common divisor between q and M should be equal to one.FIGS.9A-9Cplot the instantaneous frequency for a band-limited quadratic phase generator50having different q and M values.FIG.9Ais a diagram plotting instantaneous frequency for a band-limited quadratic phase generator with four chirps (e.g., q=4) and a word length M of 101. The greatest common divisor of 4 and 101 is one. As shown inFIG.9A, the sequence is divided into four chirps, each chirp exhibiting a slightly distinct linear instantaneous frequency response extending from around −0.625π to approximately +0.625π before wrapping back around.
There are no frequency outliers between chirps. FIG.9Bis a diagram plotting instantaneous frequency for a band-limited quadratic phase generator with five chirps (e.g., q=5) and a word length M of 101. The greatest common divisor of 5 and 101 is one. As shown inFIG.9B, the sequence is divided into five chirps, each chirp exhibiting a slightly distinct linear instantaneous frequency response extending from around −0.625π to approximately +0.625π before wrapping back around. There are no frequency outliers between chirps. The examples ofFIGS.9A and9B, which have a positive chirp count value q, are merely illustrative.FIG.9Cis a diagram plotting instantaneous frequency for a band-limited quadratic phase generator with seven chirps (e.g., q=−7) and a word length M of 101. The greatest common divisor of 7 and 101 is one. As shown inFIG.9C, the sequence is divided into seven chirps, each chirp exhibiting a slightly distinct linear instantaneous frequency response extending from around −0.625π to approximately +0.625π before wrapping back around. There are no frequency outliers between successive chirps. ComparingFIG.9CwithFIGS.9A and9B, it is evident that a positive q produces an increasing linear instantaneous frequency response, whereas a negative q produces a decreasing linear instantaneous frequency response (e.g., the linear responses ofFIGS.9A and9Bexhibit an upward slope while the linear responses ofFIG.9Chave a downward slope). FIG.10Ais a diagram plotting the sorted instantaneous frequency for all chirp counts with a word length M of 101. In other words, sorting all the instantaneous frequency sample points fromFIG.9Awill yield the sorted plot ofFIG.10A. Similarly, sorting all the instantaneous frequency sample points fromFIG.9Bwill likewise yield the sorted plot ofFIG.10A. Sorting all the instantaneous frequency sample points fromFIG.9Cwill similarly yield the sorted plot ofFIG.10A. In other words, the sorted instantaneous frequency samples output from a band-limited quadratic phase generator are independent of the magnitude of q. FIG.10Bis a diagram plotting the differences of the sorted instantaneous frequency again for all chirp counts. As shown inFIG.10B, the frequency step size between samples is a constant value that is independent of the magnitude of q. FIG.11is a flow chart of illustrative operations involved in controlling quadratic phase generator50. At block200, numerically controlled oscillator100may be initialized to some starting integrator value. At block202, numerically controlled oscillator100may, during each integrator cycle, increment the integrator value by the absolute value of q minus word length M minus the difference between the absolute value of q modulo two and one. At block204, numerically controlled oscillator100may determine whether the integrator value is greater than word length M. In response to determining that the present integrator value is less than or equal to M, then oscillator100may direct the corresponding switch98to output the higher value at the first switch input to adder102(see operations of block206). This value can then be accumulated by the first integrator stage, which can then be propagated to the second integrator stage after some delay. In response to determining that the present integrator value is greater than M (e.g., if an integrator overflow event has been detected), then oscillator100may direct the corresponding switch98to output the lower value at the second switch input to adder102(see operations of block208).
This value can then be accumulated by the first integrator stage, which can then be propagated to the second integrator stage after some delay. The NCO value can then be decremented (e.g., by 2*M) to perform the wraparound. The operations ofFIG.11are merely illustrative. At least some of the described operations may be modified or omitted; some of the described operations may be performed in parallel; additional processes may be added or inserted between the described operations; the order of certain operations may be reversed or altered; the timing of the described operations may be adjusted so that they occur at slightly different times, or the described operations may be distributed in a system. FIG.12shows an example of wireless circuitry24implementing a frequency-modulated continuous-wave (FMCW) radar scheme that uses a quadratic phase generator50. As shown inFIG.12, wireless circuitry24may have a transmit path and a receive path. The transmit path may include a baseband (BB) transmitter such as baseband transmitter150, an IQ conversion circuit such as IQ converter152, a filtering circuit such as reconstruction filter154, an upsampling circuit such as upsampler156, a data converter such as digital-to-analog converter (DAC)158, and transmit antenna40-1. The baseband transmitter150may include a quadratic phase generator50(e.g., a full-bandwidth quadratic phase generator of the type shown inFIG.4, a reduced-bandwidth quadratic phase generator of the type shown inFIG.6, or a simplified quadratic phase generator of the type shown inFIG.8) for outputting an interpolated CAZAC sequence to the IQ converter. IQ converter152(sometimes referred to as an IQ modulator) may be configured to output corresponding in-phase (I) signals and quadrature (Q) signals. Reconstruction filter154may be interposed between IQ converter152and upsampler156. In particular, reconstruction filter154can have a bandwidth matching that of quadratic phase generator50and is sometimes referred to as a channel filter. In the scenario where quadratic phase generator50exhibits full bandwidth, filter154should have a bandwidth matching the full bandwidth of generator50. In the scenario where quadratic phase generator50exhibits a reduced bandwidth, filter154should have a relatively smaller bandwidth matching the smaller bandwidth of generator50. Signals output from reconstruction filter154may be fed to digital-to-analog converter158via upsampling circuit156. Upsampling circuit156may perform up-sampling or up-conversion operations. Digital-to-analog converter158may output a corresponding analog signal that can be fed to antenna40-1for transmission. This example in which at least an IQ converter, a reconstruction filter, an upsampling circuit, and a digital-to-analog converter are included in the transmit path is merely illustrative. If desired, the transmit path may include additional digital components coupled to or inserted before DAC158, fewer digital components, additional analog front-end components coupled to or inserted after DAC158(e.g., one or more bandpass filters, one or more power amplifiers, one or more mixers, etc.), fewer analog front-end components, and/or additional filter, switching, or coupling circuitry.
The receive path may include antenna40-2, a data converter such as analog-to-digital converter160, an offset correction circuit such as offset correction circuit162, a downsampling circuit such as downsampler164, a filtering circuit such as reconstruction filter166, an IQ conversion circuit such as IQ converter168, and a baseband receiver170. This example in which wireless circuitry24performs spatial ranging operations by transmitting radio-frequency signals using a first antenna40-1and receiving corresponding reflected radio-frequency signals using a second different antenna40-2is merely illustrative. In other embodiments, the transmit and receive paths may be coupled to one or more of the same antennas40. Continuing with the example ofFIG.12, radio-frequency signals received by antenna40-2may be fed to analog-to-digital converter (ADC)160for conversion. ADC160may convert the analog radio-frequency signals into their digital equivalent. Offset correction circuit162may be interposed between ADC160and downsampler164. Offset correction circuit162may provide fine delay adjustment to help mitigate any potential offset or smearing that can occur prior to the downsampling operation. Downsampler164may perform down-sampling or down-conversion operations. Reconstruction filter166may be interposed between downsampler164and IQ converter168. In particular, reconstruction filter166can have a bandwidth matching that of quadratic phase generator50and is also sometimes referred to as a channel filter. In the scenario where quadratic phase generator50exhibits full bandwidth, filter166should have a bandwidth matching the full bandwidth of generator50. In the scenario where quadratic phase generator50exhibits a reduced or limited bandwidth, filter166should have a relatively smaller bandwidth matching the limited bandwidth of generator50. Signals output from reconstruction filter166may be fed to IQ converter168. IQ converter168(sometimes referred to as an IQ demodulator) converts in-phase (I) and quadrature (Q) signals into baseband signals that can be received and processed by baseband receiver170. This example in which at least an IQ converter, a reconstruction filter, a downsampling circuit, an offset correction circuit (e.g., a delay adjustment circuit), and an analog-to-digital converter are included in the receive path is merely illustrative. If desired, the receive path may include additional analog front-end components coupled to or inserted before ADC160(e.g., additional filter, switching, or coupling circuitry), fewer analog front-end components, additional digital components coupled to or inserted after ADC160, and/or fewer digital components. A delay circuit172may be coupled between baseband transmitter150and baseband receiver170. Delay circuit172may be configured to provide a fixed or adjustable delay amount to help compensate for any internal delay between the transmit and receive path. Delay circuit172can therefore sometimes be referred to as an internal delay compensation circuit. Baseband transmitter150and baseband receiver170are sometimes referred to collectively as a baseband processor that can be considered as being part of wireless circuitry24and processing circuitry18(see, e.g.,FIG.1). The methods and operations described above in connection withFIGS.1-12may be performed by the components of device10using software, firmware, and/or hardware (e.g., dedicated circuitry or hardware).
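As an illustration of how the zero-autocorrelation property supports the spatial ranging described above, the sketch below correlates a received echo against a transmitted baseband sequence to estimate the round-trip delay and, from it, the range. It is a conceptual example only: the Zadoff-Chu reference sequence, the sample rate fs, the simulated delay, and the noise level are all illustrative assumptions and do not represent the processing actually performed by baseband receiver170:

    import numpy as np

    M, q = 101, 5            # word length and chirp count (illustrative values)
    fs = 2.0e9               # assumed baseband sample rate in Hz
    true_delay = 17          # simulated round-trip delay in samples

    n = np.arange(M)
    tx = np.exp(-1j * np.pi * q * n * (n + 1) / M)      # quadratic-phase (Zadoff-Chu) reference

    rx = np.roll(tx, true_delay)                        # delayed echo of the transmitted sequence
    rx = rx + 0.05 * (np.random.randn(M) + 1j * np.random.randn(M))  # small additive noise

    # Circular cross-correlation via FFTs; the CAZAC property yields a single sharp peak
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(tx)))
    estimated_delay = int(np.argmax(np.abs(corr)))      # peak index is the delay estimate (17 here)

    c = 3.0e8                                           # propagation speed in m/s
    estimated_range = c * (estimated_delay / fs) / 2.0  # divide by two for the round trip
    print(estimated_delay, estimated_range)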
Software code for performing these operations may be stored on non-transitory computer readable storage media (e.g., tangible computer readable storage media) stored on one or more of the components of device10(e.g., storage circuitry16ofFIG.1). The software code may sometimes be referred to as software, data, instructions, program instructions, or code. The non-transitory computer readable storage media may include drives, non-volatile memory such as non-volatile random-access memory (NVRAM), removable flash drives or other removable media, other types of random-access memory, etc. Software stored on the non-transitory computer readable storage media may be executed by processing circuitry on one or more of the components of device10(e.g., processing circuitry18ofFIG.1, etc.). The processing circuitry may include microprocessors, central processing units (CPUs), application-specific integrated circuits with processing circuitry, or other processing circuitry. The components ofFIGS.2,3,4,6,8, and12may be implemented using hardware (e.g., circuit components, digital logic gates, etc.) and/or using software where applicable. The foregoing is merely illustrative and various modifications can be made to the described embodiments. The foregoing embodiments may be implemented individually or in any combination. | 45,966 |
11863226 | DETAILED DESCRIPTION The present disclosure relates generally to an electronic device, and more particularly, to a communication device and an operating method including a reception processor. In some embodiments, the present disclosure provides a reception processor that detects a frequency spectrum of an out-of-band blocker and may adjust the reception characteristics of a radio frequency chain. Receivers that communicate based on frequency division duplexing (FDD) often experience some level of signal-to-noise ratio (SNR) degradation due to interference. For example, TX leakage and out-of-band blockers cause interference whose impact is characterized by IP3 indicators. As the power (current) of the receivers increases, feedback in the system can improve the IP3 indicators. However, when the amount of power is increased to cope with the out-of-band blocker, the efficiency of power consumption of the receiver decreases. Therefore, the present disclosure provides a communication device and an operating method including an antenna, a transmission processor, a radio frequency chain, and a reception processor. The transmission processor outputs a second transmission input signal with the same average power as the average power of a first transmission input signal and a second amplitude greater than a first amplitude of the first transmission input signal. The RF chain outputs an RF output signal to be transmitted through the antenna, based on a transmission input signal, and outputs a reception input signal based on a signal received through the antenna. The reception processor checks an out-of-band blocker by detecting a peaked frequency spectrum based on the reception input signal and adjusts a reception characteristic parameter of the RF chain based on an amplitude of the peaked frequency spectrum. Hereinafter, embodiments will be described in detail with reference to the accompanying drawings. Embodiments of the inventive concept are provided so that the disclosure will be thorough and complete, and will fully convey the concept of the inventive concept to one of ordinary skill in the art. The inventive concept may have various modified embodiments, and preferred embodiments are illustrated in the drawings and are described in the detailed description of the inventive concept. However, the inventive concept is not limited to the specific embodiments, and it should be understood that the inventive concept covers all the modifications, equivalents, and replacements within the idea and technical scope of the inventive concept. Like reference numerals refer to like elements throughout. Herein, "first" and "second" are merely used for differentiating the terms, and the inventive concept is not limited thereto. FIG.1is a diagram for describing a communication device100according to an embodiment. Referring toFIG.1, the communication device100according to an embodiment may include an antenna110. The number of antennas110may be one or more. The communication device100may transmit or receive a signal through the antenna110. Therefore, the communication device100may communicate with another communication device or a base station. A wireless communication system supported by the communication device100may be, for example, a wireless communication system using a cellular network such as a 5th generation (5G) wireless system, a long term evolution (LTE) system, an LTE-advanced system, a code division multiple access (CDMA) system, or a global system for mobile communication (GSM) system.
Alternatively, the wireless communication system may be, for example, a wireless local area network (WLAN) system. However, the inventive concept is not limited thereto. In an embodiment, the communication device100may perform communication based on a frequency division duplexing (FDD) scheme. However, the inventive concept is not limited thereto. In an embodiment, the communication device100may further include a radio frequency (RF) chain120and a modem130. The RF chain120may amplify a signal generated by the modem130, or may remove noise of the signal. Additionally, or alternatively, the RF chain120may output the amplified or noise-removed signal to the antenna110. The amplified or noise-removed signal may be transmitted to the outside through the antenna110. A signal to be transmitted to the outside through the antenna110may be referred to as an RF output signal. The RF chain120may amplify a signal received from the outside through the antenna110or may remove noise of the received signal. Additionally, or alternatively, the RF chain120may output the amplified or noise-removed signal to the modem130. A signal provided to the modem130may be referred to as a reception input signal. The RF chain120may be implemented with a plurality of circuits, to perform the operations described above. The number of RF chains120may be one or more. When the number of antennas110and the number of RF chains120are each plural, the number of RF chains120may be less than the number of antennas110. In this case, the communication device100may select an antenna corresponding to the number of RF chains. The modem130may process a transmission/reception signal of a baseband. For example, the modem130may generate a baseband signal for a transmission signal path of the RF chain120and may process the baseband signal received through a reception signal path of the RF chain120. In an embodiment, the modem130may include a transmission processor131that outputs the transmission input signal, and a reception processor132that processes the reception input signal. In an embodiment, the transmission processor131may output a first transmission input signal while the communication device100is performing a communication operation. Additionally, or alternatively, the transmission processor131may output a second transmission input signal in a period where the communication device100prepares for a communication operation. Average power of the first transmission input signal may be the same as average power of the second transmission input signal. Average power may be calculated as an integral, over frequency, of the frequency spectrum of the first transmission input signal. An amplitude of the first transmission input signal may be lower than an amplitude of the second transmission input signal. For example, a frequency spectrum of the second transmission input signal may have a relatively narrow bandwidth and a relatively large amplitude compared to a frequency spectrum of the first transmission input signal. A period where a communication operation is prepared may be a period where the communication device100does not communicate with another communication device and stands by. Alternatively, a period where a communication operation is prepared may be a period where a communication characteristic between the communication device100and another communication device is set, and for example, may be a period corresponding to a length of a cyclic prefix (CP). Accordingly, a communication state may be maintained.
Therefore, a degradation in communication quality may be prevented. In another embodiment, when a degradation in a signal to noise ratio (SNR) occurs, the second transmission input signal may be output. Even in this case, the transmission processor131may output the second transmission input signal at a period where the communication device100prepares for a communication operation. SNR degradation may be a state where an SNR used in a current communication operation is lower than a criterion used by the communication device100. SNR degradation may occur due to various causes such as multiple path propagation, the weather (particularly, cloudy weather), and an obstacle (or a shadow of an obstacle) adversely affecting propagation. In an embodiment, the reception processor132may check SNR degradation. For example, the reception processor132may calculate an SNR based on the reception input signal received from the RF chain120and may compare the calculated SNR with a predetermined reference SNR. When the calculated SNR is less than the reference SNR, the reception processor132may determine that SNR degradation occurs. At this time, the reception processor132may provide the transmission processor131with a comparison result signal representing a comparison result between the calculated SNR and the reference SNR. When an SNR is greater than or equal to the reference SNR, the transmission processor131may output the first transmission input signal to the RF chain120. Alternatively, when the SNR is less than the reference SNR, the transmission processor131may output the second transmission input signal to the RF chain120. The second transmission input signal, as described above, may be output at a period where a communication operation is prepared. In another embodiment, the reception processor132may calculate an SNR based on the reception input signal. The reception processor132may then compare the calculated SNR with the reference SNR. When the calculated SNR is less than the reference SNR, the reception processor132may provide the transmission processor131with a flag signal representing the occurrence of SNR degradation. The transmission processor131may change the first transmission input signal to the second transmission input signal in response to the flag signal. The second transmission input signal, as described above, may be output at a period where a communication operation is prepared. In an embodiment, while the second transmission input signal is output, the reception input signal may be provided to the reception processor132. The reception processor132may detect a peaked frequency spectrum based on the reception input signal to check an out-of-band blocker. The peaked frequency spectrum may be a frequency spectrum of noise in the reception input signal. The noise in the reception input signal may include, for example, an out-of-band blocker and a leaked transmission input signal. The out-of-band blocker may be noise outside a band of a desired reception frequency. Herein, the leaked transmission input signal may be referred to as a leakage transmission input signal. The reception processor132may adjust a reception characteristic parameter of the RF chain120based on an amplitude of the peaked frequency spectrum. The reception characteristic parameter of the RF chain120may be a parameter set for satisfying linearity. An indicator for determining linearity may be, for example, a third order intercept point (IP3).
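A behavioral sketch of the SNR check and flag handshake described above is shown below in Python. The reference SNR value, the noise-floor estimate, and the function names are illustrative assumptions rather than elements defined by the embodiments:

    import numpy as np

    REFERENCE_SNR_DB = 15.0            # assumed reference SNR used for the comparison

    def estimate_snr_db(reception_input, noise_power):
        # Very simple SNR estimate: total power minus an assumed noise-floor power
        total_power = np.mean(np.abs(reception_input) ** 2)
        signal_power = max(total_power - noise_power, 1e-12)
        return 10.0 * np.log10(signal_power / noise_power)

    def snr_degradation_flag(reception_input, noise_power):
        # True  -> SNR degradation: the transmission processor switches to the second
        #          transmission input signal during a period where communication is prepared
        # False -> keep outputting the first transmission input signal
        return estimate_snr_db(reception_input, noise_power) < REFERENCE_SNR_DB

For example, snr_degradation_flag(rxin, noise_power=1e-3) returns True whenever the estimated SNR falls below the 15 dB reference assumed here.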
FIG.2is a diagram for describing a time-frequency resource provided by a communication device according to an embodiment. Referring toFIG.2, the abscissa axis may represent a time domain TM, and the ordinate axis may represent a frequency domain FREQ. A minimum unit of resource allocation in the time domain TM may be an orthogonal frequency division multiplexing (OFDM) symbol, Nsymb (202) number of OFDM symbols may configure one slot206, and N (where N is an integer of 1 or more) number of slots may configure one sub-frame205. Additionally, or alternatively, one radio frame214may be one unit of the time domain TM including ten sub-frames205. A minimum unit of resource allocation in the frequency domain FREQ may be a subcarrier, and a total bandwidth of a communication system may be configured with total NBW (204) number of subcarriers. A basic unit of a resource in a time-frequency domain may be a resource element (RE) 212 and may be represented as an OFDM symbol index and a subcarrier index. A resource block (RB)208may be defined as Nsymb (202) number of continuous OFDM symbols in the time domain TM and NRB (210) number of continuous subcarriers in the frequency domain FREQ. Therefore, one RB208may be configured with (Nsymb*NRB) number of REs212, and a size of the RB208may correspond to the number of REs212. As a plurality of numerologies are supported in communication based on an NR network, a length of each of a subcarrier spacing (SCS) and a slot206may vary, and the number of slots configuring one sub-frame205of 1 ms may be determined by a numerology of a wireless communication system. For example, referring toFIG.4, a length of the slot206may be 0.5 ms, two (N=2) slots may configure one sub-frame205, and one slot may be configured with fourteen (Nsymb=14) OFDM symbols. This is merely an example, and one slot may be configured with twelve (Nsymb=12) OFDM symbols. The inventive concept described above may be applied to a wireless communication system that supports another numerology. Additionally, or alternatively, a physical downlink control channel (PDCCH) and a downlink channel including a physical downlink shared channel (PDSCH) and the like may be transmitted from a base station to a terminal in a wireless communication system. In an embodiment, the first transmission input signal may include a first RB, and the second transmission input signal may include a second RB. A size of the first RB may be greater than that of the second RB. Referring toFIG.2, for example, the second RB may include one RB208, and the first RB may include 100 RBs208. However, the inventive concept is not limited thereto. In another embodiment, the first transmission input signal may include the first RB, and the second transmission input signal may be a continuous wave with a single frequency. FIGS.3A and3Bare diagrams for describing embodiments of a communication device. Referring toFIG.3A, a communication device300amay include an antenna310, an RF chain320, a transmission processor330, a digital filter340, and a reception processor350. The RF chain320may include a transmission/reception duplexer321, a transmitter322, a receiver323, and an oscillation circuit module324. The transmission/reception duplexer321may provide a signal, received through the antenna310, as an RF input signal RFIN to the receiver323. Additionally, or alternatively, the transmission/reception duplexer321may provide the antenna310with an RF output signal RFOUT received from the transmitter322. 
The transmitter322may process a transmission input signal TXIN received from the transmission processor330to generate the RF output signal RFOUT. The receiver323may process the RF input signal RFIN to generate a reception input signal RXIN and may provide the reception input signal RXIN to the digital filter340. The oscillation circuit module324may generate a reference clock with a frequency for sampling the transmission input signal TXIN and the RF input signal RFIN. Additionally, or alternatively, the oscillation circuit module324may provide the reference clock to each of the transmitter322and the receiver323. For example, the oscillation circuit module324may provide the transmitter322with a transmission reference clock TXLO for sampling the transmission input signal TXIN and may provide the receiver323with a reception reference clock RXLO for sampling the RF input signal RFIN. A center frequency of the RF input signal RFIN may be greater than a center frequency of the reception input signal RXIN. Additionally, or alternatively, the center frequency of the reception input signal RXIN may be included in a baseband. Therefore, the reception input signal RXIN may be a signal obtained through down-conversion of the RF input signal RFIN by the receiver323. The transmission processor330may output the transmission input signal TXIN to the transmitter322. The transmission input signal TXIN may be the first transmission input signal or the second transmission input signal described above with reference toFIG.1. For example, while a communication operation is being performed, the transmission processor330may output the first transmission input signal to the transmitter322. Additionally, or alternatively, the transmission processor330may output the second transmission input signal to the transmitter322in a period where a communication operation is prepared, in response to the flag signal received from the reception processor350and described above with reference toFIG.1. The digital filter340may filter the reception input signal RXIN to provide a filtered reception input signal FRXIN to the reception processor350. The reception processor350may calculate an SNR based on the filtered reception input signal FRXIN. Additionally, or alternatively, the reception processor350may check whether SNR degradation occurs. This is as described above with reference toFIG.1. When SNR degradation occurs, the reception processor350may output the flag signal to the transmission processor330. The reception processor350may detect a peaked frequency spectrum based on the filtered reception input signal FRXIN when the RF input signal RFIN is received by the receiver323while the second transmission input signal is being output to the RF chain320. For example, the reception processor350may perform a Fourier transform on the filtered reception input signal FRXIN. The Fourier transform may include a fast Fourier transform (FFT). When the FFT is performed, a frequency spectrum of the filtered reception input signal FRXIN and a frequency spectrum of noise may be generated. The reception processor350may detect, as a peaked frequency spectrum, a frequency spectrum corresponding to a predetermined condition among frequency spectrums. This will be described below with reference toFIGS.9A and9B. The reception processor350may adjust a reception characteristic parameter based on an amplitude of the detected peaked frequency spectrum.
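The FFT-based detection just described can be sketched as follows. One simple form of the predetermined condition is a magnitude threshold (a reference amplitude), as elaborated later with reference toFIGS.9A and9B; the threshold value, bin resolution, and return format used here are illustrative assumptions:

    import numpy as np

    def detect_peaked_spectrum(frxin, reference_amplitude_db):
        # frxin: filtered reception input signal FRXIN (complex baseband samples)
        # Returns indices and magnitudes (in dB) of bins exceeding the reference amplitude
        spectrum = np.fft.fftshift(np.fft.fft(frxin) / len(frxin))
        magnitude_db = 20.0 * np.log10(np.abs(spectrum) + 1e-12)
        peak_bins = np.flatnonzero(magnitude_db >= reference_amplitude_db)
        return peak_bins, magnitude_db[peak_bins]

    # Example: a narrow tone standing in for the peaked frequency spectrum of the noise
    n = np.arange(4096)
    frxin = 0.5 * np.exp(2j * np.pi * 0.12 * n) + 0.01 * np.random.randn(4096)
    bins, levels = detect_peaked_spectrum(frxin, reference_amplitude_db=-20.0)
    out_of_band_blocker_detected = bins.size > 0    # drives the parameter adjustment decision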
For example, the reception processor350may provide the receiver323with a parameter control signal PAC that controls the reception characteristic parameter, based on the amplitude of the peaked frequency spectrum. Here, the reception characteristic parameter may be, for example, a parameter of the receiver323. For example, the reception characteristic parameter may include a current used by the receiver323, a gain of the receiver323, and a filter characteristic of the receiver323. In another embodiment, a circuit implemented independently from the reception processor350may perform the detection of the peaked frequency spectrum described above and the operation of adjusting the reception characteristic parameter. Referring toFIG.3B, a communication device300bmay further include a parameter controller360. The parameter controller360may receive the filtered reception input signal FRXIN from the reception processor350. The parameter controller360may detect the peaked frequency spectrum in the filtered reception input signal FRXIN and may perform the operation of adjusting the reception characteristic parameter. The parameter controller360may provide the parameter control signal PAC to the receiver323. FIG.4is a diagram for describing in detail an embodiment of the communication device illustrated inFIG.3A, andFIG.5is a diagram for describing an out-of-band blocker and a leakage transmission input signal. Referring toFIG.4, a communication device400may include an antenna410, a transmission/reception duplexer420, a transmitter430, a transmission processor440, a receiver450, a digital filter460, a reception processor470, and an oscillation circuit module480. The antenna410, the transmission/reception duplexer420, the transmission processor440, and the digital filter460may be the same as the antenna310, the transmission/reception duplexer321, the transmission processor330, and the digital filter340each described above with reference toFIG.3A. In an embodiment, the transmitter430may include a plurality of transmission blocks. The plurality of transmission blocks are dependently connected (cascaded) to one another. Referring toFIG.4, for example, the transmitter430may include a digital-to-analog converter431, a transmission analog filter432, a transmission mixer433, and a power amplifier434. The digital-to-analog converter431may convert a transmission input signal TXIN, which is a digital signal, into an analog signal. The transmission analog filter432may remove noise of the analog signal. The transmission mixer433may change a frequency of a noise-removed analog signal based on a frequency of a transmission reference clock TXLO. The power amplifier434may amplify power of a frequency-changed analog signal and may output a power-amplified analog signal as the RF output signal RFOUT. In an embodiment, the receiver450may include a plurality of reception blocks that are dependently connected (cascaded) to one another. Referring toFIG.4, for example, the receiver450may include a low noise amplifier451, a reception mixer452, a reception analog filter453, and an analog-to-digital converter454. The low noise amplifier451may amplify an RF input signal RFIN. The reception mixer452may change a frequency of the RF input signal RFIN based on a frequency of a reception reference clock RXLO. The reception analog filter453may remove noise of the RF input signal RFIN.
The analog-to-digital converter454may convert the RF input signal RFIN, which is an analog signal, into the reception input signal RXIN, which is a digital signal, and may output the reception input signal RXIN to the digital filter460. The reception processor470may adjust a reception characteristic parameter of at least one of the low noise amplifier451, the reception mixer452, the reception analog filter453, and the analog-to-digital converter454. The oscillation circuit module480may include a phase locked loop circuit481and an oscillator482. Although not shown, the communication device400may further include an external low noise amplifier. According to the above description, an out-of-band blocker may be detected even without a separate front-end-to-digital path and additional digital blocks. Therefore, operation performance may be improved and the manufacturing cost may be reduced. Referring toFIG.5, when the transmitter430outputs the RF output signal RFOUT based on the transmission input signal TXIN, a leakage transmission input signal TXL may be generated. The leakage transmission input signal TXL may be input to the low noise amplifier451through the transmission/reception duplexer420. Additionally, or alternatively, an out-of-band blocker OOB may be additionally input to the low noise amplifier451through the antenna410. In this case, to improve linearity, reception characteristic parameters of a plurality of reception blocks included in the receiver450may be adjusted. FIG.6is a diagram for describing a characteristic parameter and blocks included in an RF chain according to an embodiment.FIG.6shows a plurality of reception blocks and a reception characteristic parameter of a reception block. Referring toFIG.6, in an embodiment, the RF chain may include a plurality of reception blocks. For example, the RF chain may include first to third blocks BLK1to BLK3. The first to third blocks BLK1to BLK3may be reception blocks included in a receiver (for example,450illustrated inFIG.4). The first to third blocks BLK1to BLK3may be, for example, the low noise amplifier451, the reception mixer452, and the reception analog filter453. However, the inventive concept is not limited thereto. Each of the first to third blocks BLK1to BLK3may have at least one characteristic parameter. For example, when each of the first to third blocks BLK1to BLK3is a reception block, the first block BLK1may have a first current C1, a first gain G1, and a first feedback factor IP3_1. In this case, a reception characteristic parameter of the first block BLK1may include at least one of the first current C1, the first gain G1, and the first feedback factor IP3_1. Likewise, the second block BLK2may have a second current C2, a second gain G2, and a second feedback factor IP3_2, and the third block BLK3may have a third current C3, a third gain G3, and a third feedback factor IP3_3. Here, a current of a block may be a current used by a corresponding block, or may be a current consumed by a corresponding block. For example, the first current C1may be a current used by the first block BLK1. The first to third blocks BLK1to BLK3may be cascaded. Therefore, total IP3 may be calculated as an indicator representing linearity as expressed in the following Equation 1. 1/IP3_total = G1/IP3_1 + G2/IP3_2 + G3/IP3_3 (1) In this case, IP3_total may denote total IP3. Additionally, or alternatively, the reception processor470may adjust characteristic parameters of the first to third blocks BLK1to BLK3.
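The cascaded IP3 relationships of Equation 1 and of Equation 2 below (which applies after the characteristic parameters are changed) can be evaluated with a few lines of Python. The block gains and per-block IP3 values used here are illustrative numbers only:

    def total_ip3(gains, ip3_blocks):
        # gains[i] and ip3_blocks[i] are the gain weight and IP3 contribution of block i,
        # combined according to 1/IP3_total = sum(gains[i] / ip3_blocks[i])
        inverse_total = sum(g / ip3 for g, ip3 in zip(gains, ip3_blocks))
        return 1.0 / inverse_total

    # Illustrative linear (not dB) values for blocks BLK1 to BLK3
    G1, G2, G3 = 10.0, 5.0, 2.0
    IP3_1, IP3_2, IP3_3 = 100.0, 400.0, 800.0

    ip3_equation_1 = total_ip3([G1, G2, G3], [IP3_1, IP3_2, IP3_3])          # Equation 1 weighting
    ip3_equation_2 = total_ip3([1.0, G1, G1 * G2], [IP3_1, IP3_2, IP3_3])    # Equation 2 weighting
    print(ip3_equation_1, ip3_equation_2)

Raising an individual block's IP3 contribution (for example, by increasing its current) lowers the corresponding term and therefore raises IP3_total, which illustrates why adjusting a block's reception characteristic parameter can improve overall linearity.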
For example, when the characteristic parameters of the first to third blocks BLK1to BLK3are changed, the total IP3 may be calculated as expressed in the following Equation 2. 1/IP3_total = 1/IP3_1 + G1/IP3_2 + (G1*G2)/IP3_3 (2) FIG.7is a diagram for describing a frequency spectrum of each of a transmission input signal, an out-of-band blocker, and a reception input signal. Referring toFIGS.5and7, the abscissa axis may represent a frequency, and the ordinate axis may represent an amplitude (or power). A unit of a frequency may be Hz, and a unit of an amplitude may be dB. However, the inventive concept is not limited thereto. A transmission frequency spectrum TXFS may be a frequency spectrum of a transmission input signal TXIN. The transmission frequency spectrum TXFS may have a transmission bandwidth TX BAND and a transmission amplitude TXP. A blocker frequency spectrum OOBFS may be a frequency spectrum obtained by performing an FFT on an out-of-band blocker OOB. The out-of-band blocker OOB may be intermittently input to the receiver450, unlike the leakage transmission input signal TXL. Additionally, or alternatively, the out-of-band blocker OOB may be a continuous wave with a single frequency. The blocker frequency spectrum OOBFS may have a blocker frequency OOBF, which is a single frequency, and a blocker amplitude OOBP. A reception frequency spectrum RXFS may be a frequency spectrum of an RF input signal RFIN amplified by the low noise amplifier451. The reception frequency spectrum RXFS may have a reception bandwidth RX BAND and a reception amplitude RXP. The out-of-band blocker OOB and the leakage transmission input signal TXL based on the transmission input signal TXIN may be input to the receiver450. Therefore, the transmission frequency spectrum TXFS and the blocker frequency spectrum OOBFS may be reflected in the reception frequency spectrum RXFS as in-channel noise. In this case, a noise frequency spectrum NFS with the same bandwidth as the reception bandwidth RX BAND of the reception frequency spectrum RXFS may be generated. Based on the noise frequency spectrum NFS, the reception amplitude RXP may increase by a noise amplitude NP. FIG.8is a diagram for describing down-converted frequency spectrums. Referring toFIGS.5,7, and8, a down noise frequency spectrum DCNFS, a down reception frequency spectrum DCRXFS, a down blocker frequency spectrum DCOOBFS, and a down leakage transmission frequency spectrum DCTXLFS may be generated by the down-conversion. The down noise frequency spectrum DCNFS and the down reception frequency spectrum DCRXFS may be included in a reception analog baseband RX ABB. Additionally, or alternatively, the down blocker frequency spectrum DCOOBFS may be further included in the reception analog baseband RX ABB. The down leakage transmission frequency spectrum DCTXLFS may be filtered by the reception analog filter453. Additionally, or alternatively, the down blocker frequency spectrum DCOOBFS may be filtered by the reception analog filter453. For example, referring toFIG.8, a filter FLT for filtering may be a low pass filter, but the inventive concept is not limited thereto. When a bandwidth of the down noise frequency spectrum DCNFS is the same as a bandwidth of the down reception frequency spectrum DCRXFS, an amplitude of the down noise frequency spectrum DCNFS may be relatively low. Therefore, detecting the down noise frequency spectrum DCNFS may be difficult.
To more easily detect the down noise frequency spectrum DCNFS, the down noise frequency spectrum DCNFS with a relatively narrow bandwidth and a relatively large amplitude may be used. FIGS.9A and9Bare diagrams for describing embodiments that detect a peaked frequency spectrum. Referring toFIGS.4,8, and9A, the transmission processor440may output the second transmission input signal. The second transmission input signal may be a signal for sharpening a shape of the down noise frequency spectrum DCNFS. The second transmission input signal, as described above, may have the same average power as the first transmission input signal and may have a second amplitude greater than a first amplitude of the first transmission input signal. Accordingly, a communication state may be maintained. Therefore, communication quality may not be reduced. In an embodiment, the second transmission input signal may include a second resource block. In this case, a size of the second resource block may be less than that of the first resource block of the first transmission input signal. In an embodiment, the reception processor470may perform an FFT on the reception input signal RXIN. Referring toFIG.9A, for example, a shape of the down noise frequency spectrum DCNFS may be changed to a shape of a first peaked frequency spectrum PFS1. The first peaked frequency spectrum PFS1may have a bandwidth TB and an amplitude. In an embodiment, the reception processor470may detect, as a peaked frequency spectrum, a frequency spectrum that is greater than or equal to at least one predetermined reference amplitude among the frequency spectrums generated as a result of an FFT. The reference amplitude, as illustrated inFIG.9A, may be a single reference amplitude TH. Referring toFIG.9A, for example, the first peaked frequency spectrum PFS1among frequency spectrums PFS1, DCRXFS, DCOOBFS, DCTXLFS generated as a result of an FFT may be greater than the single reference amplitude TH. In another embodiment, the reception processor470may compare an amplitude of the first peaked frequency spectrum PFS1with the at least one predetermined reference amplitude. When the first peaked frequency spectrum PFS1is detected, the reception processor470may adjust a reception characteristic parameter. Alternatively, the reception processor470may adjust the reception characteristic parameter based on a comparison result between an amplitude of the first peaked frequency spectrum PFS1and the single reference amplitude TH. The reception processor470may repeatedly adjust the reception characteristic parameter until the amplitude of the first peaked frequency spectrum PFS1is less than the single reference amplitude TH. Accordingly, an out-of-band blocker that is intermittently input may be easily detected. Therefore, linearity may be improved. Additionally, or alternatively, when the second transmission input signal is a continuous wave with a single frequency, a shape of the down noise frequency spectrum DCNFS may be made even sharper. Referring toFIGS.4,8, and9B, as described above, the second transmission input signal may have the same average power as the first transmission input signal and may have the second amplitude greater than the first amplitude of the first transmission input signal. In another embodiment, the second transmission input signal may be a continuous wave. A shape of the down noise frequency spectrum DCNFS may be changed to a shape of a second peaked frequency spectrum PFS2.
The second peaked frequency spectrum PFS2may have a single transmission frequency CWTXF and an amplitude. An amplitude of the second peaked frequency spectrum PFS2may be greater than that of the first peaked frequency spectrum PFS1. The reception processor470may detect the second peaked frequency spectrum PFS2greater than or equal to the single reference amplitude TH, among the frequency spectrums PFS2, DCRXFS, DCOOBFS, DCTXLFS' generated as the result of the FFT. Alternatively, the reception processor470may compare an amplitude of the second peaked frequency spectrum PFS2with the single reference amplitude TH. Accordingly, an out-of-band blocker that is intermittently input may be more easily detected. Therefore, linearity may be improved. Additionally, or alternatively, in a case where only an amplitude of a peaked frequency spectrum and the single reference amplitude TH are used for detecting an out-of-band blocker and adjusting a reception characteristic parameter, the amplitude of the peaked frequency spectrum may not be accurately determined. FIG.10is a diagram for describing an embodiment in which the amount of adjustment of a reception characteristic parameter is changed based on a peaked frequency spectrum. Referring toFIGS.4,8, and10, a reference amplitude may be provided in plurality, and the plurality of reference amplitudes may differ from one another. Referring toFIG.10, for example, the plurality of reference amplitudes may include a first reference amplitude TH1and a second reference amplitude TH2higher than the first reference amplitude TH1. However, the inventive concept is not limited thereto. As the number of reference amplitudes increases, an amplitude of the peaked frequency spectrum PFS may be more accurately detected. The peaked frequency spectrum PFS illustrated inFIG.10may be the first peaked frequency spectrum PFS1illustrated inFIG.9A, but is not limited thereto and may be the second peaked frequency spectrum PFS2illustrated inFIG.9B. In an embodiment, the reception processor470may compare the amplitude of the peaked frequency spectrum PFS with the first and second reference amplitudes TH1and TH2to detect the amplitude of the peaked frequency spectrum PFS and may determine the amount of adjustment of a reception characteristic parameter based on the detected amplitude. Referring toFIG.10, for example, when the amplitude of the peaked frequency spectrum PFS is greater than or equal to the second reference amplitude TH2, the reception processor470may set a second adjustment amount greater than a first adjustment amount. As another example, when the amplitude of the peaked frequency spectrum PFS is greater than or equal to the first reference amplitude TH1and less than the second reference amplitude TH2, the reception processor470may set the first adjustment amount. According to the above description, because the amount of adjustment of a reception characteristic parameter is set based on an amplitude of a peaked frequency spectrum, an operation load (or a working load) may be reduced, and moreover, linearity may be further improved. FIG.11is a flowchart of an operating method of a communication device100, according to an embodiment. Referring toFIG.11, in operation S100, the communication device100checks a degradation in an SNR. For example, the reception processor132may check SNR degradation based on a reception input signal generated by the RF chain120. In operation S200, the communication device100changes a transmission input signal.
For example, the transmission processor131may change a first transmission input signal to a second transmission input signal in response to SNR degradation being checked. The second transmission input signal may have the same average power as that of the first transmission input signal and may have a second amplitude greater than a first amplitude of the first transmission input signal. In an embodiment, the first transmission input signal may include a first resource block, and the second transmission input signal may include a second resource block with a size less than that of the first resource block. In another embodiment, the first transmission input signal may include the first resource block, and the second transmission input signal may be a continuous wave. In operation S300, the communication device100detects an out-of-band blocker. For example, the reception processor132may detect a peaked frequency spectrum based on a reception input signal to detect an out-of-band blocker. The peaked frequency spectrum may be greater than or equal to at least one predetermined reference amplitude (for example, the first reference amplitude TH1illustrated inFIG.10). In operation S400, the communication device100adjusts the reception characteristic parameter. For example, the reception processor132may adjust a reception characteristic parameter of the RF chain120based on an amplitude of the peaked frequency spectrum. FIG.12is a block diagram illustrating a base station1200according to an embodiment. Referring toFIG.12, the base station1200may include a modem and a radio frequency integrated circuit (RFIC)1260, and the modem may include an application specific integrated circuit (ASIC)1210, an application specific instruction set processor (ASIP)1230, a memory1250, a main processor1270, and a main memory1290. The RFIC1260may be connected to an antenna Ant and may receive a signal from the outside or may transmit a signal to the outside by using a wireless communication network. The ASIP1230may be an integrated circuit customized for a certain purpose, may support a dedicated instruction set for a certain application, and may execute instructions included in the instruction set. The memory1250may be a non-transitory storage device, may communicate with the ASIP1230, and may store a plurality of instructions executed by the ASIP1230. For example, the memory1250may include an arbitrary type of memory accessible by the ASIP1230, such as random access memory (RAM), read only memory (ROM), tape, a magnetic disk, an optical disk, a volatile memory, a non-volatile memory, or a combination thereof. The main processor1270may execute a plurality of instructions to control the base station1200. For example, the main processor1270may control the ASIC1210and the ASIP1230and may process data received over the wireless communication network. The main memory1290, a non-transitory storage device, may communicate with the main processor1270and may store the plurality of instructions executed by the main processor1270. For example, the main memory1290may include an arbitrary type of memory accessible by the main processor1270, such as RAM, ROM, tape, a magnetic disk, an optical disk, a volatile memory, a non-volatile memory, or a combination thereof. FIG.13is a block diagram illustrating a computing system1400according to an embodiment.
Referring toFIG.13, the computing system1400may include a stationary computing system like a desktop computer, a workstation, and a server, or may include a portable computing system like a laptop computer. Additionally, or alternatively, the computing system1400may include a semiconductor device implemented with a semiconductor. The computing system1400may include a processor1410, a memory1420, a plurality of input/output devices1430, a storage device1440, a network interface1450, and a modem1460. The processor1410, the memory1420, the input/output devices1430, the storage device1440, the network interface1450, and the modem1460may be connected to a bus1470and may communicate with one another through the bus1470. The processor1410may be referred to as a processing unit, and for example, may include at least one core for executing an arbitrary instruction set (for example, Intel Architecture-32 (IA-32), 64-bit extension IA-32, x86-64, PowerPC, Sparc, MIPS, ARM, IA-64, etc.) like a micro-processor, an application processor (AP), a digital signal processor (DSP), and a graphics processing unit (GPU). For example, the processor1410may access the memory1420through the bus1470and may execute instructions stored in RAM or ROM. The memory1420may include dynamic RAM (DRAM) and a volatile memory (e.g., RAM), or may include flash memory and a non-volatile memory (e.g., ROM). The input/output devices1430may include an input device such as a keyboard or a pointing device and may include an output device such as a printer. The storage device1440may store data to be processed by the processor1410, or may store data obtained through processing by the processor1410. For example, the processor1410may process data stored in the storage device1440to generate data and may store the generated data in the storage device1440. The network interface1450may provide access corresponding to a network outside the computing system1400. For example, the network may include a plurality of computing systems and a plurality of communication links, and the communication links may include wired links, optical links, wireless links, or arbitrary links of a different type. The modem1460may perform wireless communication or wired communication with an external device. For example, the modem1460may perform Ethernet communication, near field communication (NFC), radio frequency identification (RFID) communication, mobile communication, memory card communication, and universal serial bus (USB) communication, but is not limited thereto. While the inventive concept has been particularly shown and described with reference to embodiments thereof, it will be understood that various changes in form and details may be made therein without departing from the spirit and scope of the following claims. | 41,107 |
11863227 | DETAILED DESCRIPTION OF EMBODIMENTS The following detailed description of embodiments presents various descriptions of specific embodiments of the invention. In this description, reference is made to the drawings in which like reference numerals may indicate identical or functionally similar elements. It will be understood that elements illustrated in the figures are not necessarily drawn to scale. Moreover, it will be understood that certain embodiments can include more elements than illustrated in a drawing and/or a subset of the elements illustrated in a drawing. Further, some embodiments can incorporate any suitable combination of features from two or more drawings. A radio frequency (RF) communication system communicates by wirelessly transmitting and receiving RF signals. Such RF communication systems can include one or more RF switches to provide control over routing of RF signals, connectivity between components or circuits, and/or to provide various other switching functions. Examples of RF communication systems with one or more RF switches include, but are not limited to, base stations, mobile devices (for instance, smartphones or handsets), laptop computers, tablets, Internet of Things (IoT) devices, and/or wearable electronics. Certain RF switching circuits include a field-effect transistor (FET) switch and a switch bias circuit that controls a gate voltage of the switch to thereby change a channel impedance of the switch and modulate the switch's conductivity. For example, the switch bias circuit can control the gate voltage to a first level to turn off the FET switch such that the channel impedance is high and the RF signal does not pass through the FET switch. Additionally, the switch bias circuit can control the gate voltage to a second level to turn on the FET switch such that the channel impedance is low and the RF signal passes through the FET switch. Thus, the switch bias circuit is used to turn the FET switch on or off to control passage of the RF signal. An RF signal can couple onto the gate of the FET switch via a parasitic gate-to-drain capacitance (Cgd) and/or a parasitic gate-to-source capacitance (Cgs) of the FET switch. To provide isolation, a gate resistor can be included between an output of the switch bias circuit and the gate of the FET switch. Several benefits are provided by a large resistance value of the gate resistor, such as low loss and/or low cutoff frequency to provide wideband operation. However, making the resistance value of the gate resistor large also undesirably lengthens the turn-on time and turn-off time of the FET switch. For example, when the switch bias circuit changes the gate voltage of the FET switch, there is an undesirable switching delay based on a resistor-capacitor (RC) time constant arising from a resistance of the gate resistor and a gate capacitance of the FET switch. The delay in switching leads to an increase in turn-on time and turn-off time of the switch. As used herein, the speed of a RF switch may refer to the turn-on time and/or turn-off time of the switch. Thus, although implementing the gate resistor with a high resistance provides a number of benefits, it also degrades the switching performance of the FET switch. 
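As a rough, non-authoritative illustration of the switching-delay trade-off described above, the short Python sketch below estimates the RC time constant formed by a gate resistor and the FET gate capacitance; the component values are assumptions chosen for illustration, not values from this disclosure.

```python
# Sketch of why a large gate resistance slows switching: the gate bias network
# forms an RC low-pass with the FET gate capacitance, so the gate voltage
# settles with time constant tau = R * C. Values below are hypothetical.
import math

R_GATE = 100e3   # assumed gate resistor, 100 kOhm
C_GATE = 2e-12   # assumed total gate capacitance, 2 pF

tau = R_GATE * C_GATE                 # time constant in seconds
t_settle_99 = -tau * math.log(0.01)   # time to settle within 1% of the final value

print(f"tau = {tau * 1e9:.0f} ns, 99% settling ~ {t_settle_99 * 1e9:.0f} ns")
# With these numbers tau is 200 ns and 99% settling takes roughly 0.9 us; cutting
# R to 10 kOhm would be ten times faster but would degrade isolation and loss.
```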
To achieve short switching time, an RF system can include an RF switch having a control input that controls an impedance of the RF switch, a multi-level buffer configured to receive a control signal for selectively activating the RF switch, and a resistor electrically connected between an output of the multi-level buffer and the control input to the RF switch. Additionally, the multi-level buffer generates a switch control voltage at the output, and pulses the switch control voltage in response to a transition of the control signal to thereby shorten a delay in switching the RF switch. Thus, rather than directly transitioning the switch control voltage from an ON voltage to an OFF voltage, or vice versa, the switch control voltage is pulsed before being controlled to a steady-state voltage level. By pulsing the switch control voltage in this manner, charging or discharging at the control input of RF switch occurs faster, which shortens the switching delay of the RF switch. Embodiments of multi-level buffers for driving RF switches are provided herein. Further embodiments relate to at least temporarily boosting the voltage applied to the gate resistor, thereby increasing the speed at which the gate reaches the desired voltage level. The boosted voltage may only need to be applied at the transitions between on and off (and vice versa) and the typical ON voltage or OFF voltage can be applied to the gate during a steady-state. In certain implementations, the multi-level buffer controls the RF switch not only with a steady-state switch ON voltage (for instance, a power high supply voltage) and a steady-state switch OFF voltage (for instance, a power low supply voltage or ground voltage), but also with a high voltage greater than the steady-state switch ON voltage and a low voltage lower than the steady-state switch OFF voltage. In certain embodiments, the high voltage and the low voltage may be produced using boost circuit(s), as described herein. For example, when turning on an n-type field-effect transistor (NFET) switch, the multi-level buffer uses the high voltage to control the gate voltage of the NFET switch for a portion of time, and thereafter controls the gate voltage with the steady-state switch ON voltage. Additionally, when turning off the NFET switch, the multi-level buffer uses the low voltage to control the gate voltage of the NFET switch for a portion of time, and thereafter controls the gate voltage with the steady-state switch OFF voltage. The result is a speed-up of the turn-on and turn-off times of the NFET switch. In certain implementations, the voltage level of the pulse is beyond a breakdown voltage at which the switch can reliably operate, for example, in excess of a maximum gate-to-source voltage for FET switches. Thus, the pulse voltage level would damage the switch due to reliability considerations if used to control the switch in the steady-state. However, by applying the pulse via the resistor or other impedance, the voltage directly at the control input of the switch remains within a voltage range for reliable operation. Thus, the benefits of fast switching time are achieved without damaging the switch. In certain implementations, the multi-level buffer is also implemented using standard voltage FETs that cannot reliably handle the full voltage of the pulse. 
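A minimal sketch of the four-level drive described above follows; the specific voltage levels, the pulse width, and the function name are assumptions used only to make the behavior concrete, not values taken from the embodiments.

```python
# Illustrative (assumed) behavior of a multi-level buffer: after a control-signal
# transition the output is pulsed to VHIGH or VLOW for a fixed pulse width, then
# held at the steady-state ON/OFF level. All numbers are hypothetical.
V_HIGH, V_DD, V_SS, V_LOW = 6.0, 3.3, -3.3, -6.0   # hypothetical levels in volts
PULSE_WIDTH = 100e-9                                # hypothetical pulse width, 100 ns

def buffer_output(ctl_on: bool, t_since_transition: float) -> float:
    """Return the buffer output voltage.

    ctl_on -- True if the control signal currently requests the switch ON
    t_since_transition -- seconds elapsed since the last CTL edge
    """
    if t_since_transition < PULSE_WIDTH:
        return V_HIGH if ctl_on else V_LOW   # overdrive pulse right after the edge
    return V_DD if ctl_on else V_SS          # steady-state ON / OFF level

# Example: 50 ns after an OFF->ON edge the output is still boosted; at 500 ns it is not.
print(buffer_output(True, 50e-9), buffer_output(True, 500e-9))   # 6.0 3.3
```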
Pulsing the switch control voltage provides an enhancement to switching speed with little to no impact on other performance characteristics of the RF switch, such as linearity, power handling capability, and/or insertion loss. For example, the switch control voltage can be pulsed without needing to include additional circuitry along the RF signal path through the switch or at the control input of the RF switch. Thus, the switching speed is improved without needing to add circuitry such as resistor bypass switches that could degrade performance by parasitically loading the RF switch. FIG.1is a schematic diagram of one example of an RF communication system10that can include one or more RF switching circuits in accordance with the teachings herein. Although the RF communication system10illustrates one example of an electronic system that can include one or more RF switching circuits, the RF switching circuits described herein can be used in other configurations of electronic systems. Furthermore, although a particular configuration of components is illustrated inFIG.1, the RF communication system10can be adapted and modified in a wide variety of ways. For example, the RF communication system10can include more or fewer receive paths and/or transmit paths. Additionally, the RF communication system10can be modified to include more or fewer components and/or a different arrangement of components, including, for example, a different arrangement of RF switching circuits. In the illustrated configuration, the RF communication system10includes a baseband processor1, an I/Q modulator2, an I/Q demodulator3, a first digital step attenuator4a, a second digital step attenuator4b, a filter5, a power amplifier6, an antenna switch module7, a low noise amplifier8, and an antenna9. As shown inFIG.1, the baseband processor1generates an in-phase (I) transmit signal and a quadrature-phase (Q) transmit signal, which are provided to the I/Q modulator2. Additionally, the baseband processor1receives an I receive signal and a Q receive signal from the I/Q demodulator3. The I and Q transmit signals correspond to signal components of a transmit signal of a particular amplitude, frequency, and phase. For example, the I transmit signal and Q transmit signal represent an in-phase sinusoidal component and quadrature-phase sinusoidal component, respectively, and can be an equivalent representation of the transmit signal. Additionally, the I and Q receive signals correspond to signal components of a receive signal of a particular amplitude, frequency, and phase. In certain implementations, the I transmit signal, the Q transmit signal, the I receive signal, and the Q receive signal are digital signals. Additionally, the baseband processor1can include a digital signal processor, a microprocessor, or a combination thereof, used for processing the digital signals. The I/Q modulator2receives the I and Q transmit signals from the baseband processor1and processes them to generate a modulated RF signal. In certain configurations, the I/Q modulator2can include DACs configured to convert the I and Q transmit signals into an analog format, mixers for upconverting the I and Q transmit signals to radio frequency, and a signal combiner for combining the upconverted I and Q signals into the modulated RF signal. The first digital step attenuator4areceives the modulated RF signal, and attenuates the modulated RF signal to generate an attenuated RF signal.
The first digital step attenuator4acan aid in obtaining a desired gain and/or power level associated with transmission. In the illustrated configuration, the first digital step attenuator4aincludes a first RF switching circuit20a. The first digital step attenuator4aillustrates one example of a circuit that can include one or more RF switching circuits in accordance with the teachings herein. For example, the first digital step attenuator4acan include a cascade of attenuator stages, each of which can be bypassed using an RF switching circuit to aid in providing a digitally adjustable amount of attenuation. The filter5receives the attenuated RF signal from the first digital step attenuator4a, and provides a filtered RF signal to an input of the power amplifier6. In certain configurations, the filter5can be a band pass filter configured to provide band filtering. However, the filter5can be a low pass filter, a band pass filter, a notch filter, a high pass filter, or a combination thereof, depending on the application. The power amplifier6can amplify the filtered RF signal to generate an amplified RF signal, which is provided to the antenna switch module7. The antenna switch module7is further electrically connected to the antenna9and to an input of the low noise amplifier8. The antenna switch module7can be used to selectively connect the antenna9to the output of the power amplifier6or to the input of the low noise amplifier8. In certain implementations, the antenna switch module7can provide a number of other functionalities, including, but not limited to, band switching, switching between transmit and receive, and/or switching between different power modes. In the illustrated configuration, the antenna switch module7includes a second RF switching circuit20b. The antenna switch module7illustrates another example of a circuit that can include one or more RF switching circuits in accordance with the teachings herein. For example, the antenna switch module7can include an RF switching circuit implemented as a single pole multi-throw switch. AlthoughFIG.1illustrates a configuration in which the antenna switch module7operates as a single pole double throw switch, the antenna switch module7can be adapted to include additional poles and/or throws. The LNA8receives an antenna receive signal from the antenna switch module7, and generates an amplified antenna receive signal that is provided to the second digital step attenuator4b. The second digital step attenuator4bcan attenuate the amplified antenna receive signal by a digitally-controllable amount of attenuation. As shown inFIG.1, the second digital step attenuator4bgenerates an attenuated receive signal, which is provided to the I/Q demodulator3. Including the second digital step attenuator4bcan aid in providing the I/Q demodulator3with a signal that has a desired amplitude and/or power level. In the illustrated configuration, the second digital step attenuator4bincludes a third RF switching circuit20c. The second digital step attenuator4billustrates another example of a circuit that can include one or more RF switching circuits in accordance with the teachings herein. The I/Q demodulator3can be used to generate the I receive signal and the Q receive signal, as was described earlier. In certain configurations, the I/Q demodulator3can include a pair of mixers for mixing the attenuated receive signal with a pair of clock signals that are about ninety degrees out of phase.
Additionally, the mixers can generate downconverted signals, which can be provided to ADCs used to generate the I and Q receive signals. The RF switching circuits20a-20ccan be used for handling RF signals using a variety of communication standards, including, for example, Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), wideband CDMA (W-CDMA), Long Term Evolution (LTE), Enhanced Data Rates for GSM Evolution (EDGE), 3G, 4G, and/or 5G, as well as other proprietary and non-proprietary communications standards. Moreover, the RF switching circuits20a-20ccan control switching of signals of a variety of frequencies, including not only RF signals between 100 MHz and 7 GHz, but also signals at higher frequencies, such as those in the X band (about 7 GHz to 12 GHz), the Ku band (about 12 GHz to 18 GHz), the K band (about 18 GHz to 27 GHz), the Ka band (about 27 GHz to 40 GHz), the V band (about 40 GHz to 75 GHz), and/or the W band (about 75 GHz to 110 GHz). Accordingly, the teachings herein are applicable to a wide variety of RF communication systems, including microwave communication systems. Providing an RF switch in a transmit or receive path of an RF communication system can impact the system's performance. For example, not only can the RF switch's linearity, power handling capability, and insertion loss impact operations of the system, but transient performance characteristics such as turn-on time, turn-off time, and/or settling time can also impact performance. The teachings herein can be used to improve transient performance characteristics of an RF switch, with little to no impact on other performance characteristics of the RF switch, such as linearity, power handling capability, and/or insertion loss. FIG.2Ais a circuit diagram of an RF switching circuit20according to one embodiment. The RF switching circuit20includes an n-type field effect transistor (NFET) switch21, a multi-level buffer22(also referred to herein as a switch bias circuit), and a gate bias resistor31. As shown inFIG.2A, the multi-level buffer22receives a control signal CTL for indicating whether the NFET switch21should be turned on or turned off by the multi-level buffer22. Thus, the control signal CTL is used to selectively activate the NFET switch21. The multi-level buffer22also receives a power high supply voltage VDD, a ground or power low supply voltage VSS, a high voltage VHIGHgreater than the power high supply voltage VDD, and a low voltage VLOWlower than the power low supply voltage VSS. The power high supply voltage VDDis also referred to herein as +VDD, and the power low supply voltage VSSis also referred to herein as −VSS. AlthoughFIG.2Aillustrates a configuration in which the multi-level buffer22is used to control one FET switch, the multi-level buffer22can be configured to bias one or more additional FET switches. In such configurations, the multi-level buffer22can include additional switch control inputs, such as a control signal associated with each FET switch. However, other configurations are possible, such as implementations in which a control signal is used to control multiple FET switches. For example, in certain implementations, a multi-level buffer is used to control a series FET switch as well as a shunt FET switch. Additionally, the series FET switch and the shunt FET switch can be controlled by a common control signal such that when the series FET switch is turned on the shunt FET switch is turned off, and vice versa.
In the illustrated configuration, a source of the NFET switch21is electrically connected to the RF input RFIN, and a drain of the NFET switch21is electrically connected to the RF output RFOUT. Although an example is shown in which an RF switch is connected between an RF input terminal and an RF output terminal, other configurations are possible, such as implementations in which the RF switch is connected between a first RF terminal and a second RF terminal that are bidirectional. As shown inFIG.2A, the gate bias resistor31is electrically connected between a gate bias output of the multi-level buffer22and a gate of the NFET switch21. The gate bias resistor31can enhance isolation between the gate bias output of the multi-level buffer22and the gate of the NFET switch21. For example, high frequency signal components can be coupled onto the gate of the NFET switch21via parasitic gate-to-drain and/or gate-to-source capacitances, and the gate bias resistor31can provide resistance that impedes the high frequency signal components from reaching the gate bias output of the multi-level buffer22. AlthoughFIG.2Aillustrates the output of the multi-level buffer22being connected to the gate of the NFET switch21via the resistor31, any suitable impedance can be connected between the output of the multi-level buffer22and the gate of the NFET switch21. For example, in another embodiment, an inductor or a combination of an inductor and a resistor are connected between the output of a multi-level buffer and a control input to an RF switch. The NFET switch21can be implemented in a variety of ways. In one embodiment, the NFET switch21is implemented as a silicon-on-insulator (SOI) metal oxide semiconductor (MOS) transistor including a body that is electrically floating. As used herein and as persons having ordinary skill in the art will appreciate, MOS transistors can have gates made out of materials that are not metals, such as polysilicon, and can have dielectric regions implemented not just with silicon oxide, but with other dielectrics, such as high-k dielectrics. AlthoughFIG.2Aillustrates a configuration using n-type transistors, the teachings herein are applicable to configurations using p-type transistors or a combination of n-type and p-type transistors. Furthermore, the teachings herein are applicable to other types of RF switches that include a control input for controlling the RF switch's impedance. The RF switching circuit20can be used in a wide variety of configurations within an electronic system. For example, the NFET switch21can operate in a transmit signal path or a receive signal path of an RF communication system, such as the RF communication system10ofFIG.1. With continuing reference toFIG.2A, the multi-level buffer22receives the control signal CTL for selectively activating the NFET switch21. In particular, the control signal CTL indicates whether the multi-level buffer22should turn on the NFET switch21or turn off the NFET switch21. The multi-level buffer22generates a switch control voltage at the gate bias output, and pulses the switch control voltage in response to a transition of the control signal CTL to thereby shorten a delay in switching the NFET switch21. Thus, rather than directly transitioning the switch control voltage from an ON voltage to an OFF voltage, or vice versa, the switch control voltage is temporarily pulsed before being controlled to a steady-state voltage level. By pulsing the switch control voltage in this manner, the switching delay of the NFET switch21is shortened.
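To make the speed-up concrete, the following first-order Python sketch models the gate node as a simple RC network and compares direct drive to VDD against a brief overdrive pulse to a higher level. All component values, voltage levels, the pulse width, and the 95% turn-on threshold are assumptions for illustration, not values from this disclosure.

```python
# Rough first-order model (assumed values) of how pulsing the drive shortens turn-on.
R, C = 100e3, 2e-12          # hypothetical gate resistor and gate capacitance
V_DD, V_HIGH = 3.3, 6.6      # steady-state ON level and hypothetical boosted level
V_TARGET = 0.95 * V_DD       # treat the switch as "on" once Vg reaches 95% of VDD
DT = 1e-9                    # simulation time step, 1 ns

def time_to_target(drive):
    """drive(t) gives the buffer output voltage; return time for Vg to reach V_TARGET."""
    vg, t = 0.0, 0.0
    while vg < V_TARGET:
        vg += (drive(t) - vg) * DT / (R * C)   # forward-Euler step of dVg/dt = (Vdrv - Vg)/RC
        t += DT
    return t

plain = time_to_target(lambda t: V_DD)
pulsed = time_to_target(lambda t: V_HIGH if t < 150e-9 else V_DD)
print(f"plain drive: {plain*1e9:.0f} ns, pulsed drive: {pulsed*1e9:.0f} ns")
```

With these assumed numbers the pulsed drive reaches the turn-on threshold in roughly a quarter of the time of the plain drive; in practice the pulse width would be chosen so that the gate voltage settles back toward VDD without violating the reliability limits discussed above.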
The pulsing can be applied when turning on the NFET switch21to improve turn-on speed and/or when turning off the NFET switch21to improve turn-off speed. Although various embodiments herein provide a pulse for both an ON to OFF transition and for an OFF to ON transition, the teachings herein are also applicable to implementations in which a pulse is only provided for an ON to OFF transition or only provided for an OFF to ON transition. In such implementations, a multi-level buffer can include corresponding circuitry for providing the desired pulse, while omitting other circuitry not needed for providing the desired pulse. In certain implementations, when switching the NFET switch21from an OFF state to an ON state, the multi-level buffer22first changes the switch control voltage from a steady-state switch OFF voltage (for instance, VSS) to the high voltage VHIGH, and then from the high voltage VHIGHto a steady-state switch ON voltage (for instance, VDD). Thus, the multi-level buffer22pulses the switch control voltage when turning on the NFET switch21. By pulsing the switch control voltage in this manner, the turn-on time of the NFET switch21is shortened. In certain implementations, when switching the NFET switch21from an ON state to an OFF state, the multi-level buffer22first changes the switch control voltage from a steady-state switch ON voltage (for instance, VDD) to the low voltage VLOW, and then from the low voltage VLOWto a steady-state switch OFF voltage (for instance, VSS). Thus, the multi-level buffer22pulses the switch control voltage when turning off the NFET switch21to thereby shorten turn-off time. The duration of the pulse can be controlled in a wide variety of ways, including by logic circuitry configured to generate clock signal phases for timing the multi-level buffer22based on delaying an edge of the control signal CTL. The high voltage VHIGHand/or the low voltage VLOWcan be provided in a wide variety of ways, including, but not limited to, receiving the voltage on a pin or generating the voltage using charge pumps or other voltage regulators. Likewise, VDDand/or VSScan be provided in a wide variety of ways, including, but not limited to, receiving the voltage on a pin or generating the voltage from other voltages (for instance, from VHIGHand/or VLOW). In certain implementations, at least one of the high voltage VHIGHor the low voltage VLOWis beyond a breakdown voltage for transistor reliability considerations, for example, beyond a maximum or below a minimum gate-to-source voltage permitted by the processing technology used to fabricate the NFET switch21. Thus, controlling the gate of the NFET switch21with the high voltage VHIGHand/or the low voltage VLOWin the steady-state would potentially damage the RF switch21due to transistor reliability limitations. However, by applying the pulsed switch control voltage to the end of the gate resistor31opposite the end connected to the gate of the NFET switch21, the voltage directly at the gate remains within a range of voltages acceptable for reliable operation of the NFET switch21. Thus, the benefits of fast switching time are achieved while operating within voltage constraints or limitations of the NFET switch21. FIG.2Bis one example of a timing diagram for the RF switching circuit20ofFIG.2A. The timing diagram includes a first plot11of switch control voltage outputted by the multi-level buffer22versus time, and a second plot12of gate voltage of the NFET switch21versus time.
The timing diagram includes a first time t1at which the control signal CTL transitions to turn the NFET switch21from an OFF state to an ON state, and a second time t2at which the control signal CTL transitions to turn the NFET switch21from the ON state to the OFF state. As shown inFIG.2B, the multi-level buffer22pulses the switch control voltage when turning on the switch as well as when turning off the switch, in this embodiment. For example, when the NFET switch21is turned on at time t1, the multi-level buffer22generates a turn-on pulse13associated with first transitioning the switch control voltage from VSSto VHIGH, and thereafter from VHIGHto VDD. Additionally, when the NFET switch21is turned off at time t2, the multi-level buffer22generates a turn-off pulse14associated with first transitioning the switch control voltage from VDDto VLOW, and thereafter from VLOWto VSS. The duration15of the turn-on pulse13and the duration16of the turn-off pulse14can be controlled in a wide variety of ways. In a first example, the multi-level buffer22includes logic circuitry that performs logical operations on an input control signal and delayed versions thereof to generate clock signal phases that set the duration15and the duration16. As shown inFIG.2B, the turn-on pulse13and the turn-off pulse14have opposite polarity. For example, the turn-on pulse13has increased voltage level before settling to a lower voltage level, while the turn-off pulse14has decreased voltage level before settling to a higher voltage level. In certain implementations, a multi-level buffer generates a first pulse in response to an ON to OFF transition of an RF switch, and generates a second pulse in response to an OFF to ON transition of the RF switch, and the first and second pulse have opposite polarity. FIG.2Cis another example of an RF switch50for use in an RF switching circuit. The RF switch50includes a first NFET switch21a, a second NFET switch21b, a third NFET switch21c, a first gate bias resistor31a, a second gate bias resistor31b, a third gate bias resistor31c, a first channel biasing resistor32a, a second channel biasing resistor32b, a third channel biasing resistor32c, a fourth channel biasing resistor32d, a first DC blocking capacitor41, and a second DC blocking capacitor42. Although one embodiment of an RF switch is shown inFIG.2C, the teachings herein are applicable to RF switches implemented in a wide variety of ways. In the illustrated embodiment, the first NFET switch21a, the second NFET switch21b, and the third NFET switch21care in series with one another. Including multiple FET switch components in series can increase a power handling capability of an RF switch. Although an example with three FET switches in series is shown, more or fewer FET switches can be included to achieve desired performance characteristics. As shown inFIG.2C, the gate bias resistors31a-31care electrically connected between a gate bias terminal GATEBIAS(which is driven by a multi-level buffer) and the gates of the NFET switches21a-21c, respectively. The channel biasing resistors32a-32dcollectively operate to control a bias voltage of the sources and drains of the NFET switches21a-21c, thereby helping to control gate-to-source and gate-to-drain biasing characteristics of the transistors. Although one example of channel biasing is shown, other implementations of channel biasing are possible, including, but not limited to, implementations using resistors in parallel with the channels of one or more NFET switches. 
The first DC blocking capacitor41and second DC blocking capacitor42provide DC blocking to allow the sources and drains of the NFET switches21a-21cto operate at different DC voltage levels than the RF input RFINand the RF output RFOUT. However, the teachings herein are also applicable to RF switches that operate without DC blocking capacitors. Although one example of an RF switch with NFET switches is shown, RF switches can also be implemented using p-type FET (PFET) switches or a combination of NFET switches and PFET switches. Fast Switching Circuits for Improved RF Switching Speeds FIG.3is a circuit diagram of one example of an RF switching circuit70according to one embodiment. The RF switching circuit70includes an RF switch60, and a digital buffer80. The digital buffer80is configured to receive a control signal CTL for indicating whether the RF switch60should be turned on or turned off by the digital buffer80. Thus, the control signal CTL is used to selectively activate the RF switch60. The digital buffer80also receives a power high supply voltage VDDand a ground or power low supply voltage VSS. As shown inFIG.3, the RF switch60includes a first NFET switch21a, a second NFET switch21b, a third NFET switch21c, . . . , and an n-th NFET switch21d; a first gate bias resistor31a, a second gate bias resistor31b, a third gate bias resistor31c, . . . , and an n-th gate bias resistor31d. The gate resistors31a-31dare also referred to as bootstrapping gate resistors herein. Although one embodiment of an RF switch60is shown inFIG.3, the teachings herein are applicable to RF switches implemented and/or biased in a wide variety of ways, such as the RF switch50ofFIG.2C. In the illustrated embodiment, the first NFET switch21a, the second NFET switch21b, the third NFET switch21c, and the n-th NFET switch21dare in series with one another. Including multiple FET switch components in series can increase a power handling capability of an RF switch60. Although an example with four FET switches in series is shown, more or fewer FET switches can be included to achieve desired performance characteristics. As shown inFIG.3, the gate bias resistors31a-31dare electrically connected between the output node OUT of the digital buffer80and the gates of the NFET switches21a-21d, respectively. Although one example of an RF switch with NFET switches is shown, RF switches can also be implemented using p-type FET (PFET) switches or a combination of NFET switches and PFET switches. Aspects of this disclosure relate to architectures and techniques for improving the switching speed of RF switches without affecting the insertion loss of the RF switch. In various embodiments, this can be accomplished without decreasing the resistance of the gate resistors (for example, resistors31a-31d) or decreasing the size of the RF switch (for example, small transistor widths for transistors21a-21d), but rather by increasing the gate voltage bias at a predetermined time so that the gate(s) of the RF switch are driven faster. One important design specification for an RF switch is the insertion loss specification of the switch. The insertion loss of an RF switch is related to the size of the RF switch. For example, a smaller sized RF switch will have a larger RONresistance compared to a larger sized RF switch. The increased RONresistance in turn increases the insertion loss of the RF switch.
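The link between ON resistance and insertion loss can be illustrated with a standard series-resistance approximation. The sketch below is a textbook first-order estimate, not a formula taken from this disclosure, and assumes a matched 50-ohm system and hypothetical Ron values.

```python
# First-order approximation: a series resistance Ron in a matched Z0 system gives
#   IL(dB) ~= 20 * log10(1 + Ron / (2 * Z0)).
# The Ron values swept below are hypothetical.
import math

def insertion_loss_db(r_on, z0=50.0):
    """Approximate insertion loss of a series resistance r_on in a z0 system."""
    return 20 * math.log10(1 + r_on / (2 * z0))

for r_on in (1.0, 2.0, 5.0, 10.0):   # hypothetical ON resistances in ohms
    print(f"Ron = {r_on:4.1f} ohm -> IL ~= {insertion_loss_db(r_on):.2f} dB")
# For small Ron the loss is roughly proportional to Ron, so enlarging the switch
# (lower Ron) reduces loss, but the larger device brings more gate capacitance,
# which lengthens the R*C switching time discussed in the surrounding text.
```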
By increasing the size of the RF switch to improve insertion loss, the size of the transistor(s) is increased, which results in a higher capacitance for the transistor(s) (e.g., the gate-source capacitance CGSof the transistor(s)). This type of larger RF switch also typically includes bootstrapping gate resistors, which can also negatively affect the insertion loss of the RF switch. The contribution of the resistances of the bootstrapping gate resistors to the insertion loss can be reduced by increasing the bootstrapping gate resistances. While the combination of a comparatively large bootstrapping gate resistance and a larger sized RF switch can reduce the insertion loss of the RF switch, this combination limits the RF switching speed due to the resistor-capacitor (RxC or RC) time constant introduced by these elements. While the RF switching speed of the RF switch can be increased by reducing the bootstrapping gate resistance, this results in an increase in the insertion loss of the RF switch. Alternatively, the RF switching speed of the RF switch can be increased by reducing the gate-source capacitance CGSof the transistor(s); however, this increases the ON resistance of the RF switch, which also leads to an increase in the insertion loss. In view of the above, aspects of this disclosure relate to architectures and techniques for increasing the switching speed of an RF switch without introducing additional insertion loss. Aspects of this disclosure also relate to the use of one or more series switch blocks (e.g., positive and negative side switch stacks) between a multiplied voltage (for example, a high voltage above VDDor a low voltage below VSS) and the RF switch gate. These series switch blocks can be high voltage capable and have self-opening characteristics during high voltage switching events without any control voltage. With this high voltage capability, aspects of this disclosure achieve much faster switching than the traditional approaches. FIG.4is a circuit diagram of an RF switching circuit72including a fast switching circuit90according to one embodiment. In particular, the RF switching circuit72includes a digital buffer80, a fast switching circuit90, and an RF switch60. The digital buffer80and the RF switch60may be implemented similarly to the embodiment ofFIG.3. The fast switching circuit90is configured to increase the gate voltages of the FET switches21a-21dwithin the RF switch60to increase the switching speed of the RF switch60. The fast switching circuit90can be configured to boost the voltage internally (for example, generate a high voltage greater than VDDand a low voltage less than VSS) and does not require any external additional supply. For example, the fast switching circuit90can be configured to boost the voltage using the same power high supply voltage VDDand power low supply voltage VSSprovided to the digital buffer80. Advantageously, the fast switching circuit can boost the voltage without using any clock or clock signal generator, thereby preventing spurs from being introduced due to clock periodicity. The fast switching circuit90can include one or more charge pumps (also referred to as a voltage multiplier or boost circuit) configured to boost the supply voltages VDDand VSSof the digital buffer, thereby boosting the node OUT voltage during the switching state.
With respect to VDDsuch charge pumps are positive charge pumps that generate a high voltage greater than VDD, while with respect to VSSsuch charge pumps are negative charge pumps that generate a low voltage less than VSS. In some implementations, the fast switching circuit90may not have an error free steady-state and/or lack sufficient drive strength for steady-state operation, and thus, the digital buffer80can be used to provide the steady-state (e.g., states other than the switching state) voltage to the output node OUT. Thus, the digital buffer80can fix the output node OUT voltage to the supply voltage VDDand VSSvalues, adding stability to the RF switching circuit72. Using a clock within the RF switching circuit72may generate spurs, which is not desirable for many applications. For example, a free-running oscillator can generate spurious frequency tones that can couple into RF signal paths and degrade RF signal purity. Providing a spur-free design is an advantage for these applications. Thus, certain embodiments of this disclosure provide charge pumps configured to boost the output node OUT voltage during transitions without using any oscillators or clock signals, thereby increasing switching speed without introducing spurs. The fast switching circuit90can be configured to operate only during or in response to switching events (e.g., low to high transitions and high to low transitions) and does not significantly affect the system during steady-state (e.g., between transitions). In some implementations, the fast switching circuit90may include a relatively small loading capacitor at the output node OUT. As is described herein, implementations of the fast switching circuit90may have a charge pump structure without a clock, and thus, the fast switching circuit90does not generate spurs. Depending on the implementation, the fast switching circuit90can be configured to boost the output voltage at the node OUT by multiplying the voltage to 2×, 3×, or any other integer multiplication of the output voltage at the node OUT during transitions. However, this disclosure is not limited thereto and in some implementations, the fast switching circuit90can be configured to boost the output voltage at the node OUT by fractional multiples of the output voltage (e.g., 1.5×, 2.5×, etc.). In certain implementations, the duration for which the boosted output voltage is applied can be set by a user of the fast switching circuit90using an interface and/or any other suitable mechanism for user programmability. FIG.5illustrates the timing of the control signal CTL and the combined output node OUT voltage of the fast switching circuit90and digital buffer80in accordance with aspects of this disclosure. As is illustrated inFIG.5, the output node OUT voltage is boosted to a higher voltage during the transition from low to high and boosted to a lower voltage during the transition from high to low. With reference toFIG.5, the control signal CTL may alternate between a first, low level (e.g., 0V) and a second, high level (e.g., 3.3V). The particular voltage illustrated inFIG.5are merely example and other values may be used without departing from aspects of this disclosure. The control signal CTL transitions between the low level and the high level or vice versa at times t1, t3, t5, and t7. The fast switching circuit90is configured to boost the output node OUT voltage for a defined length of time after each switching event (e.g., transition from low to high or from high to low). 
For example, the fast switching circuit90is configured to boost the output node OUT voltage to 6.6V between times t1 and t2, and then return to a voltage of 3.3V from time t2 to t3. Similarly, the fast switching circuit90is configured to boost the output node OUT voltage to-6.6V between times t3 and t4, and then return to a voltage of −3.3V from time t4 to t5. Thus, between times t1-t2, t3-t4, t5-t6, and t7-t8, the fast switching circuit90is configured to boost the output node OUT voltage in order to drive the transistor gates of the RF switch60with a larger voltage. In some embodiments, the timing of the above-indicated intervals can be set by the user of the fast switching circuit90in any suitable manner. With continued reference toFIG.5, the fast switching circuit90is configured to drive the output node OUT voltage to two times VDDduring low to high transition (e.g., in the t1-t2 and t5-t6 intervals) and two times VSSduring the high to low transition (e.g., in the t3-t4 and t7-t8 intervals). In the intervals defined between t2-t3, t4-t5, t6-t7 and after t8, the digital buffer circuit80is configured to drive the output node OUT voltage in order to provide the 3.3V and −3.3V voltages accurately. The fast switching circuit90can be configured to operate only during the above-identified switching intervals, and thus the fast switching circuit90has little to no impact on the performance during steady-state. FIG.6Ais a circuit diagram of a fast switching circuit92according to one embodiment. The fast switching circuit92includes a positive side100configured to boost the output node OUT voltage when the output node OUT voltage is positive and a negative side120configured to boost the output node OUT voltage when the output node OUT voltage is negative. The positive side100includes a first set of three transistors Q1, Q2, and Q3(Q1-Q3), a first capacitor102, and a positive side switch stack104. Similarly, the negative side120includes a second set of three transistors Q8, Q9, and Q10(Q8-Q10), a second capacitor122, and a negative side switch stack124. The first set of transistors Q1-Q3together with the first capacitor102form a first clockless charge pump circuit configured to boost the power high supply voltage VDDto a voltage higher than the power high supply voltage VDDbased on a first precharge signal Precharge_p and a first discharge signal Discharge_p, which can be generated in any suitable way, such as digital logic processing of the control signal CTL using a logic circuit123. The boosted voltage is then provided to the positive side switch stack104. On the negative side120, the second set of transistors Q8-Q10together with the second capacitor122form a second clockless charge pump circuit configured to boost the power low supply voltage VSSbased on a second precharge signal Precharge_n and a second discharge signal Discharge_n, which can be generated in any suitable way, such as digital logic processing of the control signal CTL using the logic circuit123. The boosted voltage is then provided to the negative side switch stack124. In some embodiments, the power high supply voltage VDDmay be about +3.3V, and the first clockless charge pump doubles the voltage to about +6.6V and the power low supply voltage VSSmay be about −3.3V, and the second clockless charge pump doubles the voltage to about −6.6V. In some implementations, there may be some inefficiencies in the charge pump circuits such that the boosted signals are not fully double the power high/low supply voltages VDDand VSS. 
Depending on the implementation the first and second charge pump circuits may be configured to boost the respective power high/low supply voltages VDDand VSSto greater than 1.5 times, greater than 1.8 times, and/or greater than 1.9 times the power high/low supply voltages VDDand VSS. As described in connection withFIG.7below, the first and second charge pump circuits may be configured to boost the respective power high/low supply voltages VDDand VSSto double the respective power high/low supply voltages VDDand VSSminus the gate-source voltage of one of the NFETs (e.g., VgsQ1or VgsQ8) in the respective charge pump circuits. As shown inFIG.6Athe positive side switch stack104is controlled by a first control signal Pside_Gate and the negative side switch stack124is controlled by a second control signal Nside_gate. In certain implementations, the first control signal Pside_Gate and the second control signal Nside_gate (as well as other depicted control signals) are generated by the logic circuit123, which can be implemented without any clock signals or oscillators. Rather, the logic circuit123can generate the depicted control signals by delaying the control signal CTL (with or without polarity inversion) and performing logic operations (for instance, Boolean logic) thereon. FIGS.6B and6Cillustrate two embodiments of the negative side switch stack in accordance with aspects of this disclosure. In particular,FIG.6Bshows the same implementation fromFIG.6AwhileFIG.6Cillustrates another implementation for the negative side switch stack. Although embodiments of the negative side switch stack are illustrated inFIGS.6B and6C, skilled artisans will readily appreciate that the positive side switch stack may be implemented in a similar manner with minor modifications (e.g., using PFETs in place of the NFETs). With reference toFIG.6B, the negative side switch stack124is configured to receive a negative side switch stack control signal Nside_Gate. The negative side switch stack124includes a plurality of NFETs Q11, Q12, Q13, and Q14(Q11-Q14), and a plurality of gate resistors R5, R6, R7, and R8(R5-R8) respectively connected to the NFETs Q11-Q14. The NFETs Q11-Q14are biased with the negative side switch stack control signal Nside_Gate (e.g., which may be 3.3V in certain embodiments) during the peaking event such that the NFETs Q11-Q14form high voltage capable self-opening transistors during the peaking events. With reference toFIG.6C, the illustrated embodiment of the negative side switch stack126includes NFETs Q11-Q14and gate resistors R5-R8, similar to theFIG.6Bembodiment, and further includes a first plurality of stacked diodes128, a second plurality of stacked diodes130, and a resistor rdsfor each NFET Q11-Q14(e.g., six diodes and one resistor may be included for each of the NFETs Q11-Q14). The stacked diodes128are connected in series between a drain and a gate of each NFET, while the stacked diodes130are connected in series between the drain and a source of each NFET. Additionally, the resistor rdsis in parallel with the stacked diodes130. The stacked NFET Q11-Q14are protected by the diodes128, and the diodes130, and the resistors rds. When the output node OUT voltage is boosted (e.g., double VSSfor the negative side switch stack124or double VDDfor the positive side switch stack104), the NFETs Q11-Q14are protected by the diodes128and130as well as the stacking of the NFETs Q11-Q14. 
Depending on the embodiment, the number of NFETs Q11-Q14can be increased or reduced, for example, depending on the amount of boost provided by the charge pump circuits and/or transistor reliability considerations. The resistors rdsmay be implemented with a relatively large resistance in order to have a substantially equal drain-to-source voltage Vds when the NFETs Q11-Q14are OFF. The RF switching circuit may also include one or more delay elements (not illustrated) and one or more logic gates (not illustrated) configured to asynchronously generate the precharge and discharge signals Precharge_p, Discharge_p, Precharge_n, and Discharge_n. The delay element(s) and/or logic gate(s) may also be configured to asynchronously generate the positive side switch stack control signal Pside_Gate and the negative side switch stack control signal Nside_Gate. Such delay elements and logic gates can be included in the logic circuit123ofFIG.6A. Referring back toFIG.6A, during a low to high transition, the positive side100is configured to boost the output node OUT voltage to two times VDD, while the stacking of the NFETs Q11-Q14on the negative side120protects the NFETs Q11-Q14from the large voltage differential between the node C and the output node OUT. The NFETs Q11-Q14are biased with Nside_Gate (e.g., which may be 3.3V) such that the NFETs Q11-Q14form high voltage capable self-opening switches during the peaking events. During high to low transitions, the negative side120is configured to boost the output node OUT voltage to two times VSS, while the stacking of the PFETs Q4-Q7on the positive side100protects the PFETs Q4-Q7from the large voltage differential between the node A and the output node OUT. The PFETs Q4-Q7are biased with Pside_Gate (e.g., which may be −3.3V in this operating scenario) such that the PFETs Q4-Q7form high voltage capable self-opening switches during the peaking events. FIG.7illustrates the timing of the control signals and output node for the fast switching circuit92ofFIG.6A. With reference toFIGS.6A and7, between times 0 to t1, the fast switching circuit92is in a low state. In this low state, the control signals Precharge_p and Precharge_n turn the switches Q3and Q10ON. The node A is close to 3.3V (e.g., node A may have a voltage of about 3.3V-VgsQ1) and node B may have a ground GND voltage (or another third supply voltage between the power high and power low supply voltages). The first capacitor102between nodes A and B is charged to about 3.3V in this state. The node C is close to −3.3V (e.g., node C may have a voltage of about −3.3V+VgsQ8) and node D may have a GND voltage. The second capacitor122between nodes C and D is charged to about −3.3V in this state. The control signal Pside_Gate may be held at 3.3V, so PFETs Q4-Q7are OFF. The node OUT is not set by the positive side100during this period. The control signal Nside_Gate may be held at −3.3V, so NFETs Q11-Q14are OFF. The node OUT is also not set by the negative side120during this period. Thus, the node OUT is not set by the fast switching circuit92, but rather by the digital buffer80(seeFIG.4). Between times t1 to t2, the fast switching circuit92receives a low to high transition signal at t1, and the control signal Precharge_p sets the switch Q3to OFF and the control signal Discharge_p sets the switch Q2to ON. In response to the changes to the states of switches Q2and Q3, the node B becomes about 3.3V (e.g., goes from 0V to 3.3V) and due to charge conservation node A becomes 6.6V-VgsQ1.
Since the control signal Pside_Gate is always 3.3V in this example, the PFETs Q4-Q7are turned ON due to the voltage at node A. Accordingly, the node OUT is set by the positive side100and initially the node OUT voltage becomes 6.6V-VgsQ1. During this time interval, since the node OUT voltage is 6.6V-VgsQ1and node C is −3.3V during this interval, the four stacked NFETs Q11-Q14protect the components on the negative side120from breakdown. Also, since the gates of NFETs Q11-Q14are at −3.3V, the bootstrapping resistors R5-R8protect the NFETs Q11-Q14. In the t1 to t2 interval, the node OUT voltage initially becomes 6.6V-VgsQ1and the charge on the first capacitor102is shared by the node OUT capacitor and the driven gate capacitors of the RF transistors (e.g., transistors21a-21dofFIG.4). Thus, the voltage on node A and node OUT decay as shown inFIG.7. The boosting phase ends at time t2, and thus between times t2 and t3, both switches Q2and Q3are OFF. Node A returns to 3.3V and PFETs Q4-Q7are turned OFF. During this interval, node OUT is set by the digital buffer80. In the interval from t3 to t4, the RF switch92receives a high to low transition signal at t3, and the control signal Precharge_n sets the switch Q10to OFF and the control signal Discharge_n sets the switch Q9to ON. Node D thus becomes −3.3V (e.g., from 0V to −3.3V) and due to charge conservation node C becomes −6.6V+VgsQ8. Since the control signal Nside_Gate is always −3.3V in this example, NFETs Q11-Q14are turned ON. Accordingly, the node OUT is set by the negative side120and initially node OUT becomes −6.6V+VgsQ8. During this time interval, since node OUT is −6.6V+VgsQ8and node A is +3.3V, the four stacked PFETs Q4-Q7protect the components on the positive side100from breakdown. Also, since the gates of PFETs Q4-Q7are at +3.3V, the bootstrapping resistors R1-R4protect PFETs Q4-Q7. In the t3 to t4 interval, initially node OUT becomes −6.6V+VgsQ8and the charge on the second capacitor122is shared by the node OUT capacitor and the driven RF transistor gate capacitor. Thus, the voltage on node C and node OUT decay. After t4, the boosting phase is over and both switches Q9and Q10are turned OFF. Node C returns to −3.3V and NFETs Q11-Q14are turned OFF. During the interval between t4 and t5, the node OUT is set by the digital buffer80. FIG.8Ais a circuit diagram of a fast switching circuit94according to another embodiment. The fast switching circuit94includes several components similar to those of the fast switching circuit92described in connection withFIG.6A, and thus, a discussion of these similar components may be omitted in the following discussion ofFIG.8A. The fast switching circuit94includes a positive side140configured to boost the output node OUT voltage when the output node OUT voltage is positive and a negative side160configured to boost the output node OUT voltage when the output node OUT voltage is negative. The positive side140includes a first diode D1, two pairs of stacked transistors142and144, a first capacitor146, and a positive side switch stack148. Similarly, the negative side160includes a second diode D2, two pairs of stacked transistors162and164, a second capacitor166, and a negative side switch stack168. The positive side switch stack148and the negative side switch stack168may be implemented in accordance with any one ofFIGS.6A-6Cor using any other suitable arrangement.
Each of the pairs of stacked transistors142,144,162, and164may be implemented with two transistors (e.g., either PFET or NFET as illustrated inFIG.8A) and a pair of bootstrapping resistors respectively coupled to the gates of the transistors. Each of the pairs of stacked transistors142,144,162, and164is configured to receive a corresponding control signal Discharge_p, Precharge_p, Discharge_n, and Precharge_n. Although an example with two transistors and corresponding components is shown, additional transistors can be included to allow handling of larger voltage differences with reliability (for example, without exceeding maximum-rated transistor junction voltage specifications). The fast switching circuit94is configured to boost the output node OUT voltage to about three times each of the power high supply voltage VDDand the power low supply voltage VSS. In particular, the first and second capacitors146and166are each configured to be toggled between the power high supply voltage VDDand the power low supply voltage VSS(e.g., rather than between ground and one of the supply voltages VDDand VSSas inFIG.6A). When the power high supply voltage VDDis not equal in magnitude to the power low supply voltage VSS, the amount of boost may be less than or greater than three times each of the power high supply voltage VDDand the power low supply voltage VSS. Toggling the first and second capacitors146and166between these values allows for higher voltage peaking, enabling the nearly tripling of the supply voltages VDDand VSS. However, in order to enable this higher voltage peaking the switch structures include stacked transistor pairs142,144,162, and164so that each transistor maintains its on/off condition without exceeding safe operating voltages. In some implementations, there may be some inefficiencies in the charge pump circuits such that the boosted signals are not fully three time the power high/low supply voltages VDDand VSS. Depending on the implementation the first and second charge pump circuits may be configured to boost the respective power high/low supply voltages VDDand VSSto greater than 2.5 times, greater than 2.8 times, and/or greater than 2.9 times the power high/low supply voltages VDDand VSS. The positive side switch stack148transistors are biased with the control signal Pside_Gate (e.g., VDD) so that they form high voltage capable self-opening switches during the peaking events. The negative side switch stack168transistors are biased with the control signal Nside_Gate (e.g., VSS) so that they form high voltage capable self-opening switches during the peaking events. FIGS.8B and8Cillustrate two embodiments of the negative side switch stack168in accordance with aspects of this disclosure. In particular,FIG.8Bshows the same implementation fromFIG.8AwhileFIG.8Cillustrates another implementation for the negative side switch stack. The switch structure ofFIG.8Ccan also be used in other applications in need of a DC switch with high-voltage capability. Although embodiments of the negative side switch stack are illustrated inFIGS.8B and8C, the positive side switch stack may be implemented in a similar manner with minor modifications (e.g., using PFETs in place of the NFETs). The embodiment ofFIG.8Cprovides a high-voltage capable DC switch that can be used to provide high DC voltage handling using a diode based bias structure. 
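A small numeric sketch may help contrast the roughly 2x boost of the FIG.6A circuit with the roughly 3x boost of the FIG.8A circuit discussed above, and the charge-sharing decay that follows the peak. The precharge level, gate-source drop and capacitances below are assumptions, not values taken from the text.

```python
# Comparing the capacitor-toggling step sizes of the ~2x (FIG. 6A) and ~3x (FIG. 8A) cases.
# Vgs drops, precharge levels and capacitances are assumed; the text only gives the rails.
VDD, VSS, VGS = 3.3, -3.3, 0.5

# FIG. 6A style: the capacitor bottom plate toggles GND -> VDD, so the step is VDD.
peak_2x = (VDD - VGS) + (VDD - 0.0)
# FIG. 8A style: the bottom plate toggles VSS -> VDD, so the step is VDD - VSS.
peak_3x = (VDD - VGS) + (VDD - VSS)
print(f"FIG.6A-style peak ~ {peak_2x:.1f} V, FIG.8A-style peak ~ {peak_3x:.1f} V")

# Charge sharing between the boost capacitor and the driven gate capacitance then
# pulls the peak back down, as noted for the decaying waveforms of FIG. 7 / FIG. 9.
C_BOOST, C_LOAD = 10e-12, 10e-12     # assumed capacitances
v_load_before = VDD                  # assumed pre-boost level at the driven node
shared = (C_BOOST * peak_3x + C_LOAD * v_load_before) / (C_BOOST + C_LOAD)
print(f"after charge sharing the boosted node settles near {shared:.2f} V")
```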
In particular, a diode loop (for example, a pair of anti-parallel diode chains D2) may be formed on the gate side of each of the NFETs, and a diode network D3 and a resistor Rbmay be formed on the body side of the NFETs. In this example, the drain-to-source resistor of each NFET is partitioned into two resistors R1and R2connected at an intermediate node, and the pair of anti-parallel diode chains is connected between the gate of the NFET and the intermediate node. Additionally, the diode network D3 connects between the intermediate node and the bulk (body) of the NFET, while the resistor Rbconnects between the body of the NFET and a bulk voltage VBULK(which can be equal to VSS). In addition, three gate resistors RgDC, Rg1, and Rg2are arranged in series between the gate voltage Gate and the gates of each of the NFETs. A capacitor C1is arranged in parallel with the gate resistor RgDCand a diode D1is arranged in parallel with the gate resistor Rg2. The diode D1is configured to bypass the gate resistor Rg2during negative peaking voltage, which reduces the self-turn-on time of the NFETs. However, the diode D1does not turn on during positive peaking events occurring at the drain terminals of the NFETs. This presents a relatively large gate resistance (e.g., the combined value of the gate resistors RgDC, Rg1, and Rg2) during the off-state of the NFETs. Accordingly, the stack switch structure maintains a high off-resistance. In addition, the DC portion of the gate resistance can be increased by adding the gate resistor RgDCand the capacitor C1. The gate resistor RgDCand the capacitor C1provide a relatively low impedance during the switching transitions and provide a relatively high impedance during DC steady state, thereby reducing DC leakage. Accordingly, the gate resistor RgDCand the capacitor C1have substantially no effect during transitions but reduce DC leakage during steady state. FIG.9illustrates the timing of the control signals and output node for the fast switching circuit94ofFIG.8A. The timing diagram is substantially the same as the diagram ofFIG.7, with the difference being the value of the boosted voltage at the output node OUT. FIG.10is a circuit diagram of a fast switching circuit96according to yet another embodiment. The fast switching circuit96is simplified in certain aspects compared to the embodiment ofFIG.6Awithout including separate positive and negative sides. The fast switching circuit96includes a positive pair of stacked transistors162, a negative pair of stacked transistors164, a capacitor166, first and second diodes168and170, a positive switch stack172, and a negative switch stack174. The fast switching circuit96ofFIG.10is configured to boost the node OUT voltage to about three times each of VDDand VSS. One advantage to the configuration illustrated inFIG.10is the use of only a single capacitor166and a lower number of transistors compared to other implementations. Thus, the layout area of this implementation can be 50% smaller compared to other topologies. The trade-off may be a smaller improvement to the switching speed compared to other embodiments, due to the exponential decaying behavior of the node OUT. FIG.11illustrates the timing of the control signals and output node for the fast switching circuit96ofFIG.10. With reference toFIGS.10and11, in the interval between times 0 and t1, node A and node OUT are −3.3V.
The control signal Nside_Gate is configured to turn on the transistors in the negative switch stack174and the control signal Pside_Gate is configured to turn off the transistors in the positive switch stack172. The negative pair of stacked transistors164is turned on via the control signal Ncharge and the positive pair of stacked transistors162is turned off by the control signal Pcharge. In the interval between t1 and t1+Δt (in some implementations, Δt is as small as a couple of ns), the control signal Pside_Gate is configured to turn on the transistors in the positive switch stack172and the control signal Nside_Gate is configured to turn off the transistors in the negative switch stack174. Accordingly, the node OUT voltage becomes 3.3V-VDiode. Node A has the same voltage as in the previous interval, −3.3V. During this interval, the capacitor166is charged to 6.6V. In the interval between t1+Δt and t2, the control signal Pcharge is set to turn on the positive pair of stacked transistors162and the control signal Ncharge is set to turn off the negative pair of stacked transistors164, such that node A jumps to a voltage of 3.3V. This in turn jumps the node OUT to a voltage of 9.9V-VDiode. After this jump at node OUT, the node OUT voltage decreases to 3.3V without the need of any extra timing since node OUT charges the RF switch by using the capacitor166charge. Due to the charging using the capacitor166, the node OUT voltage decays exponentially from 9.9V-VDiodeto 3.3V. The value of the capacitor166can be selected to be similar to the RF switch gate capacitor. The negative jump/boost of the voltage from t2 to t3 is similar to the positive jump. FIG.12is a circuit diagram of a fast switching circuit98according to still yet another embodiment. The fast switching circuit98includes an architecture similar to that ofFIG.8Ain which each of the positive and negative sides200and220includes a plurality of cascaded charge pump circuits202a,202n,222a,222n. By cascading a plurality of charge pump circuits202a,202n,222a,222n, the supply voltages VDDand VSScan be boosted to higher than three times the supply voltage VDDand VSSvalues. Any number of charge pump circuits can be cascaded in accordance with the teachings herein. The fast switching circuit98includes several components similar to those of the fast switching circuit94described in connection withFIG.8A, and thus, a discussion of these similar components has been omitted. Applications Devices employing the above described schemes can be implemented into various electronic devices. Examples of electronic devices include, but are not limited to, RF communication systems, consumer electronic products, electronic test equipment, communication infrastructure, etc. For instance, RF switches with fast switching can be used in a wide range of RF communication systems, including, but not limited to, base stations, mobile devices (for instance, smartphones or handsets), laptop computers, tablets, Internet of Things (IoT) devices, and/or wearable electronics. The teachings herein are applicable to RF communication systems operating over a wide range of frequencies and bands, including those using time division duplexing (TDD) and/or frequency division duplexing (FDD). CONCLUSION The foregoing description may refer to elements or features as being “connected” or “coupled” together.
As used herein, unless expressly stated otherwise, “connected” means that one element/feature is directly or indirectly connected to another element/feature, and not necessarily mechanically. Likewise, unless expressly stated otherwise, “coupled” means that one element/feature is directly or indirectly coupled to another element/feature, and not necessarily mechanically. Thus, although the various schematics shown in the figures depict example arrangements of elements and components, additional intervening elements, devices, features, or components may be present in an actual embodiment (assuming that the functionality of the depicted circuits is not adversely affected). Although this invention has been described in terms of certain embodiments, other embodiments that are apparent to those of ordinary skill in the art, including embodiments that do not provide all of the features and advantages set forth herein, are also within the scope of this invention. Moreover, the various embodiments described above can be combined to provide further embodiments. In addition, certain features shown in the context of one embodiment can be incorporated into other embodiments as well. Accordingly, the scope of the present invention is defined only by reference to the appended claims.
11863228 | BEST MODE FOR CARRYING OUT THE INVENTION Overview FIG.1schematically illustrates a mobile (cellular) telecommunication system1in which users of mobile telephones3-0,3-1, and3-2in a first cell4-1can communicate with other users (not shown) via a first base station5-1and a telephone network7and in which users of mobile telephones3-3,3-4, and3-5in a second cell4-2can communicate with other users (not shown) via a second base station5-2and the telephone network7. In this exemplary embodiment, the base stations5use an orthogonal frequency division multiple access (OFDMA) transmission technique for the downlink (from base stations5to the mobile telephones3) and an L-FDMA+FH transmission technique for the uplink (from the mobile telephones3to the base stations5). The use of frequency hopping for the uplink has been chosen because it provides service quality improvements through interference averaging and frequency diversity. In this exemplary embodiment, the frequency hopping scheme is chosen so that the following requirements are preferably met:No collision between hopping mobile telephones3in the same cell4;Different hopping patterns in neighbouring cells4to reduce inter-cell interference;High degree of frequency diversity for one mobile telephone3throughout the hopping pattern for subsequent transmissions;Preserve the single carrier property of L-FDMA (in which the allocated frequency resources are provided as a single contiguous block of frequency resources);Minimise the signalling overhead for informing the mobile telephones3of the hopping sequence; andFrequency hopping designed for use by persistently scheduled mobile telephones3that are using, for example, services such as VoIP, as well as mobile telephones3that are dynamically scheduled on a TTI by TTI basis. Time/Frequency Resources In this exemplary embodiment, the available transmission band is divided into a number of sub-bands, each of which comprises a number of contiguous sub-carriers arranged in contiguous blocks. Different mobile telephones3are allocated different resource block(s) (sub-carriers) within a sub-band at different times for transmitting their data.FIG.2illustrates E-UTRA's latest definition of the transmission channel as comprising a sequence of 1 ms Transmission Time Intervals (TTIs)11-1,11-2, each of which consists of two 0.5 ms slots13-1and13-2. As shown, the available transmission bandwidth is divided into S sub-bands15-1to15-sand each mobile telephone3is scheduled to transmit its uplink data in selected slots13and in selected sub-bands15, in accordance with the agreed frequency hopping sequence. Two different types of frequency hopping can be applied—Inter TTI frequency hopping and Intra TTI frequency hopping. Inter TTI frequency hopping is when the allocated frequency resource is changed from one TTI11to the next and intra TTI frequency hopping is where the allocated frequency resource is changed from one slot13to the next. The technique to be described below is applicable to both Inter and Intra TTI frequency hopping, although the description will refer mainly to Inter TTI frequency hopping. Proposed Frequency Hopping Scheme The frequency hopping scheme used in this exemplary embodiment relies on each mobile telephone3being given an initial allocation of resource blocks (one or more contiguous blocks of sub-carriers) within one of the sub-bands.
These initial allocations are assigned by the base station5, and so it can make sure that there are no collisions between the initial allocations for the mobile telephones3within its cell4. These initial allocations are then changed in accordance with a hopping sequence allocated to the cell4. The change applied at any point in time is an integer multiple of the number of resources in each sub-band. As a result, the frequency hopped resources that are allocated to a mobile telephone3will also be a contiguous block of resources in a single sub-band. This is beneficial as it allows the power amplifier (not shown) used by the mobile telephones3to be more efficient than would be the case if the resources used are not contiguous and are not in the same sub-band. It follows that, to maintain this advantage, the largest allowable contiguous allocation for a hopping mobile telephone3corresponds to the number of resource blocks in a sub-band. Mathematically, the frequency hopping scheme used in this exemplary embodiment can be defined as follows: y={x+a(t)N}mod NRB (Equation 1), where NRB is the total number of resource blocks in the transmission band;N is the number of contiguous resource blocks in each sub-band;x is the initial resource block allocated to the mobile telephone;y is the frequency hopped resource block;t is a TTI (or slot) counter synchronised between the base station5and the mobile telephone3;a(t) is the current frequency hopping shift and is an integer value from the set {0, 1, . . . ,S−1}; and S is the number of sub-bands. FIG.3illustrates a shift register used for generating a pseudo random binary sequence for controlling the frequency hopping to be used by each user mobile telephone. The shift register ofFIG.3will later be described. FIG.4illustrates a hopping pattern that can be generated in the above manner for four mobile telephones (MTs), where MT1to MT3are assigned one resource block each while MT4is assigned two resource blocks. In this example, a(t) has values of 0, 2, S−1 and 1 for TTI #0, TTI #1, TTI #2 and TTI #n respectively. The way in which contiguous resource blocks can be allocated for the uplink and signalled to User Equipment (such as the mobile telephones3) has already been proposed in TSG-RAN R1-070364, “Uplink Resource Allocation for EUTRA” NEC Group, NTT DoCoMo, the contents of which are incorporated herein by reference. As those skilled in the art will appreciate, if a mobile telephone3is assigned more than one resource block (x), then the calculation above is performed for each assigned resource block. In this exemplary embodiment, NRB, N and S are system semi-static constants and are programmed into the mobile telephones3and the base stations5in advance. At any given time, the allocated resource block, x, is different for each of the mobile telephones3in the same cell4. However, the value of a(t) at any point in time is common for all mobile telephones3in the same cell4and the value is changed in accordance with a predetermined hopping sequence. The hopping sequence preferably has the following properties:1. It should be different in different cells4in order to randomise inter-cell interference;2. It should be simple to generate (to minimise computational load in the base stations5and the mobile telephones3);3. It should be defined by a small number of parameters (to minimise signalling load); and4.
It should be periodic with a period, T, that is much longer than the transmission interval of persistently scheduled users (otherwise there is a risk that the transmission interval is equal to the period of a(t), in which case there would be no frequency diversity). In the event that some TTIs are set aside for hopping mobile telephones3, the hopping shift a(t) would only be applied in those TTIs. Dynamically scheduled mobile telephones3may still be scheduled in such 'hopping TTIs' in any resource blocks which are not occupied by the hopping mobile telephones3. There are a number of different ways of generating a(t) in the mobile telephones3and the base station5. One possibility is to use a pseudo-random sequence, resetting the sequence every T TTIs (or slots). A large number of sequences could easily be generated in this way and the sequence number could be signalled efficiently. For example, consider the shift register arrangement17shown inFIG.3, which produces a length2047pseudo-random binary sequence (PRBS). The state of the shift register17is updated each TTI (or slot). If the 11 bit shift register value at time t is represented by m(t), then a pseudo-random value in the range 0 to S−1 can be calculated, for example, as follows: a(t)=floor[(m(t)·S)/2048] (Equation 2), where floor[r] is the floor function, i.e. the largest integer not greater than r. This calculation is easy to perform using a multiplication and bit shift. By resetting the shift register every T=256 TTIs (or slots), eight different sequences can be produced using different initial states. More specifically, the shift register17shown inFIG.3cycles through 2047 states that we can label s(0) to s(2046). As the registers are being reset every 256 TTIs (or slots), the register will only cycle through 256 of its 2047 possible states. Therefore, it is possible to use the same shift register17to generate different a(t) sequences, simply by starting the shift register17at different initial states. For example, a first a(t) sequence can be defined by setting the shift register17into initial state s(0); a second a(t) sequence can be defined by setting the shift register17into initial state s(256); a third a(t) sequence can be defined by setting the shift register17into initial state s(512) etc. Different sequences can then be assigned to the base station5and the mobile telephones3in the different cells4, thereby avoiding the possibility that two mobile telephones3in different cells4could be following exactly the same frequency hopping pattern and therefore colliding 100% of the time. The mobile telephones3in a given cell4may be signalled with the initial state, but this would require eleven bits of signalling overhead. Therefore, in this exemplary embodiment, all the initial states are pre-programmed into the mobile telephones3and the appropriate one to be used by the mobile telephones3in a cell is signalled to the mobile telephones3using an associated sequence identifier (which would be a 3-bit identifier for the above example having eight different sequences). Base Station FIG.5is a block diagram illustrating the main components of each of the base stations5used in this exemplary embodiment. As shown, each base station5includes a transceiver circuit21which is operable to transmit signals to and to receive signals from the mobile telephones3via one or more antennae23and which is operable to transmit signals to and to receive signals from the telephone network7via a network interface25.
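A minimal sketch pulling together Equation 1 and Equation 2 above is shown below. The feedback taps of the FIG.3 shift register are not stated in the text, so a maximal-length 11-bit polynomial (x^11 + x^9 + 1) is assumed purely to obtain a length-2047 PRBS; NRB and S are example values, not values from the embodiment.

```python
# Sketch of the hopping calculation (Equations 1 and 2 above), with assumed LFSR taps.
N_RB, S = 48, 4              # example system constants (NRB and S are left open in the text)
N = N_RB // S                # contiguous resource blocks per sub-band

def lfsr_state(seed, steps, nbits=11):
    """Advance an assumed 11-bit Fibonacci LFSR (taps 11 and 9) and return its state m(t)."""
    state = seed & ((1 << nbits) - 1)
    for _ in range(steps):
        fb = ((state >> 10) ^ (state >> 8)) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
    return state

def hop_shift(t, seed=0b1, period=256):
    m = lfsr_state(seed, t % period)      # register reset every T = 256 TTIs (or slots)
    return (m * S) // 2048                # Equation 2: a(t) = floor(m(t) * S / 2048)

def hopped_rb(x, t, seed=0b1):
    return (x + hop_shift(t, seed) * N) % N_RB   # Equation 1

for t in range(4):
    # three example initial allocations, all inside sub-band 0; the hopped blocks
    # stay contiguous within a single (shifted) sub-band
    print(t, [hopped_rb(x, t) for x in (0, 5, 11)])
```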
The operation of the transceiver circuit21is controlled by a controller27in accordance with software stored in memory29. In this exemplary embodiment, the software in memory29includes, among other things, an operating system31, a resource allocation module33and a resource determination module34(which modules may form part of the operating system31). The resource allocation module33is operable for allocating initial resource blocks (x) to be used by each of the mobile telephones3in their communications with the base station5. This initial resource allocation depends on the type and quantity of data to be transmitted by the user device. For users subscribing to services where regular but small amounts of data are to be transmitted, the resource allocation module33allocates appropriate resource blocks on a recurring or periodic basis. For a VoIP service, for example, this may result in the user being allocated resource blocks every 20 ms. This type of allocation is referred to as persistent allocation. For users with larger volumes of data to be transmitted, the resource allocation module will allocate the appropriate resource blocks on dynamic basis, taking into account the current channel conditions between the user's mobile telephone3and the base station5. This type of allocation is referred to as dynamic allocation. The resource determination module34is provided for determining the actual frequency resources that each mobile telephone3will use to transmit its data to the base station5. The resource determination module34uses the determined frequency resources to control the operation of the transceiver circuit21so that the data received from each mobile telephone3can be recovered and forwarded as appropriate to the telephone network7. To achieve this, the resource determination module34implements the above described shift register17-5and TTI (or slot) counter (t)35(although these could be implemented as hardware in the controller27), so that it can keep track of which resource block or blocks will actually be used by each mobile telephone3at each point in time, using equations 1 and 2 above and the initial allocations made by the resource allocation module33. In this exemplary embodiment, the resource determination module34receives a sequence identifier from the telephone network7identifying the initial state to be applied to its shift register17-5. The resource determination module34uses the sequence identifier to retrieve the corresponding initial state from memory which it then uses to set the initial state of the shift register17-5. The resource determination module34also signals the received sequence identifier to all the mobile telephones3in its cell4. The resource determination module34also transmits synchronisation data to synchronise the TTI (or slot) counters in the mobile telephones3with its own TTI (or slot) counter35, so that the base station5and the mobile telephones3can maintain synchronisation in applying the frequency hopping sequence (a(t)). Mobile Telephone FIG.6schematically illustrates the main components of each of the mobile telephones3shown inFIG.1. As shown, the mobile telephones3include a transceiver circuit71which is operable to transmit signals to and to receive signals from the base station5via one or more antennae73. As shown, the mobile telephone3also includes a controller75which controls the operation of the mobile telephone3and which is connected to the transceiver circuit71and to a loudspeaker77, a microphone79, a display81, and a keypad83. 
The controller75operates in accordance with software instructions stored within memory85. As shown, these software instructions include, among other things, an operating system87and a resource determination module89. In this exemplary embodiment, the resource determination module89includes the above described 11-bit shift register17-3and a TTI (or slot) counter91. In operation, the resource determination module89receives the sequence identifier for the cell4transmitted by the base station5in a common signalling channel. The resource determination module89uses this sequence identifier to retrieve the corresponding initial state to be applied to the shift register17-3from memory. The resource determination module89also receives the synchronisation data for synchronising its TTI (or slot) counter91with the corresponding counter35in the base station5. In this exemplary embodiment, the mobile telephone3receives this information at the time that it first associates with the base station5. The resource determination module89also receives resource allocation data identifying the initially allocated resources, x, as well as the TTI11and/or the slot13in which those resources have been allocated to that mobile telephone3. For persistently scheduled mobile telephones3, this resource allocation data may define a period between allocated TTIs or slots, such that the mobile telephone3is allocated resource block x every Y TTIs (or slots). In this case, the resource allocation data only has to be transmitted once or whenever the allocation is to be changed. For dynamically scheduled users, the resource allocation data must be transmitted before each scheduled transmission. Once the resource determination module89has received the data to initialise the shift register17-3and the counter91as well as the resource allocation data, it uses equations 1 and 2 to determine the actual resource block(s) to use for its uplink transmissions in the scheduled TTI (or slot). This information is then used to control the operation of the transceiver circuit71accordingly. Modifications and Alternatives A detailed exemplary embodiment has been described above. As those skilled in the art will appreciate, a number of modifications and alternatives can be made to the above exemplary embodiment whilst still benefiting from the inventions embodied therein. By way of illustration only, a number of these alternatives and modifications will now be described. In the above exemplary embodiment, equation 2 was used to generate the value of a(t) to be used in equation 1. If required, this calculation could be modified slightly to ensure that successive values of a(t) are always different, as follows: a(t)={a(t−1)+1+floor[(m(t)·(S−1))/2^M]}mod S (Equation 3), where a(0)=0 and M is the number of registers in the shift register17. Another possibility is to generate a(t) by cyclic sampling of the sequence 0, 1, . . . , S−1 as follows: a(t)=kt mod S, t=0 to T−1, where k is an integer co-prime to S. In this case, different values of k yield different sequences. However, since the resulting sequence will be periodic with period S, it is unlikely to meet the desired requirement that its period is much longer than the transmission interval of persistently scheduled users. In the above exemplary embodiment, the base station5received the sequence identifier from the telephone network7which identified the initialisation state to be applied to its shift register17-5.
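The Equation 3 variant above guarantees that consecutive shifts differ, because the added term 1 + floor[m(t)·(S−1)/2^M] always lies between 1 and S−1 and is therefore never a multiple of S. A minimal sketch follows, with placeholder m(t) samples standing in for a real PRBS run.

```python
# Sketch of the Equation 3 variant above (successive shifts always differ).
# M = 11 registers is taken from the embodiment; the m(t) samples are placeholders.
S, M = 4, 11

def next_shift(prev_a, m_t):
    # a(t) = {a(t-1) + 1 + floor(m(t)*(S-1) / 2^M)} mod S, with a(0) = 0
    return (prev_a + 1 + (m_t * (S - 1)) // (1 << M)) % S

a = 0
for m_t in (17, 905, 1400, 2046):   # placeholder m(t) values, not a real PRBS output
    new_a = next_shift(a, m_t)
    assert new_a != a               # consecutive hopping shifts always differ
    a = new_a
    print(a)
```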
This allocation of the initialisation states may be fixed for the network or it may be changed on a regular or periodic basis. If it is changed, the base station5preferably broadcasts the new initialisation state (or state identifier) in a common signalling channel so that the mobile telephones3can update their shift registers17-3accordingly. In one exemplary embodiment, the base stations5may be arranged to randomly select an initialisation state to use. In this case it is possible that two neighbouring cells4could end up using the same hopping sequence, but by changing the sequence regularly or periodically it is possible to ensure that any resulting inter-cell interference will be short lived. In the above exemplary embodiment, 11-bit shift registers were used in generating the appropriate frequency hopping sequence. As those skilled in the art will appreciate, longer or shorter length shift registers could be used instead. Similarly, the number of different sequences that can be obtained from the shift register can also be varied—it does not have to be eight. As those skilled in the art will appreciate, for a given length of shift register, there is a tradeoff between the number of sequences that can be derived from it and the periodicity (T) of those sequences. The length of the sequence is preferably at least 5 times and more preferably more than 10 times longer than the transmission interval of any persistently scheduled users. To ensure maximum frequency diversity for all users, the length of the sequence should correspond to the length of the maximum transmission interval multiplied by the number of sub-bands (S). In the above exemplary embodiment, a mobile telephone based telecommunication system was described in which the above described frequency hopping techniques were employed. As those skilled in the art will appreciate, many of these frequency hopping techniques can be employed in any communication system that uses a plurality of resource blocks. In particular, many of these frequency hopping techniques can be used in wire or wireless based communication systems which either use electromagnetic signals or acoustic signals to carry the data. In the general case, the base stations and the mobile telephones can be considered as communications nodes which communicate with each other. The frequency hopping techniques described above may be used just for uplink data, just for downlink data or for both downlink and uplink data. Other communications nodes may include user devices such as, for example, personal digital assistants, laptop computers, web browsers, etc. In the above exemplary embodiments, a number of software modules were described. As those skilled will appreciate, the software modules may be provided in compiled or un-compiled form and may be supplied to the base station or to the mobile telephone as a signal over a computer network, or on a recording medium. Further, the functionality performed by part or all of this software may be performed using one or more dedicated hardware circuits. However, the use of software modules is preferred as it facilitates the updating of base station5and the mobile telephones3in order to update their functionalities. In the above exemplary embodiments, certain system constants such as the total number of resource blocks in the communication channel, the number of sub-bands and the number of resource blocks in each sub-band were programmed into the mobile telephones and the base stations. 
This information may be programmed directly into the software instructions run on these devices or may be software inputs that can be varied from time to time. In either case, the mobile telephones and the base station will include data (software or inputs) that define these system constants either directly or indirectly. For example, data may be stored that directly defines the values of NRB and S together with data defining how N can be derived from these two. The following is a detailed description of the way in which the present inventions may be implemented in the currently proposed 3GPP LTE standard. Whilst various features are described as being essential or necessary, this may only be the case for the proposed 3GPP LTE standard, for example due to other requirements imposed by the standard. These statements should not, therefore, be construed as limiting the present invention in any way. The following description will use the nomenclature used in the Long Term Evolution (LTE) of UTRAN. For example, a base station is referred to as eNodeB and a user device is referred to as a UE. 1 Introduction During TSG-RAN WG1 #46bis discussions, it was decided that Localised FDMA (L-FDMA) with inter and intra TTI frequency hopping (L-FDMA+FH) would be used for EUTRA Uplink. However, there was not any discussion about what kind of frequency hopping pattern can be supported by EUTRA Uplink. In this contribution, we collect some requirements that can be used for the selection of an efficient hopping pattern for L-FDMA uplink and propose a suitable frequency hopping scheme for the uplink. 2 Requirements for Frequency Hopping Pattern It is well-known that frequency hopping provides service quality improvement through interference averaging and frequency diversity. However, frequency hopping needs to be tailored for each system. The following requirements are applicable to the LTE system [5-6]:No collision between hopping UEs in the same cell;Different hopping patterns in neighbouring cells to reduce inter-cell interference;High degree of frequency diversity for one UE throughout hopping pattern for the subsequent transmissions;Preserve the single carrier property of the L-FDMA;Signalling overhead for informing UEs of a specific or common hopping sequence should be kept as small as possible;Frequency hopping should be designed for small sized packets intended for persistently scheduled UEs (e.g. VoIP service) as well as high speed UEs. 3 Frequency Hopping Scheme Let NRB be the total number of Resource Blocks (RBs) in the whole bandwidth. Let the bandwidth be divided into S sub-bands of N=NRB/S contiguous RBs each. If a UE is assigned a RB x, it is understood that the RB actually used for transmission in TTI (or slot) number t is y={x+a(t)N}mod NRB, where t is a TTI (or slot) counter synchronised between the eNodeB and UE; and a(t) is a value from the set {0, 1, . . . , S−1}. If a UE is assigned more than one RB, then the calculation above is performed for each assigned RB. Provided that all the assigned RBs are contiguous and contained within one of the S sub-bands, the single carrier property is retained even after applying the frequency hopping shift a(t). It follows that the largest allowable contiguous allocation for a hopping UE is N RBs. The signalling of the assigned contiguous resource allocations has already been proposed in [7]. The periodic sequence a(t) is common for all UEs in the cell, and should have the following properties:5.
It should be different in different cells in order to randomise inter-cell interference.6. It should be simple to generate (to minimise computational load in the eNodeB and UE).7. It should be defined by a small number of parameters (to minimise signalling load).8. Its period, T, should be much longer than the transmission interval of persistently scheduled users (otherwise there is a risk that the transmission interval is equal to the period of a(t), in which case there would be no frequency diversity). In the case that some TTIs are set aside for hopping UEs, the hopping shift a(t) would only be applied in those TTIs. Dynamically scheduled UEs may still be scheduled in such ‘hopping TTIs’ in any RBs which are not occupied by hopping UEs. One possibility is to generate a(t) using a pseudo-random sequence, resetting the sequence every T TTIs (or slots). A large number of sequences could easily be generated in this way and the sequence number could be signalled efficiently. For example, consider the shift register arrangement which is shown inFIG.7and which produces a length2047pseudo-random binary (PRBS) sequence. The shift register state is updated each TTI (or slot). Let m(t) represent the 11-bit shift register value at time t. A pseudo-random value in the range 0 to S-1 can be obtained as follows: a(t)=floor[(m(t)·S)/2048]. This calculation is easy to perform using a multiplication and bit shift. By resetting the shift register every T=256 TTIs (or slots),8different sequences can be produced using different initial states. Obviously a longer shift register could produce more sequences, and/or a larger period T. These different sequences can also be assigned into different cells. If required, the calculation above could be modified slightly to ensure that successive values of a(t) are always different, as follows: a(t)={a(t−1)+1+floor[(m(t)·(S−1))/2048]}modS, wherea(0)=0. FIG.8shows a hopping pattern for four UEs where UE1to UE3are assigned1RB each while UE4is assigned2RBs. In this example, a(t) has values of 0, 2, S−1 and 1 for TTI #0, TTI #1, TTI #2 and TTI #n respectively. 4 Conclusions This contribution outlines some requirements for the selection of an efficient hopping pattern for L-FDMA uplink In addition, a method for generating hopping patterns has been described for L-FDMA which avoids collision between hopping UEs and at the same time mitigates other cell interference. Hence, we propose such frequency hopping scheme to be adopted for E-UTRA Uplink. 5 References [1] TSG-RAN WG1 #47, R1-063319 “Persistent Scheduling in E-UTRA”, NTT DoCoMo, NEC Group.[2] TSG-RAN WG1 LTE AdHoc, R1-060099, “Persistent Scheduling for E-UTRA” Ericsson.[3] TSG-RAN WG1 #47, R1-063275, “Discussion on control signalling for persistent scheduling of VoIP”, Samsung.[4] TSG-RAN WG1 #44, R1-060604 “Performance Comparison of Distributed FDMA and Localised FDMA with Frequency Hopping for EUTRA Uplink”, NEC Group.[5] TSG-RAN WG1 #46Bis, R1-062761 “Performance of D-FDMA and L-FDMA with Frequency Hopping for EUTRA Uplink”, NEC Group, NTT DoCoMo.[6] TSG-RAN WG1 #46Bis, R1-062851 “Frequency hopping for E-UTRA uplink”, Ericsson.[7] R1-070364, “Uplink Resource Allocation for EUTRA” NEC Group, NTT DoCoMo. While the invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to these embodiments. 
It will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the claims.
11863230 | DESCRIPTION OF EMBODIMENTS First Embodiment Hereinafter, a first embodiment of the present invention will be described with reference to the drawings. [Overall Configuration of Communication System] FIG.1is an overall configuration diagram of a communication system1according to the first embodiment of the present invention. The communication system1shown inFIG.1is a 10G-EPON (10 Gigabit-Ethernet Passive Optical Network) system. As shown in the drawing, the communication system1is constituted by including a plurality of ONUs100, a plurality of user terminals200respectively communicably connected to the ONUs100, an OLT300, and an optical splitter400. The communication system1is a system in which one OLT300and a plurality of ONUs100are communicably connected in a Point-to-Multipoint manner via the optical splitter400. However, the communication system1may also be a system in which one each of the OLTs300and the ONUs100are connected to each other in a Point-to-Point manner. The user terminal200is, for example, an information processing apparatus such as a personal computer or a home gateway. [Configuration of ONU] FIG.2is a schematic block diagram showing a functional configuration of the ONU100according to the first embodiment of the present invention. As shown inFIG.2, the ONU100includes a device110a, a device110b, an optical power receiving unit120, a UNI (User Network Interface)130, and a power source unit140. Note that inFIG.2, the solid-line arrows indicate a communication line through which the main signal flows. Also, the broken-line arrows indicate a control signal line through which the control signal flows. The device110ais constituted by including a main signal processing unit111aand a control unit/apparatus monitoring unit112a. Also, the device110bis constituted by including a main signal processing unit111band a control unit/apparatus monitoring unit112b. In this manner, the device110aand the device110bhave the same configuration. Note that if it is not necessary to particularly distinguish between the device110aand the device110b, the term “device110” will simply be used below. Also, if it is not necessary to particularly distinguish between the main signal processing unit111aand the main signal processing unit111b, the term “main signal processing unit111” will simply be used below. Also, if it is not necessary to particularly distinguish between the control unit/apparatus monitoring unit112aand the control unit/apparatus monitoring unit112b, the term “control unit/apparatus monitoring unit112” will simply be used below. The main signal processing unit111performs processing such as mutual conversion between an optical signal and an electric signal on the main signal flowing between the OLT (Optical Line Terminal)300and the user terminal200. The control unit/apparatus monitoring unit112ais constituted by including a processor such as a CPU (Central Processing Unit), for example. The control unit/apparatus monitoring unit112acontrols the operation of each functional unit included in the ONU100. Also, the control unit/apparatus monitoring unit112adetects an error that occurs in the main signal by monitoring the main signal flowing through the main signal processing unit111a. Also, the control unit/apparatus monitoring unit112aexecutes alive monitoring of the other device110(i.e., the device110b) via the control signal line. 
Also, if the control unit/apparatus monitoring unit112adetects runaway or stopping of operation of the other device110, the control unit/apparatus monitoring unit112aoutputs a reset instruction to the other device110via the control signal line. Alternatively, if the control unit/apparatus monitoring unit112adetects runaway or stopping of operation of the other device110, the control unit/apparatus monitoring unit112aoutputs a power source reset instruction to the power source unit140via the control signal line. Also, if the control unit/apparatus monitoring unit112aacquires a reset instruction from the control unit/apparatus monitoring unit112(i.e., the control unit/apparatus monitoring unit112b) of the other device110via the control signal line, the control unit/apparatus monitoring unit112aperforms reset processing for resetting the operating state of device110a. The control unit/apparatus monitoring unit112bis constituted by including, for example, a processor such as a CPU. The control unit/apparatus monitoring unit112bcontrols the operation of each functional unit included in the ONU100. Also, the control unit/apparatus monitoring unit112bdetects an error that occurs in the main signal by monitoring the main signal flowing through the main signal processing unit111b. Also, the control unit/apparatus monitoring unit112bexecutes alive monitoring of the other device110(i.e., the device110a) via the control signal line. Also, if the control unit/apparatus monitoring unit112bdetects runaway or stopping of operation of the other device110, the control unit/apparatus monitoring unit112boutputs a reset instruction to the other device110via the control signal line. Alternatively, if the control unit/apparatus monitoring unit112bdetects runaway or stopping of operation of the other device110, the control unit/apparatus monitoring unit112boutputs a power source reset instruction to the power source unit140via the control signal line. Also, if the control unit/apparatus monitoring unit112bobtains a reset instruction from the control unit/apparatus monitoring unit112(i.e., the control unit/apparatus monitoring unit112a) of the other device110via the control signal line, the control unit/apparatus monitoring unit112bperforms reset processing for resetting the operating state of device110b. The optical power receiving unit120receives the optical signal transmitted from the OLT300and outputs it to the main signal processing unit111. Also, the optical power receiving unit120transmits the optical signal output from the main signal processing unit111to the OLT300. The UNI130transmits the electric signal output from the main signal processing unit111to the user terminal200. Also, the UNI130outputs an electric signal transmitted from the user terminal200to the main signal processing unit11. The power source unit140supplies power to each functional unit included in the ONU100. Also, if the power source unit140acquires a reset instruction from the control unit/apparatus monitoring unit112via the control signal line, after temporarily stopping the supply of power to the entire ONU100(i.e., after the power is turned off), the power source unit140executes power source reset processing for resuming the supply of power to the entire ONU100(i.e., turns on the power source). Note that any method can be used for resetting the device110and resetting the power source of the entire ONU100. 
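The text leaves the alive-monitoring mechanism itself open. One common way to detect runaway or stopping of operation over a control signal line is a watchdog-style heartbeat timeout, sketched below with purely illustrative names.

```python
# One possible alive-monitoring mechanism (the embodiment does not fix one): each device
# periodically sends a heartbeat on the control signal line, and the peer flags runaway
# or stoppage when no heartbeat arrives within a timeout. All names are illustrative.
import time

class AliveMonitor:
    def __init__(self, timeout_s=1.0):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called whenever a heartbeat from the peer device is received."""
        self.last_heartbeat = time.monotonic()

    def peer_has_failed(self):
        """True if the peer looks hung or stopped (no heartbeat within the timeout)."""
        return (time.monotonic() - self.last_heartbeat) > self.timeout_s

monitor = AliveMonitor(timeout_s=0.05)
time.sleep(0.1)                      # simulate the peer going silent
print(monitor.peer_has_failed())     # -> True, which would trigger a reset instruction
```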
[Operations of Device] FIG.3is a flowchart showing operations of the device110according to the first embodiment of the present invention. The flowchart shown inFIG.3starts when an error occurs in the other device110. Note that in the following description, the operations of the device110awill be described as an example, but the operations of the device110bare also the same. The control unit/apparatus monitoring unit112aof the device110adetects an error that has occurred in the other device110(device110b) (step S001). As described above, the error referred to here is, for example, runaway or stopping of operation of the device110. Next, the control unit/apparatus monitoring unit112aoutputs a reset instruction to the other device110(device110b) via the control signal line (step S002). Next, if the control unit/apparatus monitoring unit112adetects that the other device110(device110b) has been restored due to reset processing (step S003, Yes), the operations of the device110ashown in the flowchart ofFIG.3end. On the other hand, if the control unit/apparatus monitoring unit112adetects that the other device110(device110b) has not been restored (step S003, No), the control unit/apparatus monitoring unit112aoutputs a power source reset instruction to the power source unit140via the control signal line (step S004). This completes the operations of the device110ashown in the flowchart ofFIG.3. As described above, in the ONU100(communication apparatus) according to the first embodiment, a plurality of devices110(communication processing units) in the communication apparatus mutually perform monitoring. Then, if an error occurs in one device110and the one device110undergoes runaway, stops operating, or the like, the ONU100resets the operating state of the one device110using the other device110. Alternatively, if the device110undergoes runaway, stops operating, or the like, the ONU100resets the power source of the entire communication apparatus (ONU)100. Note that in the conventional communication apparatus, if a soft error such as bit inversion occurs, for example, it is assumed that the device inside the communication apparatus detects and corrects the error. However, in the conventional communication apparatus, if, for example, a soft error that causes runaway, stopping of operation, or the like of a device for monitoring the communication apparatus occurs in that device, the soft error cannot be detected. In contrast to this, the ONU100according to the first embodiment has the above configuration, and thereby even if an error occurs in the device110for monitoring the communication apparatus, the ONU100can detect the error and restore itself. Note that in the first embodiment, a configuration is used in which if the control unit/apparatus monitoring unit112of one device110detects an error that occurs in the other device110, the control unit/apparatus monitoring unit112first instructs a reset of the other device110, and if the other device110is not restored, the control unit/apparatus monitoring unit112instructs a power source reset of the entire communication apparatus (ONU100). 
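A condensed sketch of the FIG.3 flow (steps S001 to S004) is shown below; the peer and power-source interfaces are hypothetical stand-ins for the control signal line and the power source unit140, not APIs defined by the embodiment.

```python
# Sketch of the FIG. 3 flow: reset the failed peer first, escalate to a power source
# reset of the whole apparatus only if the peer does not recover. Stubs are illustrative.
class PeerStub:
    def __init__(self, recovers):
        self.recovers = recovers
    def send_reset_instruction(self):
        print("reset instruction sent to the other device")

class PowerUnitStub:
    def send_power_reset(self):
        print("power source reset of the entire apparatus")

def handle_peer_error(peer, power_unit):
    # S001: runaway or stopped operation of the other device has been detected.
    peer.send_reset_instruction()            # S002: reset via the control signal line
    if peer.recovers:                        # S003: was the peer restored by the reset?
        return "recovered by device reset"
    power_unit.send_power_reset()            # S004: escalate to a power source reset
    return "escalated to power source reset"

print(handle_peer_error(PeerStub(recovers=False), PowerUnitStub()))
```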
However, there is no limitation to this kind of configuration, and it is also possible to use a configuration in which if the control unit/apparatus monitoring unit112of one device110detects an error that occurs in another device110, the control unit/apparatus monitoring unit112first instructs a reset of the other device110, and if the other device110is not restored even if a plurality of instances of the reset are attempted, the control unit/apparatus monitoring unit112instructs a power source reset of the entire communication apparatus (ONU100). It is also possible to use a configuration in which the control unit/apparatus monitoring unit112performs only the former processing or only the latter processing. That is, for example, if the control unit/apparatus monitoring unit112of one device110detects an error that has occurred in the other device110, the control unit/apparatus monitoring unit112may also perform only instruction of a reset of the other device110. Alternatively, for example, if the control unit/apparatus monitoring unit112of one device110detects an error that has occurred in the other device110, the control unit/apparatus monitoring unit112may also instruct a power source reset of the entire communication apparatus (ONU100) without attempting a reset of the other device110. Note that in the first embodiment, the ONU100is configured to include two devices110(device110aand device110b), but may also be configured to include N (N being an integer that is 3 or more) devices110. In this case, for example, if the probability of an error occurring in each device110is 1/X, the probability of an error occurring simultaneously in N devices is (1/X)^N. For this reason, the likelihood that the ONU100cannot be restored due to an error occurring at the same time in all the devices110becomes exponentially lower the greater the number of devices110included in the ONU100is. In this manner, according to the first embodiment, the robustness of the device can be improved without complicating the device configuration. Note that the configuration of the ONU100according to the first embodiment described above is merely an example. For example, it is also possible to use a configuration such as a modified example of the first embodiment described below. The communication apparatus according to the modified example described below includes a plurality of devices capable of mutually performing alive monitoring, similarly to the ONU100according to the first embodiment described above. Modified Example [Operations of Device] Hereinafter, an example of operations of a communication apparatus according to a modified example of the first embodiment will be described. FIG.4is a flowchart showing operations of a device included in the communication apparatus according to the modified example of the first embodiment of the present invention. This flowchart starts when some kind of error occurs in the communication apparatus. Note that in the following description, the operations of any one device among the plurality of devices included in the communication apparatus will be described. Note that in the following description, the one device is referred to as "one device", and one of the other devices is referred to as "another device". Note that each device can operate in two operation modes, namely a "power source reset mode" and a "device reset mode".
The power source reset mode is an operation mode for instructing a reset of the power source of the entire communication apparatus when it is detected that an error has occurred in the communication apparatus. On the other hand, the device reset mode is an operation mode in which when it is detected that an error has occurred in the communication apparatus, if the location where the error occurs is the other device, which is a monitoring target, a reset of the other device is instructed in some cases. Note that the other device, which is a monitoring target, is a device that can instruct a reset of the other device if the one device detects an error that occurs in the other device. Note that the operation mode is set in advance for each device by, for example, an operation manager or the like. As shown inFIG.4, first, the one device detects an error that has occurred in the communication apparatus in which the one device is included (step S101). If the one device is operating in the power source reset mode (step S102, Yes), the one device outputs a power source reset instruction to the power source unit via a control signal (step S103). This completes the operation of the device shown in the flowchart ofFIG.4. If the one device is operating in the device reset mode (step S102, No), the one device determines whether or not the error detected in step S101is an error that has occurred in the device that is the monitoring target. If the detected error is not an error that has occurred in the device that is the monitoring target (step S104, No), the one device outputs a power source reset instruction to the power source unit via the control signal (step S103). This completes the operations of the device shown in the flowchart ofFIG.4. When the detected error is an error that has occurred in the device that is the monitoring target (step S104, Yes), if the one device has a chain of command (step S105, Yes), the one device outputs a power source reset instruction to the power source unit via the control signal line (step S103). This completes the operations of the device shown in the flowchart ofFIG.4. If the one device does not have a chain of command (step S105, No), the one device outputs a reset instruction to the other device in which the error has occurred via the control signal line (step S106). This completes the operations of the device shown in the flowchart ofFIG.4. In this manner, the communication apparatus according to the modified example of the first embodiment has a configuration in which the operation differs depending on what the operation mode is, whether the device is the monitoring target, and whether the device has a chain of command. Hereinafter, three configuration examples of the functional configuration of the communication apparatus according to the modified example of the first embodiment will be described. First Configuration Example Hereinafter, a functional configuration of a communication apparatus600paccording to a first configuration example will be described. FIG.5is a schematic block diagram showing a configuration of the communication apparatus600paccording to the modified example of the first embodiment of the present invention. As shown inFIG.5, the communication apparatus600pincludes a device610a, a device610b, and a power source unit640. Note that inFIG.5, the broken-line arrows represent the control signal line through which the control signal flows. 
The device610ais constituted by including a chain-of-command unit611a, a monitoring unit612a, and a communication processing unit613a. Also, the device610bis constituted by including a chain-of-command unit611b, a monitoring unit612b, and a communication processing unit613b. In this manner, in the communication apparatus600paccording to the first configuration example, both the device610aand the device610bare configured to include a chain-of-command unit. The chain-of-command unit is constituted by a processor such as a CPU, for example. The chain-of-command unit611aand the chain-of-command unit611bcan cause the power source unit640to perform power source resetting of the entire communication apparatus600pby outputting a power source reset instruction to the power source unit640. The monitoring unit612aof the device610acan detect that an error has occurred in the communication processing unit613bof the device610b. If the monitoring unit612adetects that an error has occurred in the communication processing unit613b, the chain-of-command unit611aof the device610aoutputs a power source reset instruction to the power source unit640. The monitoring unit612bof the device610bcan detect that an error has occurred in the communication processing unit613aof the device610a. If the monitoring unit612bdetects that an error has occurred in the communication processing unit613a, the chain-of-command unit611bof the device610boutputs a power source reset instruction to the power source unit640. In this manner, the communication apparatus600paccording to the first configuration example is configured to cause the power source unit640to perform a power source reset if one device detects that an error has occurred in the communication processing unit of another device. Second Configuration Example Hereinafter, a functional configuration of a communication apparatus600qaccording to a second configuration example will be described. FIG.6is a schematic block diagram showing a configuration of the communication apparatus600qaccording to the modified example of the first embodiment of the present invention. As shown inFIG.6, the communication apparatus600qincludes a device610a, a device610b, and a power source unit640. Note that inFIG.6, the broken line arrows represent the control signal line through which the control signal flows. The device610ais constituted by including a chain-of-command unit611a, a monitoring unit612a, and a communication processing unit613a. Also, the device610bis constituted by including a monitoring unit612band a communication processing unit613b. In this manner, in the communication apparatus600qaccording to the second configuration example, the device610ahas a chain-of-command unit, but the device610bdoes not have a chain-of-command unit. The chain-of-command unit611aof the device610acan cause the power source unit640to perform a power source reset of the entire communication apparatus600qby outputting a power source reset instruction to the power source unit640. The monitoring unit612bof the device610bcan cause the device610ato perform resetting of the device610aby outputting a reset instruction to the device610a. The monitoring unit612aof the device610acan detect that an error has occurred in the communication processing unit613bof the device610b. If the monitoring unit612adetects that an error has occurred in the communication processing unit613b, the chain-of-command unit611aof the device610aoutputs a power source reset instruction to the power source unit640. 
The monitoring unit612bof the device610bcan detect that an error has occurred in the communication processing unit613aof the device610a. If the monitoring unit612bdetects that an error has occurred in the communication processing unit613a, the monitoring unit612boutputs a reset instruction to the device610a. In this manner, the communication apparatus600qaccording to the second configuration example is configured such that when one device detects that an error has occurred in the communication processing unit of another device, if the one device has a chain-of-command unit, the one device causes the power source unit640to perform a power source reset, and if the one device does not have a chain-of-command unit, the other device is reset. Third Configuration Example Hereinafter, a functional configuration of a communication apparatus600raccording to a third configuration example will be described. FIG.7is a schematic block diagram showing a configuration of the communication apparatus600raccording to a modified example of the first embodiment of the present invention. As shown inFIG.7, the communication apparatus600rincludes a device610a, a device610b, a device610c, and a power source unit640. Note that inFIG.7, the broken line arrow represents the control signal line through which the control signal flows. The device610ais constituted by including a chain-of-command unit611a, a monitoring unit612a, and a communication processing unit613a. Also, the device610bis constituted by including a chain-of-command unit611b, a monitoring unit612b, and a communication processing unit613b. Also, the device610cis constituted by including a monitoring unit612cand a communication processing unit613c. In this manner, in the communication apparatus600raccording to the third configuration example, the device610aand the device610binclude a chain-of-command unit, but the device610cdoes not include a chain-of-command unit. The chain-of-command unit611aof the device610aand the chain-of-command unit611bof the device610bcan cause the power source unit640to perform a power source reset of the entire communication apparatus600rby outputting a power source reset instruction to the power source unit640. The monitoring unit612cof the device610ccan cause the device610ato perform a reset of the device610aby outputting a reset instruction to the device610a. The monitoring unit612aof the device610acan detect that an error has occurred in the communication processing unit613bof the device610band the communication processing unit613cof the device610c. If the monitoring unit612adetects that an error has occurred in the communication processing unit613bor the communication processing unit613c, the chain-of-command unit611aof the device610aoutputs a power source reset instruction to the power source unit640. The monitoring unit612bof the device610bcan detect that an error has occurred in the communication processing unit613aof the device610a. If the monitoring unit612bdetects that an error has occurred in the communication processing unit613a, the chain-of-command unit611bof the device610boutputs a power source reset instruction to the power source unit640. The monitoring unit612cof the device610ccan detect that an error has occurred in the communication processing unit613aof the device610a. If the monitoring unit612cdetects that an error has occurred in the communication processing unit613a, the monitoring unit612coutputs a reset instruction to the device610a. 
In this manner, the communication apparatus600raccording to the third configuration example is configured such that when one device detects that an error has occurred in the communication processing unit of another device, if the one device has a chain-of-command unit, the one device causes the power source unit640to perform a power source reset, and if the one device does not have a chain-of-command unit, the other device is reset. Second Embodiment Hereinafter, a second embodiment of the present invention will be described with reference to the drawings. Note that the overall configuration diagram of the communication system1and the schematic block diagram showing the functional configuration of the ONU100in the second embodiment described below are the same as those in the first embodiment (i.e., the same asFIGS.1and2, respectively), and therefore description thereof will be omitted. [Operations of Device] FIG.8is a flowchart showing operations of the device110according to the second embodiment of the present invention. The flowchart shown inFIG.8starts when an error occurs in the ONU100. Note that the error referred to here may also include not only an error that occurs in another device110, but also an error that occurs in the device110and an error that occurs in another member (a member other than the device110) in the ONU100. The control unit/apparatus monitoring unit112initializes a variable M, which indicates a counter for counting the number of instances of outputting the power source reset instruction, by substituting 0 for the value of the variable M (step S201). Note that the leftward arrows shown in steps S201and S205of the flowchart ofFIG.8mean operations of substituting the value on the right side for the variable on the left side. The value of the variable M is temporarily stored in, for example, a storage medium (not shown) included in the control unit/apparatus monitoring unit112. The storage medium referred to here is, for example, a cache memory mounted in a CPU or the like. Next, the control unit/apparatus monitoring unit112of the device110detects an error that has occurred in the communication apparatus (ONU100) in which the control unit/apparatus monitoring unit112is included (step S202). As described above, the error referred to here is an error that causes, for example, runaway or stopping of operation of the device110. The control unit/apparatus monitoring unit112determines whether or not the value of the variable M is less than a predetermined value j (step S203). Note that the predetermined value j is a value indicating the maximum number of instances of attempting power source reset processing. The predetermined value j is, for example, a value determined in advance by a person in charge of operation and management, or the like. If the value of the variable M is less than the predetermined value j (step S203, Yes), the control unit/apparatus monitoring unit112outputs a reset instruction to the other device110or a power source reset instruction to the power source unit140via the control signal line (step S204). Note that the operations for the control unit/apparatus monitoring unit112to output the reset instruction to the other device110or to give the power source reset instruction to the power source unit140are performed according to, for example, the above-described flowchart shown inFIG.4. Next, the control unit/apparatus monitoring unit112adds 1 to the value of the variable M (step S205). 
Next, if the control unit/apparatus monitoring unit112detects that the ONU100has been restored due to the reset processing in the other device110or the power source reset processing performed by the power source unit140(step S206, Yes), the operations of the device110indicated by the flowchart ofFIG.8end. On the other hand, if the control unit/apparatus monitoring unit112detects that the ONU100has not been restored (step S206, No), the operations of step S203and onward described above are repeated. On the other hand, if the value of the variable M reaches the predetermined value j (step S203, No), the control unit/apparatus monitoring unit112outputs an operation stopping instruction for the communication apparatus (ONU100) (step S207). Note that any method can be used as a method for stopping the operation of the ONU100. For example, the control unit/apparatus monitoring unit112may stop the operation of the ONU100by outputting an operation stopping instruction, which is an instruction to stop the power supply to the entire ONU100, to the power source unit140via the control signal line. Next, the control unit/apparatus monitoring unit112outputs an illumination instruction indicating an instruction to illuminate a lamp (not shown) provided in the ONU100(step S208). This completes the operations of the device110shown in the flowchart ofFIG.8. Note that any method can be used as a method for illuminating the lamp. For example, the control unit/apparatus monitoring unit112starts power supply from the power source unit140to the lamp by outputting an illumination instruction to the power source unit140via the control signal line, and illuminates the lamp. By turning on the lamp in this manner, the user, the person in charge of operation and maintenance, or the like can recognize that the ONU100is in the operation-stopped state (abnormal state). When the user, the person in charge of operation and maintenance, or the like recognizes that the ONU100is in the operation-stopped state, the user, the person in charge of operation and maintenance, or the like manually restores the operating state of the ONU100. For example, the user, the person in charge of operation and maintenance, or the like restores the operating state of the ONU100by unplugging and plugging in a power plug (not shown) included in the ONU100into an outlet (not shown). Note that a method other than the method of illuminating the lamp may also be used as long as it is a method according to which the user, the person in charge of operation and maintenance, or the like can be notified that the ONU100is in the operation-stopped state. For example, the control unit/apparatus monitoring unit112may use a speaker (not shown) provided in the ONU100to notify the user, the person in charge of operation and maintenance, or the like by audio. Alternatively, for example, the control unit/apparatus monitoring unit112may also display information indicating that the ONU100is in the operation-stopped state on a display device (not shown) such as a liquid crystal display (LCD) included in the communication apparatus (ONU100) in which the control unit/apparatus monitoring unit112is included, or an external device. As described above, the ONU100(communication apparatus) according to the second embodiment monitors itself. Then, if an error that causes, for example, runaway or stopping of operation occurs in itself, the ONU100executes a power source reset for resetting the supply of power to the entire communication apparatus. 
If the communication apparatus is still not restored, the ONU100repeatedly performs the power source reset. If the communication apparatus is still not restored even if the power source reset is attempted until the predetermined number of instances is reached, the ONU100stops the operation of itself. Then, the ONU100illuminates the lamp in order to cause the user, the person in charge of operation and maintenance, or the like to recognize that the communication apparatus is in the operation-stopped state. As a result, the ONU100can prompt manual restoration of itself. The ONU100waits until restoration of itself is performed manually by the user, the person in charge of operation and maintenance, or the like. By including the above configuration, the ONU100according to the second embodiment can autonomously attempt restoration of itself if an error occurs in itself. If the ONU100detects an error from which restoration is possible through, for example, a power source reset (or reconfiguration), the ONU100can autonomously reset and restore itself. As a result, with the ONU100according to the second embodiment, the frequency of a manual restoration task is reduced, and therefore the operating cost of a communication apparatus that can handle errors that occur in the communication apparatus can be reduced. Note that according to the second embodiment, for example, it is also expected that resistance to soft errors caused by neutron rays originating from cosmic rays will be improved. Note that in the second embodiment, if the control unit/apparatus monitoring unit112of the device110detects an error that occurs in the ONU100, first, the control unit/apparatus monitoring unit112instructs a power source reset of the entire ONU100, and if the communication apparatus still is not restored, the power source reset is repeatedly instructed until the predetermined number of instances is reached. If restoration still does not occur even if the number of instances of attempting the power source reset reaches the predetermined number of instances, the control unit/apparatus monitoring unit112stops the operation of the ONU100. Also, the control unit/apparatus monitoring unit112is configured to illuminate the lamp provided in the ONU100. However, there is no limitation to such a configuration, and the control unit/apparatus monitoring unit112may also be configured to perform only the former processing or only the latter processing. That is, for example, the control unit/apparatus monitoring unit112may only stop the operation of the communication apparatus if the communication apparatus still is not restored even if the number of instances of attempting the power source reset reaches a predetermined number of instances (i.e., it is also possible to use a configuration in which notification through illumination of a lamp or the like is not performed). Alternatively, for example, if the communication apparatus is not restored even if the control unit/apparatus monitoring unit112instructs a power source reset of the entire ONU100, the control unit/apparatus monitoring unit112may also illuminate the lamp without repeatedly instructing the power source reset. Modified Example of Second Embodiment Hereinafter, a modified example of the second embodiment of the present invention will be described with reference to the drawings. 
Note that the overall configuration diagram of the communication system1and the schematic block diagram showing the functional configuration of the ONU100in the modified example of the second embodiment described below are the same as those of the first embodiment (i.e., the same asFIGS.1and2, respectively), and therefore description thereof will be omitted. In the second embodiment described above, a configuration was used in which if the ONU100detects an error that occurs in itself, the ONU100first attempts a power source reset for resetting the power supply to the entire communication apparatus, and if the communication apparatus still is not restored, operation of the communication apparatus is stopped. In contrast to this, in the modified example of the second embodiment described below, if a device110of the ONU100detects an error that occurs in another device110included in the ONU100, the device110of the ONU100first attempts a reset of the other device110in which the error occurred. If the device110still is not restored, the device110attempts a power source reset. Then, if the device110still is not restored, the device110stops the operation of the communication apparatus. [Operations of Device] FIG.9is a flowchart showing the operation of the device110according to the modified example of the second embodiment of the present invention. The flowchart shown inFIG.9starts when an error occurs in the other device110. Note that in the following description, the operations of the device110awill be described as an example, but the operations of the device110bare also the same. The control unit/apparatus monitoring unit112aof the device110adetects an error that occurs in the other device110(device110b) (step S301). As described above, the error referred to here is an error that causes, for example, runaway or stopping of operation of the device110. Next, the control unit/apparatus monitoring unit112ainitializes a variable N, which indicates a counter for counting the number of instances of outputting a reset instruction, by substituting 0 for the value of the variable N (step S302). Note that the leftward arrows shown in step S302, step S304, step S307, and step S309in the flowchart ofFIG.9mean operations of substituting the value on the right side for the variable on the left side. The value of the variable N is temporarily stored in, for example, a storage medium (not shown) included in the control unit/apparatus monitoring unit112a. Next, the control unit/apparatus monitoring unit112aoutputs a reset instruction to the device110bvia the control signal line (step S303). Next, the control unit/apparatus monitoring unit112aadds 1 to the value of the variable N (step S304). Next, if the control unit/apparatus monitoring unit112adetects that the device110bhas been restored due to the reset (step S305, Yes), the operations of the device110ashown in the flowchart ofFIG.9end. On the other hand, if the control unit/apparatus monitoring unit112adetects that the device110bhas not been restored (step S305, No), the control unit/apparatus monitoring unit112adetermines whether or not the value of the variable N is less than the predetermined value k (step S306). Note that the predetermined value k is a value indicating the maximum number of instances of attempting the reset processing of the device110. The predetermined value k is, for example, a value determined in advance by a person in charge of operation and maintenance, or the like. 
If the value of the variable N is less than the predetermined value k (step S306, No), the control unit/apparatus monitoring unit112arepeats the above-described operations of step S303and onward. On the other hand, if the value of the variable N has reached the predetermined value k (step S306, Yes), the control unit/apparatus monitoring unit112aperforms the operations of step S307and onward. Note that since the operations of step S307and onward shown inFIG.9are the same as the operations of step S202and onward shown inFIG.8, description thereof will be omitted. Note that it is also possible to use a configuration in which the values of the variable N and the predetermined value k used in step S306above are used as the values of the variable M and the predetermined value j in step S311. That is, a common variable and a common predetermined value may be used for the maximum number of instances of attempting reset processing for the device110and the maximum number of instances of attempting power source reset processing performed by the power source unit140for the entire ONU100. As described above, the device110of the ONU100(communication apparatus) according to the modified example of the second embodiment monitors the communication apparatus. Then, the device110resets the operating state of the other device110if an error that causes runaway or stopping of operation occurs in the other device110. If the other device110still is not restored, the device110repeatedly resets the operating state of the other device110. If the other device110still is not restored even if reset is attempted until the predetermined number of instances is reached, the device110executes a power source reset for resetting the power source of the entire communication apparatus. If the communication apparatus still is not restored, the device110repeatedly performs the power source reset. If the communication apparatus still is not restored even if the power source reset is attempted until a predetermined number of instances is reached, the device110stops the operation of the communication apparatus. Then, the device110illuminates the lamp in order to cause the user, the person in charge of operation and maintenance, or the like to recognize that the communication apparatus is in the operation-stopped state. The ONU100waits until the user, the person in charge of operation and maintenance, or the like manually restores the communication apparatus. By including the above configuration, the ONU100according to the modified example of the second embodiment can autonomously attempt restoration of itself if an error occurs in the communication apparatus. If the ONU100detects an error from which restoration is possible through, for example, a power source reset (or reconfiguration), the ONU100can autonomously reset and restore itself. Note that in each of the above-described embodiments, as an example, the ONU100is configured to detect an error that occurs in itself and perform restoration. However, the device to which the present invention can be applied is not limited to the ONU100, and can be applied to other devices as well. The other device referred to here is, for example, a communication apparatus in a communication system other than OLT300and 10G-EPON, and a device other than a communication apparatus. Some or all of the ONUS100in the above-described embodiment may also be realized by a computer. 
In that case, the program for realizing this function may also be recorded on a computer-readable recording medium, and the program recorded on the recording medium may be loaded to the computer system and executed. Note that it is assumed that the term “computer system” herein includes an OS and hardware of peripheral devices. Also, the “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a recording device such as a hard disk built in a computer system. Furthermore, a “computer-readable recording medium” may also include a computer-readable recording medium that dynamically holds a program for a short amount of time, such as a communication line used in the case of transmitting a program via a network such as the Internet or a communication line such as a telephone line, and a computer-readable recording medium in which a program is held for a certain amount of time, such as a volatile memory inside a computer system that serves as a server or a client in such a case. Also, the above-described program may also be for realizing some of the above-mentioned functions, may further be capable of realizing the above-described functions in combination with a program already recorded in the computer system, and may also be realized using a programmable logic device such as an FPGA (Field Programmable Gate Array). Although the embodiments of the present invention have been described above in detail with reference to the drawings, the specific configuration is not limited to these embodiments, and designs and the like within a range that does not deviate from the gist of the present invention are also encompassed therein. REFERENCE SIGNS LIST 1Communication system100ONU110(110a,110b) Device111(111a,111b) Main signal processing unit112(112a,112b) Control unit/apparatus monitoring unit120Optical power receiving unit130UNI140Power source unit200User terminal300OLT400Optical splitter | 43,190 |
11863231 | DETAILED DESCRIPTION Use of ordinal terms such as “first” and “second” does not by itself connote any priority, precedence, or order of one element over another or the chronological sequence in which acts of a method are performed, but such terms are used merely as labels to distinguish one element having a certain name from another element having the same name. Different technical features described in the following embodiments may be mixed or combined in various ways if they do not conflict with each other. In order to achieve routing optimization, network planning, and rapid failure recovery, the present invention produces accurate estimates for network performance key performance indicator(s) (KPI) by means of neural network(s) and attentional mechanism(s). Please refer toFIG.1.FIG.1is a schematic diagram of a system10according to an embodiment of the present invention. The system10may include a user equipment (UE)100, a radio access network (RAN)120, a mesh network140and an optimizer160. The radio access network120is configured to provide a communication connection between the UE100and the mesh network140. The mesh network140may belong to a data plane190D. The mesh network140may be an optical mesh network, and may include multiple nodes140N and links140L. One link140L is connected between two nodes140N; multiple links140L may form a path. One node140N may include a transponder, a multiplexer (MUX), an amplifier (AMP), a reconfigurable optical add/drop multiplexer (ROADM), a demultiplexer (DEMUX), or other optical device(s). In the knowledge plane190K, there is the optimizer160. The optimizer160may collect network state(s)170sfrom the data plane190D, and may obtain timely statistics such as traffic volume (for instance, a traffic matrix150f). The network state170smay include (average) packet loss, (average) jitter, or (average) delay/latency of each path. The traffic matrix150fmay be defined as the bandwidth between two nodes140N in the mesh network140. The optimizer160may leverage the estimate of network performance key performance indicator (s)150kmade by a network model150to realize routing optimization, network planning (for instance, to select the optimal link configuration/placement), and rapid failure recovery. Specifically, the network model150may analyze the relationship between topology150g, routing150r, the traffic matrix150fcorresponding to the input traffic, and a power matrix150p. Besides, the network model150may be tasked to accurately predict/estimate the network performance key performance indicator (s)150k(for instance, delay, jitter, or packet loss) for a specific configuration170c. The power matrix150pmay be defined as the optical powers corresponding to multiple links140L of the mesh network140. By adjusting the optical power of the optical device(s) of the node140N of the mesh network140, the embodiment(s) of the present invention may optimize the network performance key performance indicator (s)150k. It is worth noting that the network model150may take the topology150g, the scheme of the route150rfrom the source to the destination (such as the list of end-to-end paths), the traffic matrix150fand the power matrix150pas inputs, but they may be used as outputs (for example, based on the current network state170s) alternatively. 
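For illustration only, the inputs and outputs of the network model150discussed above may be organized as in the following Python sketch. The field names, types, and shapes are assumptions chosen for readability and are not the data format of the embodiments.

from dataclasses import dataclass
from typing import Dict, List, Tuple

Node = str
Link = Tuple[Node, Node]
Pair = Tuple[Node, Node]  # (source, destination)

@dataclass
class ModelInputs:
    topology: List[Link]               # topology 150g: the links of the mesh network
    routing: Dict[Pair, List[Link]]    # routing 150r: the end-to-end path per source/destination pair
    traffic_matrix: Dict[Pair, float]  # traffic matrix 150f: bandwidth between two nodes
    power_matrix: Dict[Link, float]    # power matrix 150p: optical power of each link

@dataclass
class KpiEstimates:
    delay: Dict[Pair, float]           # estimated per-path delay/latency
    jitter: Dict[Pair, float]          # estimated per-path jitter
    packet_loss: Dict[Pair, float]     # estimated per-path packet loss

As noted above, any of these quantities may equally be treated as an output to be decided by the optimizer160rather than as an input.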
In an embodiment, the network model150may leverage the ability of graph neural network(s) (GNN) and attention mechanism(s) to study/learn and model graph-structured information; therefore, the network model150is able to generalize over arbitrary topologies, routing schemes, traffic matrices corresponding to traffic intensity, and power matrices. Furthermore, the network model150is able to produce accurate estimates/predictions even for topologies, routing schemes, traffic matrices, and power matrices unseen in the training (for instance, the step S402of an optimization method40). For example, the internal architecture of the network model150may refer to the algorithm shown in Table 1. The network model150may run the algorithm shown in Table 1 (for instance, hill-climbing) to iteratively explore the performance of different/candidate configurations170cuntil the network model150finds one configuration170cmeeting a target policy170p. The target policy170pmay include a plurality of optimization objectives and constraints. The optimization objectives may be, for example, to minimize end-to-end delay/latency. The constraints may be, for example, a security policy.
TABLE 1
Input: xp, xl, R
Output: hpT, hlT, yp
1 foreach p∈R do hp0←[xp, 0..., 0];
2 foreach l∈N do hl0←[xl, 0..., 0];
3 for t=0 to T-1 do
4   foreach p∈R do
5     foreach l∈p do
6       hpt←RNNt(hpt, hlt)
7       mp,lt+1←hpt
8     end
9     hpt+1←hpt
10   end
11   foreach l∈N do
12     hlt+1←Ut(hlt, Σp:l∈p mp,lt+1)
13   end
14 end
15 yp←Fp(hp)
As shown in Table 1, the network model150may receive the initial link attribute/function/feature xl, the initial path attribute/function/feature xp, and the routing description R as inputs, and output the inferred feature metrics yp. The initial link attribute/function/feature xl may be related to the optical power. In the algorithm shown in Table 1, the loop from line 3 to line 14 may represent message-passing operations. In the message-passing operations, the information encoded (for instance, hidden states) among links and paths may be exchanged mutually. The network model150may repeat the message passing operations over the link hidden state vector hl of each link and the path hidden state vector hp of each path T times (for instance, the loop from line 3 in the algorithm shown in Table 1), which makes the link hidden state vector hl (initialized as hl0) and the path hidden state vector hp (initialized as hp0) converge. In the algorithm shown in Table 1, by directly mapping the routing description R (namely, the set of end-to-end paths) to the message passing operations among links and paths, each path collects messages from all the links included in it (for instance, the loop from line 5) and each link receives messages from all the paths including it (for instance, line 12). The recurrent function RNNtin line 6 may correspond to a recurrent neural network (RNN), which may be well suited to capture dependence in sequences of variable size and thus may be utilized to model the sequential dependence of the links included in the path(s). In an embodiment, the neural network layer corresponding to the recurrent function RNNtmay include a gated recurrent unit (GRU). In the algorithm shown in Table 1, lines 9 and 12 may represent update functions/operations, which encode the newly collected information into the hidden states respectively for paths and links. The update function Utin line 12 may correspond to a trainable neural network. In an embodiment, the neural network layer corresponding to the update function Utmay include a gated recurrent unit. 
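For illustration only, the message-passing loop of Table 1 may be sketched in runnable Python as follows. The trained functions RNNt, Ut, and Fp are replaced by toy placeholder functions (in an actual embodiment they may be gated recurrent units and a fully-connected readout network), and the hidden-state size, helper names, and example values are assumptions.

D = 4  # hidden-state size (illustrative)

def rnn_step(h_path, h_link):
    # Stands in for RNNt in line 6 (e.g., a GRU in an actual embodiment).
    return [0.5 * hp + 0.5 * hl for hp, hl in zip(h_path, h_link)]

def update(h_link, message_sum):
    # Stands in for Ut in line 12 (e.g., a GRU in an actual embodiment).
    return [0.5 * hl + 0.5 * m for hl, m in zip(h_link, message_sum)]

def readout(h_path):
    # Stands in for Fp in line 15 (e.g., a fully-connected readout network).
    return sum(h_path)  # toy path-level metric yp

def infer(x_path, x_link, routing, T=3):
    # Lines 1-2: initialize hidden states from the path/link features (zero padded).
    h_path = {p: [x_path[p]] + [0.0] * (D - 1) for p in routing}
    h_link = {l: [x_link[l]] + [0.0] * (D - 1) for l in x_link}
    for _ in range(T):                                    # line 3
        messages = {l: [0.0] * D for l in h_link}
        for p, links in routing.items():                  # line 4
            hp = h_path[p]
            for l in links:                               # lines 5-8: a path collects messages from its links
                hp = rnn_step(hp, h_link[l])
                messages[l] = [m + v for m, v in zip(messages[l], hp)]
            h_path[p] = hp                                # line 9
        for l in h_link:                                  # lines 11-13: a link aggregates messages from its paths
            h_link[l] = update(h_link[l], messages[l])
    return {p: readout(h_path[p]) for p in h_path}        # line 15

# Example: two paths sharing the link ("A", "B").
routing = {"p1": [("A", "B"), ("B", "C")], "p2": [("A", "B")]}
x_link = {("A", "B"): 1.0, ("B", "C"): 0.5}
x_path = {"p1": 2.0, "p2": 1.0}
print(infer(x_path, x_link, routing))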
In the algorithm shown in Table 1, the readout function Fpin line 15 may predict the path-level feature metrics yp(for instance, delay/latency or packet loss) using the path hidden state vector hpas input. Alternatively, the readout function Fpmay infer link-level feature metrics ylusing the path hidden state vector hpas input. In an embodiment, the optimizer160may support optical power management. The optimizer160may execute the algorithm (for instance, the algorithm shown in Table 1) according to the target policy170pso as to decide/find out the power matrix150pwith the best/optimal network performance key performance indicator(s). Furthermore, the optimizer160may collect the network state170sof the mesh network140. When the optimizer160determines/finds out that the optical attenuation of the link140L included in a certain path of the mesh network140increases according to the network state170s, the optimizer160may decide/find out the best/optimal power matrix150pand provide the configurations170cat a predetermined time to adjust the mesh network140. In an embodiment, the optimizer160may support recovery. The optimizer160may execute the algorithm (for instance, the algorithm shown in Table 1) according to the target policy170pso as to decide/find out the path from the source to the destination with the best/optimal network performance key performance indicator(s). Furthermore, the optimizer160may collect the network state170sof the mesh network140. When the optimizer160determines/finds out a malfunction/breakdown/fault of the link140L or node140N included in a certain path of the mesh network140according to the network state170s, the optimizer160may modify/adjust the topology150gso as to decide/find out the best/optimal path from the source to the destination and provide the configurations170cto adjust the mesh network140. In an embodiment, the optimizer160may support bandwidth calendaring. The optimizer160may execute the algorithm (for instance, the algorithm shown in Table 1) according to the target policy170pso as to decide/find out the traffic matrix150fwith the best/optimal network performance key performance indicator(s). Furthermore, when it is expected/scheduled/planned to change/adjust the bandwidth of the path or the link140L of mesh network140, the optimizer160may decide/find out the best/optimal power matrix150pand provide the configurations170cat a predetermined time to adjust the mesh network140. FIG.2is a schematic diagram of a neural network20according to an embodiment of the present invention. The neural network20may correspond to the readout function Fpshown in Table 1. The neural network20may be a fully-connected neural network. The neural network20may include an input layer NL, hidden layers HL1, HL2, HL3, and an output layer TL. The input layer NL may include neurons x1, x2, . . . , and xa. The hidden layer HL1may include neurons z11, z12, . . . , and z1b. The hidden layer HL2may include neurons z21, z22, . . . , and z2c. The hidden layer HL3may include neurons z31, z32, . . . , and z3d. Here, a, b, c, and d are positive integers. The output layer TL may include neurons y1, y2, and y3, but is not limited thereto. The output layer TL may include more or less numbers of neurons. In an embodiment, the input values of the neurons x1to xa (also referred to as the input neurons x1to xa) of the input layer NL of the neural network20are related to the optical power. 
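For illustration only, the layered structure of the neural network20(the input layer NL, the hidden layers HL1to HL3, and the output layer TL) may be sketched as a plain fully-connected forward pass in Python. The layer sizes, weights, and activation function are assumptions; the attention-related terms described below are omitted here.

import math
import random

def g(values):
    # Activation function g(.); a sigmoid is assumed here.
    return [1.0 / (1.0 + math.exp(-v)) for v in values]

def dense(inputs, weights, biases):
    # One fully-connected layer: each neuron sums the weighted inputs plus a bias.
    return g([sum(w * x for w, x in zip(row, inputs)) + b for row, b in zip(weights, biases)])

def forward(x, layers):
    # 'layers' holds (weights, biases) for HL1, HL2, HL3 and the output layer TL.
    for weights, biases in layers:
        x = dense(x, weights, biases)
    return x

# Toy dimensions: a=3 input neurons (related to the optical power), 4-4-4 hidden neurons, 3 output neurons.
random.seed(0)
sizes = [3, 4, 4, 4, 3]
layers = [([[random.uniform(-1.0, 1.0) for _ in range(sizes[i])] for _ in range(sizes[i + 1])],
           [0.0] * sizes[i + 1]) for i in range(len(sizes) - 1)]
print(forward([0.2, 0.5, 0.1], layers))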
In order to achieve routing optimization, network planning, and rapid failure recovery, the present invention produces accurate estimates for network performance key performance indicator (s) (KPI) by means of neural network (s) and attentional mechanism(s). In an embodiment, the present invention may leverage the attention mechanism (s) to construct/develop/build/configure the neural network20. For example, the neurons z21to z2cof the hidden layer HL2may be respectively connected to the neuron y2, and respectively correspond to hyper-parameters p21y2, p22y2, . . . , and p2cy2. In other words, the degree(s) of constraint/restriction/enhancement/strengthening of the feature(s) of the hidden layer HL2may be determined/decided using the hyper-parameters p21y2to p2cy2. For example, an output value Ty2of the neuron y2may satisfy Ty2=g(W31y2*Tz31+W32y2*Tz32+ . . . W3dy2*Tz3d+p21y2*Tz21+p22y2*Tz22+ . . . +p2cy2*Tz2c). That is to say, the output value Ty2of the neuron y2may be equal to the sum of the output value(s) of the neuron(s) in the previous layer multiplied by its/their corresponding parameter(s) (for instance, the output value Tz31of the neuron z31multiplied by the parameter W31y2from the neuron z31to the neuron y2, the output value Tz32of the neuron z32multiplied by the parameter W32y2from the neuron z32to the neuron y2, . . . , and the output value Tz3dof the neuron z3dmultiplied by the parameter W3dy2from the neuron z3dto the neuron y2), the output value(s) of the neuron(s) in a certain layer multiplied by its/their corresponding hyper-parameter(s) (for instance, the output value Tz21of the neuron z21multiplied by the hyper-parameter p21y2from the neuron z21to the neuron y2, the output value Tz22of the neuron z22multiplied by the hyper-parameter p22y2from the neuron z22to the neuron y2, . . . , and the output value Tz2cof the neuron z2cmultiplied by the hyper-parameter p2cy2from the neuron z2cto the neuron y2), and/or bias, which is then transformed by the activation function g( ). In this case, the output values Tz21to Tz2cof the neurons z21to z2cmay serve as distance functions respectively. In an embodiment, the present invention may leverage the attention mechanism(s) to construct/develop/build/configure the hidden layer HL2. For example, the hidden layer HL2may include neurons k21, k22, . . . , and k2c, which are respectively connected to the neurons z21to z2c, and respectively correspond to hyper-parameters p21, p22, . . . , and p2c. In other words, the degree(s) to which the neurons k21to k2cconstrain/restrict/enhance/strengthen the feature(s) of the hidden layer HL2may be determined/decided using the hyper-parameters p21to p2c. For example, the output value Tz21of the neuron z21may satisfy Tz21=g(W1121*Tz11+W1221*Tz12+ . . . +W1b21*Tzlb+p21*Tk21). That is to say, the output value Tz21of the neuron z21may be equal to the sum of the output value(s) of the neuron(s) in the previous layer multiplied by its/their corresponding parameter(s) (for instance, the output value Tz11of the neuron z11multiplied by the parameter W1121from the neuron z11to the neuron z21, the output value Tz12of the neuron z12multiplied by the parameter W1221from the neuron z12to the neuron z21, . . . 
, and the output value Tz1bof the neuron z1bmultiplied by the parameter W1b21from the neuron z1bto the neuron z21), bias, and/or the output value Tk21of the neuron k21multiplied by the hyper-parameter p21from the neuron k21to the neuron z21, which is then transformed by the activation function g( ) In this case, the output value Tk21of the neuron k21may serve as a distance function. Accordingly, a neuron (for instance, the neuron z21) in the enhanced/strengthened neural network layer (for instance, the hidden layer HL2) in neural network20is directly connected to a neuron (also referred to as an output neuron) (for instance, the neuron y2) in the output layer TL and/or another neuron (also referred to as an auxiliary neuron) (for instance, the neuron k21). There is a hyper-parameter (for instance, the hyper-parameter p21) between the auxiliary neuron and the neuron. There is another hyper-parameter (for instance, the hyper-parameter p21y2) between the neuron and the output neuron. That is, the neural network20may use two-times attention mechanism(s) to enhance/strengthen the influence of the feature (s) of a certain neural network layer (for instance, the hidden layer HL2), which helps the neural network20to extract features, thereby improving the accuracy of inference. The values of the hyper-parameters p21to p2cand p21y2to p2cy2may be determined/decided according to different requirements. In an embodiment, the hyper-parameters p21to p2cand p21y2to p2cy2may be real numbers greater than or equal to zero. When the hyper-parameters p21to p2cand p21y2to p2cy2are large, hard constraint(s) is/are posed. When the hyper-parameters p21to p2cand p21y2to p2cy2are small, soft constraint(s) is/are posed. In an embodiment, by adjusting/modifying the hyper-parameters p21to p2cand p21y2to p2cy2, the degree of the influence of a certain neural network layer (for instance, the hidden layer HL2) or a certain neuron (for instance, the neuron k21) on the neuron(s) (for instance, the neuron y2) of the output layer TL or whether the neural network layer (for instance, the hidden layer HL2) or the neuron influences the neuron(s) (for instance, the neuron y2) of the output layer TL may be determined according to, for example, the open system interconnection model (OSI) with the seven-layer structure. In an embodiment, the output values Tk21to Tk2cof the neurons k21to k2cmay serve as distance functions respectively. In an embodiment, the output values Tk21to Tk2cof the neurons k21to k2cmay be real numbers in a range of 0 to 1. In an embodiment, the output values Tk21to Tk2cof the neurons k21to k2cmay be converted from logic “0” or logic “1” into a differentiable form. In an embodiment, by adjusting/modifying the output values Tk21to Tk2cof the neurons k21to k2c, the degree of the influence of a certain neural network layer (for instance, the hidden layer HL2) or a certain neuron (for instance, the neuron k21) on the neuron(s) (for instance, the neuron y2) of the output layer TL may be regulated/adjusted/changed according to, for example, the seven-layer structure of the open system interconnection model. It may be seen that the hidden layers of the neural network may respectively correspond to different layers of the open system interconnection model with the seven-layer structure. FIG.3is a schematic diagram of a neural network30according to an embodiment of the present invention. 
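For illustration only, the two-fold attention mechanism of the neural network20described above, that is, an auxiliary-neuron term weighted by a hyper-parameter inside the hidden layer HL2, and hidden layer HL2outputs weighted by further hyper-parameters at the output neuron y2, may be sketched in Python as follows. The weights, sizes, and numerical values are assumptions.

import math

def g(x):
    # Activation function g(.); a sigmoid is assumed here.
    return 1.0 / (1.0 + math.exp(-x))

def hl2_neuron(hl1_outputs, weights, aux_output, hyper_p):
    # Tz2i = g( sum_j W_1j,2i * Tz1j + p_2i * Tk2i )
    return g(sum(w * t for w, t in zip(weights, hl1_outputs)) + hyper_p * aux_output)

def output_neuron_y2(hl3_outputs, w_hl3, hl2_outputs, hyper_p_hl2):
    # Ty2 = g( sum_d W_3d,y2 * Tz3d + sum_c p_2c,y2 * Tz2c )
    return g(sum(w * t for w, t in zip(w_hl3, hl3_outputs)) +
             sum(p * t for p, t in zip(hyper_p_hl2, hl2_outputs)))

# Toy values: hyper-parameters of 0 remove the attention terms, while larger values
# strengthen the influence of the auxiliary neurons and of the hidden layer HL2.
hl1 = [0.3, 0.7]
hl2 = [hl2_neuron(hl1, [0.5, -0.2], aux_output=1.0, hyper_p=0.8),
       hl2_neuron(hl1, [0.1, 0.4], aux_output=0.0, hyper_p=0.8)]
hl3 = [0.6, 0.2]
print(output_neuron_y2(hl3, [0.9, -0.3], hl2, [0.4, 0.4]))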
The structure of the neural network30shown inFIG.3is similar to that of the neural network20shown inFIG.2(to replace the neural network20with the neural network30), and hence the same numerals and notations denote the same components in the following description. The neural network30may include the hidden layers HL1, HL2, HL3, and a hidden layer HL4, which may respectively correspond to the physical layer, the data link layer, the network layer, and the transport layer of the open system interconnection model with the seven-layer structure. In an embodiment, the output value Ty1of the neuron yl may be related to (or equal to) or correspond to the delay/latency of the network performance key performance indicator(s). That is, the output value Ty1of the neuron y1may be a function of the delay/latency. Delay/latency may refer to the time it takes for a data packet to go from one place to another. The time that the data packet takes to get from the source to the destination is called end-to-end delay/latency. In an embodiment, delay/latency is mainly caused by the network layer, so the attention mechanism(s) may be introduced for the hidden layer HL3. For example, the neurons z31to z3dof the hidden layer HL3may be respectively connected to the neuron y1, and respectively correspond to hyper-parameters p31y1, p32y1, . . . , and p3dy1. In other words, the degree(s) of constraint/restriction/enhancement/strengthening of the feature(s) of the hidden layer HL3may be determined/decided using the hyper-parameters p31y1to p3dy1. As shown inFIG.3, the hidden layer HL4may include neurons z41, z42, . . . , and z4e, where e is a positive integer. Therefore, similar to the output value Ty2of the neuron y2, the output value Ty1of the neuron y1may satisfy Ty1=g(W41y1*Tz41+W42y1*Tz42+ . . . W4ey1*Tz4e+p31y1*Tz31+p32y1*Tz32+ . . . +p3dy1*Tz3d). In this case, the output values Tz31to Tz3dof the neurons z31to z3dmay serve as distance functions respectively. For example, the hidden layer HL3may include neurons k31, k32, . . . , and k3d, which are respectively connected to the neurons z31to z3d, and respectively correspond to hyper-parameters p31, p32, . . . , and p3d. In other words, the degree(s) to which the neurons z31to z3dconstrain/restrict/enhance/strengthen the feature(s) of the hidden layer HL3may be determined/decided using the hyper-parameters p31to p3d. Similar to the output value Tz21of the neuron z21, the output value Tz31of the neuron z31may, for example, satisfy Tz31=g(W2131*Tz21+W2231*Tz22+ . . . +W2c31*Tz2c+p31*Tk31). In this case, the output value Tk31of the neuron k31may serve as a distance function. In an embodiment, the output value Ty3of the neuron y3may be related to or correspond to the packet loss of the network performance key performance indicator(s). Packet loss occurs when data packet(s) travelling across a network fail to reach the destination. Inevitably, there may be data packet(s) accidentally lost in the network. In addition, as the number of data packets moving between in the network increases, the node(s) may not be able to process the data packets it/they face(s), causing some or all of the data packets to be rejected or discarded. In an embodiment, packet loss is mainly caused by the network layer, so the attention mechanism (s) may be introduced for the hidden layer HL3as well. 
That is, a neuron (also referred to as a third neuron) (for instance, the neuron z31) in the hidden layer HL3(which is the third layer) is directly connected to an output neuron (for instance, the neuron y3) and/or an auxiliary neuron (also referred to as a third auxiliary neuron) (for instance, the neuron k31). In an embodiment, the output value Ty2of the neuron y2may be related to or correspond to the jitter of the network performance key performance indicator(s). Jitter may be caused by congestion generated by multiple network connections (namely, multiple data packet trying to use the same network) all active simultaneously. In an embodiment, jitter is mainly caused by the data link layer, so the attention mechanism(s) may be introduced for the hidden layer HL2. That is, a neuron (also referred to as a second neuron) (for instance, the neuron z21) in the hidden layer HL2(which is the second layer) is directly connected to an output neuron (for instance, the neuron y2) and/or an auxiliary neuron (also referred to as a second auxiliary neuron) (for instance, the neuron k21). As set forth above, the hidden layers HL1to HL4of the neural network30may be respectively extract different features, and may correspond to different layers of the open system interconnection model with the seven-layer structure. Therefore, when the neural network30is utilized to predict the network performance key performance indicator (s), the influence of the feature (s) of a certain neural network layer (for instance, the hidden layer HL2or HL3) may be enhanced/strengthened by means of the attention mechanism(s) so as to improve the accuracy of inference. FIG.4is a flowchart of an optimization method40according to an embodiment of the present invention. The optimization method40may be compiled into a code, executed by a processing circuit (for instance, a processing circuit650shown inFIG.6) in the optimization method40, and stored in the storage circuit (for instance, a storage circuit670shown inFIG.6). The steps of the optimization method40shown inFIG.4are as follows: Step S400: Start. Step S402: Train a neural network. Step S404: Adjust at least one of a plurality of auxiliary output values of a plurality of auxiliary neurons of the neural network. Step S406: Perform inference using the neural network. Step S408: End. In the optimization method40, the neural network may be the neural network20or30. Alternatively, the neural network may include a plurality of neural network layers, for example, neural network layer(s) corresponding to the readout function Fp(such as the input layer NL, the hidden layers HL1to HL4, and/or the output layer TL), neural network layer (s) corresponding to the recurrent function RNNt, and/or neural network layer(s) corresponding to the update function Ut, but is not limited thereto. For example, Step S402may correspond to the training of the neural network. In Step S402, multiple (known) first data is put through the (untrained) neural network. The output value of the (untrained) neural network is compared with a (known) target of the (known) first data. Then the parameters (for instance, the parameters W1121to W1b21, W2131to W2c31, W31y2to W3dy2, and/or W41y1to W4ey1) may be re-evaluated/update and optimized to train the neural network, thereby improving the performance of the task(s) the neural network is learning. 
For example, using forward propagation, the output value(s) (for example, the output values Ty1to Ty3) of the neural network may be calculated from the (received) first data according/corresponding to different parameters. There is a total error between the output value of the neural network and the (known) target. All the parameters may be updated recursively/repeatedly using, for example, back propagation, such that the output value of the neural network gradually approaches the target to minimize the total error. The parameters may thus be optimized to complete the training. The first data may refer to data with a (known) target. For the neural network layer (s) (for instance, the neural network20or30) corresponding to the readout function Fp, the first data may be, for example, the path hidden state vector hpwhen the feature metrics ypis known. In Step S404, the auxiliary output value (s) (for instance, the output values Tk21to Tk2cand/or Tk31to Tk3d) of the auxiliary neuron(s) (for instance, the neurons k21to k2cand/or k31to k3d) of the (trained) neural network may be fine-tuned/adjusted. An auxiliary neuron may refer to a neuron directly connected to merely one neuron alone or directly connected to merely one neuron of one neural network layer alone. An auxiliary neuron may be utilized to introduce the attention mechanism (s) so as to constrain/restrict/enhance/strengthen the feature(s) of a certain neural network layer or a certain neuron. Step S406may correspond to the inference of the neural network, which applies/uses knowledge from the (trained) neural network to infer a result. In Step S406, when the (unknown) second data, which is to be interpreted/recognized, is input through the (trained) neural network, the (trained) neural network may perform inference on the second data according to the (optimized) parameters, to generate the output value (for instance, the feature metrics ypor the output values Ty1to Ty3). That is, the (trained) neural network outputs a prediction based on predictive accuracy of the (trained) neural network. The second data may refer to data to be interpreted/recognized. For the neural network layer(s) corresponding to the readout function Fp(for instance, the neural network20or30), the second data may be, for example, the path hidden state vector hpwhen the feature metrics ypis unknown. FIG.5is a flowchart of an optimization method50according to an embodiment of the present invention. The optimization method50may replace the optimization method40. The steps of the optimization method50shown inFIG.5are as follows: Step S500: Start. Step S502: Set up a plurality of auxiliary output values of a plurality of auxiliary neurons of at least one neural network. Step S504: Set up at least one hyper-parameter of the at least one neural network. Step S506: Train the at least one neural network. Step S508: Perform an algorithm (sectionally or partially), wherein the at least one neural network corresponds to the algorithm. Step S510: Determine the relation between an output value of the at least one neural network and network performance key performance indicator(s). Step S512: Adjust at least one of the plurality of auxiliary output values of the plurality of auxiliary neurons of the neural network according to a type of the output value. Step S514: Adjust the at least one hyper-parameter of the at least one neural network according to the type of the output value. 
Step S516: Complete execution of the algorithm, wherein the execution of the algorithm involves performing inference withe the neural network. Step S518: End. In Step S502, the auxiliary output value(s) (for instance, the output values Tk21to Tk2cand/or Tk31to Tk3d) of the auxiliary neuron(s) (for instance, the neurons k21to k2cand/or k31to k3d) of the neural network(s) (for instance, the neural network20or30) may be set manually in an embodiment of the present invention. For example, the auxiliary output value(s) is/are set to zero. Therefore, the auxiliary neuron(s) will not affect the training of the neural network(s) in Step S506. In Step S504, the hyper-parameter(s) (for instance, the hyper-parameters p21to p2c, p31to p3d, p21y2to p2cy2, and/or p31y1to p3dy1) of the neural network(s) (for instance, the neural network20or30) may be set manually in an embodiment of the present invention. Therefore, the training of the neural network(s) in Step S506will not affect the hyper-parameter(s) of the neural network(s). In other words, the hyper-parameter(s) are untrained or untrainable. In Step S508, merely part of (or a section of) the algorithm (for instance, line 1 to line 14 in the algorithm shown in Table 1) is executed/run in an embodiment of the present invention. In an embodiment, the at least one neural network may (respectively) be, for example, neural network(s) (for instance, the neural network20or30) corresponding to the readout function Fpof the algorithm, neural network(s) corresponding to the recurrent function RNNtof the algorithm, and/or neural network(s) corresponding to the update function Utof the algorithm, but is not limited thereto. In Step S510, the present invention determines/decides whether the output value(s) (for instance, the output values Ty1to Ty3) of the neural network (s) is/are associated with the network performance key performance indicator(s) (for instance, delay, jitter, or packet loss). In an embodiment, when the output value(s) of the neural network(s) is/are related to delay or packet loss, the auxiliary output value(s) (for instance, the output values Tk21to Tk2c) of a part of the auxiliary neuron(s) (for instance, the neurons k21to k2c) of the neural network(s) (for instance, the neural network30) may be maintained in step S512. For example, the auxiliary output value(s) is remained at zero in step S512. Then, the auxiliary output value(s) (for instance, the output values Tk31to Tk3d) of another part of the auxiliary neuron(s) (for instance, the neurons k31to k3d) of the neural network(s) (for instance, the neural network30) may be adjusted/changed in step S512, such that the hidden layer HL3or certain neuron(s) (for example, the neuron k31) of the hidden layer HL3has/have an effect on the neuron(s) (for example, the neurons y1and/or y3) of the output layer TL. In an embodiment, when the output value(s) of the neural network(s) is/are related to jitter, the auxiliary output value(s) (for instance, the output values Tk31to Tk3d) of a part of the auxiliary neuron(s) (for instance, the neurons k31to k3d) of the neural network(s) (for instance, the neural network30) may be maintained in step S512. For example, the auxiliary output value(s) is still set to zero in step S512. 
Then, the auxiliary output value (s) (for instance, the output values Tk21to Tk2c) of another part of the auxiliary neuron(s) (for instance, the neurons k21to k2c) of the neural network(s) (for instance, the neural network30) may be adjusted/changed in step S512, such that the hidden layer HL2or certain neuron(s) (for example, the neuron k21) of the hidden layer HL2has/have an effect on the neuron(s) (for example, the neurons y2) of the output layer TL. As set forth above, Step S512determines whether to and how to adjust/change the auxiliary output value(s) (for instance, the output values Tk21to Tk2cor Tk31to Tk3d) of the auxiliary neuron(s) (for instance, the neurons k21to k2cor k31to k3d) of the neural network(s) according to how or whether the output value(s) (for instance, the output values Ty1to Ty3) of the neural network(s) (for instance, the neural network30) is related to network performance key performance indicator(s). In Step S516, the execution of the algorithm is completed. For example, line 15 in the algorithm shown in Table 1 is run/performed. The aforementioned are exemplary embodiments of the present invention, and those skilled in the art may readily make alterations and modifications. For example, in an embodiment, the order of Steps S502and S504may be interchangeable, and the order of Steps S512and S514may be interchangeable. In an embodiment, Step S514may be optional/omitted. In an embodiment, Step S508may be performed during Step S516. FIG.6is a schematic diagram of an optimizer60according to an embodiment of the present invention. The optimizer60may replace the optimizer160. The optimizer60may include the processing circuit650and the storage circuit670. The processing circuit650may be a central processing unit (CPU), a microprocessor, or an application-specific integrated circuit (ASIC), but is not limited thereto. The storage circuit670may be a subscriber identity module (SIM), a read-only memory (ROM), a flash memory, a random access memory (RAM), disc read-only memory (CD-ROM/DVD-ROM/BD-ROM), a magnetic tape, a hard disk, an optical data storage device, a non-volatile storage device, a non-transitory computer-readable medium, but is not limited thereto. In summary, the present invention predicts network performance key performance indicator (s) using neural network (s) and attentional mechanism(s) to achieve routing optimization, network planning, and rapid failure recovery. Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims. | 33,177 |
11863232 | Unless otherwise indicated, the drawings provided herein are meant to illustrate features of embodiments of this disclosure. These features are believed to be applicable in a wide variety of systems including one or more embodiments of this disclosure. As such, the drawings are not meant to include all conventional features known by those of ordinary skill in the art to be required for the practice of the embodiments disclosed herein. DETAILED DESCRIPTION In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. “Optional” or “optionally” means that the subsequently described event or circumstance may or may not occur, and that the description includes instances where the event occurs and instances where it does not. Approximating language, as used herein throughout the specification and claims, may be applied to modify any quantitative representation that could permissibly vary without resulting in a change in the basic function to which it is related. Accordingly, a value modified by a term or terms, such as “about,” “approximately,” and “substantially,” are not to be limited to the precise value specified. In at least some instances, the approximating language may correspond to the precision of an instrument for measuring the value. Here and throughout the specification and claims, range limitations may be combined and/or interchanged; such ranges are identified and include all the sub-ranges contained therein unless context or language indicates otherwise. As used herein, the terms “processor” and “computer” and related terms, e.g., “processing device”, “computing device”, and “controller” are not limited to just those integrated circuits referred to in the art as a computer, but broadly refers to a microcontroller, a microcomputer, a programmable logic controller (PLC), an application specific integrated circuit (ASIC), and other programmable circuits, and these terms are used interchangeably herein. In the embodiments described herein, memory may include, but is not limited to, a computer-readable medium, such as a random access memory (RAM), and a computer-readable non-volatile medium, such as flash memory. Alternatively, a floppy disk, a compact disc-read only memory (CD-ROM), a magneto-optical disk (MOD), and/or a digital versatile disc (DVD) may also be used. Also, in the embodiments described herein, additional input channels may be, but are not limited to, computer peripherals associated with an operator interface such as a mouse and a keyboard. Alternatively, other computer peripherals may also be used that may include, for example, but not be limited to, a scanner. Furthermore, in the exemplary embodiment, additional output channels may include, but not be limited to, an operator interface monitor. Further, as used herein, the terms “software” and “firmware” are interchangeable, and include computer program storage in memory for execution by personal computers, workstations, clients, and servers. As used herein, the term “non-transitory computer-readable media” is intended to be representative of any tangible computer-based device implemented in any method or technology for short-term and long-term storage of information, such as, computer-readable instructions, data structures, program modules and sub-modules, or other data in any device. 
Therefore, the methods described herein may be encoded as executable instructions embodied in a tangible, non-transitory, computer readable medium, including, without limitation, a storage device and a memory device. Such instructions, when executed by a processor, cause the processor to perform at least a portion of the methods described herein. Moreover, as used herein, the term “non-transitory computer-readable media” includes all tangible, computer-readable media, including, without limitation, non-transitory computer storage devices, including, without limitation, volatile and nonvolatile media, and removable and non-removable media such as a firmware, physical and virtual storage, CD-ROMs, DVDs, and any other digital source such as a network or the Internet, as well as yet to be developed digital means, with the sole exception being a transitory, propagating signal. Furthermore, as used herein, the term “real-time” refers to at least one of the time of occurrence of the associated events, the time of measurement and collection of predetermined data, the time for a computing device (e.g., a processor) to process the data, and the time of a system response to the events and the environment. In the embodiments described herein, these activities and events occur substantially instantaneously. As used herein, unless specified to the contrary, “modem termination system,” or “MTS’” may refer to one or more of a cable modem termination system (CMTS), an optical network terminal (ONT), an optical line terminal (OLT), a network termination unit, a satellite termination unit, and/or other termination devices and systems. Similarly, “modem” may refer to one or more of a cable modem (CM), an optical network unit (ONU), a digital subscriber line (DSL) unit/modem, a satellite modem, etc. According to the embodiments described herein, multiband delta-sigma digitization systems and methods enable carrier aggregation of multi-RATs in next generation heterogeneous MFH networks. The present multiband delta-sigma ADC techniques allow different RAT technologies, such as, 4G-LTE, Wi-Fi, and 5G-NR signals, to be aggregated and delivered together with shared MFH networks. The present embodiments advantageously enable the aggregation of heterogeneous wireless services from multi-RATs in the frequency domain, and then the digitization of the aggregated services simultaneously in an “as is” manner, that is, without frequency conversion. These advantageous configurations are thus able to circumvent clock rate compatibility and time synchronization problems arising from multi-RAT coexistence, while also eliminating the need of DAC and RF devices at remote cell cites (e.g., RRHs), thereby further enabling a low-cost, all-analog implementation of RRHs where desired. The present embodiments further significantly reduce the cost and complexity of 5G small cells, while also facilitating large-scale dense deployment of heterogeneous 5G MFH networks. The present systems and methods further provide an innovative digitization interface advantageously replaces CPRI, thereby realizing a significantly higher spectral efficiency, while also offering improved compatibility for multi-RAT coexistence in 5G heterogeneous MFH networks. FIG.4Ais a schematic illustration of a digital MFH network400. Network400is similar to networks200,FIG.2A,300,FIG.3Ain a number of respects, but represents an improved digitization interface for implementing multiband delta-sigma digitization. 
MFH network400includes at least one BBU402in operable communication with an RRH404over a transport medium406(e.g., an optical fiber). BBU402includes a baseband processor408, an RF up-converter410, a delta-sigma ADC412, and an E/O interface414. In a similar manner, RRH404includes an RF front end416, a BPF418, and an O/E interface420. FIG.4Bis a schematic illustration of a digital MFH link422for network400,FIG.4A. In exemplary operation of link422, at respective transmitters424(e.g., of respective BBUs402), after baseband processing by baseband processor408, a plurality of various wireless services426(e.g., from different RATs) are up-converted by RF up-converter410to RF frequencies, and then aggregated in the frequency domain by an FDM428. The wireless signals of aggregated services426are then digitized by delta-sigma ADC412(e.g., a multiband delta-sigma ADC) to generate a digitized delta-sigma data stream430. In the exemplary embodiment, delta-sigma ADC412digitizes multiband signals/services426simultaneously. Unlike Nyquist ADC techniques used in CPRI (e.g., by Nyquist ADC310,FIG.3), which only digitize baseband signals, multiband delta-sigma ADC412is advantageously able to digitize wireless services426in an “as is” manner, without the need of frequency down-conversion. In the exemplary embodiment depicted inFIG.4B, transmitters424are depicted, for example, to illustrate the RF up-conversion of I and Q components of different wireless services. Further to this example, in this architecture, respective RF devices, including without limitation local oscillators432, mixers434, and delta-sigma ADCs412may all be advantageously centralized in BBU402, whereas only BPFs418and respective antennas of RF front ends416are needed in RRHs404. This simplified design enables a DAC-free and RF-free RRH, which may be further advantageously implemented by essentially all relevant analog devices. This configuration is particularly advantageous with respect to the 5G paradigm, given the wide and dense deployment of small cells. That is, an all-analog, DAC-free, RF-free architecture (i.e., according toFIGS.4A-B) will significantly reduce the cost and complexity of existing and future RRHs. In the embodiments depicted inFIGS.4A-B, the digital MFH architecture is depicted to implement FDM (e.g., FDM428) to multiplex wireless services (e.g., services426), and analog BPFs (e.g., BPFs418) to separate the multiplexed wireless services. This configuration thus avoids the compatibility problem of different baseband chip rates for various RATs, and also circumvents the synchronization problem experienced among the different services. Furthermore, the delta-sigma digitization techniques of the present embodiments provide a waveform-agnostic interface, which not only supports OFDM, but also works with other multicarrier waveforms, such as filter bank multicarrier (FBMC), universal filtered multicarrier (UFMC), etc. FIG.5is a graphical illustration depicting a conventional digitization process500. Sampling process500depicts the operation of a conventional Nyquist ADC used in CPRI for an analog signal502(shown in the time domain). In operation, process500bandwidth-limits analog signal502as a corresponding frequency domain signal504using a low-pass filter. That is, in the frequency domain, analog signal502is bandwidth limited to digital signal504. After digitization, quantization noise506is uncorrelated with the frequency of the input signal, and is spread evenly over the Nyquist zone fS/2. 
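As context for the bit counts discussed in connection with FIG. 5, a uniform Nyquist-rate quantizer gains roughly 6 dB of SNR per added quantization bit (the textbook 6.02·B + 1.76 dB result for a full-scale sine), which is why many quantization bits are required to keep the evenly spread quantization noise low. The short sketch below numerically checks that rule; it is an illustrative aside under assumed test-tone parameters, not part of the disclosure.

```python
import numpy as np

def quantized_snr_db(bits, n_samples=65536):
    """Quantize a full-scale sine with a uniform mid-rise quantizer and measure the SNR."""
    t = np.arange(n_samples)
    x = np.sin(2 * np.pi * 0.01234 * t)          # full-scale test tone in [-1, 1]
    levels = 2 ** bits
    step = 2.0 / levels
    xq = np.clip(np.round(x / step - 0.5) + 0.5,
                 -(levels / 2 - 0.5), levels / 2 - 0.5) * step
    noise = x - xq
    return 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))

for b in (1, 2, 8, 15):
    print(b, "bits:", round(quantized_snr_db(b), 1), "dB",
          "(rule of thumb:", round(6.02 * b + 1.76, 1), "dB)")
```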
In the time domain, process500performs Nyquist sampling508of analog signal502(i.e., at the Nyquist frequency), and quantizes each obtained sample by multiple quantization bits to produce multi-bit quantization signal510. Since the quantization noise of a Nyquist ADC is approximately Gaussian, as well as uniformly spread over the Nyquist zone, a very large number of quantization bits are needed to ensure the signal-to-noise ratio (SNR) (e.g., CNR or MER) of the resulting digitized signals510. Such a large number of required quantization bits leads to low spectral efficiency, as well as a data rate bottleneck of MFH networks. More specifically, as depicted inFIG.5, in conventional CPRI Nyquist ADC, each LTE carrier is digitized individually by a Nyquist ADC having, for example, a sampling rate of 30.72 MSa/s. For each sample, 15 quantization bits and one control bit (16 bits total) are used to represent the analog amplitude. The quantization noise (e.g., quantization noise506) of a Nyquist ADC is evenly distributed in the Nyquist zone in the frequency domain, which can be approximated by Gaussian white noise. To reduce the quantization noise and increase the SNR of the digitized signal, CPRI requires a large number of quantization bits, thereby resulting in the low spectral efficiency and significant bandwidth after digitization, which render CPRI the data rate bottleneck of digital MFH networks. In the case of line coding of 8b/10b, CPRI will consume up to 30.72 MSa/s × 16 bit/Sa × 10/8 × 2 = 1.23 Gb/s of MFH capacity for each 20 MHz LTE carrier. Within a 10-Gb/s PON link, for example, CPRI is only capable of accommodating eight LTE carriers. Additionally, CPRI is known to operate at a fixed chip rate of 3.84 MHz, and to only support a limited number of RATs, such as UMTS (CPRI v1 and v2), WiMAX (v3), LTE (v4), and GSM (v5). Given the different clock rates of various RATs, time synchronization remains a problem for multi-RAT coexistence. Moreover, the low spectral efficiency and inability to support Wi-Fi and 5G-NR render CPRI technically lacking and cost-prohibitive as a digitization interface for 5G heterogeneous MFH networks. These drawbacks are solved through implementation of the following innovative processes. FIGS.6A-Care graphical illustrations depicting a digitization process600. In an exemplary embodiment, process600demonstrates an operational principle of the multiband delta-sigma ADC techniques described herein, and may be executed by a processor (not shown inFIGS.6A-C) in one or more BBUs. More specifically,FIG.6Adepicts an oversampling subprocess602of process600,FIG.6Bdepicts a noise shaping subprocess604of process600, andFIG.6Cdepicts a filtering subprocess606of process600. In an exemplary embodiment of oversampling subprocess602, quantization noise608is spread over a relatively wide Nyquist zone (e.g., the oversampling rate (OSR) times the Nyquist sampling rate fS/2, or OSR*fS/2). In this example, because the number of quantization bits is limited to one or two, namely, one-bit quantization610(e.g., a binary, or on-off keying (OOK) signal) or two-bit quantization612(e.g., a PAM4 signal), quantization noise608is significant. In the exemplary embodiment depicted inFIGS.6A-C, three non-contiguous signal bands614of wireless services are aggregated together. In some embodiments, signal bands614come from the same RAT (e.g., intra-RAT carrier aggregation). In other embodiments, signal bands614come from different RATs (e.g., inter-RAT carrier aggregation).
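To make the multiband aggregation just described more concrete, the sketch below synthesizes three non-contiguous narrowband services and sums them in the frequency domain, analogous to signal bands614prior to digitization. The carrier frequencies, bandwidths, and 10 GSa/s rate are illustrative assumptions only.

```python
import numpy as np

fs = 10e9                              # assumed aggregate sampling rate, 10 GSa/s
t = np.arange(8192) / fs

def band(fc, bw, seed):
    """A narrowband 'wireless service': a comb of random-phase tones centered at fc."""
    rng = np.random.default_rng(seed)
    offsets = np.linspace(-bw / 2, bw / 2, 16)
    phases = rng.uniform(0, 2 * np.pi, 16)
    return sum(np.cos(2 * np.pi * (fc + df) * t + p)
               for df, p in zip(offsets, phases)) / 16

# Three non-contiguous bands standing in for, e.g., LTE-, Wi-Fi-, and 5G-NR-like services.
carriers = [0.9e9, 2.4e9, 3.5e9]
aggregate = band(0.9e9, 20e6, 1) + band(2.4e9, 40e6, 2) + band(3.5e9, 100e6, 3)

spectrum = np.abs(np.fft.rfft(aggregate)) ** 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for fc in carriers:
    sel = (freqs > fc - 100e6) & (freqs < fc + 100e6)
    share = np.sum(spectrum[sel]) / np.sum(spectrum)
    print(f"{fc / 1e9:.1f} GHz band holds {share:.0%} of the total power")
```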
Oversampling subprocess602thus results in an oversampled analog signal616. In an exemplary embodiment of noise shaping subprocess604, quantization noise608′ is pushed out of the signal bands614, thereby separating signals from noise in the frequency domain. In this example of subprocess604, the respective spectra of signal bands614are not modified during the operation of digitization process600. In an exemplary embodiment of filtering subprocess606, bandpass filters616are respectively applied to signal bands614to substantially eliminate the out-of-band (OOB) noise (e.g., quantization noise608′) and thereby enable retrieval of an output signal618closely approximating the original analog waveform. This advantageous technique thus represents a significant improvement over the conventional Nyquist ADC techniques described above with respect toFIG.5. More particularly, through implementation of a multiband delta-sigma ADC according to the operational principles of process600, the known shortcomings of CPRI may be successfully circumvented. For example, instead of the large number of quantization bits required by conventional CPRI techniques, the present delta-sigma ADC embodiments successfully "trade" quantization bits for the sampling rates described herein. The present techniques thus exploit a high sampling rate, but only require relatively few (i.e., one or two) quantization bits to be fully implemented. In the exemplary embodiments depicted inFIGS.6A-C, the OOB quantization noise (e.g., quantization noise608′) is added by the delta-sigma ADC (not shown inFIGS.6A-C), which converts the original signal waveform from analog to digital. At the RRH, the original analog waveform (e.g., output signal618) may then be easily retrieved once the quantization noise is eliminated by filtering (e.g., filtering subprocess606). Due to the noise shaping technique of noise shaping subprocess604, though, the retrieved analog signal may have an uneven noise floor. Accordingly, in an embodiment, the noise shaping technique may be configured to exploit a noise transfer function to control the frequency distribution of quantization noise608′, where each conjugate pair of zero points of the noise transfer function corresponds to a null point of noise. In the design of a multiband delta-sigma ADC, one or two pairs of zeros of the noise transfer function may be assigned to each signal band614, depending on the bandwidth. The operational principles of the present delta-sigma ADC may also be advantageously interpreted in the time domain. The present delta-sigma ADC techniques have, for example, a memory effect, whereas conventional Nyquist ADC techniques have no such memory effect. Conventional Nyquist ADC operations quantize each sample individually and independently, and the resultant output bits are only determined by the input amplitude for that particular sample, which has no dependence on previous samples. In contrast, the present delta-sigma ADC techniques are able to digitize samples consecutively whereby a particular output bit may depend not only on the particular input sample, but also on previous samples. For example, in the case of a sinusoidal analog input, a one-bit delta-sigma ADC according to the present embodiments outputs a high speed OOK signal with a density of "1" bits proportional to the amplitude of the analog input. Thus, when the input is close to its maximum value, the output will include almost all "1" bits.
However, when the input is close to its minimum value, the output will include all "0" bits. Similarly, for intermediate inputs, the output will be expected to have an equal density of "0" and "1" bits. FIGS.7A-Care graphical illustrations depicting respective applications700,702,704of digitization process600,FIGS.6A-C(e.g., after noise filtering subprocess604). More specifically, application700depicts a case of intra-RAT contiguous carrier aggregation, application702depicts a case of intra-RAT non-contiguous carrier aggregation, and application704depicts a case of heterogeneous inter-RAT carrier aggregation. In an exemplary embodiment of application700, a case of intra-RAT contiguous carrier aggregation may occur where wireless services706from the same RAT are bonded together contiguously in the frequency domain, and digitized simultaneously by a single-band delta-sigma ADC. Examples of this scenario include LTE contiguous carrier aggregation and Wi-Fi channel bonding. In an exemplary embodiment of application702, a case of intra-RAT non-contiguous carrier aggregation may occur where wireless services708from the same RAT are aggregated non-contiguously, and digitized together by a multiband delta-sigma ADC. Examples of this scenario include LTE non-contiguous carrier aggregation. In an exemplary embodiment of application704, a case of heterogeneous inter-RAT carrier aggregation may occur where respective wireless services710,712,714from different RATs (e.g., an LTE RAT for service710, a Wi-Fi RAT for service712, and a 5G-NR RAT for service714) are aggregated in a heterogeneous MFH network. As illustrated in this embodiment, a waveform/RAT-agnostic digitization interface is provided that eliminates the need for DAC and RF devices in RRHs, while also supporting multiband wireless services with different carrier frequencies and bandwidths from multiple RATs, without presenting the synchronization or compatibility problems experienced by conventional digitization interfaces. In the embodiments depicted inFIGS.7A-C, each frequency band is utilized by only one wireless service. Other application scenarios of frequency sharing, such as in the case where one frequency component is occupied by more than one wireless signal (e.g., frequency overlap among multiple RATs or multiple-input multiple-output (MIMO)), are contemplated, but not illustrated in this example. Various frequency ranges of different RATs, including overlaps, are illustrated below in Table 1.

TABLE 1
RAT            | Protocol      | Freq. bands (GHz)
Wi-Fi (802.11) | a             | 5.15-5.875
Wi-Fi (802.11) | g             | 2.4-2.497
Wi-Fi (802.11) | n             | 5.15-5.875, 2.4-2.497
Wi-Fi (802.11) | ac/ax         | 5.15-5.875
Wi-Fi (802.11) | af            | 0.054-0.698, 0.47-0.79
Wi-Fi (802.11) | ah            | <1
WiMAX          | 802.16e       | 2.1-5.9
LTE            | 3GPP (rel. 8) | 0.7-2.6
UWB            | 802.15.3a     | 3.168-10.56

As can be seen from the information provided in Table 1, problems occur as a result of frequency reuse. As described further below with respect toFIGS.8and9, respectively, the present systems and methods provide further solutions to overcome the problems of frequency reuse based on wavelength division multiplexing (WDM) and power division multiplexing (PDM) technologies. FIG.8is a schematic illustration of an MFH link800implementing WDM. MFH link800is similar in some structural respects to MFH link400,FIG.4, and includes a first group of transmitters802and a second group of transmitters804in operational communication with a first FDM806and a second FDM808, respectively. Additionally, first FDM806and second FDM808are also in operational communication with a first delta-sigma ADC810and a second delta-sigma ADC812, respectively.
In an exemplary embodiment of MFH link800, multiple wireless services at the same RF frequencies may be advantageously digitized and supported by different wavelengths using WDM technology. More particularly, digital bit streams from first and second delta-sigma ADCs810,812are carried by different wavelengths λ1and λ2, respectively, and then multiplexed by a WDM multiplexer814onto a single fiber transport medium816. In the example depicted inFIG.8, a first OOK1is carried on wavelength λ1, which supports three wireless services818at respective frequencies of fRF1, fRF2, and fRF3, and a second OOK2is carried on wavelength λ2, which supports three different wireless services820at respective frequencies of fRF4, fRF5, and fRF6. Further in this example, the frequencies fRF2=fRF5, however, the two wavelengths λ1and λ2are separated at first RRH822and second RRH824by a WDM de-multiplexer826. Thus, the separate services fRF2and fRF5may be filtered out by corresponding filters828(e.g., BPF2and BPF5, respectively, in this example). FIG.9is a schematic illustration of an MFH link900implementing PDM. MFH link900is similar to MFH link800,FIG.8, and includes a first group of transmitters902and a second group of transmitters904in operational communication with a first FDM906and a second FDM908, respectively. Additionally, first FDM906and second FDM908are also in operational communication with a first delta-sigma ADC910and a second delta-sigma ADC912, respectively. In an exemplary embodiment of MFH link900, multiple wireless services at the same RF frequencies may be advantageously supported by different power levels using PDM technology. More particularly, a first digitized bit stream914from first delta-sigma ADC910and a second digitized bit stream916from second delta-sigma ADC912have different amplitudes and may be superimposed in the power domain by a power combiner918. That is, in MFH link900, the two digitized bit streams914,916of differing amplitudes are multiplexed in the power division and synthesized to a single 4-level pulse amplitude modulation (PAM4) signal920. A signal920may then be delivered from first and second transmitter groups902,904(e.g., of respective BBUs) to corresponding first and second RRH groups922,924, respectively over a single fiber transport medium926. Similar to the embodiment depicted inFIG.8, in MFH link900, first digitized bit stream914represents an OOK1signal carrying wireless services928at respective frequencies of fRF1, fRF2, and fRF3, and second digitized bit stream916represents an OOK2signal carrying different wireless services930at respective frequencies of fRF4, fRF5, and fRF6. However, in this example, the amplitude of OOK1is twice that of OOK2, and thus the summation of the OOK1and OOK2signals synthesize PAM4 signal920(described further below with respect toFIG.10). Also similar to the example depicted inFIG.8, again frequencies fRF2=fRF5. In further operation of MFH link900, prior to reception by first and second RRH groups922,924, and further downstream from an O/E interface932(e.g., a photodetector), and OOK receiver934is configured to retrieve the OOK1signal, and a PAM4 receiver936is configured to retrieve the OOK2signal. In this example, the relatively larger offset imposed by the OOK1signal is removed before MFH link900is able to retrieve the relatively smaller amplitude of the OOK2signal. FIG.10is a graphical illustration depicting an operating principle1000of MFH link900,FIG.9. 
In an exemplary embodiment, operating principle1000depicts a synthesis effect of PDM using the present delta-sigma digitization techniques. More particularly, operating principle1000illustrates the synthesis of PAM4 signal920by the summation (e.g., by power combiner918) of the OOK1signal of first digitized bit stream914and the OOK2signal of the second digitized bit stream916. The amplitude ratio of OOK1signal and the OOK2signal is 2:1. According to the embodiments described herein, innovative multiband delta-sigma digitization are provided that are advantageously capable of supporting heterogeneous carrier aggregations in 5G heterogeneous mobile fronthaul networks, including without limitation, 4G-LTE, Wi-Fi, and 5G-NR. The advantageous systems and methods of the present embodiments are further capable of aggregating heterogeneous wireless services in the frequency domain, thereby avoiding the baseband clock rate compatibility and time-synchronization problems arising from multi-RAT coexistence. The present techniques are further capable of digitizing multiband wireless services simultaneously, in an “as is” manner, without requiring frequency conversion, and thereby eliminating the need for DAC and RF devices at RRHs. By providing a significantly lower-cost and efficient all-analog implementation capability for RRHs the present systems and methods are particularly useful to significantly reduce RRH cost and complexity, which will facilitate wide dense deployment of 5G small cells. The embodiments described herein further propose respective solutions based on wavelength/power division multiplexing (WDM/PDM) technologies to accommodate more than one wireless service at the same frequency. These additional embodiments therefore further enable frequency sharing among multiple RATs and MIMO deployments. Additional exemplary systems and methods for implementing delta-sigma digitization are described in co-pending U.S. patent application Ser. No. 15/847,417, filed Dec. 19, 2017, and to U.S. patent application Ser. No. 16/180,591, filed Nov. 5, 2018, the disclosures of both of which are incorporated by reference herein. Flexible Digitization Interface In accordance with one or more of the systems and methods described above, an innovative flexible digitization interface is provided. In an exemplary embodiment, the present digitization interface is based on delta-sigma ADC, which advantageously enables on-demand provisioning of SNR and data rates for MFH networks. By eliminating the conventional DAC at the RRH, the present systems and methods are capable of significantly reducing the cost and complexity of small cells. In particular embodiments, the present digitization interface enables an all-analog implementation of RRHs, and is capable of handling variable sampling rates, adjustable quantization bits, and/or flexible distribution of quantization noise. In some embodiments, the interface further utilizes noise shaping techniques to adjust the frequency distribution of quantization noise as needed or desired, thereby further enabling advantageous on-demand SNR and data rate provisioning. As described above, the rapid growth of mobile data, driven by the emerging video-intensive/bandwidth-hungry services, immersive applications, 5G-NR paradigm technologies (e.g., MIMO, carrier aggregation, etc.), creates significant challenges for existing optical and wireless access networks. 
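Returning briefly to the PDM operating principle of FIGS. 9-10: because the OOK1 amplitude is twice the OOK2 amplitude, the two digitized bit streams sum to a four-level PAM4 signal, and each stream can be recovered by thresholding (removing the larger OOK1 offset first). The sketch below illustrates that 2:1 summation and recovery with arbitrary example bits; the receiver logic shown is a simplified illustration, not the disclosed receiver design.

```python
import numpy as np

rng = np.random.default_rng(42)
bits1 = rng.integers(0, 2, 16)        # OOK1 bit stream (carried at twice the amplitude)
bits2 = rng.integers(0, 2, 16)        # OOK2 bit stream

ook1 = 2.0 * bits1                    # amplitude ratio 2:1, per FIG. 10
ook2 = 1.0 * bits2
pam4 = ook1 + ook2                    # power-domain summation -> levels {0, 1, 2, 3}

# Recovery: slice off the larger OOK1 contribution first; what remains is OOK2.
rx_bits1 = (pam4 >= 2).astype(int)                  # OOK receiver decision
rx_bits2 = (pam4 - 2 * rx_bits1 >= 1).astype(int)   # PAM4 receiver decision after offset removal

assert np.array_equal(rx_bits1, bits1) and np.array_equal(rx_bits2, bits2)
print("PAM4 levels:", pam4)
```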
The embodiments described above feature an innovative C-RAN architecture that enhances the capacity and coverage of cellular networks and consolidates baseband signal processing and management functions into a BBU pool. The exemplary architectures divide the RANs into two segments: (1) an MBH segment from the core network to the BBUs; and (2) a MFH segment from the BBUs to the RRHs. However, as also described above, conventional techniques such as CPRI, despite the overprovisioning SNR, suffer from low spectral efficiency and lack of scalability/flexibility, rendering such techniques a bottleneck of digital MFH networks for 5G services. Accordingly there is a need for an improved delta-sigma digitization interface to replace CPRI, which not only circumvents the CPRI data-rate bottleneck by improving the spectral efficiency, but also addresses the scalability and flexibility problems from CPRI by advantageously providing reconfigurability and flexibility in terms of sampling rate, quantization bit number, and quantization noise distribution. The present delta-sigma digitization interface thus provides for agile, on-demand SNR and data rate provisioning, while also allowing a significantly simplified RRH design that enables all-analog, DAC-free implementation. Such architectural simplifications significantly reduce the cost and complexity of 5G small cells for wide deployment. An exemplary architecture that may implement the present flexible digitization interface is described above with respect toFIG.4. Compared with the conventional digital MFH based on CPRI (e.g.,FIG.3), the Nyquist ADC in the BBU may be replaced by a delta-sigma ADC, and the Nyquist DAC in RRH may be replaced by a BPF. At the BBU, different mobile services are carried on IFs and multiplexed in the frequency domain. After delta-sigma ADC, the services may be digitized into bits and delivered to the RRH, for example, by an optical IM-DD link. At the RRH, a BPF may filter out the desired mobile service, eliminate the OOB quantization noise, and retrieve the analog waveform. This exemplary configuration, where the BPF implements DAC and frequency de-multiplexer functions, significantly reduces the system complexity of the RRH, enables an all-analog implementation thereof, capable of handling any sampling rate or quantization bit number without synchronization problems. Given the wide and dense deployment of small cells in 5G paradigm, this all-analog, DAC-free RRH design will significantly reduce the cost and complexity of small cells. A comparison ofFIG.5withFIGS.6A-C, above, illustrates the difference in operating principles between a Nyquist ADC and a delta-sigma ADC, respectively. As described above, in CPRI, each LTE carrier is digitized individually by a Nyquist ADC with a sampling rate of 30.72 MSa/s and 15 quantization bits. For each sample, 16 bits total (i.e., 15 quantization bits and one control bit) are used to transform the analog amplitude to digital bits. To accommodate various RATs, CPRI has a fixed basic frame rate 3.84 MHz, and can only work at a fixed sampling rate and fixed number of quantization bits. The quantization noise of a Nyquist ADC is evenly distributed in the frequency domain, and therefore CPRI requires a large number of quantization bits to reduce the quantization noise and maintain a high SNR for the digitized signal, thereby leading to the low spectral efficiency and high data bandwidth bottleneck problems. CPRI data rate options are shown in Table 2, below. 
With line coding of 8b/10b, CPRI consumes up to 30.72 MSa/s × 16 bit/Sa × 10/8 × 2 = 1.23 Gb/s MFH capacity for each 20 MHz LTE carrier (e.g., Option 2 in Table 2). Within a 10-Gb/s PON, only eight LTE carriers may be accommodated (e.g., Table 2, Option 7). LTE carrier aggregation was initially standardized by 3GPP release 10, which allowed 5 component carriers, and then expanded to allow 32 CCs in 3GPP release 13. This expanded carrier aggregation may consume up to 40 Gb/s fronthaul capacity if digitized by CPRI, which cannot be supported by existing optical/wireless access networks.

TABLE 2
Option | Line coding | LTE carrier # | Examples                        | Bit rate (Mb/s)
1      | 8b/10b      | 0.5           | Only I or Q                     | 491.52 × 10/8 = 614.4
2      | 8b/10b      | 1             | One 20-MHz LTE CC               | 491.52 × 10/8 × 2 = 1228.8
3      | 8b/10b      | 2             | 2 CA or 2 × 2 MIMO              | 491.52 × 10/8 × 4 = 2457.6
4      | 8b/10b      | 2.5           | Only I/Q, 5 CA                  | 491.52 × 10/8 × 5 = 3072
5      | 8b/10b      | 4             | 4 × 4 MIMO or 2 CA + 2 × 2 MIMO | 491.52 × 10/8 × 8 = 4915.2
6      | 8b/10b      | 5             | 5 CA                            | 491.52 × 10/8 × 10 = 6144
7      | 8b/10b      | 8             | 8 × 8 MIMO or 2 CA + 4 × 4 MIMO | 491.52 × 10/8 × 16 = 9830.4
7A     | 64b/66b     | 8             | 8 × 8 MIMO or 4 CA + 2 × 2 MIMO | 491.52 × 66/64 × 16 = 8110.08
8      | 64b/66b     | 10            | 5 CA + 2 × 2 MIMO               | 491.52 × 66/64 × 20 = 10137.6
9      | 64b/66b     | 12            | 3 CA + 4 × 4 MIMO               | 491.52 × 66/64 × 24 = 12165.12

FIGS.11A-Dare graphical illustrations depicting a digitization process1100. In an exemplary embodiment, process1100demonstrates an operational principle of alternative delta-sigma ADC techniques according to the present systems and methods. Similar to process600,FIGS.6A-C, process1100may also be executed by a processor in one or more BBUs. More specifically,FIG.11Adepicts a Nyquist sampling condition1102,FIG.11Bdepicts an oversampling subprocess1104of process1100,FIG.11Cdepicts a noise shaping subprocess1106of process1100, andFIG.11Ddepicts a filtering subprocess1108of process1100. Sampling condition1102, for example, represents a case where a limited number of quantization bits1110results in significant quantization noise1112for non-contiguous aggregated wireless service signal bands1114sampled at the Nyquist sampling rate fS/2. In this case, due to the limited number of quantization bits1110, significant quantization noise is present if the analog signal is sampled at its Nyquist rate. In contrast, in an exemplary embodiment of oversampling subprocess1104, oversampling extends the Nyquist zone, and quantization noise1116is spread over a relatively wider frequency range/wide Nyquist zone (e.g., the oversampling rate (OSR) times the Nyquist sampling rate fS/2, or OSR*fS/2). Similar to the embodiments described above, oversampling subprocess1104extends the Nyquist zone, spreads quantization noise1116over a wider frequency range, and thereby results in an oversampled analog signal1118where in-band SNR is improved. In an exemplary embodiment of noise shaping subprocess1106, quantization noise1116′ is pushed out of the signal bands1114′, thereby separating signals from noise in the frequency domain. In this example of subprocess1106, the respective spectra of signal bands1114′ are not modified during the operation of process1100. In an exemplary embodiment of filtering subprocess1108, a BPF1118is applied to signal bands1114′ to substantially eliminate the OOB noise, and also enable retrieval of an output signal1120closely approximating the original analog waveform. Process1100therefore advantageously circumvents the data rate bottleneck and flexibility issues of CPRI through the innovative flexible digitization interface described above, which is based on delta-sigma ADC.
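The CPRI line-rate arithmetic above, and the Table 2 options, follow directly from the per-carrier sample rate, the 16 bits per sample, the I/Q factor of two, and the line-coding overhead. A quick, purely illustrative numerical check:

```python
# CPRI fronthaul rate per 20 MHz LTE carrier:
# 30.72 MSa/s x 16 bit/Sa x (I and Q) x line-coding overhead
sample_rate_msps = 30.72
bits_per_sample = 16                       # 15 quantization bits + 1 control bit
iq_factor = 2

def cpri_rate_mbps(n_carriers, coding_overhead):
    return sample_rate_msps * bits_per_sample * iq_factor * coding_overhead * n_carriers

print(cpri_rate_mbps(1, 10 / 8))           # ~1228.8 Mb/s per carrier (Table 2, Option 2)
print(cpri_rate_mbps(8, 10 / 8) / 1000)    # ~9.83 Gb/s -> 8 carriers fill a 10 Gb/s link (Option 7)
print(cpri_rate_mbps(12, 66 / 64) / 1000)  # ~12.17 Gb/s (Table 2, Option 9)
print(cpri_rate_mbps(32, 10 / 8) / 1000)   # ~39.3 Gb/s for 32 aggregated carriers
```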
According to the techniques described herein, instead of digitizing each LTE carrier individually, the carriers may first be multiplexed in the frequency domain, and then digitized by a delta-sigma ADC. Unlike the Nyquist ADC, which uses many quantization bits, the present delta-sigma ADC techniques trade quantization bits for sampling rate, exploiting a high sampling rate, but only one or two quantization bits. According to the present delta-sigma ADC systems and methods, the signal waveforms are transformed from analog to digital by adding quantization noise without changing the spectrum of original analog signal. Therefore, to retrieve the analog waveform, the present delta-sigma digitization processing does not require a DAC, and may instead utilize a BPF to filter out the desired signal (e.g.,FIG.11D), which greatly simplifies the architectural design of the system. Once OOB noise is eliminated, the analog waveform is retrieved. Accordingly, a BPF (e.g., BPF1118,FIG.11D) may replace the Nyquist DAC (e.g., Nyquist DAC320,FIG.3A), and further perform frequency de-multiplexing functions in additions to the DAC functions, thereby also replacing a de-multiplexer (e.g., time domain de-multiplexer322,FIG.3A). In some cases, the retrieved analog signal may have an uneven noise floor from noise shaping. In some embodiments, the present delta-sigma ADC techniques may also operate in the time domain. One key difference between Nyquist and delta-sigma ADC, for example, is that Nyquist ADC has no memory effect, whereas delta-sigma ADC does have a memory effect. As described above, Nyquist ADC quantizes each sample individually and independently, i.e., current output bits are only determined by the current sample, but have no relevance to previous samples. Delta-sigma ADC, on the other hand, digitizes samples consecutively, i.e., the current output bit may depend on not only the current input sample, but also on previous samples. For example, with a sinusoidal analog input, a one-bit delta-sigma ADC outputs an OOK signal with a density of “1” bits proportional to the input analog amplitude. When the input is close to its maximum, the output contains almost all “1” bits; when the input is close to a minimum value, the output contains all “0” bits (e.g., bits1110,FIG.11C). For intermediate inputs, the output will have an equal density of “0” and “1” bits. The present embodiments thus concentrate a significant quantity of digital signal processing (DSP) capabilities into the BBU, and enable a DAC-free, all analog implementation of the RRHs, which not only reduces the cost and complexity of RRHs significantly, but also makes flexible digitization possible. With an analog RRH, the sampling rate, the number of quantization bits, and the frequency distribution of quantization noise may be flexibly reconfigured according to the required SNR and data rate without experiencing synchronization problems. As described further below with respect toFIGS.12-25B, a digitization process (i.e.,FIG.12) is provided for several exemplary implementation scenarios (i.e.,FIGS.13A-25B) that demonstrate the flexibility and reconfigurability of the present delta-sigma digitization interface for on-demand SNR provisioning. More specifically, five exemplary scenarios are described and illustrated below, which demonstrate the reconfigurability of the present delta-sigma digitization interface in terms of sampling rate, quantization bits, and noise distribution. 
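As an illustration of the memory effect and the "density of ones" behavior described above, the following is a minimal first-order, one-bit delta-sigma modulator. The disclosed designs are second- and fourth-order (FIGS. 13A and 16A), so this is a simplified sketch of the principle rather than the disclosed circuits; the oversampling ratio and test tone are arbitrary.

```python
import numpy as np

def first_order_delta_sigma(x):
    """One-bit delta-sigma modulation of samples x in [-1, 1]."""
    out = np.empty_like(x)
    integrator = 0.0
    for i, sample in enumerate(x):
        integrator += sample - (out[i - 1] if i else 0.0)  # feedback of the previous output bit
        out[i] = 1.0 if integrator >= 0 else -1.0          # one-bit quantizer (OOK levels)
    return out

osr = 64                                       # assumed oversampling ratio
t = np.arange(4096)
x = 0.8 * np.sin(2 * np.pi * t / (osr * 8))    # slow sine, heavily oversampled

bits = first_order_delta_sigma(x)
# The local density of "+1" bits tracks the instantaneous input amplitude:
window = 64
density = np.convolve((bits + 1) / 2, np.ones(window) / window, mode="same")
print("input near max -> ones density:", density[np.argmax(x)].round(2))
print("input near min -> ones density:", density[np.argmin(x)].round(2))
```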
The flexibility of the present digitization interface is described with respect to enhanced capabilities for on-demand provisioning of SNR, and also of data rate (e.g., for LTE). In some of the examples described below, the SNR is evaluated in terms of error vector magnitude (EVM). Exemplary 3GPP EVM requirements for different modulation formats are listed in Table 3, below.

TABLE 3
Modulation | QPSK | 16QAM | 64QAM | 256QAM | 1024QAM*
EVM (%)    | 17.5 | 12.5  | 8     | 3.5    | 1

With respect to Table 3, it is noted that the 3GPP specification only includes modulation formats up to 256QAM, and therefore does not include an EVM for the 1024QAM modulation format. Accordingly, an EVM value of 1% is included in Table 3 as a tentative criterion. The five separate exemplary implementation scenarios are illustrated in Table 4, below. These exemplary implementation scenarios demonstrate the flexibility of the present delta-sigma digitization techniques for on-demand provisioning of SNR and LTE data rates, in terms of ADC order, sampling rate, quantization bits, and noise distribution. For each Case listed in Table 4, different modulation formats are assigned to different carriers according to the respective SNR and EVM requirements specified by 3GPP for the particular modulation order. Accordingly, several different data rate options may be provisioned depending on the distribution of quantization noise.

TABLE 4
Case | Order | Bits | Digital waveform | MFH capacity (Gb/s) | LTE carriers | MFH capacity per LTE carrier (Mb/s) | SE improvement over CPRI | Modulation | Raw LTE data rate (Gb/s) | Digitization efficiency | Comments | FIGS.
I    | 2 | 1 | OOK  | 10 | 32 | 312.5  | 3.93 | 64QAM × 18, 16QAM × 14    | 2.952 | 0.30 | Low cost, low SNR, low data rate | 13A-15B
II   | 4 | 1 | OOK  | 10 | 32 | 312.5  | 3.93 | 256QAM × 16, 64QAM × 16   | 4.032 | 0.40 | High SE                          | 16A-18B
III  | 4 | 2 | PAM4 | 20 | 32 | 625    | 1.97 | 1024QAM × 10, 256QAM × 22 | 4.968 | 0.25 | High SNR, high data rate         | 19A-20B
IV   | 4 | 1 | OOK  | 10 | 37 | 270.27 | 4.55 | 256QAM × 12, 64QAM × 25   | 4.428 | 0.44 | Highest SE                       | 21A-23B
V    | 4 | 2 | PAM4 | 20 | 37 | 540.54 | 2.27 | 1024QAM × 8, 256QAM × 29  | 5.616 | 0.28 | High SNR, high data rate         | 24A-25B

In the first Case I example, which is based on a second-order one-bit delta-sigma ADC, a relatively simple, low-cost MFH solution is provided, which exhibits a limited SNR and low data rate, and which is capable of digitizing 32 carriers with low modulation formats (e.g., 64QAM and 16QAM). This exemplary embodiment is described further below with respect toFIGS.13-15. In the Case II example, the order of delta-sigma ADC is upgraded from two to four, which significantly reduces the quantization noise. Accordingly, higher SNR and modulation formats may be supported to provision a larger data rate. This exemplary embodiment is described further below with respect toFIGS.16-18. In the Case III example, the quantization bit number is increased from one to two, which further reduces the quantization noise. Accordingly, even higher SNR and modulation formats may be supported. This exemplary embodiment is described further below with respect toFIGS.19-20. As listed in Table 4, the Case IV (described further below with respect toFIGS.21-23) and Case V (described further below with respect toFIGS.21-23) examples may utilize a fourth-order ADC similarly to an ADC implemented with respect to the Case II and Case III examples, but with a different noise distribution. That is, the frequency distribution of quantization noise in the Case II and Case III example scenarios is tuned to maximize the SNR for 32 carriers.
In contrast, the Case IV and Case V example scenarios may implement the same fourth-order ADC, but tune the noise distribution to accommodate 5 more carriers, with a slight SNR penalty. For example, the Case II example scenario may support 16 carriers of 256QAM, and 16 carriers of 64QAM, whereas the Case IV example scenario may accommodate 5 additional carriers, but with only 12 of the Case IV carriers having sufficient SNR to support 256QAM (i.e., the remaining 25 Case IV carriers will only support 64QAM). Nevertheless, in the Case IV example scenario, the overall LTE data rate is improved by approximately 10%. For the exemplary embodiments described in Table 4, above, and also with respect to the following embodiments, the exemplary carriers are described as LTE carriers (e.g., Table 2), for purposes of illustration. Nevertheless, the person of ordinary skill in the art will understand that these examples are provided for ease of explanation, and are not intended to be limiting. Thus, as shown in Table 2, CPRI consumes 1228.8 Mb/s MFH capacity for each LTE carrier. In contrast, as shown in Table 4, according to the present delta-sigma digitization interface techniques, each LTE carrier consumes 270.27-625 Mb/s MFH capacity, and the resultant spectral efficiency (SE) is improved by 1.97-4.55 times in comparison with CPRI. FIG.12is a flow diagram for a digitization process1200. Similar to process1100,FIGS.11A-D, digitization process1200may also be executed by a processor of one or more BBUs for implementing the present flexible delta-sigma digitization interface, and with respect to carriers, such as LTE, for example, having particular data rate requirements. In an exemplary embodiment, the number of LTE carriers and their particular modulation formats may be selected according to the demanded LTE data rate. SNR requirements and the number of quantization bits may then be determined, while keeping the EVM performance of each LTE carrier compatible with 3GPP specifications. According to the determined noise distribution, zeros and poles of a noise transfer function (NTF) may then be calculated, and a Z-domain block diagram may be implemented for the design of the delta-sigma ADC, based on the NTF and quantization bit number. In an embodiment, digitization process1200may be implemented as a series of logical steps. The person of ordinary skill in the art, though, will understand that except where indicated to the contrary, one or more of the following steps may be performed in a different order and/or simultaneously. In the exemplary embodiment, process1200begins at step1202, in which the LTE data rate requirements are obtained. In step1204, process1200selects the number of LTE carriers according to the LTE data rate requirements obtained in step1202. In an exemplary embodiment of step1204, the particular LTE data rate requirements are previously known, i.e., stored in a memory of, or in operable communication with, the respective processor implementing process1200. In step1206, process1200selects the LTE modulation format(s) applicable to the obtained data rate and the selected carriers. In step1208, process1200determines the SNR requirements according to the relevant communication standard (3GPP, in this example), and in consideration of the LTE carriers and modulation formats selected.
In step1210, process1200may additionally obtain the particular EVM requirements of the relevant standard (e.g., 3GPP), such that the EVM performance of each LTE carrier may be maintained according to the particular standard. Step1210may, for example, be performed before, after, or simultaneously with step1208. After the SNR requirements are determined, process1200may implement separate sub-process branches. In an exemplary first branch/subprocess, in step1212, process1200determines the quantization bit number. In an exemplary embodiment, step1214may be performed in an exemplary second branch/subprocess. In step1214, process1200calculates the zeros and poles for the NTF. In step1216, process1200determines the NTF and distribution of quantization noise in the frequency domain corresponding to the zeros and poles selected in step1214. In step1218, process1200implements a logical Z-domain block filter configuration having an order corresponding to the number of zeros of the NTF. In step1220, process1200configures the delta-sigma ADC from the quantization bits determined in step1212and from the Z-domain block configuration implemented in step1216. FIG.13Ais a schematic illustration of a filter1300. In an embodiment, filter1300may represent a Z-block diagram and/or impulse response filter for a delta-sigma ADC according to the systems and methods described herein. In an exemplary embodiment, filter1300is a second-order delta-sigma ADC that may be implemented for the Case I implementation scenario illustrated in Table 4. More particularly, in the example depicted inFIG.13A, filter1300operates with respect to a second-order delta-sigma ADC working at 10 GSa/s with one quantization bit and, after digitization, filter1300operates to transform 32 LTE carriers, at an input1302, into a 10 Gb/s OOK signal, for example, at an output1304. In an exemplary embodiment, because the relevant NTF of the delta-sigma ADC has an order of two, filter1300includes two feedforward coefficients a, and two feedback loops1306each having a z−1delay cell. In the embodiment depicted inFIG.13A, filter1300includes a “DAC” recursion1308for implementing the delta-sigma memory effect of past outputs, described above, and a one-bit quantizer1310. FIG.13Bis a graphical illustration depicting an I-Q plot1312for the NTF for filter1300,FIG.13A. Plot1312illustrates the respective zeros and poles of the second-order NTF for filter1300, which has a conjugate pair of zeros, and a conjugate pair of poles. In the embodiment depicted inFIG.13B, the two conjugate zeros may be seen to degenerate to z=1, which corresponds to a DC frequency (i.e., f=0). FIG.13Cis a graphical illustration depicting a frequency response1314of the NTF for filter1300,FIG.13A. In an exemplary embodiment, frequency response1314represents a distribution of quantization noise in the frequency domain. In an embodiment of the delta-sigma ADC described herein, the distribution of quantization noise is uneven, and may therefore be determined by the zeros of the NTF (e.g.,FIG.13B). That is, each zero corresponds to a null point1316of quantization noise on frequency response1314. In this example, using a sampling rate of 10 GSa/s, the relevant Nyquist zone is shown to occur in the range of 0-5 GHz. The only zero may then be seen to be located along frequency response1314at f=0. Accordingly, the quantization noise is shown to be minimized at DC, and to rapidly increase with frequency along frequency response1314. 
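The shape of frequency response1314can be previewed directly from the zero locations: with both NTF zeros degenerate at z = 1 (DC), a representative noise transfer function is NTF(z) = (1 − z⁻¹)², whose magnitude has a null at DC and rises toward the band edge. The sketch below evaluates such an NTF at a few frequencies; the poles of filter1300are omitted, so this is an illustrative approximation rather than the disclosed transfer function.

```python
import numpy as np

fs = 10e9                                    # sampling rate of the Case I example, 10 GSa/s
f = np.linspace(0, fs / 2, 6)                # a few points across the 0-5 GHz Nyquist zone
z = np.exp(2j * np.pi * f / fs)

ntf = (1 - 1 / z) ** 2                       # both zeros at z = 1 (DC), poles ignored
for fi, mag in zip(f, 20 * np.log10(np.maximum(np.abs(ntf), 1e-12))):
    print(f"{fi / 1e9:4.1f} GHz : {mag:7.1f} dB")   # noise null at DC, rising with frequency
```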
Thus, according to the embodiments depicted inFIGS.13A-C, LTE carriers at lower frequencies may be seen to have smaller quantization noise and higher SNR, while also supporting higher modulation formats. In contrast, the higher frequency carriers are seen to have smaller SNR, and will only be capable of accommodating lower modulation formats. The exemplary second-order configuration may therefore be capable of accommodating 32 LTE carriers with differential SNR provisioning, where the first 18 carriers thereof will have sufficient SNR to accommodate a 64QAM modulation format, and the remaining 14 carriers will be capable of supporting a 16QAM modulation format. According to the exemplary embodiment ofFIGS.13A-C, after digitization, 32 LTE carriers may be transformed into a 10 Gb/s digital OOK signal. Accordingly, each individual LTE carrier will consume 312.5 Mb/s MFH capacity (i.e., 10 Gb/s/32 carriers=312.5 Mb/s per carrier). Compared with CPRI, where each LTE carrier consumes a MFH capacity of 1228.8 Mb/s, the spectral efficiency is improved by 3.93 times according to the present embodiments. FIG.14Ais a graphical illustration depicting a spectrum plot1400. In an exemplary embodiment, spectrum plot1400illustrates the frequency spectrum including the 32 LTE carriers digitized by a second-order one-bit delta-sigma ADC (e.g.,FIG.13A). In the example depicted inFIG.14A, the respective spectra of the 32 LTE carriers are contained within a carrier spectrum portion1402. In some embodiments, to further improve the SNR of LTE carriers at high frequencies, a pre-emphasis may be used to boost the power of high frequency carriers. FIG.14Bis a graphical illustration depicting a close-up view of carrier spectrum portion1402,FIG.14A. Within the close-up view, the first 18 of the 32 LTE carriers (i.e., at 64QAM) may be more readily distinguished from the remaining 14 LTE carriers (i.e., at 16QAM). FIG.15Ais a graphical illustration1500depicting the EVMs for the LTE component carriers depicted inFIG.14B. In the example depicted inFIG.15A, the first 18 component carriers (i.e., 64QAM) exhibit an EVM percentage below 8%, and the remaining 14 component carriers (i.e., 16QAM) exhibit an EVM percentage above 8% and below 12.5%. FIG.15Bis a graphical illustration of constellation plots1502,1504,1506,1508for best case and worst case scenarios for the carriers depicted inFIG.15A. More specifically, constellation plot1502demonstrates the best case scenario for the 64QAM component carriers, which occurs at the first component carrier (i.e., CC1) exhibiting the lowest EVM percentage of the group (e.g., illustration1500,FIG.15A). Constellation plot1504demonstrates the worst case scenario for the 64QAM component carriers, which occurs at the last of the 18 component carriers (i.e., CC18) exhibiting the highest EVM percentage of the group. Similarly, constellation plot1506demonstrates the best case scenario for the 16QAM component carriers, which occurs at the first component carrier of the 14-carrier group (i.e., CC19) exhibiting the lowest relative EVM percentage, and constellation plot1508demonstrates the worst case scenario for the 16QAM component carriers, which occurs at the last of the 16QAM component carriers (i.e., CC32) exhibiting the highest EVM percentage. 
From these constellations, it can be seen how the respective constellation points are much more closely clustered in the respective best case scenarios (i.e., constellation plots1502,1506), but appear more to exhibit more distortion in the respective worst case scenarios (i.e., constellation plots1504,1508). As can be further seen from the foregoing embodiments, innovative second-order delta-sigma ADCs may be advantageously realized using only one- or two-feedback loops, which provide simple and low-cost implementation incentives. Accordingly, the person of ordinary skill in the art will understand that systems and methods according to the Case I implementation example are particularly suitable for scenarios having relatively low SNR and low data rate requirements. FIG.16Ais a schematic illustration of a filter1600. In an embodiment, filter1600also represents a Z-block diagram and/or impulse response filter for a delta-sigma ADC according to the systems and methods described herein. In an exemplary embodiment, filter1600constitutes fourth-order delta-sigma ADC for the Case II and Case III implementation scenarios illustrated in Table 4, above. More particularly, in the example depicted inFIG.16A, filter1600operates similarly, in some respects, to filter1300,FIG.13A, but as a fourth-order system, in contrast to the second-order system ofFIG.13A. That is, between an input1602and an output1604, filter1600includes four feedforward coefficients a, and also four feedback loops1606each having a z−1delay cell, corresponding to the order of 4. In the embodiment depicted inFIG.16A, filter1600further includes two feedback coefficients g, a DAC recursion1608for implementing the delta-sigma memory effect, and a quantizer1610. In some embodiments, the same general filter architecture of filter1600may be implemented for both of the Case II and Case III example scenarios, except that, in Case II, quantizer1610is a one-bit quantizer that outputs only two levels, similar to quantizer1310,FIG.13A(i.e., Case I). In Case III though, quantizer1310′ is a two-bit quantizer that outputs four levels. FIG.16Bis a graphical illustration depicting an I-Q plot1612for an NTF for filter1600,FIG.16A. Plot1612illustrates the respective zeros and poles of the fourth-order NTF for filter1600, which, in contrast to plot1312,FIG.13B, has two conjugate pairs of zeros, and two conjugate pairs of poles. FIG.16Cis a graphical illustration depicting a frequency response1614of the NTF for filter1600,FIG.16A. In an exemplary embodiment, similar to frequency response1314,FIG.13C, frequency response1614represents a distribution of quantization noise in the frequency domain. Different though, from frequency response1314, frequency response1614includes two null points1616. FIG.17Ais a graphical illustration depicting a spectrum plot1700. In an exemplary embodiment, spectrum plot1700illustrates a frequency spectrum including the 32 LTE carriers digitized by a fourth-order one-bit delta-sigma ADC (e.g.,FIG.16A, Case II). In the example depicted inFIG.17A, the respective spectra of the 32 LTE carriers are contained within a carrier spectrum portion1702. FIG.17Bis a graphical illustration depicting a close-up view of carrier spectrum portion1702,FIG.17A. Similar to the Case I implementation scenario (e.g.,FIG.14B), within this close-up view, it can be seen that this design configuration will also support 32 LTE carriers. 
However, due to the increased order of delta-sigma ADC (i.e., from second to fourth), the in-band quantization noise in this Case II scenario is significantly reduced in comparison with Case I, and a higher SNR and modulation may therefore be provisioned. In this Case II example, all 32 LTE carriers may be seen to have sufficient SNR to support a 64QAM modulation format, and half of the carriers (i.e.,16) have sufficient SNR to support a 256QAM modulation format. The RF spectrum and EVMs of all 32 carriers in the Case II scenario are described further below with respect toFIG.18. FIG.18Ais a graphical illustration1800depicting the EVMs for the 32 Case II LTE component carriers depicted inFIG.17B. That is, illustration1800depicts the EVM percentages of 32 carriers digitized by a fourth-order one-bit delta-sigma ADC. In the example depicted inFIG.18A, 16 of the 32 component carriers (i.e., 256QAM) exhibit an EVM percentage below 3.5%, and the remaining 16 carriers (i.e., 64QAM) exhibit an EVM percentage above 3.5% and below 8%. FIG.18Bis a graphical illustration of constellation plots1802,1804,1806,1808for best case and worst case scenarios for the carriers depicted inFIG.18A. More specifically, constellation plot1802demonstrates the best case scenario for the 256QAM component carriers, which occurs at the twelfth component carrier (i.e., CC12) exhibiting the lowest EVM percentage of the modulation format group. Constellation plot1804thus demonstrates the worst case scenario for the 256QAM component carriers, which occurs at the seventeenth component carrier (i.e., CC17), in this example. Similarly, constellation plot1806demonstrates the best case scenario for the 64QAM component carriers, which occurs at the sixth component carrier (i.e., CC6), and constellation plot1808demonstrates the worst case scenario for the 64QAM component carriers, which occurs at the thirty-second component carrier (i.e., CC32). Fourth-order delta-sigma ADC techniques are more complex than second-order ADC techniques. However, fourth-order delta-sigma ADC comparatively enables significantly reduced in-band quantization noise and enhanced SNR. The present fourth-order delta-sigma ADC embodiments are of particular use for high SNR and data rate scenarios, and can potentially support more LTE carriers. In this exemplary implementation scenario, 32 LTE carriers are shown to be supported. As described further below with respect to the Case IV and V implementation scenarios, the present fourth-order delta-sigma ADC embodiments may also support up to 37 LTE carriers as well. FIG.19Ais a graphical illustration depicting a spectrum plot1900. In an exemplary embodiment, spectrum plot1900illustrates a frequency spectrum including the 32 LTE carriers digitized by a fourth-order two-bit delta-sigma ADC (e.g.,FIG.16A, Case III). In the example depicted inFIG.19A, the respective spectra of the 32 LTE carriers are contained within a carrier spectrum portion1902. As described above, the Case III implementation scenario uses the same fourth-order delta-sigma ADC as in Case II, except for a two-bit quantizer (e.g., quantizer1310′,FIG.13A) instead of a one-bit quantizer (e.g., quantizer1310). Accordingly, both of the Case II and Case III scenarios share the same zeroes and poles (e.g.,FIG.16B), as well as the same NTF frequency distribution (e.g.,FIG.16C). In Case III though, the two-bit quantizer is configured to output a PAM4 signal. 
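Because the per-carrier results in these cases are reported as EVM percentages against the Table 3 limits, it may help to recall how EVM is computed from a demodulated constellation: the RMS error between received and ideal symbols, normalized to the RMS magnitude of the ideal constellation (one common convention). A minimal sketch with synthetic 256QAM symbols and an assumed noise level:

```python
import numpy as np

rng = np.random.default_rng(7)

# Ideal 256QAM constellation (16 x 16 grid of I/Q levels).
levels = np.arange(-15, 16, 2)
const = np.array([i + 1j * q for i in levels for q in levels])

# Synthetic received symbols: ideal points plus additive noise (illustrative noise level).
tx = rng.choice(const, size=20000)
rx = tx + rng.normal(scale=0.3, size=tx.shape) + 1j * rng.normal(scale=0.3, size=tx.shape)

def evm_percent(rx, tx, reference):
    error_rms = np.sqrt(np.mean(np.abs(rx - tx) ** 2))
    ref_rms = np.sqrt(np.mean(np.abs(reference) ** 2))
    return 100 * error_rms / ref_rms

print(f"EVM = {evm_percent(rx, tx, const):.2f} %   (3GPP 256QAM limit: 3.5 %)")
```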
The presence of this additional quantization bit enables the present embodiments, according to this example, to realize further reductions in the quantization noise, while also achieving higher SNR provisioning. FIG.19Bis a graphical illustration depicting a close-up view of carrier spectrum portion1902,FIG.19A. Similar to the Case II implementation scenario (e.g.,FIG.17B), within this close-up view, it can be seen that the design configuration for this Case III scenario will also support 32 LTE carriers. In the Case III scenario though, due to the additional quantization bit, the total MFH capacity is increased to 20 Gb/s. Additionally, in this implementation scenario, the fronthaul capacity consumed by each LTE carrier is also doubled in comparison with the respective capacities of the Case I and Case II implementation scenarios. This Case III implementation scenario is therefore particularly useful in instances where it is desirable to trade spectral efficiency for SNR. Nevertheless, as can be seen in Table 4, the spectral efficiency under the Case III implementation scenario is still 1.97 times greater than CPRI. The RF spectrum and EVMs of the 32 Case III carriers are described further below with respect toFIG.20. FIG.20Ais a graphical illustration2000depicting the EVMs for the 32 Case III LTE component carriers depicted inFIG.19B. That is, illustration2000depicts the EVM percentages of 32 carriers digitized by a fourth-order two-bit delta-sigma ADC. In the example depicted inFIG.20A, all 32 component carriers have sufficient SNR to support 256QAM, i.e., all 32 carriers exhibit an EVM percentage below 3.5%. Furthermore, because 10 of the component carriers exhibit an EVM percentage below 1%, these 10 carriers will support 1024QAM. FIG.20Bis a graphical illustration of constellation plots2002,2004,2006,2008for best case and worst case scenarios for the carriers depicted inFIG.20A. More specifically, constellation plot2002demonstrates the best case scenario for the 1024QAM component carriers, which occurs at the twelfth component carrier (i.e., CC12). Constellation plot2004thus demonstrates the worst case scenario for the 1024QAM component carriers, which occurs at the twenty-eighth component carrier (i.e., CC28), in this example. Similarly, constellation plot2006demonstrates the best case scenario for the 256QAM component carriers, which occurs at the sixteenth component carrier (i.e., CC16), and constellation plot2008demonstrates the worst case scenario for the 256QAM component carriers, which occurs at the twenty-second component carrier (i.e., CC22). FIG.21Ais a graphical illustration depicting an I-Q plot2100for an NTF. In an exemplary embodiment, plot2100illustrates the respective zeros and poles of a fourth-order NTF, for the Case IV implementation scenario, of a filter, such as filter1600,FIG.16A. Indeed, for ease of illustration, the Case IV and Case V scenarios may utilize the same respective fourth-order delta-sigma ADC and Z-domain block diagram implemented with respect to the Case II and Case III scenarios (e.g., filter1600,FIG.16A). However, in the Case IV and Case V implementation scenarios, the coefficients on the feedback (i.e., g1, g2) and feedforward (i.e., a1, a2, a3, a4) paths may be differently tuned to accommodate additional LTE carriers. In some embodiments, the respective two conjugate pairs of zeros in the Case IV scenario may be more separated from each other than in the Case II scenario (e.g.,FIG.16B).
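The relationship between the placement of the two conjugate zero pairs and the shape of the NTF can be illustrated with a short sketch. The zero and pole locations below are illustrative assumptions only (the actual design coefficients are not reproduced here); the point is simply that each conjugate zero pair produces a null in the NTF magnitude, and pulling the pairs apart widens the spacing between the nulls, as discussed for the Case IV tuning.

```python
import numpy as np

def ntf_magnitude_db(zero_freqs, pole_radius=0.8, n_points=4096):
    """|NTF| in dB for an NTF with conjugate zero pairs on the unit circle.

    zero_freqs  : normalized frequencies (0 to 0.5) of the zero pairs (assumed values)
    pole_radius : poles assumed at the same angles, placed inside the unit circle
    """
    w = np.linspace(0.0, np.pi, n_points)
    z = np.exp(1j * w)
    zeros = np.concatenate([[np.exp(2j * np.pi * f), np.exp(-2j * np.pi * f)]
                            for f in zero_freqs])
    poles = pole_radius * zeros
    h = np.ones_like(z)
    for q in zeros:
        h = h * (z - q)       # each zero pair contributes one null
    for p in poles:
        h = h / (z - p)
    return w / (2.0 * np.pi), 20.0 * np.log10(np.abs(h) + 1e-12)

# Two nulls close together (Case II-like) versus pulled farther apart (Case IV-like):
freq_a, mag_a = ntf_magnitude_db([0.095, 0.105])
freq_b, mag_b = ntf_magnitude_db([0.085, 0.115])
```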
FIG.21Bis a graphical illustration depicting a frequency response2102of the NTF for I-Q plot2100,FIG.21A. In an exemplary embodiment, frequency response2102is similar to frequency response1614,FIG.16C, and includes two null points2104. In some embodiments, where the respective conjugate pairs of zeros exhibit more separation from each other, null points2104may similarly exhibit greater separation from one another in relation to the Case II scenario (e.g.,FIG.16C). The Case IV implementation scenario is therefore particularly advantageous where it is desirable to accommodate as many LTE carriers as possible with maximized spectral efficiency. In comparison with the Case II implementation scenario, the Case IV implementation scenario supports 37 LTE carriers with a slight SNR penalty. Additionally, the MFH capacity consumed per carrier in this Case IV scenario is reduced to 270.27 Mb/s, and the spectral efficiency is improved by 4.55 times in comparison with CPRI. FIG.22Ais a graphical illustration depicting a spectrum plot2200. In an exemplary embodiment, spectrum plot2200illustrates a frequency spectrum including 37 LTE carriers digitized by a fourth-order one-bit delta-sigma ADC (e.g.,FIG.16A, Case II). In the example depicted inFIG.22A, the respective spectra of the 37 LTE carriers are contained within a carrier spectrum portion2202. In this Case IV implementation scenario, the same one-bit quantizer (e.g., quantizer1310,FIG.13A) may be used in the fourth-order delta-sigma ADC as was used in the Case II scenario. FIG.22Bis a graphical illustration depicting a close-up view of carrier spectrum portion2202,FIG.22A. Different from the Case II implementation scenario (e.g.,FIG.17B), within this close-up view, it can be seen that the design configuration for this Case IV scenario will support 37 LTE carriers. The RF spectrum and EVMs of the 37 Case IV carriers are described further below with respect toFIG.23. FIG.23Ais a graphical illustration2300depicting the EVMs for the 37 Case IV LTE component carriers depicted inFIG.22B. That is, illustration2300depicts the EVM percentages of 37 carriers digitized by a fourth-order one-bit delta-sigma ADC. In the example depicted inFIG.23A, all 37 component carriers have sufficient SNR to support a 64QAM modulation format, i.e., all 37 carriers exhibit an EVM percentage below 8%. Additionally, 12 of the Case IV component carriers exhibit an EVM percentage below 3.5%, and may therefore support a 256QAM modulation format. FIG.23Bis a graphical illustration of constellation plots2302,2304,2306,2308for best case and worst case scenarios for the carriers depicted inFIG.23A. More specifically, constellation plot2302demonstrates the best case scenario for the 64QAM component carriers, which occurs at the tenth component carrier (i.e., CC10). Constellation plot2304demonstrates the worst case scenario for the 64QAM component carriers, which occurs at the thirty-seventh component carrier (i.e., CC37), in this example. Similarly, constellation plot2306demonstrates the best case scenario for the 256QAM component carriers, which occurs at the fourteenth component carrier (i.e., CC14), and constellation plot2308demonstrates the worst case scenario for the 256QAM component carriers, which occurs at the thirty-second component carrier (i.e., CC32). FIG.24Ais a graphical illustration depicting a spectrum plot2400.
In an exemplary embodiment, spectrum plot2400illustrates a frequency spectrum including 37 LTE carriers digitized by a fourth-order two-bit delta-sigma ADC (e.g.,FIG.16A, Case III). In the example depicted inFIG.24A, the respective spectra of the 37 LTE carriers are contained within a carrier spectrum portion2402. In this Case V implementation scenario, the same two-bit quantizer (e.g., quantizer1310′,FIG.13A) may be used in the fourth-order delta-sigma ADC as was used in the Case III scenario. In other words, the Case V implementation scenario is similar to the Case IV implementation scenario, except that in Case V, the one-bit Case IV quantizer is replaced with a two-bit quantizer. The zeros, poles, and frequency response of the corresponding NTF though, remain the same as with the Case IV scenario. Due to the increase from one quantization bit to two quantization bits, the quantization noise in the Case V scenario is reduced in comparison with the Case IV scenario. Furthermore, in the Case V scenario, all 37 LTE carriers have sufficient SNR to support a 256QAM modulation format, and 8 of the 37 carriers exhibit an EVM less than 1%, and may therefore support up to a 1024QAM modulation format. FIG.24Bis a graphical illustration depicting a close-up view of carrier spectrum portion2402,FIG.24A. Different from the Case III implementation scenario (e.g.,FIG.19B), within this close-up view, it can be seen that the design configuration for this Case V scenario will support 37 LTE carriers. The RF spectrum and EVMs of the 37 Case V carriers are described further below with respect toFIG.25. FIG.25Ais a graphical illustration2500depicting the EVMs for the 37 Case V LTE component carriers depicted inFIG.24B. That is, illustration2500depicts the EVM percentages of 37 carriers digitized by a fourth-order two-bit delta-sigma ADC. In the example depicted inFIG.25A, all 37 component carriers have sufficient SNR to support a 256QAM modulation format, i.e., all 37 carriers exhibit an EVM percentage below 3.5%. Additionally, eight of the Case V component carriers exhibit an EVM percentage below 1%, and may therefore support a 1024QAM modulation format. FIG.25Bis a graphical illustration of constellation plots2502,2504,2506,2508for best case and worst case scenarios for the carriers depicted inFIG.25A. More specifically, constellation plot2502demonstrates the best case scenario for the 256QAM component carriers, which occurs at the thirty-third component carrier (i.e., CC33). Constellation plot2504demonstrates the worst case scenario for the 256QAM component carriers, which occurs at the thirty-seventh component carrier (i.e., CC37), in this example. Similarly, constellation plot2506demonstrates the best case scenario for the 1024QAM component carriers, which occurs at the twelfth component carrier (i.e., CC12), and constellation plot2508demonstrates the worst case scenario for the 1024QAM component carriers, which occurs at the fifteenth component carrier (i.e., CC15). According to the systems and methods described herein, an innovative flexible digitization interface is provided that is based on delta-sigma ADC, and which enables on-demand SNR and LTE data rate provisioning in next generation MFH networks. The present embodiments advantageously eliminate the need for a conventional DAC at the RRH by providing a simplified architecture that allows replacement of the DAC by a BPF, which significantly reduces the cost and complexity of small cell deployment.
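The DAC-free retrieval step can be sketched in a few lines: because the noise shaping has already pushed the quantization noise out of the signal band, a band-pass filter applied directly to the delta-sigma bit stream recovers the analog waveform. The filter family, order, and band edges below are illustrative assumptions, not the patented filter design.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def retrieve_waveform(bits, fs, f_lo, f_hi, order=6):
    """Recover the analog waveform from a delta-sigma bit stream with a BPF.

    bits       : +/-1 (one-bit) or PAM4-level stream produced by the delta-sigma ADC
    fs         : sampling rate of the bit stream, in Hz
    f_lo, f_hi : passband edges around the signal band, in Hz (assumed values)
    """
    sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, np.asarray(bits, dtype=float))

# e.g., a carrier digitized near 1 GHz at a 5 GSa/s bit rate (illustrative numbers):
# analog = retrieve_waveform(ook_bits, fs=5e9, f_lo=0.9e9, f_hi=1.05e9)
```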
According to the techniques described herein, a simplified, DAC-free, all-analog implementation of RRHs may also be effectively provided. These all-analog RRH implementations offer additional flexibility to the digitization interface in terms of sampling rate, quantization bits, and quantization noise distribution. Through exploitation of the noise shaping techniques described herein, the present systems and methods are further capable of manipulating the frequency distribution of quantization noise as needed or desired. By allowing for a more flexible choice of sampling rate, quantization bits, and noise distribution, the present systems and methods significantly improve over conventional systems by enabling an efficient capability for on-demand SNR and data rate provisioning. In comparison with conventional CPRI, the present digitization interface embodiments are capable of improving the spectral efficiency by 1.97 to 4.55 times.
Real-Time Implementation
Proof of the concepts of the present systems and methods is demonstrated with respect to several real-time implementations. In one exemplary implementation, delta-sigma ADC was demonstrated using a real-time field-programmable gate array (FPGA). The FPGA-based system provides a 5-GSa/s delta-sigma ADC capable of digitizing 5G and LTE signals with bandwidths up to 252 MHz, and modulation formats up to 1024QAM having an EVM less than 1.25%. Additionally, the following embodiments further provide an innovative digitization approach that enables greater functional split options for next generation fronthaul interfaces (NGFIs). As described above, an improved delta-sigma ADC is provided that delivers bandwidth efficiency four times better than conventional CPRI techniques. For ease of explanation, some of the exemplary embodiments above are described with respect to a low-pass ADC that may be emulated by offline processing (e.g., a waveform generator). As described further above, in such cases, RF up-conversion would still be necessary at each RRU. In further exemplary embodiments, an NGFI according to the present systems and methods is configured to implement a real-time FPGA-based bandpass delta-sigma ADC. This real-time bandpass delta-sigma ADC both further improves the bandwidth efficiency, and also enables digitization of mobile signals "AS IS" at respective radio frequencies without requiring frequency conversion. This additional functionality further simplifies the RRU design in a significant manner by eliminating the conventional need for a local oscillator and RF mixer. These architectural improvements may be implemented singly, or in combination with one or more of the innovative configurations described above. The present systems and methods further enable an innovative functional split option for NGFIs. In an exemplary embodiment, a significant portion of RF functionality is consolidated in a distributed unit (DU), which enables a significantly simplified, and thus lower-cost, configuration at the RRU for small cell deployment. In an exemplary implementation, a high-performance FPGA (e.g., XILINX VC707) is employed as a bandpass delta-sigma ADC, using a 5 GSa/s sampling rate and having the widest reported signal bandwidth of 252 MHz. In such exemplary configurations, real-time digitization may be provided for both 5G-new radio (5GNR) and LTE signals, and for modulation formats up to 1024QAM having an EVM less than 1.25%.
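Throughout these examples, EVM is the figure of merit used to decide which modulation format each digitized carrier can support. A minimal sketch of that bookkeeping is shown below; the EVM limits follow the 3GPP values quoted in this description (12.5% for 16QAM, 8% for 64QAM, 3.5% for 256QAM), while the 1024QAM entry is a working assumption since, as noted herein, that limit is not yet standardized.

```python
import numpy as np

# EVM limits in percent; the 1024QAM value is the working assumption used here
EVM_LIMITS = [(12.5, "16QAM"), (8.0, "64QAM"), (3.5, "256QAM"), (2.0, "1024QAM")]

def evm_percent(ref_symbols, rx_symbols):
    """RMS error vector magnitude as a percentage of the RMS reference power."""
    err_power = np.mean(np.abs(np.asarray(rx_symbols) - np.asarray(ref_symbols)) ** 2)
    ref_power = np.mean(np.abs(np.asarray(ref_symbols)) ** 2)
    return 100.0 * np.sqrt(err_power / ref_power)

def highest_supported_format(evm):
    """Densest modulation format whose EVM limit the measured carrier still meets."""
    supported = [name for limit, name in EVM_LIMITS if evm <= limit]
    return supported[-1] if supported else "below 16QAM"
```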
FIG.26is a graphical illustration of a comparative summary plot2600of delta-sigma RF sampling rates taken against conversion bandwidths. More specifically, comparative summary plot2600graphically illustrates known, reported results2602of a plurality of delta-sigma ADC studies that have been performed by numerous universities, corporations, and research centers. It can be seen from reported results2602that all of these recent studies, with one exception (i.e., reported result2602(4), MIT), are confined to bandwidths of between zero and 50 MHz, irrespective of the noted sampling rate. Reported result2602(4) is the lone exception to this trend, indicating a 200 MHz bandwidth at a sampling rate between 2 and 3 GHz. However, reported result2602(4) does not rise above a 3 GHz sampling rate. In contrast, according to the present systems and methods, a set of present results2604, namely, that of the real-time implementations described herein, are illustrated to all locate at an approximately 5 GHz sampling rate, for bandwidths ranging from 100 MHz at the low end to 250 MHz at the high end. Accordingly, the present systems and methods are configured to operate at considerably higher sampling rates (e.g., 5 GSa/s) and bandwidths (e.g., 100-250 MHz and greater) than all of the known, reported delta-sigma ADC implementations.FIG.26is just one example of the superior qualities provided according to the present techniques. FIG.27is a schematic illustration of a network architecture2700. In an exemplary embodiment, architecture2700is similar in some respects to architecture100,FIG.1, and may represent a C-RAN architecture including an MBH network portion2702, a first MFH network portion2704, and a second MFH network portion2706. In the exemplary embodiment, architecture2700further includes a core network2708and a plurality of S-GW/MMEs2710in communication with a central unit (CU)2712in operable communication with MBH network portion2702. That is, MBH network portion2702constitutes the network segment from S-GW/MMEs2710and core network2708to CU2712. Architecture2700further includes one or more RRHs2714(also referred to as remote radio units, or RRUs), accessible by mobile and/or wireless users (not separately shown inFIG.27). A plurality of DUs2716are in operable communication with CU2712, and serve to facilitate communication between CU2712and one or more RRHs2714. In some embodiments, each DU2716may include one or more BBUs or a BBU pool (not separately shown). In at least one embodiment, CU2712may include additional BBUs or BBU pools. Accordingly, first MFH network portion2704constitutes the network segment from DUs2716to RRHs2714, and second MFH network portion2706constitutes the segment between CU2712and DUs2716. In exemplary operation of architecture2700, general functionality may be similar to that of architecture100,FIG.1. Different from architecture100though, in architecture2700, NGFI functions are split and/or shared between CU2712, DU2716, and RRHs2714. An NGFI functional layer diagram2718illustrates exemplary NGFI functional split options between several functional layers of CU2712, DU2716, and RRH2714, which options are schematically represented in diagram2718as numbered connections between the various layers. For example, CU2712includes a radio resource control (RRC) layer2720and a packet data convergence protocol (PDCP) layer2722, with a split option 1 indicated therebetween.
Additionally, in this example, DU2716includes one or more of a high radio link control (RLC) layer2724, a low RLC layer2726, a high media access control (MAC) layer2728, a low MAC layer2730, a high physical (PHY) layer2732, a low PHY layer2734, and a high RF layer2736. RRH2714includes a low RF layer2738in operable communication with high RF layer2736of DU2716, through split option 9. During the evolution to 5G, NGFI was proposed to split baseband functions into a central unit and a distributed unit, thereby dividing a C-RAN architecture (e.g., architecture2700) into three segments: (1) an MBH segment (e.g., MBH network portion2702) from service gateways (e.g., S-GW2710) to the BBU; (2) one fronthaul segment (e.g., second MFH network portion2706) from the CU (e.g., CU2712) to the DU (e.g., DU2716); and (3) another fronthaul segment (e.g., first MFH network portion2704) from the DU to the RRU (e.g., RRH2714). Some of the split options depicted in diagram2718became achievable according to this original NGFI proposal. However, using the architectural and functional improvements of the embodiments herein, the present bandpass delta-sigma ADC techniques newly render split option 9 (i.e., between high-RF layer2736and low-RF layer2738) achievable, due to the consolidation of a significant portion of the RF functions in the DU. This consolidation at the DU advantageously lowers both the cost and complexity of the RRU architecture and functionality, which thereby facilitates a substantially denser deployment of small cells. FIG.28is a schematic illustration of an RoF link2800. RoF link2800includes at least one DU2802in operable communication with at least one RRU2804over a transport medium2806(e.g., a single mode fiber, or SMF). In an exemplary embodiment, DU2802includes one or more fronthaul technologies, namely, an analog link portion2808, a first digital link portion2810, and a second digital link portion2812. Analog link portion2808, for example, serves to provide RoF-based analog MFH functionality, similar to MFH network200,FIG.2A. Similarly, first digital link portion2810serves to provide CPRI-based digital MFH functionality, similar to MFH network300,FIG.3A, and second digital link portion2812serves to provide bandpass delta-sigma ADC-based digital MFH functionality, similar to MFH network400,FIG.4A. More specifically, analog link portion2808includes, at DU2802, a baseband processing layer2814, an RF up-conversion layer2816, an FDM2818, and an E/O interface2820, and at RRU2804, a complementary RF front end2822, a first power amplifier2824, a first BPF2826, and an O/E interface2828. Similarly, first digital link portion2810includes, at DU2802, a baseband processing layer2830, a compression unit2832, a Nyquist ADC2834, a first TDM2836, and an E/O interface2838, and at RRU2804, a complementary RF front end2840, a second power amplifier2842, an RF up-converter2844, a decompression unit2846, a Nyquist DAC2848, a second TDM2850, and an O/E interface2852. Additionally, second digital link portion2812includes, at DU2802, a baseband processor2854, an RF up-converter2856, a delta-sigma ADC2858(e.g., a bandpass delta-sigma ADC), and an E/O interface2860, and at RRU2804, a complementary RF front end2862, a second BPF2864, a third power amplifier2866, and an O/E interface2868. According to the exemplary configuration of link2800, a simplified, inexpensive system is obtained, which provides high spectral efficiency.
Limitations due to nonlinear impairments are also advantageously addressed by the innovative configuration therein. For example, the CPRI-based digital MFH system of first digital link portion2810implements Nyquist ADC at DU2802, and DAC at RRU2804, to digitize/retrieve the analog waveforms of baseband signals. Nevertheless, RF up-conversion is still necessary at RRU2804. Because CPRI-based solutions only work at fixed chip rates (e.g., 3.84 MHz), synchronization presents a significant challenge for different radio access technologies such as LTE, 5G, Wi-Fi, etc. However, by implementing the innovative functional split provided by split option 9 (e.g.,FIG.27) at second digital link portion2812of the same DU (e.g., DU2802), the limitations of the CPRI-based digital MFH system may be avoided, or at least significantly mitigated. More particularly, at DU2802, mobile signals may be up-converted to radio frequencies and digitized "AS IS" by bandpass delta-sigma ADC2858. Additionally, at RRU2804, a conventional DAC is replaced by the lower-cost second BPF2864to retrieve the analog waveform. As described above, the retrieved analog waveform is then ready for wireless transmission without the need for RF up-conversion. The operational principles of bandpass delta-sigma ADC2858and second BPF2864are described above in greater detail with respect toFIGS.6and7, and the operational principles of Nyquist ADC2834are described in greater detail with respect toFIG.5. That is, in summary, delta-sigma ADC techniques are different from Nyquist ADC in that delta-sigma ADC trades quantization bit(s) for the sampling rate. For example, as described above, delta-sigma ADC enables use of a high sampling rate with only one quantization bit (or two bits). The input signal is first oversampled, followed by exploitation of a noise shaping technique to push the quantization noise out of the signal band, so that the signal and noise are separated in the frequency domain. Using these innovative techniques at delta-sigma ADC2858, the analog waveform may be easily retrieved at RRU2804by second BPF2864, which filters out the OOB noise. In the exemplary embodiment, in analog link portion2808, first power amplifier2824is deployed after first BPF2826to amplify the analog signals, whereas in second digital link portion2812, third power amplifier2866is deployed before second BPF2864to boost the OOK signal (or a PAM4 signal, in the case where two quantization bits are used). Link2800is thus able to advantageously avoid the amplifier nonlinearity limitations described above, and further provide for use of a significantly lower-cost, higher-efficiency, switch-mode power amplifier than would be realized according to conventional techniques. FIG.29is a schematic illustration of a system architecture2900. System architecture2900represents a real-time experimental implementation of the architectures and operating principles described herein. In the exemplary embodiment, system architecture2900represents a three-stage implementation, including an analog input source2902, an FPGA2904, and a fronthaul system2906. In the real-time implementation of architecture2900, FPGA2904receives the analog signal of analog input source2902using an ADC interface2908. In this implementation, ADC interface2908was a 4DSP FPGA Mezzanine Card (FMC170) inserted on the high-pin count (HPC) connector of a Xilinx VC707 FPGA of FPGA2904, thereby realizing a 5 GSa/s one-bit bandpass delta-sigma ADC of the input analog signal.
The person of ordinary skill in the art though, will understand that these specific hardware components are described for illustrative purposes, and are not intended to be limiting. Other structural components may be utilized without departing from the scope of the principles described herein. In exemplary operation, ADC interface2908samples the input analog signal from input source2902at 5 GSa/s, with 10 bits per sample. FPGA2904then performed one-bit delta-sigma modulation to transform 10 input bits, at an input buffer2910, into one output bit at an output buffer2912. FPGA2904was then configured to output the resulting one output bit through a multi-gigabit transceiver (MGT) port2914. In this exemplary configuration, due to the speed limitations of FPGA2904, the FPGA configuration was pipelined to de-serialize the input data into 32 pipelines, such that the operation speed of each pipeline was individually reduced to 156.25 MSa/s. Fronthaul system2906thus represents a real-time experimental setup implementation of a functional DU2916that includes FPGA2904, and is in operable communication with a functional RRU2918over a 30 km SMF transport medium2920. In operation, DU2916generated real-time LTE and 5G signals using a Rohde Schwarz (R&S) vector signal generator2922and an arbitrary waveform generator (AWG), respectively. FPGA2904then, for this implementation, digitized the mobile signal(s) into a 5-Gb/s OOK signal, which was then transmitted from DU2916to RRU2918over medium2920using an optical IM-DD system. The real-time LTE signals were received at RRU2918by a BPF2926, followed by an R&S signal analyzer2928. For the 5G signals, the received OOK signal was captured by a data storage oscilloscope (DSO)2930followed by real-time DSP2932. The respective OFDM parameters of the several 5G/LTE signals of this real-time implementation are shown below in Table 5.

TABLE 5

Case | R-T Signals | Sampling rate (MSa/s) | FFT size | Subcarrier spacing (kHz) | Data subcarriers | Carrier number | Actual BW (MHz) | Modulation (QAM)
A | 5G-NR | 122.88 | 4096 | 30 | 3300 | 1 | 99 | 1024
B | 5G-NR | 122.88 | 4096 | 30 | 3300 | 2 | 198 | 256 × 2
C | 4G-LTE | 30.72 | 2048 | 15 | 1200 | 10 | 180 | 256 × 6, 1024 × 4
D | 4G-LTE | 30.72 | 2048 | 15 | 1200 | 14 | 252 | 1024 × 2, 256 × 4, 64 × 8

For Table 5, the 30 kHz subcarrier spacing and 3300 active subcarrier values for the 5G-NR signals are according to 3GPP Release 14. The EVM results, as described above, may then be used to evaluate the performance of the digitization. As described further below with respect toFIGS.30-34, the EVM criteria used in accordance with 3GPP, similar to the embodiments described above, were: 12.5% EVM for the 16QAM modulation format, 8% EVM for the 64QAM modulation format, and 3.5% EVM for the 256QAM modulation format. Different from the embodiments above though, an EVM of 2% was used for the 1024QAM modulation format. Again, EVM for the 1024QAM modulation format is not yet specified by 3GPP. The person of ordinary skill in the art though, will understand that the operating principles of the present embodiments fully apply to either EVM value for the 1024QAM modulation format. FIG.30Ais a graphical illustration depicting a power spectral density plot3000for an exemplary carrier. More particularly, power spectral density plot3000represents experimental results for Case A, Table 5, above, in which a single 960 MHz 5G carrier, having 99 MHz bandwidth and using the 1024QAM modulation format, was digitized. Power spectral density plot3000illustrates the respective RF spectra of an input analog signal3002(e.g., 5G), an OOK signal3004after delta-sigma ADC, and a retrieved analog signal3006after BPF.
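The three traces in a plot such as power spectral density plot3000(input analog signal, delta-sigma bit stream, and BPF-retrieved waveform) can be estimated and overlaid with a short sketch. This is illustrative only; the Welch segment length and the 5 GSa/s rate are assumptions matching the example above, and the variable names are hypothetical rather than elements of the experimental setup.

```python
import numpy as np
from scipy.signal import welch

def psd_db(x, fs, nperseg=4096):
    """Welch power spectral density estimate, returned in dB (relative units)."""
    freqs, pxx = welch(np.asarray(x, dtype=float), fs=fs, nperseg=nperseg)
    return freqs, 10.0 * np.log10(pxx + 1e-18)

# Overlaying traces comparable to those of plot3000 (names are hypothetical):
# f, p_input     = psd_db(analog_input, fs=5e9)
# f, p_ook       = psd_db(ook_bits, fs=5e9)
# f, p_retrieved = psd_db(retrieved_waveform, fs=5e9)
```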
In this experimental implementation, a 5-Gb/s error-free transmission was achieved over 30 km fiber. It can also be seen from power spectral density plot3000that retrieved analog signal3006tracks fairly closely with input analog signal3002across the entire frequency range. FIG.30Bis a graphical illustration depicting a plot3008of EVM (in %) against received optical power for the carrier depicted inFIG.30A. More particularly, plot3008illustrates EVM as a function of received optical power, and with respect to several hardware simulations, such as floating point, fixed point, and pipeline, which further illustrate the advantageous step-by-step implementation of the present FPGA embodiments. As can be seen from plot3008, no EVM penalty is observed after 30-km fiber transmission. FIG.30Cis a graphical illustration depicting a post-transmission constellation plot3010for the carrier depicted inFIG.30A. More particularly, constellation plot3010further confirms the integrity of the carrier transmission over a 30 km SMF. FIG.31Ais a graphical illustration depicting a power spectral density plot3100for an exemplary pair of carriers. More particularly, power spectral density plot3100is similar to power spectral density plot3000,FIG.30A, but represents experimental results for Case B of Table 5, above, for a digitization implementation of two 5G carriers having a total 198 MHz bandwidth and using the 256QAM modulation format. Power spectral density plot3100illustrates the respective RF spectra of input analog signals3102(e.g., 5G), digitized OOK signal3104, and retrieved analog signals3106. In this experimental implementation, it can be seen that, after transmission over 30 km fiber, quantization noise increases due to the wider signal bandwidth, and the EVMs of both carriers increase to 2.71% (seeFIG.31B, below) in comparison with the single-carrier case depicted inFIG.30A. Nevertheless, as depicted inFIG.31A, the results still satisfy the 3.5% EVM requirements of 3GPP for the 256QAM modulation format. It can also be seen from power spectral density plot3100that retrieved analog signals3106track more closely with input analog signals3102at higher frequencies than at lower frequencies. FIG.31Bis a graphical illustration depicting a plot3108of EVM (in %) against received optical power for the pair of carriers depicted inFIG.31A. More particularly, plot3108illustrates EVM as a function of received optical power for both carriers, and with respect to the several hardware simulations depicted in plot3008,FIG.30B. In comparison with plot3008, plot3108demonstrates significant increases for each hardware simulation, in addition to the EVM increase described above. FIGS.32A-Bare graphical illustrations depicting post-transmission constellation plots3200,3202for the carriers depicted inFIG.31A. More particularly, constellation plot3200illustrates the post-transmission signal of the first carrier after 30 km, which has an EVM of 2.80%, and constellation plot3202illustrates the post-transmission signal of the second carrier after 30 km, which has an EVM of 2.83%. As can be seen from constellation plots3200,3202, the relative signal integrity between the two carriers is substantially similar, and within 3GPP requirements. FIG.33Ais a graphical illustration depicting a power spectral density plot3300for an exemplary set of carriers.
More particularly, power spectral density plot3300is similar to power spectral density plot3100,FIG.31A, but represents experimental results for Case C of Table 5, above, for a real-time digitization implementation of 10 LTE carriers having a total 180 MHz bandwidth and where 6 of the 10 LTE carriers used the 256QAM modulation format, and the remaining 4 LTE carriers used the 1024QAM modulation format. Power spectral density plot3300illustrates the respective RF spectra of input analog signals3302(e.g., LTE), digitized signal3304, and retrieved analog signals3306. It can be seen from power spectral density plot3300that retrieved analog signals3306track with input analog signals3302across most of the frequency range other than zero (i.e., DC). FIG.33Bis a graphical illustration depicting a plot3308of EVM (in %) according to the respective carrier number of the set of 10 carriers depicted inFIG.33A. From plot3308, it can be seen that the different modulations that are assigned to the respective carriers track fairly closely with one another across several different hardware simulations, but with the most significant deviation being between the direct 30 km transmission simulation and the FPGA hardware simulation. FIG.34Ais a graphical illustration depicting a power spectral density plot3400for an alternative set of carriers. More particularly, power spectral density plot3400is similar to power spectral density plot3300,FIG.33A, but represents experimental results for Case D of Table 5, above, which represents a real-time digitization implementation of 14 LTE carriers having a total 252 MHz bandwidth and where 8 of the 14 LTE carriers used the 64QAM modulation format, 4 of the 14 LTE carriers used the 256QAM modulation format, and the remaining 2 LTE carriers used the 1024QAM modulation format. Power spectral density plot3400illustrates the respective RF spectra of input analog signals3402(e.g., LTE), digitized signal3404, and retrieved analog signals3406. It can be seen from power spectral density plot3400that retrieved analog signals3406track more closely with input analog signals3402at lower frequencies than at higher frequencies, but still within desired results. FIG.34Bis a graphical illustration depicting a plot3408of EVM (in %) according to the respective carrier number of the set of 14 carriers depicted inFIG.34A. From plot3408, it can be seen that the different modulations that are assigned to the respective carriers track more closely with one another across the several different hardware simulations than in the 10-carrier case illustrated inFIG.33B. The largest deviation still occurs between the direct 30 km transmission simulation and the FPGA hardware simulation, but this deviation is smaller than in the 10-carrier case. According to the embodiments described herein, innovative real-time, FPGA-based, bandpass delta-sigma ADC is advantageously implemented at the 5 GSa/s sampling rate, and at signal bandwidths significantly beyond the widest previously reported (e.g.,FIG.26), for the digitization of 5G and LTE signals. According to the present embodiments, the bandwidth efficiency of the fronthaul segment to the RRH is significantly improved, while the cost and complexity of the RRUs are significantly reduced. The present techniques therefore enable a new and useful functional split option for NGFI that significantly improves over conventional proposals.
Pipeline Implementation
In accordance with one or more of the systems and methods described above, the following embodiments further describe pipeline implementations of the present delta-sigma ADC techniques. In an exemplary embodiment, delta-sigma ADC is implemented using a pipeline FPGA architecture and corresponding operational principles with respect to timing and machine status. More particularly, conventional delta-sigma ADC techniques rely on sequential operation, which requires not only a high sampling rate, but also a high processing speed, due to the current output bit depending on both the current input and previous outputs. These conventional constraints are resolved by the present embodiments, which include an innovative pipeline technique for segmenting a continuous input data stream, and then performing pipeline processing for each segment thereof, thereby successfully trading processing speed for hardware resources. According to these new systems and methods, the speed requirement of the FPGA may be significantly relaxed. As described further below, for a practical experimental implementation utilizing an input sampling rate of 5 GSa/s, a 32-pipeline architecture effectively realized a reduction of the FPGA operation speed to 156.25 MHz. Accordingly, the present inventors were able to successfully demonstrate efficient implementation of delta-sigma digitization of 5G and LTE signals without realizing a significant performance penalty from pipeline processing. Referring back toFIG.27, network architecture2700illustrates a C-RAN in a 5G mobile network paradigm that advantageously simplifies each BS-to-RRU connection, while also making hoteling, pooling, and clouding of baseband processing possible, as well as enabling the coordination among multiple cells. Network portions2702,2704,2706represent three distinct segments of the network, namely, the "backhaul," the "fronthaul," and the "midhaul," respectively. Backhaul2702functions to transmit baseband bits from S-GW2710to CU2712using, for example, WDM coherent optical links; midhaul2706connects CU2712with DUs2716using digital fiber links based on IM/DD; and fronthaul2704delivers mobile signals from DU2716to RRUs2714in either analog or digital waveforms. As described above, techniques have been proposed to increase spectral efficiency and reduce latency for fronthaul2704, such as, for example, analog fronthaul based on RoF technology (e.g., analog link portion2808,FIG.28) and digital fronthaul based on CPRI (e.g., first digital link portion2810,FIG.28). More particularly, in analog link portion2808, after baseband processing in DU2802, mobile signals are synthesized and up-converted to radio frequencies, and then transmitted as analog waveforms from DU2802to RRU2804. For multiple bands of mobile services, the mobile signals are aggregated in the frequency domain before analog transmission. At RRU2804, different mobile signals may be first separated by BPFs, and then amplified and fed to an antenna for wireless emission. In some embodiments, high-RF layer devices (e.g., local oscillator, mixer, etc.) may be consolidated in DU2802, whereas low-RF layer functions (e.g., filtering, amplification, etc.) may be distributed in RRUs2804. However, the analog fronthaul of analog link portion2808will still experience nonlinear impairments due to the continuous envelope and high PAPR of mobile signals.
Conventional digital fronthaul techniques have attempted to avoid these analog impairments by employing a digital fronthaul interface based on CPRI (e.g., first digital link portion2810). In the conventional interface, at DU2802, a Nyquist ADC (e.g., ADC2834) digitizes mobile signals into bits, which are then transported to RRUs2804over digital IM/DD fiber links. Since each signal is digitized in the baseband, its I and Q components are digitized separately and multiplexed in the time domain. At RRU2804, after time division de-multiplexing (e.g., by demultiplexer2850), a Nyquist DAC (e.g., DAC2848) is used to recover the analog waveforms of I/Q components, which are then up-converted to radio frequencies by an RF local oscillator and mixer. CPRI-based digital fronthaul techniques are therefore more resilient against nonlinear impairments, as well as capable of employment within existing 2.5/10G PONs. However, conventional CPRI-based digital fronthaul interfaces require a Nyquist DAC and all RF layer functions in each RRU, which increases the complexity and cost of cell sites. Additionally, as described above, CPRI is constrained by its low spectral efficiency, requires significantly high data rates after digitization, and only operates at a fixed chip rate (e.g., 3.84 MHz) capable of accommodating only a few RATs, such as UMTS (CPRI version 1 and 2), WiMAX (v3), LTE (v4), and GSM (v5). Moreover, because mobile signals are multiplexed using TDM technology, time synchronization is a particular problem for the CPRI-based digital fronthaul, which is not able to effectively coordinate the coexistence of these legacy RATs with the new and upcoming 5G services. The low spectral efficiency and lack of compatibility of CPRI renders it technically infeasible and cost prohibitive to implement CPRI for the NGFI. Some conventional proposals suggest these CPRI constraints may be circumvented using IQ compression and nonlinear digitization; however, all CPRI-based solutions only deal with baseband signals, which always require DAC and RF up-conversion at RRUs. Referring back toFIG.28, a new digitization interface (e.g., second digital link portion2812) avoids the constraints of the CPRI-based solution by implementing a delta-sigma ADC that is capable of trading quantization bits for sampling rate. That is, the delta-sigma ADC uses a sampling rate much higher than the Nyquist rate, but only one quantization bit. Therefore, unlike the Nyquist ADC in CPRI, which only handles baseband signals, the delta-sigma ADC may effectively function in lowpass or bandpass mode, and digitize mobile signals at baseband, or "AS IS" at RF without frequency conversion. Using bandpass delta-sigma ADC (e.g., ADC2858), mobile signals may be up-converted to RFs and multiplexed in the frequency domain at DU2802, and then digitized "AS IS" before delivery to RRU2804, where, instead of a conventional DAC, a simple, low-cost filter (e.g., BPF2864) may be used to retrieve the analog waveform, which is thus already at RF for wireless transmission. Accordingly, when compared with CPRI, the innovative delta-sigma digitization techniques of the present embodiments both improve the spectral efficiency and simplify the RRU by consolidating high-RF layer functions in the DU, thereby advantageously leaving only low-RF layer functions in the RRU. Through these advantageous systems and methods, a new NGFI functional split option (e.g., split option 9,FIG.27) may be achieved between the high-RF and low-RF layers.
Referring back toFIG.27, CPRI is capable of adopting only split option 8, by leaving DAC and all RF layer functions in each RRU. According to the present embodiments, on the other hand, the RoF-based analog fronthaul (e.g., analog link portion2808) and delta-sigma ADC-based digital fronthaul (e.g., second digital link portion2812) may adopt split option 9 to simplify the RRU by centralizing RF up-conversion in the DU and replacing the conventional DAC in the RRU by a low-cost BPF. This advantageous architecture thereby enables a DAC-free cell site, having simplified RF, which facilitates new 5G small cell deployment. Additionally, through frequency division multiplexing techniques, the delta-sigma digitization interface is able to heterogeneously aggregate multiband mobile services from different RATs, which thereby circumvents the clock rate compatibility issues and time synchronization problems of CPRI, as described above with respect toFIG.5, and the corresponding Nyquist ADC operational principles. Nevertheless, the present embodiments are capable of integration and coexistence with these legacy techniques as such are presently employed, or as may be modified as described further below with respect toFIG.35. FIG.35is a schematic illustration of a parallel quantization ADC architecture3500. In an exemplary embodiment, architecture3500is implemented as a modified Nyquist ADC. That is, in exemplary operation, architecture3500receives an analog signal3502at a sampling unit3504, which samples (at the Nyquist sampling rate, in this example) analog signal3502into an input sample stream3506. Input sample stream3506is received by de-serialization unit3508, which is configured to de-serialize input sample stream3506into first and second parallel data streams3510(1) and3510(2), respectively. First and second parallel data streams3510(1),3510(2) are then both quantized in parallel by their own respective multi-bit quantization unit3512, and the results therefrom are serialized, by a serialization unit3514, into a single output bit stream3516. In an exemplary embodiment, the de-serialization/serialization operations of units3508/3514are configured to implement time interleaving. For example, in the case where input sample stream3506is a 5 GSa/s sample stream, de-serialization unit3508may separate the 5 GSa/s stream into two parallel 2.5 GSa/s data streams (e.g., first and second parallel data streams3510(1),3510(2)), which may place even samples in one stream and odd samples in the other. Serialization unit3514may then, after quantization by quantization units3512, interleave the two parallel quantized streams in the time domain. Architecture3500is therefore similar, in some respects, to the architecture described above with respect to digitization process500,FIG.5, except that architecture3500additionally de-serializes the Nyquist sampled signal prior to quantization, and then quantizes each de-serialized stream in parallel. This modified de-serialization, parallel quantization, serialization approach may be effectively implemented with a CPRI-based digital interface using a Nyquist ADC because the Nyquist ADC digitizes each sample individually, and thus the quantization bits are determined only by amplitude, with no dependence on previous samples. FIG.36is a graphical illustration depicting an operating principle of a delta-sigma digitization process3600. As described above, the present delta-sigma ADC techniques may be implemented for either lowpass or bandpass configurations.
Accordingly, digitization process3600is similar to digitization process600,FIG.6, except that digitization process600illustrates an example of lowpass ADC, whereas digitization process3600illustrates an example of bandpass ADC. In exemplary operation of process3600, the analog input signal (not shown inFIG.36) is digitized, by an oversampling subprocess3602, "AS IS" without frequency conversion, into a sampled data stream3604. In an embodiment, oversampling subprocess3602uses a high sampling rate to extend the Nyquist zone about a signal band3606, and spread quantization noise3608over a wide frequency range. A noise shaping subprocess3610then pushes quantization noise3608′ out of signal band3606, thereby separating the signal from noise in the frequency domain. In an exemplary embodiment, quantization noise3608from a quantization unit3612(1-bit, in this example) is OoB, and process3600implements bandpass delta-sigma ADC to transform the signal waveform from analog to digital by adding OoB quantization noise3608, but leaving the original spectrum of signal band3606intact. Accordingly, in a filtering subprocess3614, a BPF3616is configured to retrieve an output analog waveform3618by filtering out OoB quantization noise3608′ without any need for conventional DAC. Thus, BPF3616effectively provides the functionality of both the conventional DAC and frequency de-multiplexers for multiband mobile signals. In some embodiments, retrieved analog signal3618may have an uneven noise floor from the noise shaping technique of noise shaping subprocess3610. FIG.37is a schematic illustration of a delta-sigma ADC feedback architecture3700. Architecture3700is similar to architecture3500,FIG.35, except that, whereas architecture3500operates in parallel, architecture3700operates sequentially. That is, although conventional Nyquist ADC may be modified into the parallel serialization of architecture3500, delta-sigma ADC operates in a sequential manner. In exemplary operation, architecture3700receives an analog signal3702at an oversampling unit3704, which oversamples (at OSR*fS/2, in this example) analog signal3702into an input sample stream3706. Input sample stream3706is received by a noise shaping filter3708, and then a quantizer3710(1-bit, in this example), to produce an output bit stream3712. In an exemplary embodiment, architecture3700further includes a feedback loop3714from output bit stream3712to noise shaping filter3708. Feedback loop3714thus enables architecture3700to configure the output bits of output bit stream3712to not only depend on a current input sample, but also on one or more previous outputs. Because this dependence on consecutive output bits renders de-serialization and parallel processing of the input sample stream difficult, in the exemplary embodiment, architecture3700is configured to implement delta-sigma ADC at a high sampling rate and high processing speed, to enable the associated FPGA (not shown inFIG.37) to adequately follow the feed-in speed of input samples. However, typical conventional FPGAs are known to operate at only hundreds of MHz, whereas GHz sampling rates and greater are required for delta-sigma digitization of LTE/5G signals. Therefore, a functional gap exists between the minimum sampling rates of the present delta-sigma ADC techniques and the considerably slower operational speeds of conventional FPGAs. This gap is bridged according to the innovative pipeline systems and methods described below.
FIG.38is a schematic illustration of a pipeline architecture3800for a delta-sigma digitization process. Pipeline architecture3800advantageously functions to relax the FPGA speed requirements described immediately above, but without introducing a significant performance penalty thereby. Accordingly, pipeline architecture3800represents a structural alternative to FPGA2904of system architecture2900,FIG.29. In an exemplary embodiment, pipeline architecture3800receives an analog input source3802at an ADC interface3804configured to realize a 5 GSa/s, 10-bit delta-sigma ADC. In exemplary operation, ADC interface3804samples the signal of input analog source3802at 5 GSa/s, with 10 bits per sample. A segmentation unit3806segments the continuous stream of input samples into 32 blocks (i.e., pipelines) at an input buffer3808, which are sequentially fed to 32 respective input first-in-first-out buffers (FIFOs)3810, one for each pipeline. In an exemplary embodiment, each input FIFO3810is a 10-bit-wide buffer; that is, ADC interface3804provides 10 quantization bits for each sample in this example. Accordingly, in this embodiment, each input FIFO3810stores W samples, and thus has a size of at least 10 W bits. In each pipeline, once the respective input FIFO3810is filled, the respective data therefrom is fed to a delta-sigma modulator3812, which performs delta-sigma digitization to transform the respective 10 input bits (in this example) to a single output bit. The delta-sigma digitization by delta-sigma modulator3812is performed in parallel with the other pipelines. In an embodiment, delta-sigma modulator3812may constitute 32 respective individual delta-sigma modulation units. The respective output bits from the pipelines of modulator3812may then be stored in a respective output FIFO3814of an output buffer3816. The size of each output FIFO3814is therefore only 1/10 the size of each input FIFO3810. In a practical implementation of pipeline architecture3800for a conventional FPGA, ΔW more samples may be allocated both to input FIFOs3810and output FIFOs3814, such that the respective sizes thereof become 10*(W+ΔW) bits and W+ΔW bits. In other words, the output bits after digitization are stored in 32 separate output FIFOs3814(1-32), and then combined, by a cascading unit3818, into a single stream of output bits at an MGT port2914(5 Gb/s MGT-SMA, in this example). Enabled with the pipeline design of pipeline architecture3800, the operation speed of each pipeline is advantageously reduced to 1/32*5 GSa/s=156.25 MSa/s. Accordingly, the FPGA clock rate may be effectively relaxed to 156.25 MHz without significant performance penalty. It may be noted that the segmentation operation of segmentation unit3806in pipeline architecture3800differs from the de-serialization operation of de-serialization unit3508of parallel quantization ADC architecture3500,FIG.35, in that the samples of each segmented block in pipeline architecture3800are consecutive. That is, the first block will contain sample 0 through sample W−1, the second block will contain samples W through 2W−1, and so on. In contrast, the samples of parallel quantization ADC architecture3500are time interleaved, such that first parallel data stream3510(1) contains even samples 0, 2, 4, 6, etc., and second parallel data stream3510(2) contains odd samples 1, 3, 5, 7, etc.
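The difference between the two partitioning strategies, and the way the pipelined blocks are recombined, can be sketched as follows. This is a behavioral illustration only: the helper names are hypothetical, the modulator argument stands in for any per-pipeline delta-sigma modulation routine (for example, the CRFF sketch shown earlier), and real pipeline counts and block sizes would follow the N=32, W=20K example discussed in the text.

```python
import numpy as np

def segment_consecutive(samples, n_pipelines):
    """Pipeline-style segmentation: consecutive blocks of W = len(samples)//N samples."""
    w = len(samples) // n_pipelines
    return [samples[i * w:(i + 1) * w] for i in range(n_pipelines)]

def deserialize_interleaved(samples, n_streams):
    """Nyquist-ADC-style de-serialization: sample k is routed to stream k mod N."""
    return [samples[i::n_streams] for i in range(n_streams)]

def pipelined_delta_sigma(samples, n_pipelines, modulator):
    """Run each consecutive block through its own modulator and cascade the outputs.

    Each pipeline keeps its own modulator state, which is the source of the small
    block-boundary penalty discussed in the text; cascading the results corresponds
    to concatenating the contents of the output FIFOs one after another.
    """
    blocks = segment_consecutive(samples, n_pipelines)
    return np.concatenate([np.asarray(modulator(block)) for block in blocks])
```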
At output bit stream3516of parallel quantization ADC architecture3500, the samples from different streams are time interleaved; whereas in pipeline architecture3800, the output blocks from respective output FIFOs3814may be simply cascaded (e.g., by cascading unit3818), one after another. Because delta-sigma ADC relies on sequential operation, there may be some performance penalty encountered when segmenting a continuous sample stream into a plurality of blocks. In theory, the smaller the block size, the larger the penalty introduced. Accordingly, in a real-time practical implementation of the exemplary embodiments described immediately above, a 5-GSa/s 32-pipeline ADC was used, together with a selected block size of W=20K, which establishes the tradeoff between the performance penalty and use of memory on the FPGA. In a further embodiment, a ΔW=2K margin value is added to each FIFO3810,3814for greater ease of implementation and relaxation of the time constraint. FIG.39is a flow diagram of an input state process3900for pipeline architecture3800,FIG.38. In an exemplary embodiment, process3900illustrates an operational principle of pipeline architecture3800using a state machine flow diagram for input FIFO (e.g., input FIFOs3810,FIG.38) operation. In exemplary operation, process3900begins at step3902, in which the FPGA is powered on, and all input FIFOs enter an IDLE state. In an exemplary embodiment, all input FIFOs stay in the IDLE state until enablement of a start signal. Step3904is a decision step. In step3904, process3900determines whether a start signal has been received. If no start signal has been received, process3900returns to step3902, and all input FIFOs remain in the IDLE state. If, in step3904, process3900determines that a start signal has been received, process3900proceeds to step3906, in which the input FIFOs enter a HOLD state. In an exemplary embodiment, the input FIFOs remain in the HOLD state for a certain number of clock cycles and/or until all input FIFOs have completely come out of the IDLE state. In step3908, all input FIFOs enter a WRITE ONLY state. In an embodiment of step3908, the input FIFOs enter the WRITE ONLY state simultaneously, or upon each respective input FIFO coming out of the HOLD state, if the timing is not simultaneous between the FIFOs. In other embodiments, step3908may be performed sequentially for each input FIFO. In step3910, the first input FIFO in the pipeline sequence (FIFO0, in this example) is filled with input signal sample data corresponding to that FIFO block. In an exemplary embodiment of step3910, once the input FIFO is filled, the input FIFO transits from the WRITE ONLY state to a READ&WRITE state. Step3910is then repeated sequentially for each of the N input FIFOs until the last of the N input FIFOs in the sequence (FIFO N−1, in this example) is filled and transitions from the WRITE ONLY state to the READ&WRITE state. Once an input FIFO is filled and set to the READ&WRITE state, the respective input FIFO will stay in the READ&WRITE state permanently, until receiving a RESET signal in step3912. Accordingly, assuming that the input ADC has a sampling rate of fS, then the FPGA throughput will be fSsamples per second. Furthermore, given that there are N pipelines, the operation speed of each individual pipeline is now advantageously reduced to fS/N, which is also the clock rate of the FPGA. Thus, within each clock cycle, N samples are received from the input ADC, and then fed into the N input FIFOs sequentially.
In this particular example, it is assumed that the size of each input FIFO is W samples. Accordingly, it will take W/N clock cycles to fill one FIFO with the relevant input sample data, and a total of W clock cycles to fill all N input FIFOs. FIG.40is a timing diagram4000for operation of input buffer3808,FIG.38. For illustrative purposes, and not in a limiting sense, timing diagram4000is depicted using the assumptions that the number of pipelines N=4, and that the input FIFO size W=8. The person of ordinary skill in the art though, will understand that the number of pipelines and the input FIFO size may differ according to the particular needs of the system design. In an exemplary embodiment, timing diagram4000is implemented with respect to a clock signal4002, a reset signal4004, a state sequence4006, and an input data (Data In) sequence4008. In exemplary operation of timing diagram4000, each of the N (four, in this example) input FIFOs4010is written sequentially, so that the respective write enable signals4012are turned on periodically, according to a duty cycle of 1/N. Once input FIFOs4010enter the READ&WRITE state of state sequence4006, it may be seen that all respective read enable signals4014are always on. In this example, different from the sequential writing process, all N input FIFOs4010may be read out (e.g., Data Out sequences4016) simultaneously. Nevertheless, in this example, one sample is read out from each input FIFO4010for each cycle of clock signal4002. Accordingly, for the embodiment depicted inFIG.40, it will take W clock cycles to deplete a FIFO4010. FIG.41is a flow diagram of an output state process4100for pipeline architecture3800,FIG.38. In an exemplary embodiment, process4100illustrates an operational principle of pipeline architecture3800using a state machine flow diagram for output FIFO (e.g., output FIFOs3814,FIG.38) operation. Process4100, for the output FIFO operation, is somewhat similar to process3900,FIG.39, for the input FIFO operation, but differs in some respects. For example, in exemplary operation, process4100begins at step4102, in which the output FIFOs enter the IDLE state once power is on. Step4104is a decision step, in which process4100determines whether a start signal has been received. If no start signal has been received, process4100returns to step4102, and the output FIFOs remain in the IDLE state. However, if process4100determines that a start signal has been received, process4100proceeds to step4106, in which the output FIFOs enter the HOLD state, and remain in this state for a certain number of clock cycles until all output FIFOs have come out of the IDLE state. In step4108, all output FIFOs enter the WRITE ONLY state. Step4108thus differs from step3908of process3900,FIG.39, in that all of the output FIFOs are filled simultaneously in step4108, whereas in step3908, the input FIFOs are filled sequentially. In step4110, all output FIFOs will transit to the READ&WRITE state at the same time, and stay in this state permanently until, in step4112, receiving a RESET signal. FIG.42is a timing diagram4200for operation of output buffer3816,FIG.38. In the exemplary embodiment depicted inFIG.42, for illustrative purposes, the same number of pipelines N=4, and the same FIFO size W=8, are used as in the example depicted inFIG.40, above. The person of ordinary skill in the art though, will understand that the number of pipelines and the output FIFO size may also differ according to the particular needs of the system design.
In the exemplary embodiment, timing diagram4200is implemented with respect to a clock signal4202, a reset signal4204, a state sequence4206, and an output data (Data Out) sequence4008. In exemplary operation of timing diagram4200, each of the four output FIFOs4210is written simultaneously, with only one sample written (e.g., Data In sequences4212) per cycle of clock signal4202. That is, because all output FIFOs are written simultaneously, the respective write enable signals4214are always on. This operation is different from operation of input FIFOs4010,FIG.40, which are written sequentially with N samples coming per clock cycle. On the other hand, output FIFOs4210are read out sequentially, and thus the respective read enable signals4216are turned on periodically with a duty cycle of 1/N. Accordingly, all output FIFOs4210are filled at the same time, but W cycles are needed to completely write all samples (i.e., one sample written per clock cycle). In contrast, input FIFOs4010require only W/N clock cycles to fill one input FIFO. Furthermore, within each clock cycle, N samples are read out from the currently enabled output FIFO4210, thereby requiring W/N clock cycles to deplete an output FIFO, and W cycles total to read from all N output FIFOs. The embodiments described above were demonstrably implemented using a 4th-order bandpass delta-sigma ADC for an FPGA employing filter1600,FIG.16A. More particularly, the cascaded resonator feedforward (CRFF) structure of filter1600provides four cascaded stages of feedback loops1606(z−1), with the outputs of the four stages being fed forward to a combiner before quantizer1610(1-bit, in this example), with coefficients of a1, a2, a3, and a4. Each pair of two stages may then be cascaded together to form a resonator including a different feedback path in each resonator (e.g., g1 and g2). Quantizer1610thus serves to function (in 1-bit mode) as a comparator, which then outputs a one-bit OOK signal. By adjusting the coefficients of a and g, various delta-sigma ADCs with different OSRs and passbands may be implemented. FIG.43is a flow diagram for a fixed-point coefficient implementation process4300. In an exemplary embodiment, process4300is implemented with respect to the FPGA implementation procedure illustrated in Table 6, below. In this example, an initial delta-sigma ADC design is based on a floating-point simulation in MATLAB without pipeline processing, with the final goal being a real-world fixed-point FPGA implementation with pipeline processing. Process4300thus serves to bridge the gap between simulation and implementation, by effectively evaluating the performance degradation during the transition from floating-point to fixed-point, and also the penalty induced by pipeline processing. In the exemplary embodiment, process4300is configured to isolate at least one reason for performance degradation in each of the several subprocesses shown in Table 6 and generate a systematic implementation procedure therefrom.
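Before turning to the procedure of Table 6, the CRFF loop described above can be summarized in a brief behavioral sketch. The Python below is a minimal, illustrative model of one plausible realization of the 4th-order CRFF topology (two resonators with local feedbacks g1, g2, feedforward taps a1-a4, and a 1-bit quantizer acting as a comparator); the state-update ordering and function name are assumptions, and the coefficient values themselves would come from the floating-point design of Subprocess 1, not from this sketch.

```python
import numpy as np

def crff_bandpass_dsm(x, a, g):
    """Minimal sketch of a 4th-order CRFF bandpass delta-sigma modulator.

    x: input samples, scaled to roughly [-1, 1]; a = (a1, a2, a3, a4)
    feedforward coefficients; g = (g1, g2) resonator feedback coefficients.
    Returns the 1-bit (OOK) output stream as 0/1 values.
    """
    s = np.zeros(4)                 # states of the four cascaded stages
    v = 0.0                         # previous 1-bit DAC feedback value
    out = np.empty(len(x), dtype=np.int8)
    for n, u in enumerate(x):
        e = u - v                   # error fed to the loop filter
        # first resonator (stages 1 and 2) with local feedback g1
        s0 = s[0] + e - g[0] * s[1]
        s1 = s[1] + s0
        # second resonator (stages 3 and 4) with local feedback g2
        s2 = s[2] + s1 - g[1] * s[3]
        s3 = s[3] + s2
        s[:] = (s0, s1, s2, s3)
        # feedforward combiner followed by the 1-bit quantizer (comparator)
        y = a[0] * s[0] + a[1] * s[1] + a[2] * s[2] + a[3] * s[3]
        v = 1.0 if y >= 0 else -1.0
        out[n] = 1 if v > 0 else 0
    return out
```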
TABLE 6
Subprocess | Implementation | Coefficients | Calculation | Pipeline | Input | Output | Performance penalty from last step
1 | MATLAB simulation | Floating | Floating | No | Ideal signal | Logic levels | Best performance
2 | Verilog simulation | Floating | Floating | No | Ideal signal | Logic levels | Identical with Subprocess 1
3 | Verilog simulation | Fixed | Floating | No | Ideal signal | Logic levels | Different coefficients
4 | Verilog simulation | Fixed | Fixed | No | Ideal signal | Logic levels | Fixed-point intermediate variables
5 | Verilog simulation | Fixed | Fixed | Yes | Ideal signal | Logic levels | Input data stream segmentation
6 | FPGA | Fixed | Fixed | Yes | Ideal signal | Logic levels | Time constraint
In exemplary operation, process4300begins at step4302, in which Subprocess 1 of Table 6 is executed. In an exemplary embodiment of Subprocess 1, a delta-sigma ADC is designed based on floating-point MATLAB simulation without pipeline processing, and the ADC performance thereof is optimized, since the penalties due to fixed-point approximation, hardware constraints, and pipeline processing have not yet been included. Process4300then proceeds to step4304, in which Subprocess 2 of Table 6 is executed. In an exemplary embodiment of Subprocess 2, the floating-point design from Subprocess 1 is translated from MATLAB to a hardware description language, such as Verilog. Step4306is a decision step. In step4306, process4300determines whether the respective performances of Subprocesses 1 and 2 are substantially identical, which would be expected. If the performances are not substantially identical, process4300proceeds to step4308, in which the Verilog code is debugged, and then step4304is repeated. If, however, in step4306, the respective performances of Subprocesses 1 and 2 are found to be substantially identical, process4300proceeds to step4310. In step4310, Subprocess 3 of Table 6 is executed. In an exemplary embodiment of Subprocess 3, key coefficients (e.g., a and g) of the delta-sigma ADC are approximated from floating-point to fixed-point, while keeping all intermediate calculations still in the floating-point mode. In comparison with Subprocess 2, some performance degradation in Subprocess 3 will be expected, due to the difference between floating-point and fixed-point coefficients. Step4312is therefore a decision step. In step4312, process4300determines whether the performance degradation is greater than an expected, tolerable value, and is therefore not acceptable. If process4300determines, at step4312, that the performance degradation is not acceptable, process4300then proceeds to step4314, in which key parameters may be identified to provide a better approximation in Subprocess 3. Accordingly, process4300proceeds from step4314to step4316, in which a better approximation is achieved, for example, by adjusting the bit number of each coefficient. In an embodiment of Subprocess 3, steps4310through4316may be repeated a number of times, over a few trials, which may be needed to identify the coefficients that have the most impact on the final performance. Through this embodiment of Subprocess 3, the bit numbers may be fine-tuned until satisfactory performance is achieved. After successful values are achieved through Subprocess 3, process4300proceeds to step4318. In step4318, process4300executes Subprocess 4. In an exemplary embodiment of Subprocess 4, all the intermediate calculations and variables are transformed from floating-point to fixed-point. Due to the limited bit number, further performance degradation may be expected. Accordingly, process4300proceeds to step4320. Step4320is a decision step.
If, in step4320, process4300determines that the performance degradation is greater than an acceptable (e.g., predetermined) limit, then the degradation is not acceptable, and process4300proceeds to step4322, in which process4300identifies key parameters having the most impact, and then proceeds to step4324, in which process4300adjusts the bit number of each intermediate variable to identify those most impactful parameters, and then fine-tunes their bit numbers to achieve satisfactory performance. As with Subprocess 3, it might require several iterations to find the key parameters and adjust their bit numbers. Thus, it can be seen that, in each of Subprocesses 1-4, no segmentation was considered. That is, the input data stream is processed continuously without interruption. Accordingly, upon the completion of Subprocess 4, process4300proceeds to step4326, in which Subprocess 5 is executed. In an exemplary embodiment of Subprocess 5, pipeline processing is added, similar to the techniques described above, which segments the continuous input data stream into several blocks, and then performs fixed-point calculation on each such segmented block. Because delta-sigma digitization is a sequential processing technique (e.g., the current output bit may depend on both current and previous input samples), segmentation of the continuous input data stream will be expected to degrade the performance, and this degradation penalty will increase as the block size decreases. In decision step4328, process4300determines if this degradation penalty is less than the predetermined level of acceptable performance degradation. If process4300determines that the performance degradation is not acceptable, process4300proceeds to step4330, in which the block size is adjusted. Process4300then returns to step4326. If process4300determines that the degradation is less than an acceptable level, process4300proceeds to step4332. In step4332, Subprocess 6 of Table 6 is executed. In an exemplary embodiment of step4332, Subprocess 6 is executed as a real-world FPGA implementation. In contrast, and as indicated in Table 6, Subprocesses 2 through 5 were performed as Verilog simulations. Accordingly, in an optional step4334, prior to evaluating the performance of the FPGA, process4300may first determine whether the FPGA meets the time constraint. For example, given an input ADC operating at 5 GSa/s, segmented into 32 pipelines, the operation speed of each pipeline should be 156.25 MHz. That is, to satisfy the time constraint, the delta-sigma modulation in each pipeline should be completed within 1/156.25 MHz=6.4 ns. If, in step4334, the FPGA cannot meet the time constraint, process4300may proceed to step4336, in which process4300may optionally perform a tradeoff evaluation between the performance penalty and the memory consumption, and then calculate an appropriate block size for the optimum balance between performance and memory. In an exemplary embodiment of step4336, the bit numbers of key coefficients and intermediate variables are fine-tuned and fed to one or more of Subprocess 3 (e.g., at step4316) and Subprocess 4 (e.g., at step4324). Referring back toFIG.38, where input FIFOs3810and output FIFOs3814are used in each pipeline to buffer the segmented data stream, too large of a block size may result in an undesirable increase in memory usage on the FPGA. In general, reducing the number of bits will improve the FPGA speed, while at the same time, degrade the system performance.
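Two pieces of the arithmetic above can be made concrete with a short sketch: the fixed-point approximation applied to coefficients and intermediate variables (steps4316,4324), and the per-pipeline timing budget checked at step4334. The helper name, the 16-bit word width, and the example coefficient value below are illustrative assumptions, not values taken from the actual implementation.

```python
def to_fixed_point(value: float, frac_bits: int, total_bits: int = 16):
    """Round 'value' to a signed fixed-point grid with 'frac_bits' fractional
    bits, saturating to the representable range; returns (approximation, error).
    Fewer bits ease the FPGA timing but enlarge the approximation error."""
    scale = 1 << frac_bits
    q_min, q_max = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    q = max(q_min, min(q_max, round(value * scale)))
    return q / scale, abs(q / scale - value)

# Step 4334 timing budget for the demonstrated 5-GSa/s, 32-pipeline design:
f_adc, n_pipelines = 5e9, 32
f_clk = f_adc / n_pipelines          # 156.25 MHz FPGA clock per pipeline
budget_ns = 1e9 / f_clk              # 6.4 ns available per delta-sigma iteration

# Example: quantize an arbitrary illustrative coefficient with 8 and 12
# fractional bits and compare the rounding errors against the time budget.
print(to_fixed_point(0.30103, 8), to_fixed_point(0.30103, 12), f_clk, budget_ns)
```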
Accordingly, the tradeoff operations of step4336may be particularly effective to ensure that the FPGA meets the time constraints without sacrificing too much performance. In the real-world implementation embodiment described herein, a block size of W=20K samples, with a 10% margin (i.e., ΔW=2K), was used. Process4300then proceeds to step4338. Step4338is a decision step, in which process4300evaluates the performance of the FPGA implementation, and determines whether the performance demonstrates an acceptable degradation value. If the performance degradation of the FPGA is not within acceptable limits, process4300proceeds to step4340, in which a debugging operation of the FPGA is performed, and step4332is then repeated. If, however, in step4338, process4300determines that the performance degradation is acceptable, process4300proceeds to step4342, and completes the implementation. FIG.44is a schematic illustration of an exemplary testbed4400. In an exemplary embodiment, testbed4400represents a real-world implementation used to prove the concept of, and verify, the pipeline design of pipeline architecture3800,FIG.38, using the implementation procedure of fixed-point coefficient implementation process4300,FIG.43. As implemented, testbed4400was similar in structure and functionality to fronthaul system2906,FIG.29. In the exemplary embodiment, testbed4400includes a transmitting DU4402in operable communication with an RRU4404over a transport medium4406(30 km SMF, in this example). In this embodiment, DU4402includes a transmitting AWG4408(e.g., Tektronix 7122C AWG) for generating real-time LTE and 5G signals, an attenuator4410, and an FPGA4412(e.g., Xilinx Virtex-7 FPGA on a VC707 development board) for implementing real-time bandpass delta-sigma ADC. FPGA4412includes an input ADC interface4414(e.g., a 4DSP FMC170) and an output port4416(e.g., a multi-gigabit transceiver (MGT)-SubMiniature version A (SMA) connector). In this example, FPGA4412implemented a 5 GSa/s 1-bit bandpass delta-sigma ADC for the digitization of LTE and 5G analog signals4418from transmitting AWG4408; input ADC interface4414operates at a sampling rate of 5 GSa/s with 10 quantization bits per sample. In exemplary operation of DU4402, analog signals4418are input to ADC interface4414, where they are first digitized to 10 bits, and then transmitted to FPGA4412, where the delta-sigma digitization transforms the 10 input bits to one output bit. FPGA4412then outputs a 5-Gb/s OOK signal4420at output port4416. In this implementation, to relax the speed of FPGA4412, a 32-pipeline architecture (e.g.,FIG.38) is used to reduce the FPGA clock rate to 156.25 MHz. In further exemplary operation of testbed4400, digitized 5-Gb/s OOK signal4420was then used to drive an optical modulator4422(e.g., a 12.5 Gb/s Cyoptics DFB+EAM) for transmitting signal4420as a modulated optical signal over transport medium4406to RRU4404. At RRU4404, the optical signal is received by a photodetector4424, captured by a DSO4426(e.g., a 20 GSa/s Keysight DSO), and then processed by a DSP4428(e.g., a MATLAB DSP) for bandpass filtering by a BPF4430to retrieve the analog waveform at an LTE/5G receiver4432. In further real-world operation of testbed4400, LTE/5G signals4418were generated according to the OFDM parameters listed in Table 5, above. Similarly, EVM performance requirements were according to the values listed in Table 3, above, except for the case of the 1024QAM modulation format, which is not yet specified.
In the implementation of testbed4400, instead of the 1% value listed in Table 3, 2% was used for the 1024QAM EVM performance requirement as a temporary criterion. Using these values, testbed4400was tested under various operations according to the exemplary implementation scenarios listed in Table 4, above, and produced experimental results therefrom substantially consistent with the results described with respect toFIGS.30A-34B. Therefore, according to the innovative systems and methods presented herein, a real-time FPGA-based bandpass delta-sigma ADC is provided for digitizing LTE and 5G signals having a significantly higher reported sampling rate and wider signal bandwidth than any previously-reported signals from conventional systems and known implementations. The present bandpass delta-sigma ADC techniques are capable of digitizing the 5G/LTE signals "AS IS" at RFs without the need for frequency conversion, thereby further enabling the new function split option between the high-RF and low-RF layers. Thus, the present delta-sigma ADC-based digital fronthaul interface still further reduces the RRU cost and complexity, while also further facilitating even wider deployment of 5G small cells. The present systems and methods still further provide innovative pipeline architectures and processes for delta-sigma ADC that significantly relax the FPGA speed requirement, thereby enabling the implementation of high-speed delta-sigma ADC, even using relatively slow-speed FPGA, but without significantly sacrificing performance. The present embodiments still further introduce an innovative evaluation process that is configured to transform a floating-point simulation to a fixed-point FPGA implementation, to further optimize the design of the pipeline systems and methods described herein.
New Function Split Option for Software Defined and Virtualized NGFI
Further to the embodiments described above, a new function split option for the NGFI, that is, new option 9, may be advantageously provided based on an all-digital RF transmitter using bandpass delta-sigma modulation. In contrast to the conventional lower layer split (LLS) option 6 (MAC-PHY), option 7 (high-low PHY), and option 8 (CPRI), the present option 9 embodiments are enabled to split functions within the RF layer, with high-RF layer functions centralized in the DU, and low-RF layer functions distributed in the RRUs. A proof-of-concept all-digital RF transmitter is described above, based on real-time bandpass delta-sigma modulation implemented by a Xilinx Virtex-7 FPGA. The embodiments further demonstrate a 5-GSa/s delta-sigma modulator encoding LTE/5G signals with bandwidth up to 252 MHz and 1024-QAM modulation to a 5-Gb/s OOK signal, transmitted over 30-km fiber from DU to RRU. The present pipeline architectures (a 32-pipeline architecture, in the example described above) relax the FPGA speed limit, and the present two-carrier aggregation of 5G signals and 14-carrier aggregation of LTE signals demonstrate EVM performance satisfying 3GPP requirements. In an exemplary embodiment, the option 9 techniques described herein are capable of splitting the signal at a lower level than conventional option 8, while still providing improved spectral efficiency and a lower NGFI data rate than the conventional CPRI techniques.
Therefore, in comparison with higher split options 6, 7, and 8, the present option 9 is capable of exploiting an all-digital RF transceiver that is predominantly centralized in the DU, and also capable of eliminating the need for a DAC, LO, and RF mixer at the RRU. According to the exemplary systems and methods described herein, lower-cost, lower-power, and smaller-footprint cell sites may be more readily provided for the wide deployment of small cells. By providing an achievable all-digital solution, systems and methods according to the present embodiments are further enabled to realize SDR and virtualized DU/RRU for improved compatibility and reconfigurability of existing multi-RATs, and an easier evolution toward the next generation RAT. Due to the centralized architecture and highly deterministic latency advantages, the present option 9 techniques are suitable for radio coordination applications, such as coordinated multipoint (CoMP) or joint Tx/Rx. Additionally, due to the limited speed of conventional FPGA/CMOS implementations, it is expected that the present option 9 function split is particularly applicable for immediate implementation in low-frequency narrowband Internet of Things (IoT) scenarios or applications having cost-, power-, and size-sensitive cell sites, such as mMTC and NB-IoT. As described above, the rapid growth of mobile data, driven by the emerging video-intensive/bandwidth-hungry services, immersive applications, and 5G-NR paradigm technologies, creates significant challenges for existing optical and wireless access networks, and has made the RAN a new bottleneck of user experience. During the emergent 4G era, to enhance the capacity, coverage, and flexibility of mobile data networks, the C-RAN was proposed in 2011 to separate and consolidate baseband processing functions from the BS in each cell site to a centralized BBU pool, which simplifies each BS to an RRH, and enables coordination among multiple cells. Accordingly, the C-RAN architecture was originally divided into two segments by the BBU, namely, the backhaul segment, from the 4G evolved packet core (EPC) to the BBUs, and the fronthaul segment, from the BBUs to the RRHs. The collocation of the BBUs for multiple cells enabled resource pooling among different BBUs, making the inter-cell coordination possible. The fronthaul segment, however, which was based on CPRI, had limited flexibility and scalability due to the low spectral efficiency and significantly high data rate. Furthermore, CPRI, which was developed for narrowband RATs, such as UMTS (CPRI version 1 and 2), WiMAX (CPRI v3), LTE (CPRI v4), and GSM (CPRI v5), featured constant data rates that were traffic-independent but antenna-dependent, and could not support Ethernet packetization and statistical multiplexing, rendering CPRI the above-described bottleneck for massive MIMO and large-scale carrier aggregation. Various strategies have been proposed to circumvent the CPRI (i.e., option 8) bottleneck, including analog fronthaul and improved CPRI. As described above, analog fronthaul transmits mobile signals in their analog waveforms using RoF technology, which features simple, low-cost system implementations having high spectral efficiency, but which are susceptible to nonlinear/noise impairments.
The improved CPRI solutions implement CPRI compression to maintain the CPRI interface, but significantly reduce the data rate of the fronthaul segment by exploiting CPRI compression algorithms or nonlinear quantization techniques, which significantly increase the structural cost by requiring additional hardware complexity, and also undesirably increase the latency. Accordingly, improved NGFI systems and methods are needed based on new function splits. FIG.45is a graphical illustration of a comparative summary plot4500of delta-sigma modulation sampling rates taken against bandwidths. In an exemplary embodiment, summary plot4500depicts a comparison of the state-of-the-art of a variety of reported delta-sigma modulation implementations4502(depicted as references [30]-[48], in the example illustrated inFIG.45) for an all-digital RF transmitter, with proof-of-concept results4504. More specifically, reported modulation implementations4502represent the publicly reported performance results of a plurality of delta-sigma modulation studies. Detailed parameters of modulation implementations4502and results4504are listed further below in Table 7. As can be seen from plot4500and Table 7, with few exceptions (i.e., references [32], [44], [45], and [48]), reported modulation implementations4502are confined to bandwidths well below 100 MHz, and in only one instance (i.e., reference [48]) has a study been performed implementing both a greater bandwidth and sampling rate than those of results4504, which represent the practical implementations of the concepts and techniques described herein with respect to the present embodiments. Indeed, a majority of reported modulation implementations4502are confined to a small region4506of plot4500representing a bandwidth less than 12 MHz and a sampling rate lower than 1 GHz.
TABLE 7Sampling rateBandwidthFcReference(GSa/s)(MHz)(GHz)TypeImplementationPipeline #Application[30]0.03521.1BasebandLowpassCMOS 0.5 pm1Continuoustime Tx[31]0.7<10.175BandpassCMOS 130 nm1GSM[32]2.6252005.25LowpassCMOS 130 nm1Digital RF Tx[33]<3.610, 202.4-3.6LowpassCMOS 90 nm1Digital RF Tx[34]5.45.6, 11.2, 202.4-2.7LowpassCMOS 65 nm1Wi-Fi,WiMAX[35]2.6, 4Up to 500.05-1BandpassCMOS 90 nm1Digital RF Tx[36]0.050.25, 0.5BasebandLowpassAltera Stratix1OFDM,CDMA[37]0.0451.25/1.232.45, 1LowpassAltera Stratix1WiMAX,CDMA,EDGE[38]0.64, 0.83.84/7.68 (LTE)2.1, 2.5LowpassAltera Stratix II8WiMAX,4/8 (WiMAX)GXLTE[39]0.0251.61LowpassUnknown FPGA4CDMA[40]3.95 + 50.8, 1.5Dual-bandUnknown FPGA1Dual-bandLTE[41]0.2251.25 + 1.50.45, 0.9LowpassXilinx Virtex 61Dual-bandHX380T onWiMAX +ML628SC-Qam[42]0.156251.25 + 1.51.25, 0.78125LowpassXilinx Virtex 61Dual-bandHX380T onSC-64QAM +ML628WiMAX[43]1/0.9Up to 12.51, 0.9LowpassXilinx Virtex 64Single-carrier(1st/2ndorder)HX380T on(SC)ML628[44]3.26.1-1221.6LowpassXilinx Virtex 616Single-carrierVHX280T on(SC)ML628[45]3.26-1203.2LowpassXilinx Virtex16Single-carrierUltraScale(SC)XCVU095 onVCU1287[46]0.750.7EnvelopeCMOS 90 nm1LTE[47]10.4 =205.2LowpassXilinx UltraScale32Wi-Fi0.325 * 32XCVU095 on802.11aVCU108[48]9.6 =4884.8LowpassXilinx UltraScale32SC-64QAM0.3 * 32XCVU095 onVCU108[49]6.2520 + 200.856, 1.45Dual-bandSimulation +1Dual-bandAWGLTE[50]6.2510 + 200.874, 1.501Dual-bandSimulation +1Dual-bandAWGLTE[51]2.155 + 100.244, 0.5Dual-bandSimulation +1Dual-bandAWGLTE[52]710 + 10 + 100.71, 1.75, 2.51Triple-bandSimulation +1Triple-bandAWGLTE[60, 61]10625BasebandLowpassSimulation +132 LTEAWGcarrieraggregation[62, 63]16-321200BasebandLowpassSimulation +15 DOCSISAWG3.1 channelsResults599-2520.96BandpassXilinx Virtex-7325G, LTE4504VX485T oncarrierVC707aggregation With respect toFIG.45, it can be seen that the performance results of only References [30]-[48] of Table 7 are illustrated as modulation implementations4502, and that the performance results of References [49]-[52] and [60]-[63] are not illustrated in plot4500. In this case, the performance results of References [49]-[52] and [60]-[63] are not included in plot4500because they are implemented by off-line processing, and not by a CMOS or FPGA. Accordingly, plot4500demonstrates the unique advantageous nature of the present option 9 function split embodiments as an alternative to the conventional approaches (e.g., modulation implementations4502) that indicate merely a move of the LLS to a higher level of option 8. The present option 9, rather than moving the LLS to a higher level, instead pushes the LLS deeper into the RF layer, with high-RF layer functions centralized in the DU, and low-RF layer distributed in the RRUs. This non-conventional option 9 function split approach thus enables an all-digital RF transceiver based on delta-sigma modulation, and also implements both baseband and RF functions in the digital domain, which not only improves the spectral efficiency compared with CPRI. The option 9 function split further eliminates the need for analog RF functions (e.g., DAC, LO, mixer, etc.) at the RRUs, thus providing a simplified, low-cost, and reconfigurable structural architecture of the RRU for small cell deployment. With respect to SDR, it is desirable to push the ADC/DAC operations as close as possible to the antenna, such that the baseband and RF processing may be more easily confined to the digital domain for enhanced flexibility and compatibility multi-RATs having different PHY layer specifications. 
The advantageous capability to implement the present systems and methods with SDR further enables a dynamically reconfigurable function split, which is of particular value with respect to various 5G scenarios (e.g., eMBB, uRLLC, mMTC, etc.) that have different data rate and latency requirements. As illustrated above in Table 7, transmitter designs of all-digital RF transceiver based on delta-sigma modulation are reflected with respect to References [30]-[52] and receiver designs with respect to References [53]-[59]. References [30], [32]-[34], [36]-[39], [41]-[45], [47], and [48] indicate performance results of lowpass delta-sigma modulators, References [31] and [35] indicate performance results of bandpass delta-sigma modulators, and References [40] and [49]-[52] indicate performance results of multiband delta-sigma modulators. Due to the speed limit of FPGA, several time-interleaving or parallel processing architectures for delta-sigma modulation are also indicated (i.e., references [38], [39], [43]-[45], [47], and [48]). References [60]-[63] thus represent the simulation results of the embodiments described further above, which replace CPRI with the present delta-sigma modulation techniques that improve the fronthaul spectral efficiency. In the exemplary performance results though, the delta-sigma modulation was realized by offline processing. The embodiments described further below therefore represent a real-time demonstration of delta-sigma modulation for NGFI, which has not been heretofore realized. More particularly, the following embodiments demonstrate a first real-time FPGA implementation of the present new NGFI function split option 9, which is enabled by an all-digital transceiver based on delta-sigma modulation. The present option 9 function split embodiments not only improve the spectral efficiency, but also simplify the RRU design to facilitate the deployment of small cells. Furthermore, the all-digital transceiver architecture advantageously enables improved SDR and virtualization solutions of the DU and the RRU, which make the Next Generation RAN (NG-RAN) compatible with multiple RATs, including 4G-LTE, Wi-Fi, 5G-NR, and other emerging technologies. The evolution of the RAN (e.g., from 3G to 4G, and further toward 5G and beyond) is described further below with respect toFIG.46. FIG.46Ais a schematic illustration of a 3G radio access network architecture4600. In an embodiment, architecture4600includes a core network4602and an aggregation network4604in communication with a cell site4606(e.g., a macro cell). Cell site4606includes at least one BS4608in operable communication with a plurality of antennas4610over a respective plurality of coaxial cables4612. In an embodiment, both baseband and RF processing are distributed in BS4608at cell site4606, and mobile signals are fed from BS4608to antennas4610over coaxial cables4612due to the relatively short distance between BS4608and antennas4610. FIG.46Bis a schematic illustration of a C-RAN architecture4614. In an embodiment, architecture4614includes a core network4616, a backhaul segment4618, and a fronthaul segment4620. Backhaul segment4618includes a centralized BBU pool4622in communication with core network4616through an EPC4624, and fronthaul segment4620includes one or more RRHs4626in communication with centralized BBU pool4622over digital fiber links4628. 
More particularly, similar to the embodiments described above, C-RAN architecture4614separates the baseband processing functions from each BS (not separately shown inFIG.46B), and consolidates the processing functions in centralized BBU pool4622, such that each BS is simplified to the respective RRHs4626. Since the distance between centralized BBU pool4622and an individual RRH4626is typically tens of kilometers, mobile signals are transmitted over fronthaul segment4620through digital fiber links4628. Conventionally, such digital fiber links will exploit a CPRI interface. According to the present embodiments though, this conventional CPRI interface is replaced. FIG.46Cis a schematic illustration of an NG-RAN architecture4630. In an exemplary embodiment, architecture4630includes a core network4632, a backhaul segment4634, a midhaul segment4636, and a fronthaul segment4638. In the exemplary embodiment, backhaul segment4634includes a plurality of CUs4640in communication with core network4632over a mobile edge computing (MEC) platform4642, midhaul segment4636includes a plurality of DUs4644in communication with CUs4640, and fronthaul segment4638includes a plurality of RRUs4646in communication with DUs4644over fibers4648. In an embodiment, RRUs4646are small cell sites. Accordingly, the improved structure of NG-RAN architecture4630, similar to the innovative embodiments described above, rethinks and reorganizes the functional distribution of the RAN architecture, which enables new function split options that advantageously avoid CPRI (i.e., function split option 8). As described further below with respect toFIG.47, NG-RAN architecture4630includes two function splits, namely, a high layer split (HLS) and a low layer split (LLS), and, as with the above embodiments, baseband functions originally located in the BBUs of the C-RAN (e.g., C-RAN architecture4614,FIG.46B) are now distributed into CUs4640and DUs4644. In conventional implementations, for the HLS, option 2 function split has been adopted the standard by 3GPP, whereas, for the LLS, function split options 6 and 7 (e.g., options 7.1, 7.2, 7.3) have been proposed by 3GPP, and interfaces ID, IID, IUhave been proposed by eCPRI. FIG.47Ais a schematic illustration depicting an exemplary function split option4700for a C-RAN architecture (e.g., C-RAN architecture4614,FIG.46B). In an embodiment, split option4700represents a CPRI split (e.g., 3GPP option 8 or eCPRI option E) between a BBU4702and an RRH4704.FIG.47Bis a schematic illustration depicting a MAC-PHY function split option4706for an NG-RAN architecture (e.g., NG-RAN architecture4630,FIG.46C). In an embodiment, split option4706represents an HLS option 2 function split at a CU4708, and an LLS option 6 function split between the MAC layer (e.g., low MAC layer2730,FIG.27) and the PHY layer (e.g., high PHY layer2732,FIG.27) of a DU4710, which is in communication with an RRU4712.FIG.47Cis a schematic illustration depicting a high-low PHY function split option4714for an NG-RAN architecture. In an embodiment, split option4714represents an HLS option 2 function split at a CU4716, and an LLS option 7 function split between the high PHY layer (e.g., high PHY layer2732,FIG.27) and the low PHY layer (e.g., low PHY layer2734,FIG.27) of a DU4718, which is in communication with an RRU4720. FIG.47Dis a schematic illustration depicting a high-low RF function split option4722for an NG-RAN architecture. 
In an embodiment, split option4722represents an HLS option 2 function split at a CU4724, and an LLS option 9 function split, according to the embodiments described herein, between the high RF layer (e.g., high-RF layer2736,FIG.27) and the low RF layer (e.g., low-RF layer2738,FIG.27) of a DU4726, which is in communication with an RRU4728. In an exemplary embodiment of split option4722, delta-sigma modulation is implemented between DU4726and RRU4728. FIG.47Eis a schematic illustration depicting an exemplary functional layer diagram4730for the function split options depicted inFIGS.47A-D. In the embodiment illustrated inFIG.47E, diagram4730is depicted with respect to a downstream communication path4732from the backhaul to the fronthaul, and an upstream communication path4734from the fronthaul to the backhaul. Functional layer diagram4730is therefore similar to NGFI functional layer diagram2718,FIG.27, and illustrates similar functional split options between various respective NGFI functional layers of a CU (e.g., CUs4708,4716,4724), and a DU (e.g., DUs4710,4718,4726). In an exemplary embodiment, functional layer diagram4730includes an RRC layer4736, a PDCP layer4738, a high RLC layer4740, a low RLC layer4742, a high MAC layer4744, a low MAC layer4746, a high PHY layer4748, a low PHY layer4750, a high RF layer4752, and a low RF layer4754. In the exemplary embodiment depicted inFIG.47E, layers4748,4750,4752, and4754are referred to herein as a post-MAC portion4756of NGFI functional layer diagram4730. In this example, 3GPP-based function split options our representative across the entirety of diagram4730, and correlated with corresponding eCPRI-based function split options in post-MAC portion4756. For example, the MAC-PHY split is defined as option 6 by 3GPP, but as option D by eCPRI. Similarly, the PHY-RF split is defined as option 8 by 3GPP, but option E by eCPRI. As described further below with respect toFIG.47F, within the PHY layer, both 3GPP and eCPRI provide three different choices (e.g., split options 7.1, 7.2, 7.3 in 3GPP, and split options ID, IID, IUin eCPRI). However, of these choices, only split options 7.1 and 7.2 are bi-directional; split options 7.3, ID, and IIDare unidirectional for the downstream transmission (e.g., downstream communication path4732), and split option IUis unidirectional for the upstream transmission (e.g., upstream communication path4734). FIG.47Fis a schematic illustration depicting an exemplary architecture of post-MAC layer portion4758,FIG.47E. In an exemplary embodiment, post-MAC layer portion4758is implemented by an all-digital transceiver of a DU (e.g., based on delta-sigma modulation/ADC), and downstream communication path4732includes one or more of an encoder4760, a downstream rate matching unit4761, a scrambler4762, a modulation unit4763, a MIMO layer mapping unit4764, a pre-coding unit4765, a resource mapping unit4766, a beamforming port expansion unit4767, an IFFT unit4768, a CP insertion unit4769, and a digital RF transmitter4770. In an exemplary embodiment, digital RF transmitter4770includes an up-sampler4771, an up-converter4772, and a downstream delta-sigma modulator4773. In an embodiment, digital RF transmitter4770may further implement the present option 9 function split to a power amplifier (PA)4774connected to a BPF4775of an RRU4776. 
In a similar manner, upstream communication path4734may include one or more of a respectively corresponding decoder4777, an upstream rate matching unit4778, a descrambler4779, a demodulation unit4780, a MIMO layer demapping unit4781and a MIMO equalization unit4782, a pre-filtering unit4783, a resource demapping unit4784, and a port reduction unit4785. In an embodiment, upstream communication path4734may further include a channel estimation unit4786between resource demapping unit4784and MIMO equalization unit4782, bypassing pre-filtering unit4783. Upstream communication path4734may further include one or more of a physical random access channel (PRACH)4787, a corresponding FFT unit4788, a CP removal unit4789, and a digital RF receiver4790. In an exemplary embodiment, digital RF receiver4790includes a down-sampler4791, a down-converter4792, and an upstream delta-sigma modulator4793. In an embodiment, digital RF receiver4790is in operable communication with a low noise amplifier (LNA)4794of RRU4776, and may be further configured to implement the present option 9 function split within digital RF receiver4790, between down-converter4792and upstream delta-sigma modulator4793, and after receipt of upstream data communication transmitted from RRU4776, in contrast with downstream communication path4732, in which up-converter4772and downstream delta-sigma modulator4773of digital RF transmitter4770are effectively interchangeable in the functional order, since function split option 9 occurs between digital RF transmitter4770and RRU4776. Comparative architectures of analog and digital RF transmitters are described further below with respect toFIG.48. FIG.48Ais a schematic illustration of an analog RF transmitter4800. In an embodiment, analog RF transmitter4800includes a digital domain4802and an analog link4804. In operation of RF transmitter4800, a baseband DSP processor4806is configured to perform baseband processing within digital domain4802, and a DAC4808(e.g., a conventional DAC, and/or separate DAC units for each of the respective I and Q signals from baseband DSP processor4806) separates the digital baseband processing of baseband DSP processor4806from the analog RF chain of analog link4804. In an embodiment, analog link4804further includes one or more of a low-pass filter4810, an LO4812, a mixer4814, a PA4816, and a BPF4820. Accordingly, baseband processing is carried out in the digital domain, and RF processing is implemented in the analog domain. FIG.48Bis a schematic illustration of an exemplary RF transmitter4822. In an embodiment, RF transmitter4822is an all-digital transmitter, and includes a digital domain4824and an analog link4826. In operation of RF transmitter4822, a baseband DSP processor4828is configured to perform baseband processing within digital domain4824, and the baseband I and Q components therefrom are first up-sampled by respective up-sampling units4830, and then encoded by respective low-pass delta-sigma modulators4832that are configured to convert discrete-time continuous-amplitude baseband signals into discrete bits. The respective I and Q bit streams from low-pass delta-sigma modulators4832may then be up-converted to RF by a digital up-converter4834prior to transmission to a PA4836, and then to a BPF4838. In this example, it may be noted that BPF4838is the element that separates digital domain4824from analog link4826. 
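For the I/Q paths of transmitter4822, each lowpass delta-sigma modulator4832converts an oversampled, continuous-amplitude baseband component into a bit stream whose quantization noise is shaped toward high frequencies. The following Python sketch shows only a first-order modulator to illustrate the principle; the actual modulators4832would generally be of higher order and designed to the required OSR and noise-shaping profile, and the function name here is an illustrative assumption.

```python
import numpy as np

def lowpass_dsm_1bit(x):
    """Minimal first-order lowpass delta-sigma modulator (illustrative only).

    x: oversampled baseband samples (one I or Q component), roughly in [-1, 1].
    Returns a 0/1 bit stream; low-pass filtering the +/-1 sequence recovers
    an approximation of x, since the quantization noise sits at high frequency.
    """
    integrator = 0.0
    v = 0.0                               # previous 1-bit feedback value
    bits = np.empty(len(x), dtype=np.int8)
    for n, u in enumerate(x):
        integrator += u - v               # accumulate the feedback error
        v = 1.0 if integrator >= 0 else -1.0
        bits[n] = 1 if v > 0 else 0
    return bits
```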
It may be further noted that, in consideration of analog RF transmitter4800, both analog RF transmitter4800and digital RF transmitter4824carry out baseband processing in the digital domain, but differ with respect to their RF stages. More particularly, in RF transmitter4822, RF functions are carried out in digital domain4824, and no LO or mixer is needed, thus significantly simplifying the architectural complexity of the relevant RRU. FIG.48Cis a schematic illustration of an alternative digital RF transmitter4840. In an embodiment, RF transmitter4840is also an all-digital transmitter, and includes a digital domain4842and an analog link4844. In operation of RF transmitter4840, a baseband DSP processor4846is similarly configured to perform baseband processing within digital domain4842, and the baseband I and Q components therefrom are first up-sampled by respective up-sampling units4848, and then combined and up-converted to a radio frequency by a digital up-converter4850prior to encoding by a bandpass delta-sigma modulator4852, which then transmits the encoded signal to a PA4854, and then to a BPF4856. In this example, BPF4856similarly separates digital domain4842from analog link4844. Further to this example, since the delta-sigma modulation of modulator4852is configured to utilize noise shaping to push the quantization noise out of the signal band (e.g.,FIGS.6-7), BPF4856may be configured to not only filter out the desired signal band, but also to eliminate the OoB noise and retrieve the relevant analog waveform from the received digital signal. Accordingly, the exemplary all-digital configuration of RF transmitter4840is enabled to advantageously utilize BPF4856as a DAC, but with no need for an actual DAC, such as conventional DAC4808,FIG.48A. The configuration of RF transmitter4840further advantageously moves the DAC functionality as close as possible to an antenna4858of analog link4844, such that both baseband and RF functions may be carried out within digital domain4842. The all-digital configuration of RF transmitter4840provides still further advantages with respect to its flexibility and reconfigurability to different carrier frequencies and multiple RATs. In the case of SDR, RF transmitter4840advantageously enables the virtualization of the DU and the RRU, thereby rendering NG-RAN readily compatible with 4G-LTE, Wi-Fi, and 5G-NR technologies. The present all-digital transmitter embodiments further provide advantageous high linearity capabilities in comparison with an analog RF transmitter (e.g., analog RF transmitter4800,FIG.48A). As depicted inFIG.48A, both PA4816and BPF4820receive an analog RF signal, and therefore the inevitable nonlinear impairments that come with an analog RF signal. In contrast, in both of digital RF transmitters4822,4840, the respective PA4836,4854is functionally disposed before the respective BPF4838,4856(e.g., functioning as a DAC), such that the PA is in the respective digital domain4824,4842. Accordingly, high-efficiency switch-mode PAs may be implemented without nonlinearities. In an embodiment, respective all-digital transmitter configurations may implement significantly high oversampling rates. Accordingly, in an embodiment, a clock rate of four times the carrier frequency (4fc) may be used for digital up-conversion by the respective digital up-converter4834,4850. In at least one embodiment, the exemplary option 9 function split depicted inFIG.47Fis implemented with respect to the all-digital transmitter scheme depicted inFIG.48C. 
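Because the digital up-conversion just described runs at a clock of four times the carrier frequency, the quadrature mixing collapses to a very simple operation, as the short Python sketch below illustrates; the function and variable names are illustrative, and the actual transmitter realizes the same arithmetic in hardware.

```python
import numpy as np

def digital_upconvert_4fc(i_samples: np.ndarray, q_samples: np.ndarray) -> np.ndarray:
    """Quadrature up-conversion when the sample clock is exactly 4*fc.

    At fs = 4*fc, cos(2*pi*fc*n/fs) is the sequence 1, 0, -1, 0 and
    sin(2*pi*fc*n/fs) is 0, 1, 0, -1, so the digital mixer needs only
    sign flips and sample selection, with no multipliers and no analog LO.
    """
    n = np.arange(len(i_samples))
    cos4 = np.array([1, 0, -1, 0])[n % 4]
    sin4 = np.array([0, 1, 0, -1])[n % 4]
    return i_samples * cos4 - q_samples * sin4
```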
In this example, in the downstream direction (e.g., downstream communication path4732,FIG.47F), the option 9 function split occurs after processing by bandpass delta-sigma modulator4852, where the discrete-time continuous-amplitude signal is encoded to discrete bits and transmitted from the DU to the RRU over digital fiber links (not shown inFIG.48C). In the upstream direction (e.g., upstream communication path4734,FIG.47F) though, an all-digital RF receiver (e.g., digital RF receiver4790,FIG.47F) based on a continuous-time delta-sigma ADC may be used to digitize the received analog signal to several discrete levels, and the option 9 split may occur after the delta-sigma ADC (e.g., by delta-sigma ADC4793), with the digital bits representing the several discrete levels then transmitted from the RRU back to the DU. FIG.49Ais a schematic illustration depicting an analog fronthaul architecture4900based on an RoF protocol. In an embodiment, architecture4900represents an analog link similar, in some aspects, to analog link portion2808,FIG.28, and includes at least one DU4902in operable communication with at least one RRU4904over a transport medium4906(e.g., an SMF). In the example depicted inFIG.49A, DU4902includes a PHY layer4908, a DAC4910, an analog up-converter4912, and an analog E/O interface4914. RRU4904includes an antenna4916, a BPF4918, an analog PA4920, and an analog O/E interface4922. In operation of analog fronthaul architecture4900, after baseband processing in PHY layer4908(i.e., the digital domain), DAC4910converts the processed mobile signals into an analog signal. Remaining RF functions of the RF layer (not separately numbered) are then implemented in the analog domain. For example, after frequency up-conversion by analog up-converter4912, the up-converted analog mobile signals are delivered to RRU4904, through analog E/O4914, over the analog fiber link (i.e., transport medium4906) using RoF technology. At RRU4904(i.e., through analog O/E4922), both analog PA4920and BPF4918receive and process analog signals, along with the inevitable nonlinear impairments thereof. In conventional analog fronthaul systems, most high-RF layer devices, such as the RF LO and the mixer, are consolidated at the DU (e.g., DU4902), with only low-RF layer functions, such as the PA (e.g., PA4920) and the BPF (e.g., BPF4918), distributed in the RRUs (e.g., RRU4904). FIG.49Bis a schematic illustration depicting a digital fronthaul architecture4924based on function split option4714,FIG.47C. In the exemplary embodiment depicted inFIG.49B, digital fronthaul architecture4924is based on LLS option 7 (e.g., within the PHY layer, and HLS option 2), and includes a DU4926in operable communication with an RRU4928over a transport medium4930(e.g., an SMF). DU4926includes one or more of a high-PHY layer4932, a first option 7 interface4934, and a digital E/O interface4932. RRU4928includes an antenna4938, a BPF4940, an analog PA4942, an analog up-converter4944, a DAC4946, a low-PHY layer4948, a second option 7 interface4950, and a digital O/E interface4952. Although not necessarily shown inFIG.49B, digital fronthaul architecture4924may include some or all of the additional elements and components depicted in NGFI functional layer diagram4730,FIGS.47F-G.
In exemplary operation of digital fronthaul architecture4924, the option 7 function split occurs between high-PHY layer4932and low-PHY layer4948, baseband processing of high-PHY layer4932is centralized in DU4926, and the remaining baseband processing of low-PHY layer4948is distributed in RRU4928. After conversion by DAC4910, all RF functions are realized in the analog domain at RRU4928. Compared with the option 8 function split (CPRI, described further below with respect toFIG.49C), the option 7 function split effectively reduces the fronthaul data rate, but also increases the cost and complexity of the RRU (e.g., RRU4928) at the cell site (e.g., cell site4626,FIG.46B), which hinders the desirability of wider small cell deployment. FIG.49Cis a schematic illustration depicting a digital fronthaul architecture4954based on function split option4700,FIG.47A. In the exemplary embodiment depicted inFIG.49C, digital fronthaul architecture4954is based on LLS option 8 (e.g., CPRI, between the PHY and RF layers, and HLS option 2), and includes a DU4956in operable communication with an RRU4958over a transport medium4960(e.g., an SMF). DU4956includes one or more of a PHY layer4962, a first CPRI interface4964, and a digital E/O interface4962. RRU4958includes an antenna4968, a BPF4970, an analog PA4972, an analog up-converter4973, a DAC4974, a second CPRI interface4975, and a digital O/E interface4976. Similar to digital fronthaul architecture4924,FIG.49B, digital fronthaul architecture4954is depicted for purposes of illustration, and not in a limiting sense, and may include additional elements and/or layers according to the embodiments described above. In exemplary operation of digital fronthaul architecture4954, the option 8 function split occurs between PHY layer4962and RF layers in DU4956. The digital fiber link transmits the bits of I/Q samples (e.g., through digital E/O4966, after FFT) from DU4956to RRU4958over fiber4960. Similar to the requirements of the option 7 function split (e.g., digital fronthaul architecture4924,FIG.49B), DAC4974is still needed at RRU4958, and after conversion by DAC4974, all RF layer functions are carried out in the analog domain of RRU4958. As described above, implementation of CPRI has low spectral efficiency, requires a tremendous data rate, and has limited scalability for massive MIMO and carrier aggregation. Moreover, CPRI has a fixed chip rate (e.g., 3.84 MHz), and is able to only accommodate UMTS (v1 and 2), WiMAX (v3), LTE (v4), and GSM (v5). FIG.49Dis a schematic illustration depicting a digital fronthaul architecture4978based on function split option4722,FIG.47D. In the exemplary embodiment depicted inFIG.49D, digital fronthaul architecture4978is based on the present, new LLS option 9 (e.g., within the RF layer, HLS option 2), and includes a DU4980in operable communication with an RRU4982over a transport medium4984(e.g., an SMF). DU4980includes one or more of a PHY layer4986, a digital up-converter4988, a bandpass delta sigma modulator4990, and analog up-converter4992, and an analog E/O interface4994. RRU4982includes an antenna4995, a BPF4996(i.e., functionally performing DAC), a digital PA4997, an option 9 interface4998, and a digital O/E interface4999. As with the embodiments described above, digital fronthaul architecture4978may include additional elements and/or layers beyond those depicted inFIG.49D. 
In exemplary operation of digital fronthaul architecture4978, the new option 9 function split occurs between the high-RF and low-RF layers (not separately shown) of DU4980. In the exemplary embodiment, both PHY layer4986and the RF layers are implemented in the digital domain, with the high-RF layer functions, such as digital up-conversion by digital up-converter4988and delta-sigma modulation by bandpass delta sigma modulator4990, centralized in DU4980. In this example, only low-RF layer functionality, such as from digital PA4997and BPF4996, is left in RRU4982. Since BPF4996serves to function as an effective DAC for the all-digital transmitter of this embodiment, digital PA4997is capable of functioning entirely in the digital domain, thereby enabling use of a high efficiency switching-mode PA, for example. The new option 9 function split thus enables a significantly lower-cost, DAC-free, and simplified-RF design for the RRU configuration, which will greatly reduce the cost and complexity of the cell site in which the RRU is deployed, which in turn will facilitate a much denser deployment of small cells. A comparison of the option 9-based digital fronthaul configuration depicted inFIG.49D, on the one hand, with the analog fronthaul configuration depicted inFIG.49A, the option 7-based digital fronthaul configuration depicted inFIG.49B, and the option 8-based digital fronthaul configuration depicted inFIG.49C, on the other hand, illustrates how the innovative systems and methods of the present embodiments based on the new option 9 function split are not only significantly advantageous over existing analog and digital fronthaul systems, but are also fully compatible to coexist with such other fronthaul systems in the same link (e.g., RoF). More particularly, when compared with the analog fronthaul solution, digital fronthaul techniques in general provide improved resilience against nonlinear impairments, and are capable of exploiting point-to-point fiber links, or may readily fit into existing networks, such as PONs. However, since the options 7 and 8 function splits transmit digital baseband signals over the fronthaul interface, TDM is needed to interleave the baseband I/Q components, as well as the components from multiple mobile signals. Therefore, time synchronization is an additional factor that must be addressed when considering the coexistence of legacy RATs with, for example, 5G-NR. In contrast, the present option 9 function split is configured to transmit a digital RF signal with the I/Q components thereof having been converted to radio frequencies. Accordingly, under the innovative option 9 function split systems and methods described herein, frequency division multiplexing (FDM) may be implemented to advantageously accommodate multiband mobile signals. As described in greater detail above, an experimental setup was implemented to demonstrate proofs of the concepts described herein. More specifically, the CRFF structure of filter1600,FIG.16A, allowed for real-time testing using system architecture2900,FIG.29, and pipeline architecture3800,FIG.38, within testbed4400,FIG.44. The experimental results from a variety of test cases (described in Tables 3 and 5, above) of this real-time implementation are described in detail with respect toFIGS.30A-34B.
As shown in the relevant experimental results, the present embodiments demonstrate how memoryless signal processing may be easily implemented using, for example, pipeline architecture3800, since the processing needed for each sample is dependent only on the current sample, and need not regard previous samples. Accordingly, after segmenting the input sample stream into several blocks, all blocks may be processed in parallel without performance penalty. In comparison, the present delta-sigma modulation embodiments provide for a sequential operation having a memory effect. According to the present techniques, the output bit not only depends on the current sample, but also on previous samples, which may introduce some performance penalty from the parallel processing. That is, it is expected that there is a performance penalty from segmenting the continuous sample stream into several blocks, with the penalty thereof increasing as the size of the blocks decreases. Nevertheless, the present techniques are configured to optimally implement a tradeoff between the performance penalty and the memory usage on the FPGA (e.g., a buffer size of W=20K, with a margin of ΔW=2K, was selected in the examples described above). A few of the modulation implementations4502,FIG.45, report some parallel processing techniques for high-speed, wide bandwidth delta-sigma modulators, such as polyphase decomposition (i.e., References [44], [45]), and look-ahead time-interleaving (i.e., References [47], [48]). The proof-of-concept experimental results demonstrate the broader principle achieved through pipeline processing using a large buffer size. The present inventors contemplate that this broader concept of parallel pipeline processing will be further improved through use of these other reported parallel processing techniques, which may further reduce the buffer size and processing latency in coordination with the systems and methods described herein. As described above, according to the CPRI specification, a single 20-MHz LTE carrier requires 30.72 MSa/s*15 bits/Sa*2=921.6 Mb/s fronthaul capacity without considering the control bit and line coding (8b/10b or 64b/66b). Therefore, CPRI may require up to 9.22 Gb/s or 12.9 Gb/s to support 10 or 14 LTE carriers, respectively. In the experimental test cases and results described above, the LTE carriers were encoded by a delta-sigma modulator and transmitted through a 5-Gb/s OOK link. Thus, in a straight comparison with CPRI, the present embodiments demonstrate clear data rate savings of 45.8% and 61.2%, respectively, over CPRI. Comparative advantages of the present embodiments are described further below with respect toFIG.50. FIG.50is a graphical illustration of a comparative summary plot5000of delta-sigma modulation bit efficiencies taken against bandwidth efficiencies. In an exemplary embodiment, summary plot5000depicts a graphical comparison of reported NGFI works5002(e.g., including delta-sigma modulation implementations, standard CPRI, and CPRI compression solutions) with proof-of-concept results5004according to the present embodiments. Detailed parameters of reported NGFI works5002and results5004are listed further below in Table 8. For a fair comparison between works5002and results5004, no control bit or line coding is considered in Table 8 (CPRI uses 15 quantization bits and one control bit for each sample, and exploits line coding of 8b/10b or 64b/66b).
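The CPRI rate arithmetic quoted in the preceding paragraph can be reproduced directly; the short Python sketch below restates it, with control bits and line coding ignored as in the text (variable names are illustrative).

```python
# Per-carrier CPRI fronthaul rate: 30.72 MSa/s * 15 bits/Sa * 2 (I and Q)
per_carrier_bps = 30.72e6 * 15 * 2      # = 921.6 Mb/s per 20-MHz LTE carrier
cpri_10 = 10 * per_carrier_bps          # ~9.22 Gb/s for 10 LTE carriers
cpri_14 = 14 * per_carrier_bps          # ~12.9 Gb/s for 14 LTE carriers

ook_link_bps = 5e9                      # 5-Gb/s OOK link used in the test cases
savings_10 = 1 - ook_link_bps / cpri_10 # ~45.8 % data rate saving
savings_14 = 1 - ook_link_bps / cpri_14 # ~61.2 % data rate saving
print(f"{per_carrier_bps/1e6:.1f} Mb/s, {savings_10:.1%}, {savings_14:.1%}")
```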
More particularly, since CPRI-based solutions provide smaller quantization noise and lower EVM than delta-sigma modulation techniques, summary plot5000provides a fairer comparison by introducing two measuring metrics: (i) bandwidth efficiency; and (ii) bit efficiency. In the embodiment depicted inFIG.50, bandwidth efficiency is defined as the ratio between the LTE signal bandwidth and the fronthaul data rate (i.e., measuring the signal bandwidth supported per unit of fronthaul capacity), and bit efficiency is defined as the ratio between the net information data rate carried by the LTE signals and the fronthaul data rate, which measures the mapping efficiency from fronthaul traffic to real mobile traffic. As demonstrated by results5004, and by References [60] and [61] of works5002, delta-sigma modulation shows high bandwidth efficiency in comparison with other techniques. That is, delta sigma modulation consumes significantly smaller fronthaul capacity for each unit of bandwidth of LTE signals. However, since CPRI-based solutions offer smaller EVM and higher SNR, and can support higher modulation and larger net information rate, summary plot5000also illustrates bit efficiency as a second evaluation metric. In this example, the bit efficiency gain of delta-sigma modulation is not shown to be as high as the corresponding bandwidth efficiency gain, due to the relatively high EVM and lower modulation formats. For the results listed in Table 8, below, it is assumed that the modulation of all CPRI-based solutions is 1024QAM. As indicated in plot5000, the optimal bandwidth efficiency has been so far demonstrated according to the delta-sigma modulation techniques of References [60] and [61], but the highest bit efficiency has been achieved according to the CPRI statistical compression techniques of References [18] and [19], by the present inventors, described above.
TABLE 8
NGFI | CPRI [9] | Statistical compression [18], [19] | Lloyd compression [20] | Delta-sigma modulation [60] | Delta-sigma modulation [61] | Delta-sigma modulation [61] | Results 5004 | Results 5004
Order | N/A | N/A | N/A | 2 | 4 | 4 | 4 | 4
Sampling rate (MSa/s) | 30.72 | 23.04 | 30.72 | 10,000 | 10,000 | 10,000 | 5,000 | 5,000
Bits | 15 | 8 | 8 | 1 | 1 | 2 | 1 | 1
Fronthaul data rate (Gbps) | 0.9216 | 0.36864 | 0.49152 | 10 | 10 | 20 | 5 | 5
LTE carrier # | 1 | 1 | 1 | 32 | 32 | 32 | 10 | 14
LTE bandwidth (MHz) | 18 | 18 | 18 | 576 | 576 | 576 | 180 | 252
Modulation | 1024 | 1024 | 1024 | 64*18, 16*14 | 256*16, 64*16 | 1024*10, 256*22 | 1024*4, 256*6 | 1024*2, 256*4, 64*8
Net information data rate (Gbps) | 0.18 | 0.18 | 0.18 | 2.952 | 4.032 | 4.968 | 1.584 | 1.8
Bandwidth efficiency (MHz/Gbps) | 19.5 | 48.8 | 36.6 | 57.6 | 57.6 | 28.8 | 36 | 50.4
Bandwidth efficiency gain w.r.t. CPRI | 1 | 2.5 | 1.875 | 2.95 | 2.95 | 1.48 | 1.85 | 2.58
Bit efficiency | 0.195 | 0.488 | 0.366 | 0.295 | 0.403 | 0.248 | 0.317 | 0.36
Bit efficiency gain w.r.t. CPRI | 1 | 2.5 | 1.875 | 1.51 | 2.07 | 1.27 | 1.63 | 1.85
A further comparison of the various function split options proposed by 3GPP and eCPRI is shown below in Table 9. Although the new option 9 function split embodiments occur at a lower level than the option 8 function split solutions, the new option 9 function split techniques provide significantly improved spectral efficiency and reduced NGFI data rates in comparison with CPRI. In comparison with other LLS options 6, 7, and 8, the new option 9 function split is better able to exploit an all-digital RF transceiver and centralize the high-RF layer in the DU, while eliminating the need for a DAC, LO, or RF mixer at the RRU. The new option 9 function split therefore enables a lower-cost, lower-power, and smaller-footprint cell site for small cell deployment, while also rendering SDR and virtualization of DU/RRU more achievable for improved compatibility and reconfigurability among multi-RATs.
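The bandwidth-efficiency and bit-efficiency figures in Table 8, above, follow directly from each column's parameters; the small Python sketch below reproduces the CPRI column as a check (function and variable names are illustrative).

```python
def fronthaul_metrics(lte_bandwidth_mhz, fronthaul_rate_gbps, net_rate_gbps):
    """Bandwidth efficiency (MHz/Gbps) and bit efficiency, per the units of Table 8."""
    bandwidth_efficiency = lte_bandwidth_mhz / fronthaul_rate_gbps
    bit_efficiency = net_rate_gbps / fronthaul_rate_gbps
    return bandwidth_efficiency, bit_efficiency

# CPRI reference column: one 18-MHz LTE carrier, 0.9216 Gb/s of fronthaul
# traffic, and 0.18 Gb/s of net mobile traffic -> approximately (19.5, 0.195)
print(fronthaul_metrics(18, 0.9216, 0.18))
```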
Unlike Ethernet packet based solutions (e.g., option 5, 6, 7 function splits), which are susceptible to packet delay, the new option 9 function split is configured to provide a stringent latency requirement, with a deterministic latency, which makes the new option 9 function split more suitable for radio coordination applications, such as CoMP and joint Tx/Rx, than conventional function split options. TABLE 93GPP/eCPRIoptions6/D7.3/ID7.2/IID, IU7.18 (CPRI)/E9zzzArchitectureMost distributedMore centralized on the rightMost centralizedRRU functionsPHY + RF layersLow-PHY + RF layersRF layerLow-RF layerRRU complexityHighestMedium (higher on the right)LowLowestNeed whole RF layer in RRU, including DAC, LO, mixerOnly need PAand BPFNGFI dataBaseband bitsFrequency domain I/QTime domain I/QBits after ΔΣsamplessamplesmodulationData rateLowest1/10 of CPRI (higher on the right)Highest¼~½ of CPRI[60, 61]Data rateTraffic dependent, antenna independentTraffic independent, antenna dependentscalabilityScale with MIMOScale with antennaLatencyLowestHigher latency requirement on the righthighestrequirementLatency varianceLarge due toLess variance and More deterministic on the rightSmall, mostEthernet packetdeterministicdelay,Least deterministic As described above, a known challenge to an all-digital transceiver and SDR implementation is the high processing speed thereof. That is, delta-sigma modulation requires a high OSR, and digital frequency up-conversion requires a clock rate of four times the carrier frequency. To circumvent the speed limit of existing CMOS or FPGA configurations, as described above, some parallel processing techniques have been reported (e.g., polyphase decomposition, look-ahead time-interleaving) that the present inventors anticipate to be additionally compatible with the new option 9 embodiments described herein. Given the wide frequency range of 5G from sub-1 GHz to millimeter wave, and various scenarios, such as enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (uRLLC), and massive machine type communication (mMTC), systems and methods according to the present new option 9 are expected to be particularly useful for low-frequency radio coordination/uRLLC scenarios due to the highly deterministic latency effects, and for low-frequency narrowband IoT (NB-IoT) scenarios by leveraging of low-cost, low-power, small-footprint cell sites. As described herein, systems and methods for a new NGFI function split option 9, based on all-digital RF transceiver using delta-sigma modulation, are provided. Despite the popularity of other low layer split options 6 (MAC-PHY),7(high-low PHY), and8(CPRI), the new function split option 9 exploits the design of an all-digital RF transceiver by splitting functions within the RF layer, with the high-RF layer thereof centralized in the DU, and the low-RF layer thereof distributed in the RRUs. An all-digital RF transmitter that implements the present techniques was experimentally demonstrated for LTE/5G signals using real-time bandpass delta-sigma modulation implemented by a Xilinx Virtex-7 FPGA, and a delta-sigma modulator operating at 5 GSa/s, which were able to encode LTE/5G signals with bandwidths up to 252 MHz and modulations up to 1024QAM, to a 5 Gb/s OOK signal transmitted from the DU to the RRU over 30-km fiber. 
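As noted above, the digital up-conversion stage of an all-digital transceiver requires a clock of four times the carrier frequency. One common illustration of why that ratio is convenient (offered here as a generic sketch, not necessarily the implementation used in the experiments above) is that at fs=4fc the quadrature carrier samples reduce to the repeating patterns [1, 0, -1, 0] and [0, 1, 0, -1], so the mixer needs only sign flips and zeroing rather than general multipliers.

```python
import numpy as np

def fs4_upconvert(i_samples, q_samples):
    # Quadrature up-conversion to fc = fs/4: cos and sin at fs/4 take only
    # the values {1, 0, -1}, so the "mixer" is multiplier-free.
    n = np.arange(len(i_samples))
    cos_seq = np.array([1, 0, -1, 0])[n % 4]
    sin_seq = np.array([0, 1, 0, -1])[n % 4]
    return i_samples * cos_seq - q_samples * sin_seq

# Example: a slowly varying baseband I/Q pair shifted up to fs/4.
n = np.arange(64)
i_bb = np.cos(2 * np.pi * 0.01 * n)
q_bb = np.sin(2 * np.pi * 0.01 * n)
rf_samples = fs4_upconvert(i_bb, q_bb)
```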
To relax the FPGA speed requirements, a 32-pipeline architecture for parallel processing was demonstrated herein, and four experimental test cases validate the feasibility of the new NGFI option 9 in real-time implementations (i.e., 5G two-carrier aggregation and LTE 14-carrier aggregation, having EVM performances satisfying the 3GPP requirement standards). A detailed comparison of the new NGFI option 9 against CPRI compression and other delta-sigma modulation techniques further validates the value of the present embodiments, in terms of bandwidth and bit efficiencies. Furthermore, new NGFI option 9 splits at a lower level than option 8, and offers improved efficiency in comparison with CPRI, while also reducing the fronthaul data rate requirement. Compared with LLS options 6, 7, and 8, the new NGFI option 9 split exploits a centralized architecture, with most RF layer functions consolidated in the DU, thereby eliminating the need for the DAC, LO, and RF mixer at the RRU, which enables a low-cost, low-power, small-footprint cell site for small cell deployment. Moreover, given its highly deterministic latency, new NGFI option 9 is more suitable for radio coordination applications than other LLS options. The present all-digital RF transceiver embodiments and experimental tests provide a clear pathway forward to implement SDR and virtualized DU/RRU for multi-RAT compatibility and evolution of new RATs. It is anticipated that the new NGFI option 9 will initially be of particular value for latency-sensitive applications, or low frequency and narrowband scenarios with cost/power sensitive cell sites, such as mMTC and NB-IoT. Digitization Interface for HFC Networks In an exemplary embodiment, an innovative digitization interface is further provided, which is capable of implementing the delta-sigma ADC techniques described above with respect to data over cable service interface specification (DOCSIS) 3.1 signals in hybrid fiber coax (HFC) networks. The present HFC digitization interface enables significantly more robust transmission of DOCSIS signals against noise/nonlinear impairments in comparison with conventional analog HFC networks. The present systems and methods thus support higher signal-to-noise ratio, larger modulation formats, longer fiber distances, and more WDM wavelengths. In an exemplary embodiment, the present delta sigma based digitization interface advantageously improves over conventional digitization interfaces that implement baseband digital forward/return (BDF/BDR) techniques by circumventing the data rate bottleneck with improved spectral efficiency, and by also eliminating the necessity of a DAC at each fiber node in an HFC network, which also enables a low-cost all-analog implementation of fiber nodes. The present systems and methods improve over conventional remote PHY architectures by centralizing PHY layer functions at the hub, and by eliminating the need for remote PHY devices (RPDs) distributed in each fiber node, which significantly reduces the cost and complexity of the fiber nodes. The present embodiments thus facilitate migration of HFC networks toward the fiber deep and node splitting architectures. The present delta-sigma ADC embodiments are thus further adapted herein for DOCSIS 3.1 signals to enable digital fiber distribution in HFC networks, such that mature digital fiber transmission technologies, such as intensity modulation/direct detection (IM/DD) and coherent transmission, may be more fully exploited.
In an embodiment, a flexible digitization interface based on delta-sigma ADC enables on-demand provisioning of data rate and carrier-to-noise ratio (CNR) for DOCSIS 3.1 signals in HFC networks. The present digitization interface is thus capable of replacing the conventional DAC with a passive filter, which not only advantageously reduces the cost and complexity of fiber nodes, but also enables a variable sampling rate, adjustable quantization bits, and reconfigurable frequency distribution of quantization noise by exploiting noise shaping techniques to render possible on-demand provisioning of modulations, data rate, and CNR for DOCSIS signals. As described above, video-intensive services, such as VR and immersive applications, are significantly driving the growth of data traffic at user premises, making access networks a bottleneck of user quality of experience. Various optical and wireless access technologies, such as PONs, RANs, and HFC networks, have been investigated to enhance the data rate and improve user experience. In the United States, there are more than 50 million subscribers using cable services for broadband access, which is 40% more than digital subscriber line (DSL) and fiber users. It is expected that DOCSIS over HFC networks will continue to dominate the broadband access market in the US, delivering the fastest access speeds to the broadest population. As a fifth-generation (5G) broadband access technology, the DOCSIS 3.1 specifications involve enhancements in both the PHY and MAC layers to support ultrahigh resolution videos (e.g., 4K/8K), mobile backhaul/fronthaul offloading, and other applications emerging from virtual reality and internet of things. The PHY layer signal is transformed from single-carrier QAM (SC-QAM) to OFDM for improved spectral efficiency, flexible resource allocation, and increased data rate with up to 10 Gb/s downstream and 1.8 Gb/s upstream per subscriber. With subcarrier spacing of 25/50 kHz, it can support channel bandwidths from 24 to 192 MHz for downstream, and 6.4 to 96 MHz for upstream, as well as high order modulations up to 4096QAM with optional support of 8192 and 16384QAM. However, the continuous envelope and high PAPR make DOCSIS 3.1 signals vulnerable to noise and nonlinear impairments, and the demanding CNR requirements of high order modulations (e.g., greater than 4096QAM) make it even more difficult for legacy analog fiber distribution networks to support DOCSIS 3.1 signals. FIG.51is a schematic illustration of an HFC network architecture5100. Architecture5100is somewhat similar to architecture100,FIG.1, except that, whereas architecture100is configured for a C-RAN, architecture5100is configured for an HFC network. Architecture5100includes a core network5102, a hub5104, one or more fiber nodes5106, and a plurality of end users5108(e.g., modems, CMs, etc.). Core network5102further includes at least one aggregation node5110. HFC architecture5100further includes a core segment5112, from aggregation node5110to hub5104, a fiber segment5114including a plurality of distributed fibers5116from hub5104to fiber node(s)5106, and a coaxial segment5118of distributed cables5120from fiber node(s)5106to the modems of end users5108. In operation, core segment5112transmits digital net bit information from aggregation node5110to hub5104. Fiber segment5114supports analog or digital fiber delivery of DOCSIS/video signals. Coaxial segment5118delivers analog signals over cables5120(e.g., coaxial cable plants) from fiber nodes5106to end users5108.
For fiber distribution networks of fiber segment5114, either analog or digital technologies may be exploited, including conventional legacy analog fiber links utilizing linear optics to transport analog DOCSIS/video signals; whereas a digital fiber link exploits a digitization interface utilizing (i) BDF/BDR to digitize the analog signals before fiber transmission, or (ii) a remote PHY architecture to synthesize the analog waveform at fiber nodes5106. Architectural implementations of fiber links/distribution networks for fiber segment5114are described further below with respect toFIGS.52-55. FIG.52is a schematic illustration depicting an analog fiber link architecture5200. In an embodiment, architecture5200includes a hub5202(e.g., including a headend) in operable communication with at least one fiber node5204over a transport medium5206(e.g., an SMF analog radio frequency over glass (RFoG)). In the example depicted inFIG.52, hub5202includes a data/video layer5208, an OFDM modulation/QAM modulation layer5210, a frequency multiplexing layer5212, and an analog E/O interface5214. Fiber node5204includes an RF amplifier5216and an analog O/E interface5218. In operation of analog fiber link architecture5200, DOCSIS and video signals are first aggregated in hub/headend5202, then delivered to fiber node5204by analog fiber link5206(e.g., linear fiber-optic links). At fiber node5204, the received analog DOCSIS signals are forwarded to CMs by cable distribution networks (e.g., cables5120,FIG.51). Analog fiber links are waveform agnostic, and may be used to support different services, including DOCSIS, MPEG, and analog TV. The analog fiber link architecture features a simple, low-cost implementation and high spectral efficiency, but is susceptible to noise and nonlinear impairments, offers limited SNR and CNR, and is limited to short fiber distances and small numbers of WDM wavelengths. Architecture5200is vulnerable to noise and nonlinear impairments, and therefore imposes high linearity requirements on the channel response, requiring complex RF amplifiers and bi-annual calibration of fiber nodes. Compared with conventional legacy analog fiber distribution networks, digital fiber links feature lower costs, higher capacities, longer transmission distances, and easier setup/maintenance. Upgrading fiber distribution networks from analog to digital offers the opportunity to leverage the mature digital transmission technologies, such as IM/DD and coherent modulation and detection. Moreover, the impairments of optical noise and nonlinear distortions may be more easily isolated from the received signal as soon as error-free transmission is achieved, so that larger CNR and higher-order modulations may be supported. Digital fiber links more easily support greater than 80 WDM channels, which facilitates the migration of HFC networks toward fiber deep and node splitting architectures. A comparison of digital fiber link technologies is described further below with respect toFIGS.53-55. FIG.53is a schematic illustration depicting a digital fiber link architecture5300based on a BDF/BDR digitization interface. In an embodiment, architecture5300includes a hub/headend5302in operable communication with at least one fiber node5304over a transport medium5306(e.g., an SMF). In the example depicted inFIG.53, hub5302includes a data/video layer5308, an OFDM modulation/QAM modulation layer5310, a frequency multiplexing layer5312, a Nyquist ADC5314, and an analog E/O interface5316.
Fiber node5304includes an RF amplifier5318, a Nyquist DAC5320, and an analog O/E interface5322. In operation of architecture5300, Nyquist ADC5314is inserted in hub5302to transform the analog DOCSIS/video waveforms into digital bits, which are then transmitted over digital fiber5306from hub5302to fiber node5304. Fiber node5304uses Nyquist DAC5320to retrieve the analog waveforms before feeding the analog waveforms to the coaxial cable plant (e.g., cables5120,FIG.51). Architecture5300represents a BDF/BDR interface that utilizes Nyquist ADC5314, with an oversampling ratio of 2.5 and 12 quantization bits, and thus features a simple, low-cost, and service-transparent implementation. Architecture5300, however, has low spectral efficiency, is framed by TDM, and cannot support Ethernet packet encapsulation. The interface of architecture5300therefore always runs at full data rate, even without a real payload, which renders traffic engineering and statistical multiplexing essentially impossible. Accordingly, conventional HFC networks only deploy upstream BDR, and no downstream BDF, and the BDR specifications are generally vendor proprietary and not interoperable. FIG.54is a schematic illustration depicting a digital fiber link architecture5400using remote PHY technology. In an embodiment, architecture5400includes a hub/headend5402in operable communication with at least one fiber node5404over a transport medium5406(e.g., an SMF). In the example depicted inFIG.54, hub5402includes a converged cable access platform (CCAP) core layer5408, a data/video layer5410, an Ethernet packetization layer5412, and an analog E/O interface5414. Fiber node5404includes an RF amplifier5416, an OFDM modulation/QAM modulation layer5418, a remote PHY circuit5420, and an analog O/E interface5422. In operation of architecture5400, a digital fiber link exploits remote PHY technology. PHY hardware (e.g., remote PHY device (RPD), chips) for OFDM/QAM modulation/demodulation are moved from hub5402to fiber node5404, and the legacy integrated CCAP in hub5402is separated into CCAP core layer5408in hub5402and the RPD of remote PHY circuit5420in fiber node5404. In the downstream transmission, payload and control bits are packetized into Ethernet packets and transmitted from hub5402to fiber node5404, where the RPD performs OFDM/QAM modulation to synthesize the analog DOCSIS/MPEG signals for coaxial cable distribution (e.g., cables5120,FIG.51). In the upstream transmission, the RPD performs OFDM/QAM demodulation to interpret the received analog DOCSIS signals to baseband bits, then packetizes and transmits these bits back to hub5402. Utilizing Ethernet packetization, Ethernet packetization layer5412of remote PHY architecture5400exploits existing mature Ethernet technologies (e.g., Ethernet PON (EPON), gigabit PON (GPON), and metro Ethernet), and enables traffic engineering and statistical multiplexing. In comparison with other digital solutions, remote PHY links feature reduced traffic payload in fiber5406, but at the penalty of increased complexity and cost of fiber node5404due to the distributed RPD. Although architecture5400maintains the least compatibility with hubs in the legacy analog HFC networks, remote PHY architectures are waveform dependent, and are not transparent to different services.
The present systems and methods therefore improve upon digital link solutions by providing an innovative digitization interface based on delta sigma ADC, which advantageously replaces existing BDF/BDR interfaces, and further offers more effective solutions than conventional remote PHY technologies by improving spectral efficiency and simplifying the fiber node design, while also enabling on-demand provisioning of modulations, data rate, and CNR for DOCSIS signals. In comparison with BDF/BDR, the present delta sigma ADC configurations circumvent data rate bottlenecks by reducing the data traffic load, and by eliminating the need for the DAC at each fiber node, thereby enabling a low-cost all-analog implementation of fiber nodes. In comparison with remote PHY architectures, the present delta sigma ADC configurations enable the centralization of all PHY functions in the hub and removal of the need for RPDs distributed in fiber nodes, thereby significantly reducing the cost and complexity of fiber nodes, while also facilitating fiber deep migration. Different from conventional Nyquist ADC/DAC, which use fixed sampling rates and quantization bit numbers, the present delta sigma ADC digitization interface provides flexible sampling rates and quantization bits, and is further able to utilize noise shaping techniques to manipulate the frequency distribution of quantization noise to make on-demand CNR provisioning possible. An exemplary delta sigma ADC interface is described further below with respect toFIG.55. FIG.55is a schematic illustration depicting a digital fiber link architecture5500using delta sigma digitization. In an embodiment, architecture5500includes a hub/headend5502in operable communication with at least one fiber node5504over a transport medium5506(e.g., an SMF). In the example depicted inFIG.55, hub5502includes a data/video layer5508, an OFDM modulation/QAM modulation layer5510, a frequency multiplexing layer5512, a delta sigma ADC5514, and an analog E/O interface5516. Fiber node5504includes an RF amplifier5518, a passive filter DAC5520, and an analog O/E interface5522. In comparison with the operation of architecture5300,FIG.53, architecture5500replaces Nyquist ADC5314in hub5302and Nyquist DAC5320in fiber node5304of the BDF/BDR interface with delta-sigma ADC5514in hub5502to digitize DOCSIS signals into bits, and passive filter DAC5520in fiber node5504to retrieve the analog waveforms for coaxial cable transmission. As described above, different from Nyquist ADC, delta-sigma ADC features high sampling rates and few (i.e., one or two) quantization bits, and exploits noise shaping techniques to manage the frequency distribution of noise. Therefore, in the embodiment depicted inFIG.55, the CNR of DOCSIS signals may be adjusted according to the desired data rate and modulation formats. Moreover, a simplified DAC design based on passive filters (e.g., passive filter DAC5520) may be used at fiber node5504to filter out the desired signals, which simultaneously eliminates the OoB quantization noise and retrieves the analog waveforms. Thus, the delta-sigma digitization interface of architecture5500makes a simplified and relatively low-cost fiber node possible, where the DAC and channel de-multiplexing are both carried out by a simple, low-cost filter.
Due to the tree/star architecture of HFC networks, a high-speed delta-sigma ADC (e.g., delta sigma ADC5514) in a hub (e.g., hub5502) may be shared among multiple fiber nodes (e.g., fiber node5504), and each such fiber node only requires a low-cost filter (e.g., passive filter DAC5520) to retrieve analog waveforms. Since an actual network will include significantly more fiber nodes than hubs (a number that continues to grow due to fiber deep and node splitting migrations), the replacement of the DAC by a filter significantly reduces the cost and complexity of the fiber nodes. A comparison of the operation principles of Nyquist ADC and delta-sigma ADC is further described below with respect toFIGS.56and57. FIG.56is a graphical illustration depicting a conventional digitization process5600. Process5600is similar to process500,FIG.5, but is implemented with respect to five DOCSIS 3.1 channels5602, each with a 192 MHz bandwidth, in this example. In operation, process5600bandwidth-limits and digitizes an input analog signal5604of corresponding frequency domain signals of channels5602using a low-pass filter. After digitization, quantization noise5606is spread evenly over the Nyquist zone fS/2. In the time domain, process5600performs Nyquist sampling5608of analog signal5604(i.e., at the Nyquist frequency), and quantizes the obtained samples by multiple quantization bits to produce multi-bit quantization signal5610. In the BDF/BDR interface (e.g., architecture5300,FIG.53), the Nyquist ADC (e.g., Nyquist ADC5314) has a 2.5× oversampling ratio and 12 quantization bits per sample. Given five DOCSIS 3.1 channels5602, each having a 192 MHz bandwidth, the total frequency range is 258-1218 MHz (i.e., 5×192 MHz=960 MHz), and the required sampling rate is 1218 MHz×2.5=3.045 GHz. With 12 bits per sample, the required data rate to digitize all five DOCSIS channels5602is 3.045×12=36.54 Gb/s. FIG.57is a graphical illustration depicting operational principles of a digitization process5700. Process5700is similar to process1100,FIGS.11A-D, but is implemented with respect to five DOCSIS 3.1 channels5702, and may be implemented by a processor of a hub or headend. Process5700includes an oversampling subprocess5704, a noise shaping subprocess5706, and a filtering subprocess5708. In exemplary operation, process5700includes a limited number (i.e., one or two) of quantization bits5710, and due to the limited number of quantization bits5710, there is a significant amount of quantization noise5712if oversampling is not used. In subprocess5704, oversampling5714extends the Nyquist zone and spreads quantization noise5712′ over a wide frequency range, so that in-band noise is reduced. In subprocess5706, noise shaping subprocesses5716(one-bit or two-bit) push quantization noise5712′ out of the signal band, such that signal and noise are separated in the frequency domain. In subprocess5708, at the fiber node, a passive filter5718(e.g., a BPF) may be used to filter out the desired channel(s)5702, while simultaneously eliminating out-of-band quantization noise, such that a retrieved analog signal5720substantially approximates the initial analog waveform of the input channels5702. Whereas Nyquist ADC techniques have evenly distributed quantization noise, the delta-sigma ADC techniques of process5700implement a shaped noise distribution, such that retrieved analog signal5720has an uneven noise floor.
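The Nyquist-ADC data-rate figure derived above, and the line rates of the corresponding delta-sigma streams, follow from a few multiplications; the sketch below simply restates that arithmetic.

```python
# BDF/BDR Nyquist digitization of the five 192-MHz DOCSIS 3.1 channels
# occupying 258-1218 MHz, as computed above.
highest_freq_ghz = 1.218        # top of the occupied band, GHz
oversampling_ratio = 2.5        # BDF/BDR Nyquist ADC oversampling ratio
bits_per_sample = 12            # BDF/BDR quantization bits

sampling_rate_gsps = highest_freq_ghz * oversampling_ratio    # 3.045 GSa/s
bdf_rate_gbps = sampling_rate_gsps * bits_per_sample          # 36.54 Gb/s
print(sampling_rate_gsps, bdf_rate_gbps)

# The delta-sigma interface instead transmits (sampling rate) x (quantization
# bits); e.g., 16 GSa/s with one bit is a 16 Gb/s OOK stream, while 32 GSa/s
# with two bits is a 32 Gbaud PAM4 stream carrying 64 Gb/s.
for fs_gsps, quant_bits in ((16, 1), (32, 1), (32, 2)):
    print(fs_gsps, quant_bits, fs_gsps * quant_bits)
```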
As described above with respect to the RAN embodiments, delta-sigma ADC trades quantization bits for sampling rate, using a high sampling rate, but only a few (one or two) quantization bits. The difference between Nyquist ADC and delta-sigma ADCs may be further explained in the time domain. For example, a Nyquist ADC samples an analog input at a Nyquist rate and quantizes each sample individually, whereas a delta-sigma ADC samples the analog input at a much higher rate and quantizes samples consecutively. One-bit delta-sigma digitization in the DOCSIS paradigm thus outputs a high data rate OOK signal with a density of "1" bits proportional to the amplitude of the analog input. Accordingly, for a maximum input, the output will be almost all "1"s, and for a minimum input, the output will be almost all "0"s. For intermediate inputs, densities of "0" and "1" will be almost equal. Two-bit digitization, though, will output a PAM4 signal. As a waveform-agnostic interface, the present delta-sigma ADC techniques may therefore be effectively implemented with respect to not only DOCSIS signals, but also with respect to other OFDM or multicarrier waveforms. The present delta-sigma digitization interface is waveform agnostic and transparent to different services, whereas remote PHY interfaces are limited by the service-specific OFDM modulator/demodulator in the RPD at each fiber node, which is not transparent to different services. A comparison of different fiber distribution/HFC network analog/digital fiber links is shown below in Table 10. TABLE 10AnalogDigitalInterfaceLinear opticsBDF/BDRRemote PHYDelta-sigmadigitizationOperationAnalog RFNyquist AD/DA in hubMove PHY circuits ofDelta-sigma ADC inprinciplesover fiberand fiber nodeOFDM/QAMhubmodulation/demodulation toPassive filter in fiberfiber nodenode as DACTransmit net information bitsover fiberAD/DAN/A2.5 oversampling ratioN/AHigh sampling rate12 quantization bits1-2 quantization bitsProsSimple implementationLow cost, high capacity, large SNR, high order modulation, long distance,High spectralscalability, easy setup/maintenance, many WDM wavelengths, facilitate node splitefficiencyand fiber deep migrationWaveform/service agnosticNo modification of bitsEthernet packet encapsulationNo modification ofWaveform/serviceStatistical multiplexingbitsagnosticReduced fiber traffic loadWaveform/serviceSimple, low-costagnosticAD/DALow-cost DA basedon passive filtersConsShort distanceLow spectral efficiencyModification of bitsHigh cost delta-sigmaSmall capacityNo EthernetNot transparent toADCSNR limitedencapsulationwaveform/servicebyAlways run at full dataIncreased complexity/cost ofnoise/nonlinearitiesratefiber nodesFew WDMVendor proprietarywavelengths FIG.58is a graphical illustration of delta sigma digitization waveform distributions5800,5802. Waveform distributions5800,5802represent the results of a proof-of-concept real-world implementation using five DOCSIS 3.1 downstream channels (e.g., channels5702,FIG.57, each with 192-MHz bandwidth, occupying a total 960 MHz bandwidth in the range of 258-1218 MHz). Waveform distribution5800thus represents the results using a one-bit delta-sigma ADC, and waveform5802represents results using a two-bit delta-sigma ADC, each having a sampling rate of 32 GSa/s and generating a 32 Gbaud OOK or PAM4 signal, respectively.
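The time-domain behavior described above, in which the density of "1" bits tracks the input amplitude and a simple filter recovers the waveform, can be reproduced with even the simplest modulator. The sketch below uses a first-order, one-bit loop purely for illustration; the modulators characterized in this description are fourth-order CRFB designs, and the moving-average filter stands in for the passive filter DAC at the fiber node.

```python
import numpy as np

def first_order_delta_sigma(x):
    # One-bit, first-order delta-sigma modulator: integrate the error between
    # the input and the fed-back output, then quantize to +/-1.
    out = np.empty(len(x))
    integrator = 0.0
    feedback = 0.0
    for i, sample in enumerate(x):
        integrator += sample - feedback
        out[i] = 1.0 if integrator >= 0.0 else -1.0
        feedback = out[i]
    return out

# Density of "1" bits tracks the input amplitude, as described above:
# almost all ones for a large input, about half for a mid-range input,
# almost all zeros (here, -1s) for a small input.
for level in (0.95, 0.0, -0.95):
    bits = first_order_delta_sigma(np.full(5000, level))
    print(level, round(float(np.mean(bits > 0)), 2))

# A simple low-pass (moving-average) filter recovers a slowly varying,
# well-oversampled input from the one-bit stream far better than the raw bits.
n = np.arange(20000)
analog_in = 0.5 * np.sin(2 * np.pi * n / 2000.0)
bits = first_order_delta_sigma(analog_in)
recovered = np.convolve(bits, np.ones(64) / 64.0, mode="same")
print(round(float(np.max(np.abs(bits - analog_in))), 2),
      round(float(np.max(np.abs(recovered - analog_in))), 2))
```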
More particularly, within waveform distribution5800, an input analog DOCSIS signal5804is subjected to delta-sigma digitization by a delta sigma ADC OOK signal5806, resulting in a retrieved analog signal5808after application of respective filters (e.g., LPF or BPF). In a similar manner, within waveform distribution5802, an input analog DOCSIS signal5810is subjected to delta-sigma digitization by a delta sigma ADC PAM4 signal5812, resulting in a retrieved analog signal5814after application of respective filters. In both of waveform distributions5800,5802, the input analog signals5804,5810are substantially equivalent to the respective retrieved analog signals5808,5814, indicating that each real-world implementation of the delta-sigma ADC digitization interface introduced no significant impairment. With respect to waveform distribution5802specifically, the waveform of PAM4 signal5812resulted in more ±1 symbols than ±3 symbols since, as an OFDM signal, a DOCSIS 3.1 signal has a Gaussian distribution, which produces more small samples than large samples. Accordingly, PAM4 signal5812, after digitization, has an unequal distribution of ±1/±3 symbols (i.e., more than 80% of the symbols were ±1s, and fewer than 20% were ±3s). Therefore, in this example, to equalize the symbol distribution, a scrambler (not shown) was used to produce a scrambling signal5816that effectively equalized the symbol distribution. FIG.59Ais a graphical illustration depicting an I-Q plot5900for an NTF. In this example, I-Q plot5900is similar to I-Q plot2100,FIG.21A, but represents the DOCSIS paradigm described herein. In an exemplary embodiment, I-Q plot5900illustrates the respective zeros and poles of a fourth-order NTF for a delta-sigma ADC in a DOCSIS HFC network.FIG.59Bis a graphical illustration depicting a frequency response5902of the NTF for I-Q plot5900,FIG.59A. In an exemplary embodiment, frequency response5902is otherwise similar to frequency response2102,FIG.21B, and includes two null points5904.FIG.59Cis a schematic illustration of a cascade resonator feedback filter5906. Feedback filter5906represents a Z-domain block diagram of a fourth-order 32 GSa/s delta-sigma ADC based on a cascade-of-resonators feedback (CRFB) structure, and is employed with respect to the embodiments depicted inFIGS.59A-B. In this example, feedback filter5906is essentially the same for one-bit and two-bit digitization implementations, with the only difference between the two respective implementations being the selection of the particular output quantizer5908and feedback DAC5910. Feedback filter5906is otherwise similar to filter1600,FIG.16A, which description is thus not repeated with respect toFIG.59C. In an exemplary embodiment, the order number of feedback filter5906may be determined by the number of integrators or feedback/feedforward loops in a delta-sigma ADC, which will be equal to the number of zeroes and poles of the NTF of I-Q plot5900(two such conjugate pairs of zeroes and poles are illustrated inFIG.59A). The number of quantization bits will then determine how many levels are output by the ADC. For example, a log2(N)-bit quantizer can output N levels, and one-bit and two-bit quantizers output OOK and PAM4 signals, respectively. Accordingly, as illustrated inFIG.59B, frequency response5902of the NTF may serve as a high-pass filter (HPF) to push the quantization noise to the high frequency end and separate the noise from the signal.
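The high-pass character of the NTF can be checked numerically from its zeros and poles alone. The sketch below evaluates |NTF| on the unit circle using the 32-GSa/s zero/pole values that appear in Table 13 further below (any fourth-order NTF could be substituted); the zeros sit at in-band angles and produce the null points, while the response rises toward the high-frequency end.

```python
import numpy as np

# NTF(z) = prod(z - z_i) / prod(z - p_i), evaluated on the unit circle.
# Zero/pole values are the 32-GSa/s (Cases IX'/X') entries of Table 13 below.
zeros = np.array([np.exp(1j * 0.033 * np.pi), np.exp(-1j * 0.033 * np.pi),
                  np.exp(1j * 0.0683 * np.pi), np.exp(-1j * 0.0683 * np.pi)])
poles = np.array([0.6465 + 0.1120j, 0.6465 - 0.1120j,
                  0.7620 + 0.3443j, 0.7620 - 0.3443j])

fs = 32e9
freqs = np.linspace(0, fs / 2, 2000)                  # 0 .. Nyquist
z = np.exp(1j * 2 * np.pi * freqs / fs)
ntf = np.prod(z[:, None] - zeros, axis=1) / np.prod(z[:, None] - poles, axis=1)
ntf_db = 20 * np.log10(np.abs(ntf))

# Quantization noise is strongly suppressed inside the 258-1218 MHz DOCSIS
# band (deep notches at the zero angles) and rises toward the high-frequency
# end of the Nyquist zone, as described above.
in_band = (freqs >= 258e6) & (freqs <= 1218e6)
print(round(ntf_db[in_band].max(), 1), round(ntf_db[~in_band].max(), 1))
```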
The performance of the delta-sigma digitization techniques ofFIGS.59A-Cwas evaluated using CNR as a measurement of the retrieved analog DOCSIS signals. Table 11, below, lists the required CNR values for different modulations according to the DOCSIS 3.1 specification. Higher order modulations require higher CNR values, and there is a 0.5 dB increment (shown in parentheses) for the fifth channel above 1 GHz (i.e., 1026-1218 MHz). The maximum modulation included in the DOCSIS 3.1 specification is 4096QAM, and therefore the CNR values for 8192QAM and 16384QAM are not yet specified. Accordingly, CNR values of 44 dB and 48 dB are respectively listed below based on an extrapolation criterion. That is, 44 (44.5) and 48 (48.5) dB were used as temporary criteria to generate the real-world experimental results described below, which obtained the CNR measurements using the respective MER.
TABLE 11
QAM        16    64    128    256    512    1024    2048         4096         8192*        16384
CNR (dB)   15    21    24     27     30.5   34      37 (37.5)    41 (41.5)    44 (44.5)    48 (48.5)
By exploiting different sampling rates and one or two quantization bits, 10 exemplary scenarios were demonstrated to verify the flexibility of the several delta-sigma digitization interfaces, as listed below in Table 12.
TABLE 12
Case    Sampling rate (GSa/s)    Quantization bits    Waveform    Channel =>     1        2        3        4        5
I′      16                       1                    OOK         Modulation     128      1024     128      128      128
                                                                   CNR (dB)       25.3     35.2     25.6     25.7     25.6
II′     16                       2                    PAM4        Modulation     512      4096     512      512      512
                                                                   CNR (dB)       32.6     42.6     32.9     33       32.9
III′    20                       1                    OOK         Modulation     256      2048     256      256      256
                                                                   CNR (dB)       30.2     39.9     30.1     29.9     30.1
IV′     20                       2                    PAM4        Modulation     2048     16384    2048     2048     2048
                                                                   CNR (dB)       39.9     49.4     39.8     39.6     39.7
V′      24                       1                    OOK         Modulation     1024     8192     1024     1024     1024
                                                                   CNR (dB)       34.8     44.7     35       34.7     34.7
VI′     24                       2                    PAM4        Modulation     4096     16384    4096     4096     4096
                                                                   CNR (dB)       43.4     52.7     43.5     43.2     43
VII′    28                       1                    OOK         Modulation     2048     16384    2048     2048     2048
                                                                   CNR (dB)       39.5     49.5     39.9     39.4     39.9
VIII′   28                       2                    PAM4        Modulation     16384    16384    16384    16384    16384
                                                                   CNR (dB)       48.8     58.4     49.3     48.7     49.1
IX′     32                       1                    OOK         Modulation     8192     16384    8192     4096     8192
                                                                   CNR (dB)       44.6     53.9     44.5     43.9     45.3
X′      32                       2                    PAM4        Modulation     16384    16384    16384    16384    16384
                                                                   CNR (dB)       53.6     61.6     53.3     52.5     53.2
As listed in Table 12, the respective ADC sampling rates were chosen from 16, 20, 24, 28, and 32 GSa/s, with Cases I′, III′, V′, VII′, IX′ using one quantization bit, and Cases II′, IV′, VI′, VIII′, and X′ using two quantization bits. The data was measured for five channels (i.e., numbered 1-5) having different CNRs due to the uneven noise floor. Accordingly, different modulations are assigned to the five channels according to the CNR requirements in Table 11. Higher sampling rates lead to a wider Nyquist zone and smaller in-band quantization noise, and therefore higher CNR values may be achieved for higher order modulations. For example, with one quantization bit, Case I′ (i.e., 16 GSa/s) supports four 128QAM channels and one 1024QAM channel due to the limited CNR. However, when the sampling rate is increased to 32 GSa/s, as in Case IX′, one 16384QAM channel, one 4096QAM channel, and three 8192QAM channels are supported due to the wider Nyquist zone and smaller in-band noise. Two-bit quantization, on the other hand, will always result in relatively lower noise than one-bit quantization, and will therefore support relatively higher order modulations, as demonstrated by Cases VIII′ and X′, in which all five channels exhibit sufficient CNR to support 16384QAM because of the additional quantization bit. These ten exemplary case scenarios therefore demonstrate the flexibility of the delta-sigma digitization interfaces in terms of the sampling rate, quantization bits, and noise distribution utilizing noise shaping techniques.
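The channel-by-channel modulation assignments in Table 12 follow mechanically from the CNR requirements of Table 11; the sketch below encodes that lookup (the helper name is illustrative, and the 8192/16384QAM entries use the extrapolated criteria noted above) and applies it to the Case I′ measurements.

```python
# Required CNR per modulation order from Table 11 (sub-1-GHz column; channels
# above 1 GHz add the 0.5-dB increment noted above).  The 8192/16384QAM
# entries are the extrapolated temporary criteria described in the text.
REQUIRED_CNR_DB = {16: 15, 64: 21, 128: 24, 256: 27, 512: 30.5, 1024: 34,
                   2048: 37, 4096: 41, 8192: 44, 16384: 48}

def max_qam(cnr_db):
    supported = [qam for qam, req in REQUIRED_CNR_DB.items() if cnr_db >= req]
    return max(supported) if supported else None

# Case I' (16 GSa/s, one bit) measured CNRs from Table 12:
for channel, cnr in zip(range(1, 6), (25.3, 35.2, 25.6, 25.7, 25.6)):
    print(channel, cnr, max_qam(cnr))
# -> Channels 1 and 3-5 support 128QAM and Channel 2 supports 1024QAM,
#    matching the Case I' modulation assignment in Table 12.
```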
Accordingly, on-demand modulation and data rate, as well as CNR provisioning, are achieved by the present delta-sigma interface, which represents a significant improvement over conventional interfaces. In this exemplary implementation, all ten cases listed in Table 12 utilized a fourth-order delta-sigma ADC based on a CRFB structure, and having a Z-domain block diagram according to the embodiment described below with respect toFIG.60. FIG.60is a schematic illustration of an alternative CRFB filter6000. In an exemplary embodiment, CRFB filter6000implements a fourth-order delta-sigma ADC similar to feedback filter5906,FIG.59C, utilizing a fourth-order NTF having four zeroes and four poles, similar to I-Q plot5900,FIG.59A. Also similar to feedback filter5906, the number of integrators in CRFB filter6000equals the order number, and CRFB filter6000further includes an input6002and an output6004, four feedforward coefficients a, two integrators6006that each include a pair of z−1delay cells6008(e.g., 1/(z−1) and z/(z−1)), two feedback coefficients g, a DAC recursion6010, and a quantizer6012at the output for determining how many levels the ADC can output. As described above, a log2(N)-bit quantizer is capable of outputting N levels, and thus a one-bit quantizer6012may output an OOK signal, and a two-bit quantizer6012may output a PAM4 signal. NTF design parameters for the ten Case scenarios listed in Table 12, above, are shown below in Table 13, including values for OSR, zeroes, and poles. Higher sampling rates lead to higher OSR values, as well as a wider Nyquist zone, which enables the implemented noise shaping techniques to more easily reduce the in-band quantization noise.
TABLE 13
Case          Sampling rate (GSa/s)    OSR    Zeroes                              Poles
I′, II′       16                       7      exp(±j0.066π), exp(±j0.134π)        0.6072 ± j0.1196, 0.7206 ± j0.3792
III′, IV′     20                       8      exp(±j0.0525π), exp(±j0.108π)       0.6118 ± j0.1187, 0.7257 ± j0.3752
V′, VI′       24                       10     exp(±j0.044π), exp(±j0.09π)         0.6424 ± j0.1129, 0.7578 ± j0.3480
VII′, VIII′   28                       11     exp(±j0.038π), exp(±j0.0775π)       0.6441 ± j0.1125, 0.7596 ± j0.3465
IX′, X′       32                       13     exp(±j0.033π), exp(±j0.0683π)       0.6465 ± j0.1120, 0.7620 ± j0.3443
A design process for the present flexible digitization HFC-type interface is described further below with respect toFIG.61. FIG.61is a flow diagram for a digitization process6100. Digitization process6100is similar to digitization process1200,FIG.12, but implemented with respect to the DOCSIS paradigm. In an exemplary embodiment, digitization process6100may be implemented as a series of logical steps. The person of ordinary skill in the art, though, will understand that except where indicated to the contrary, one or more of the following steps may be performed in a different order and/or simultaneously. In the exemplary embodiment, process6100begins at step6102, in which the DOCSIS data rate requirements are obtained. In step6104, process6100selects the number of channels. In step6106, process6100selects the DOCSIS modulation format(s) applicable to the obtained data rate and the selected channels. In step6108, process6100determines the CNR requirements (e.g., including sampling rates) of each channel according to the DOCSIS 3.1 specifications, and in consideration of the channels and modulation format(s) selected. In step6110, process6100may additionally obtain the particular DOCSIS requirements such that the performance of each channel may be maintained according to the current DOCSIS 3.1 standard. Step6110may, for example, be performed before, after, or simultaneously with step6108.
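A zero of the NTF at exp(±jθπ) corresponds to a noise notch at f=(θ/2)·fs, so the Table 13 zero placements translate directly into notch frequencies; the short sketch below shows that for every sampling rate the two null points fall inside the 258-1218 MHz DOCSIS band (at roughly 525-532 MHz and 1072-1093 MHz).

```python
# Zero angles (theta, for zeros at exp(+/-j*theta*pi)) from Table 13, keyed by
# sampling rate; a zero at exp(+/-j*theta*pi) places a noise notch at
# f = (theta / 2) * fs.
TABLE_13_ZERO_ANGLES = {16e9: (0.066, 0.134), 20e9: (0.0525, 0.108),
                        24e9: (0.044, 0.09), 28e9: (0.038, 0.0775),
                        32e9: (0.033, 0.0683)}

for fs, thetas in TABLE_13_ZERO_ANGLES.items():
    notches_mhz = [theta / 2 * fs / 1e6 for theta in thetas]
    print(int(fs / 1e9), "GSa/s ->", [round(f, 1) for f in notches_mhz], "MHz")
```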
After the CNR requirements are determined, process6100may implement separate sub-process branches. In an exemplary first branch/subprocess, in step6112, process6100determines the quantization bit number. In an exemplary embodiment, step6114may be performed in an exemplary second branch/subprocess. In step6114, process6100calculates the zeros and poles for the NTF. In step6116, process6100determines the NTF and distribution of quantization noise in the frequency domain corresponding to the zeros and poles selected in step6114. In step6118, process6100implements a logical Z-domain block filter configuration having an order corresponding to the number of zeros of the NTF. In step6120, process6100configures the delta-sigma ADC from the quantization bits determined in step6112and from the Z-domain block configuration implemented in step6118. FIG.62Ais a graphical illustration depicting an I-Q plot6200for an NTF. In the exemplary embodiment, I-Q plot6200is similar to I-Q plot5900,FIG.59A, and illustrates the respective zeros and poles of a fourth-order NTF for the Cases I′ and II′ implementation scenarios (sampling rate of 16 GSa/s) of Tables 12 and 13, above, for a filter such as CRFB filter6000,FIG.60. Indeed, for ease of illustration, the following description does not include details of graphical plots for all ten Case scenarios from Tables 12 and 13, but is instead intended to emphasize distinctions of utilizing the flexible on-demand provisioning capabilities of the present delta-sigma ADC digital interface within the DOCSIS paradigm, or similar channel-based access networks. The person of ordinary skill in the art will understand, in light of the additional embodiments described above, how the a, b, and g coefficients may be differently tuned to accommodate the CNR requirements of selected carriers.FIG.62Bis a graphical illustration depicting a frequency response6202of the NTF for I-Q plot6200,FIG.62A. Similar to the embodiments depicted inFIGS.59A-B, I-Q plot6200includes two conjugate pairs (four in total) of zeroes and poles, and frequency response6202of the NTF acts as an HPF to push the quantization noise to the high frequency end and separate the noise from the signal. FIG.63Ais a graphical illustration depicting a spectral plot6300for an exemplary set of channels. In an exemplary embodiment, spectral plot6300represents further experimental power spectral density results for Case I′, Tables 12 and 13, above, in which Channels 1-5 are sampled at a sampling rate of 16 GSa/s. A first subplot6302illustrates the RF spectra of the five input 192 MHz DOCSIS channels, and a second subplot6304illustrates the RF spectra of the five 192 MHz DOCSIS channels after application of delta-sigma digitization at a sampling rate of 16 GSa/s with a one-bit OOK signal. Accordingly, the Nyquist zone of spectral plot6300is 0-8 GHz, where the five 192-MHz DOCSIS channels occupy a frequency range6306of 258-1218 MHz. After delta-sigma digitization, the signal spectrum of spectral plot6300remains intact, that is, first and second subplots6302,6304substantially align within the signal band represented by frequency range6306, but the quantization noise is pushed out of the signal band. FIG.63Bis a graphical illustration depicting a plot6308of MER performance for the set of channels depicted inFIG.63A(i.e., Case I′). More particularly, plot6308illustrates the MER of the five DOCSIS channels of plot6300,FIG.63A, which was used in the experimental results listed in Tables 12 and 13, above, as a measurement of CNR.
As demonstrated by the exemplary results illustrated inFIG.63B, Channel 2 exhibits MER greater than 34 dB, and therefore Channel 2 has sufficient CNR to support 1024QAM modulation. The remaining four channels (i.e., Channels 1 and 3-5) exhibit MER greater than 24 dB, and therefore support modulations of 128QAM.FIGS.63C-Dare graphical illustrations respectively depicting a post-transmission constellation plot6310for Channel 2 (1024QAM, MER=35.2 dB), and a post-transmission constellation plot6312for Channel 1 (128QAM, MER=25.3 dB),FIG.63A. In the exemplary results depicted inFIG.63D, constellation plot6312is only shown for Channel 1 as the worst case example (MER=25.3 dB). Nevertheless, constellation plot6312is generally representative of post-transmission results for Channels 3-5 as well. FIG.64Ais a graphical illustration depicting a spectral plot6400for an exemplary set of channels. In the exemplary embodiment, spectral plot6400represents further experimental power spectral density results for Case II′, Tables 12 and 13, above, in which Channels 1-5 are again sampled at a sampling rate of 16 GSa/s, but for a two-bit PAM4 signal. Similar to spectral plot6300,FIG.63A, a first subplot6402illustrates the RF spectra of the five input 192 MHz DOCSIS channels, and a second subplot6404illustrates the RF spectra of the five 192 MHz DOCSIS channels after application of the PAM4 delta-sigma digitization. Accordingly, the Nyquist zone of spectral plot6400is again 0-8 GHz, where the five 192-MHz DOCSIS channels occupy a frequency range6406of 258-1218 MHz. After delta-sigma digitization, the signal spectrum of spectral plot6400remains intact, and first and second subplots6402,6404substantially align within the signal band represented by frequency range6406, with the quantization noise again being pushed out of the signal band. FIG.64Bis a graphical illustration depicting a plot6408of MER performance for the set of channels depicted inFIG.64A(i.e., Case II′). More particularly, plot6408illustrates the MER (thus also providing the CNR) of the five DOCSIS channels of plot6400,FIG.64A. As demonstrated by the exemplary results illustrated inFIG.64B, with the additional quantization bit of the two-bit PAM4 signal, the quantization noise is significantly reduced in relation to Case I′, and a higher CNR may be achieved to support higher modulation orders. Accordingly, in this example, the MER of Channel 2 can be seen to increase to 42.6 dB, and therefore Channel 2 has sufficient CNR to support 4096QAM modulation. The remaining four channels (i.e., Channels 1 and 3-5) now exhibit MER greater than 32 dB, which will support modulations of 512QAM. FIGS.64C-Dare graphical illustrations respectively depicting a post-transmission constellation plot6410for Channel 1 (512QAM, MER=32.6 dB, again representing the worst case scenario of Channels 1 and 3-5), and a post-transmission constellation plot6412for Channel 2 (4096QAM, MER=42.6 dB),FIG.64A. In the exemplary results depicted inFIG.64C, constellation plot6410is only shown for Channel 1 as the worst case example (MER=32.6 dB), but is generally representative of post-transmission results for Channels 3-5 as well. FIG.65Ais a graphical illustration depicting an I-Q plot6500for an NTF.
In the exemplary embodiment, I-Q plot6500is similar to I-Q plot6200,FIG.62A, but alternatively illustrates the respective zeros and poles of a fourth-order NTF for the Cases IX′ and X′ implementation scenarios of Tables 12 and 13, above, in which the sampling rate is increased to 32 GSa/s.FIG.65Bis a graphical illustration depicting a frequency response6502of the NTF for I-Q plot6500,FIG.65A. Similar to the embodiments depicted inFIGS.62A-B, I-Q plot6500includes two conjugate pairs (four in total) of zeroes and poles, and frequency response6502of the NTF similarly acts to provide an HPF to push the quantization noise to the high frequency end and separate the noise from the signal. As illustrated inFIG.65B, the Nyquist zone is now expanded to 0-16 GHz, and the in-band quantization noise is significantly reduced, thereby supporting even higher CNR and modulation orders. FIG.66Ais a graphical illustration depicting a spectral plot6600for an exemplary set of channels. In the exemplary embodiment, spectral plot6600represents further experimental power spectral density results for Case IX′, Tables 12 and 13, above, in which Channels 1-5 are sampled at a sampling rate of 32 GSa/s, and using a one-bit OOK signal. Similar to spectral plot6300,FIG.63A, a first subplot6602illustrates the RF spectra of the five input 192 MHz DOCSIS channels, and a second subplot6604illustrates the RF spectra of the five 192 MHz DOCSIS channels after application of the OOK delta-sigma digitization. As described immediately above, the Nyquist zone of spectral plot6600is now 0-16 GHz, and the five 192-MHz DOCSIS channels occupy a frequency range6606. After delta-sigma digitization, the signal spectrum of spectral plot6600remains intact in this exemplary scenario as well, and first and second subplots6602,6604substantially align within the signal band represented by frequency range6606, with the quantization noise again being pushed out of the signal band. FIG.66Bis a graphical illustration depicting a plot6608of MER performance for the set of channels depicted inFIG.66A(i.e., Case IX′). More particularly, plot6608illustrates the MER (thereby, CNR) of the five DOCSIS channels of plot6600,FIG.66A. As demonstrated by the exemplary results illustrated inFIG.66B, the quantization noise is again pushed out of the signal band, with in-band CNR greater than 40 dB. In this example, though, due to the uneven noise floor, different modulations are assigned to the different channels such that each channel satisfies the DOCSIS 3.1 requirements. Accordingly, in this example, the MER of Channel 2 can be seen to increase to greater than 50 dB, which supports 16384QAM. The MER of Channel 4 is greater than 41 dB, which supports 4096QAM but not 8192QAM; 8192QAM is supported by Channels 1, 3, and 5, which all exhibit MER greater than 44 (44.5) dB. FIGS.66C-Eare graphical illustrations respectively depicting a post-transmission constellation plot6610for Channel 2 (16384QAM, MER=53.9 dB), a post-transmission constellation plot6612for Channel 3 (8192QAM, MER=44.5 dB, representing the worst case scenario of Channels 1, 3, and 5), and a post-transmission constellation plot6614for Channel 4 (4096QAM, MER=43.9 dB),FIG.66A. In the exemplary results depicted inFIG.66D, constellation plot6612is only shown for Channel 3 as the worst case example (MER=44.5 dB), but is generally representative of post-transmission results for Channels 1 and 5 as well. FIG.67Ais a graphical illustration depicting a spectral plot6700for an exemplary set of channels.
In the exemplary embodiment, spectral plot6700represents further experimental power spectral density results for Case X′, Tables 12 and 13, above, in which Channels 1-5 are sampled at a sampling rate of 32 GSa/s, but for a two-bit PAM4 signal. Similar to spectral plot6600,FIG.66A, a first subplot6702illustrates the RF spectra of the five input 192 MHz DOCSIS channels, and a second subplot6704illustrates the RF spectra of the five 192 MHz DOCSIS channels after application of the PAM4 delta-sigma digitization. The Nyquist zone of spectral plot6700remains 0-16 GHz for the two-bit implementation, and the five 192 MHz DOCSIS channels occupy a frequency range6706. After delta-sigma digitization, the signal spectrum of spectral plot6700remains intact, with first and second subplots6702,6704substantially aligning within the signal band represented by frequency range6706, with the quantization noise again being pushed out of the signal band. FIG.67Bis a graphical illustration depicting a plot6708of MER performance for the set of channels depicted inFIG.67A(i.e., Case X′). More particularly, plot6708illustrates the MER/CNR of the five DOCSIS channels of plot6700,FIG.67A. As demonstrated by the exemplary results illustrated inFIG.67B, with the additional quantization bit of the two-bit PAM4 signal, the quantization noise is significantly reduced in relation to Case IX′, and CNR results greater than 52 dB are achieved for all five channels, and therefore all five channels exhibit sufficient CNR to support 16384QAM modulation.FIG.67Cis a graphical illustration depicting a post-transmission constellation plot6710for Channel 4 (16384QAM, MER=52.5 dB, representing the worst case scenario of all five channels) ofFIG.67A. Referring back to Tables 12 and 13, it may be seen that the exemplary scenario represented by Case VIII′ (sampling rate of 28 GSa/s and two quantization bits) also achieves sufficient CNR to support 16384QAM on all five channels. According to the systems and methods described herein, an innovative digitization interface based on delta-sigma ADC is provided that is particularly useful for the paradigm of DOCSIS signals in HFC networks. In comparison with conventional legacy analog HFC networks, the present delta-sigma ADC digitization interface enables robust transmission of DOCSIS signals against noise/nonlinear impairments, thereby supporting higher SNR and CNR, larger modulation formats, longer fiber distances, and more WDM wavelengths. In comparison with conventional BDF/BDR digitization interfaces, the present delta-sigma ADC digitization interface significantly improves spectral efficiency and reduces the traffic load after digitization. The present techniques further advantageously implement a passive filter as the DAC, thereby eliminating the need for a Nyquist DAC required at each fiber node in the BDF/BDR interface, and enabling a low-cost all-analog fiber node implementation. The present delta-sigma ADC interface techniques additionally improve over conventional remote PHY digitization interfaces by enabling the centralization of all PHY layer functions in the hub, thereby eliminating the need for distributed RPDs at the fiber nodes, and thus further reducing the cost and complexity of fiber nodes, which will facilitate improved migration toward fiber deep and node splitting architectures of HFC networks.
The present delta-sigma ADC digitization interface embodiments still further enable a low-cost, DAC-free, all-analog implementation of fiber nodes, which provides significant advantages with respect to the flexibility in terms of sampling rate, quantization bits, and noise distribution (e.g., exploiting noise shaping techniques). According to the innovative systems and methods described herein, on-demand provisioning of modulation, data rate, and CNR is elegantly achieved for DOCSIS signals (and similar) in the HFC environment and other access networks. Exemplary embodiments of delta-sigma digitization systems, methods, and real-time implementations are described above in detail. The systems and methods of this disclosure though, are not limited to only the specific embodiments described herein, but rather, the components and/or steps of their implementation may be utilized independently and separately from other components and/or steps described herein. Additionally, the exemplary embodiments described herein may be implemented and utilized in connection with access networks other than MFH and MBH networks. Although specific features of various embodiments of the disclosure may be shown in some drawings and not in others, this is for convenience only. In accordance with the principles of the disclosure, a particular feature shown in a drawing may be referenced and/or claimed in combination with features of the other drawings. Some embodiments involve the use of one or more electronic or computing devices. Such devices typically include a processor or controller, such as a general purpose central processing unit (CPU), a graphics processing unit (GPU), a microcontroller, a reduced instruction set computer (RISC) processor, an application specific integrated circuit (ASIC), a programmable logic circuit (PLC), a field programmable gate array (FPGA), a DSP device, and/or any other circuit or processor capable of executing the functions described herein. The processes described herein may be encoded as executable instructions embodied in a computer readable medium, including, without limitation, a storage device and/or a memory device. Such instructions, when executed by a processor, cause the processor to perform at least a portion of the methods described herein. The above examples are exemplary only, and thus are not intended to limit in any way the definition and/or meaning of the term “processor.” This written description uses examples to disclose the embodiments, including the best mode, and also to enable any person skilled in the art to practice the embodiments, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the disclosure is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims. | 232,113 |
11863233 | DETAILED DESCRIPTION Referring toFIG.1, an integrated CMTS (e.g., Integrated Converged Cable Access Platform (CCAP))100may include data110that is sent and received over the Internet (or other network) typically in the form of packetized data. The integrated CMTS100may also receive downstream video120, typically in the form of packetized data from an operator video aggregation system. By way of example, broadcast video is typically obtained from a satellite delivery system and pre-processed for delivery to the subscriber through the CCAP or video headend system. The integrated CMTS100receives and processes the received data110and downstream video120. The CMTS130may transmit downstream data140and downstream video150to a customer's cable modem and/or set top box160through an RF distribution network, which may include other devices, such as amplifiers and splitters. The CMTS130may receive upstream data170from a customer's cable modem and/or set top box160through a network, which may include other devices, such as amplifiers and splitters. The CMTS130may include multiple devices to achieve its desired capabilities. Referring toFIG.2, as a result of increasing bandwidth demands, limited facility space for integrated CMTSs, and power consumption considerations, it is desirable to include a Distributed Cable Modem Termination System (D-CMTS)200(e.g., Distributed Converged Cable Access Platform (CCAP)). In general, the CMTS is focused on data services while the CCAP further includes broadcast video services. The D-CMTS200distributes a portion of the functionality of the I-CMTS100downstream to a remote location, such as a fiber node, using network packetized data. An exemplary D-CMTS200may include a remote PHY architecture, where a remote PHY (R-PHY) is preferably an optical node device that is located at the junction of the fiber and the coaxial. In general, the R-PHY often includes the MAC and/or PHY layers of a portion of the system. The D-CMTS200may include a D-CMTS230(e.g., core) that includes data210that is sent and received over the Internet (or other network) typically in the form of packetized data. The D-CMTS230may also receive downstream video220, typically in the form of packetized data from an operator video aggregation system. The D-CMTS230receives and processes the received data210and downstream video220. A remote fiber node280preferably includes a remote PHY device290. The remote PHY device290may transmit downstream data240and downstream video250to a customer's cable modem and/or set top box260through a network, which may include other devices, such as amplifiers and splitters. The remote PHY device290may receive upstream data270from a customer's cable modem and/or set top box260through a network, which may include other devices, such as amplifiers and splitters. The remote PHY device290may include multiple devices to achieve its desired capabilities. The remote PHY device290primarily includes PHY-related circuitry, such as downstream QAM modulators, upstream QAM demodulators, together with pseudowire logic to connect to the D-CMTS230using network packetized data. The remote PHY device290and the D-CMTS230may include data and/or video interconnections, such as downstream data, downstream video, and upstream data295. It is noted that, in some embodiments, video traffic may go directly to the remote physical device thereby bypassing the D-CMTS230.
By way of example, the remote PHY device290may convert downstream DOCSIS (i.e., Data Over Cable Service Interface Specification) data (e.g., DOCSIS 1.0; 1.1; 2.0; 3.0; 3.1; and 4.0, each of which is incorporated herein by reference in its entirety), video data, and out of band signals received from the D-CMTS230to analog for transmission over RF or analog optics. By way of example, the remote PHY device290may convert upstream DOCSIS, and out of band signals received from an analog medium, such as RF or analog optics, to digital for transmission to the D-CMTS230. As it may be observed, depending on the particular configuration, the R-PHY may move all or a portion of the DOCSIS MAC and/or PHY layers down to the fiber node. Referring toFIG.3, for data processing and for transferring data across a network, the architecture of the hardware and/or software may be configured in the form of a plurality of different planes, each of which performs a different set of functionality. In relevant part, the layered architecture may include different planes such as a management plane300, a control plane310, and a data plane320. A switch fabric330may be included as part of the layered architecture. For example, the management plane300may be generally considered as the customer interaction or otherwise the general software application being run. The management plane typically configures, monitors, and provides management, monitoring, and configuration services to all layers of the network stack and other portions of the system. For example, the control plane310is a component of a switching function that often includes system configuration, management, and exchange of routing table information and forwarding information. Typically, the exchange of routing table information is performed relatively infrequently. A route controller of the control plane310exchanges topology information with other switches and constructs a routing table based upon a routing protocol. The control plane may also create a forwarding table for a forwarding engine. In general, the control plane may be thought of as the layer that makes decisions about where traffic is sent. Since the control functions are not performed on each arriving individual packet, they tend not to have a strict speed constraint. For example, the data plane320parses packet headers for switching, manages quality of service, filtering, medium access control, encapsulations, and/or queuing. As a general matter, the data plane carries the data traffic, which may be substantial in the case of cable distribution networks. In general, the data plane may be thought of as the layer that primarily forwards traffic to the next hop along the path to the selected destination according to the control plane logic through the switch fabric. The data plane tends to have strict speed constraints since it is performing functions on each arriving individual packet. The remote physical device290needs to support updating the software of the remote physical device. For example, the D-CMTS230may command the remote physical device290to reset via a ResetCtrl GCP TLV, such as using a command line interface. For example, the remote physical device290may initiate a reset on its own in reaction to some internal or external event. Referring toFIG.4, the remote physical device290may include a hard reset400which is the most comprehensive form of reset. The hard reset400may be thought of as a "reboot" of the device. 
When the remote physical device290performs a hard reset400, the remote physical device290performs a power cycle, or the equivalent thereof, whereupon the remote physical device290returns to a state similar to the state achieved on initial power up. The remote physical device290retains non-volatile configuration through the hard reset. After the hard reset400, the remote physical device290returns to the beginning of the remote physical device290initialization state machine and performs initialization. The remote physical device290may include a soft reset410that provides a partial reset of the remote physical device290. After a soft reset410, the remote physical device290takes steps to hasten the remote physical device290initialization process and minimize service interruption. The soft reset410resets the remote physical device290volatile configuration and operating state, including terminating all connections to all D-CMTSs, releasing IP addresses obtained via DHCP, clearing network authentication information, etc. The remote physical device290may reset all software states except that which is needed to maintain IEEE 1588 clock frequency. The soft reset410achieves quicker remote physical device290initialization by maintaining the current IEEE 1588 clock frequency without adjustment throughout the soft reset410process until it restarts the sync process with the grand master clock (GMC). This allows the remote physical device290to provide synchronized operation without having to engage in the time-consuming full PTP sync process with the GMC. Referring toFIG.5, the hard reset400undergoes a time-consuming process that generally requires 4-5 minutes during which service for the customer is not provided by the remote physical device290. The hard reset400process tends to vary from remote physical device to remote physical device, but in general, the D-CMTS230downloads an image file (.ITB) that includes an FPGA image, a Uboot (Boot.bin), a Linux Kernel, and all applications and the software dataplane. The remote physical device290then executes the primary boot loader530that includes instructions to boot the remote physical device's290operating system kernel. The operating system kernel540is booted and then the software stack is started550. The software stack550includes a software dataplane552and a plurality of software applications554. After starting the software stack550, the remote physical device290initializes the hardware560(e.g., initialize hardware/programmable logic or FPGA circuits of RPD). After the hardware initializes560, the remote physical device290connects with the D-CMTS230to be configured570and precision timing protocol580is established. Referring toFIG.6, the soft reset410omits downloading the image file (.ITB), omits resetting the remote physical device, omits loading the entire image file (.ITB), omits executing the primary boot loader, and omits booting the operating system kernel. The soft reset410undergoes a somewhat time-consuming process that generally requires 60 seconds during which service for the customer is not provided by the remote physical device290. The soft reset410may include starting the software stack650which includes a software dataplane652and a plurality of software applications654. After starting the software stack650, the remote physical device290initializes the hardware660(e.g., initialize hardware/programmable logic or FPGA circuits of RPD). 
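The two reset paths described above differ mainly in how many of these stages are repeated. The following is a minimal sketch, not the patent's actual software: the step names and the quoted outage times follow the description, while the function itself and its identifiers are purely illustrative.

```python
# Hypothetical illustration of the hard vs. soft reset sequences described above.
# Step names mirror the text; nothing here is the remote physical device's real API.

HARD_RESET_STEPS = [
    "download_image_itb",       # .ITB file: FPGA image, Uboot, kernel, dataplane, applications
    "run_primary_boot_loader",  # Uboot
    "boot_kernel",              # Linux kernel
    "start_software_stack",     # software dataplane + software applications
    "initialize_hardware",      # FPGA / programmable logic of the RPD
    "configure_from_dcmts",     # GCP configuration from the D-CMTS core
    "establish_ptp",            # full IEEE 1588 sync with the grand master clock
]

SOFT_RESET_STEPS = [
    # image download, boot loader and kernel boot are skipped; the IEEE 1588
    # clock frequency is held, so PTP is maintained rather than rebuilt
    "start_software_stack",
    "initialize_hardware",
    "configure_from_dcmts",
    "maintain_ptp",
]

def run_reset(steps):
    """Print the stages in order; stands in for actually executing them."""
    for step in steps:
        print(f"executing: {step}")

if __name__ == "__main__":
    print("hard reset (roughly 4-5 minutes of service outage):")
    run_reset(HARD_RESET_STEPS)
    print("soft reset (roughly 60 seconds of service outage):")
    run_reset(SOFT_RESET_STEPS)
```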
After the hardware initializes660, the remote physical device290connects with the D-CMTS230to be configured670and precision timing protocol680remains established. In the case of either the hard reset or the soft reset, the video service, data service, out-of-band data service, etc. are impacted because the reset process kills all applications including the software dataplane. During the reset process, the remote physical device290re-establishes the GCP (generic control plane, a protocol used for configuration of the remote physical device) and the L2TP (layer two tunnelling protocol) connections from scratch. Also, during the reset process of the remote physical device290, the software dataplane is restarted and reprogrammed. Further, the FPGA dataplane modulator is reprogrammed. When the remote physical device290is restarted, either as a result of a hard reset or a soft reset, the processing of video content, data service, and out-of-band data does not restart until after the configuration is processed and the precision timing protocol is established or maintained. Unfortunately, for a hard reset this process typically takes 4-5 minutes to complete. In most cases, resetting the remote physical device, executing the primary boot loader, downloading the image file (.ITB), and booting the operating system kernel are not necessary because those portions of the remote physical device290remain operational. In most cases, if updating is required, only the software stack650, which includes the software dataplane652and the plurality of software applications654, is modified. After modification of the software stack650, the remote physical device290initializes the hardware660and connects with the D-CMTS230to be configured670and establish the precision timing protocol680. Unfortunately, the hard reset process typically requires 4-5 minutes to complete and the soft reset process typically requires over a minute to complete, during which time services for customers are not available. A modified process is desirable to reduce the impact on the currently active services both from the perspective of the remote physical device and the perspective of the D-CMTS. To achieve a reduction in the unavailability of active services during the reset process, the remote physical device should maintain configurations received from the D-CMTS, and maintain the L2TP connection and/or session state across the reset process so that the remote physical device does not need to establish a fresh GCP connection and L2TP connection with the D-CMTS. For example, the configuration state may include QAM channel parameters, OFDM channel parameters, and OOB channel parameters. In this manner, the remote physical device maintains the details of the D-CMTS core connection across the reset process. With the remote physical device maintaining the details of the D-CMTS core connection, the D-CMTS is alleviated of the need to resend all of such configuration information during the reset process. Referring toFIG.7, the remote physical device290preferably saves its configuration information700(e.g., global configuration of RPD) and saves its session/channel information710. By saving the configuration information700and the session/channel information710, after the reset process the remote physical device may load this information locally, rather than the traditional process of obtaining such information from the D-CMTS. 
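A simple way to picture the retained state is a local cache that is written before the software is terminated and read back once it restarts. The sketch below assumes a file-based cache; the path, field names, and values are hypothetical stand-ins for the QAM, OFDM, and OOB channel parameters and the L2TP session state mentioned above.

```python
# A minimal sketch of a persisted state cache, assuming a local JSON file.
# All names and values are illustrative; they are not the patent's data model.
import json
from pathlib import Path

CACHE_PATH = Path("/tmp/rpd_config_cache.json")  # hypothetical location

def save_state(config: dict, sessions: dict) -> None:
    """Persist D-CMTS configuration and L2TP session/channel state before the reset."""
    CACHE_PATH.write_text(json.dumps({"config": config, "sessions": sessions}))

def load_state() -> tuple[dict, dict]:
    """Reload the cached state locally after the software stack restarts."""
    data = json.loads(CACHE_PATH.read_text())
    return data["config"], data["sessions"]

if __name__ == "__main__":
    save_state(
        config={
            "qam_channels": [{"id": 0, "center_frequency_hz": 603_000_000}],
            "ofdm_channels": [],
            "oob_channels": [],
        },
        sessions={"l2tp": {"core": "d-cmts-core-1", "tunnel_id": 1}},
    )
    config, sessions = load_state()  # used instead of re-fetching from the D-CMTS
```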
In this manner, the D-CMTS may remain substantially the same with respect to the configuration information and the session/channel information. After saving the configuration information700and the session/channel information710, the software processes are terminated720. The software applications are modified, if desired, with updated software applications730. The software applications are restarted740and the configuration information and the session/channel information is loaded750. This modified resetting process is especially appropriate if the hardware is functioning properly and it is desirable to restart the software, or if it is desirable to update the software, which is then started. Referring toFIG.8, to further decrease the impact on services, it was determined that if there is an error in the dataplane, then the software dataplane should be updated (if necessary) and restarted in an effective manner. If there is an error in the software applications, then the software applications should be updated (if necessary) and restarted in an effective manner. The reset process may first determine whether restarting the software applications is necessary800. If the remote physical device290determines the software applications should be restarted, then its configuration information802is saved and its session/channel information804is saved. By saving the configuration information802and the session/channel information804, after the reset process the remote physical device may load this information locally, rather than the traditional process of obtaining such information from the D-CMTS. In this manner, the D-CMTS may remain substantially the same with respect to the configuration information and the session/channel information. After saving the configuration information802and the session/channel information804, the software applications are terminated810. The software applications are modified, if desired, with updated software applications and the software applications are restarted812. The software applications are loaded with the configuration information and the session/channel information816. The services are not impacted as a result of restarting the software applications. The reset process may next determine whether restarting the software dataplane is necessary830. If the remote physical device290determines the software dataplane should be restarted, then the software dataplane is terminated840. The software dataplane is modified, if desired, with an updated software dataplane842. The software dataplane is restarted and the configuration is loaded844. The GCP connection between a remote physical device and the D-CMTS core may fail for various reasons. In the case of connection failure, a re-establishment of the GCP connection is preferably made without going through the full remote physical device initialization process. The GCP reconnect process is initiated by the remote physical device. The remote physical device maintains a GCP configuration attribute that controls the remote physical device actions on a GCP connection failure, including whether or not it should attempt to reconnect to a particular core in the event of a GCP connection failure. Each core connected to the remote physical device is responsible for configuring the remote physical device on whether or not to attempt a reconnect to that particular core upon GCP connection failure. The configuration is accomplished via the GCP Connection Recovery Action (GcpRecoveryAction) TLV. 
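The control flow ofFIG.7andFIG.8can be summarized as: decide what needs restarting, save state, terminate, optionally update, restart, and reload the saved state locally. The following sketch mirrors that flow with placeholder helpers; none of the function names correspond to an actual remote physical device interface.

```python
# A minimal, hypothetical control-flow sketch of the modified reset process.

def save_state():
    """Placeholder: persist configuration and session/channel information."""
    return {"config": {}, "sessions": {}}

def restore_state(state):
    """Placeholder: reload cached state locally instead of re-fetching from the D-CMTS."""

def stop(component):
    """Placeholder: terminate a software component."""

def update(component):
    """Placeholder: swap in updated software for the component, if any."""

def start(component):
    """Placeholder: restart a software component."""

def modified_reset(restart_apps: bool, restart_dataplane: bool) -> None:
    """Restart only what is needed, keeping the D-CMTS connection details across the reset."""
    if restart_apps:
        state = save_state()        # configuration + session/channel info
        stop("applications")        # terminate only the software applications
        update("applications")
        start("applications")
        restore_state(state)        # services are not impacted by this branch
    if restart_dataplane:
        stop("dataplane")           # terminate the software dataplane
        update("dataplane")
        start("dataplane")          # restarted dataplane reloads its configuration
```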
The soft reset process, the hard reset process, and/or the modified reset processes may be initiated in any suitable manner. For example, initiation may be through a command line interface, a command from the D-CMTS, and/or the remote physical device in the event of a failure such as a software crash, a watchdog timeout, a software upgrade, etc. Referring toFIG.9, as previously described, a software image900that is downloaded to the remote physical device290includes multiple different portions therein. The software image900may be in the form of an image tree blob (.ITB) file format. The software image900may include an FPGA image910, a primary boot loader920(e.g., Uboot), a kernel930(e.g., Linux), a software dataplane940, and software applications950. Downloading the software image900to the remote physical device is a non-service impacting activity for the customers. However, if a substantial number of remote physical devices are simultaneously restarted, a substantial load may be placed on the D-CMTS to provide configurations and connectivity. By using the previously described modified reset process, the software dataplane and/or the software applications can be selectively reset, with minimal impact on service to the customers or no impact on service to the customers, respectively. Referring toFIG.10, it is desirable to use the same software package900that is provided to the remote physical device290, which includes all of the components necessary to perform a hard reset of the remote physical device290, to achieve the desired type of reset process. In other words, it is desirable that the software package900that is provided to the remote physical device290include the necessary components to do a hard reset, a soft reset, or a modified reset. A file naming convention1000, such as a version number, of the software package900may be selected in a manner to indicate the desired type of reset process to be performed. The image file name versioning may employ a scheme where a version string indicates (a) whether the FPGA/Kernel needs to be reloaded, (b) whether the software applications need to be reloaded, and/or (c) whether the software dataplane needs to be reloaded. A version number of File_Name_0011010may be used to indicate that the file is intended to be used for a hard reset. A version number of File_Name_0021020may be used to indicate that the file is intended to be used for a soft reset. A version number of File_Name_0031030may be used to indicate that the file is intended to be used for a software dataplane (dataplane) reset. A version number of File_Name_0041040may be used to indicate that the file is intended to be used for a software applications (control plane) reset. A version number of File_Name_0051050may be used to indicate that the file is intended to be used for a software dataplane and software applications reset. When the remote physical device290is reset, it may use the version number (e.g.,001,002,003,004,005) to indicate which files should be used during the process of resetting. In some cases, the system indicates whether a hard reset (e.g.,001) is desired or a soft reset (e.g.,002,003,004,005) is desired. The version number is of assistance in determining which files should be used for the soft reset among the files included within the image file. In this manner, in many cases, less than all of the files included within the software image are used for the particular reset. 
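The version-suffix convention can be pictured as a simple lookup from the suffix (001-005) to the reset type and to the subset of image components that are actually used, as also summarized in the next paragraph. The suffix parsing below is an assumption for illustration; only the mapping itself follows the description.

```python
# A hedged sketch of the File_Name_001 ... File_Name_005 convention described above.
# The parsing helper is hypothetical; the mappings mirror the text.

RESET_TYPE_BY_SUFFIX = {
    "001": "hard reset",
    "002": "soft reset",
    "003": "software dataplane (dataplane) reset",
    "004": "software applications (control plane) reset",
    "005": "software dataplane and software applications reset",
}

COMPONENTS_BY_SUFFIX = {
    "001": ["fpga_image", "primary_boot_loader", "kernel", "software_dataplane", "software_applications"],
    "002": ["software_dataplane", "software_applications"],  # with limited caching of information
    "003": ["software_dataplane"],                           # with appropriate caching
    "004": ["software_applications"],                        # with appropriate caching
    "005": ["software_dataplane", "software_applications"],  # with appropriate caching
}

def plan_reset(image_name: str):
    """Pick the reset type and the subset of image components to use from the file name suffix."""
    suffix = image_name.rsplit("_", 1)[-1]
    return RESET_TYPE_BY_SUFFIX[suffix], COMPONENTS_BY_SUFFIX[suffix]

if __name__ == "__main__":
    reset_type, components = plan_reset("File_Name_003")
    print(reset_type, components)  # software dataplane reset, using only the dataplane portion
```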
Typically, a hard reset uses the FPGA image910, the primary boot loader920(e.g., Uboot), the kernel930(e.g., Linux), the software dataplane940, and the software applications950. Typically, a soft reset uses the software dataplane940and the software applications950, albeit with limited caching of information. Typically, a software dataplane reset uses the software dataplane940, together with appropriate caching. Typically, a software applications reset uses the software applications950, together with appropriate caching. Typically, a software dataplane and software applications reset uses the software dataplane940and the software applications950, together with appropriate caching. In some cases, the primary boot loader may be modified prior to performing the reset to decrease the time for the reset. Moreover, each functional block or various features in each of the aforementioned embodiments may be implemented or executed by circuitry, which is typically an integrated circuit or a plurality of integrated circuits. The circuitry designed to execute the functions described in the present specification may comprise a general-purpose processor, a digital signal processor (DSP), an application specific or general application integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic devices, discrete gates or transistor logic, or a discrete hardware component, or a combination thereof. The general-purpose processor may be a microprocessor, or alternatively, the processor may be a conventional processor, a controller, a microcontroller or a state machine. The general-purpose processor or each circuit described above may be configured by a digital circuit or may be configured by an analogue circuit. Further, if advances in semiconductor technology give rise to an integrated circuit technology that supersedes the integrated circuits of the present time, an integrated circuit produced by that technology may also be used. It will be appreciated that the invention is not restricted to the particular embodiment that has been described, and that variations may be made therein without departing from the scope of the invention as defined in the appended claims, as interpreted in accordance with principles of prevailing law, including the doctrine of equivalents or any other principle that enlarges the enforceable scope of a claim beyond its literal scope. Unless the context indicates otherwise, a reference in a claim to the number of instances of an element, be it a reference to one instance or more than one instance, requires at least the stated number of instances of the element but is not intended to exclude from the scope of the claim a structure or method having more instances of that element than stated. The word “comprise” or a derivative thereof, when used in a claim, is used in a nonexclusive sense that is not intended to exclude the presence of other elements or steps in a claimed structure or method. | 23,744 |
11863234 | DETAILED DESCRIPTION OF THE INVENTION Before embodiments of the present invention are discussed in more detail below based on the drawings, it should be noted that identical, functionally equal or equal elements, objects and/or structures are provided with the same reference numbers in the different figures, such that the description of these elements illustrated in different embodiments is interchangeable or mutually applicable. The following embodiments relate to wireless optical signal transmission or data transmission. Within the embodiments described herein, the same is also referred to as Li-Fi (light fidelity). Here, the term Li-Fi relates to the terms IrDA (Infrared Data Association) or OWC (Optical Wireless Communication). This means the terms wireless optical data transmission and Li-Fi are used synonymously. Here, optical data transmission means transmitting an electromagnetic signal through a free transmission medium, such as air or another gas or fluid. For this, for example, wavelengths in the ultraviolet (UV) range, for example at least 350 nm, and in the infrared range, for example at most 1550 nm, can be used, wherein other wavelengths that differ from wavelengths used for radio standards are also possible. Wireless optical transmission is also to be distinguished from a wired optical data transmission, which is obtained, for example, by means of optical fibers or optical fiber cables. Further, embodiments of the present invention relate to a base station and a participant apparatus moveable with respect to the base station. This means a variable relative position between the base station and the participant apparatus, which can be obtained by moving the base station and/or also by moving the participant apparatus, which includes both rotational as well as translational movements and combinations thereof. FIG.1shows a schematic block diagram of a wireless optical communication network100according to an embodiment. The wireless optical communication network100includes a base station5and a participant apparatus10. The participant apparatus10is movable with respect to, i.e., relative to the base station5. This means a relative position between the base station5and the participant apparatus10is variable in that the base station and/or the participant apparatus10moves in space in order to change a relative position. The base station5and the participant apparatus10are established for wireless optical communication. For this, the participant apparatus10comprises communication means12established for wireless optical communication. The wireless optical communication includes at least one of a wireless optical signal141emitted by the base station5and a wireless optical signal142emitted by the participant apparatus10, in particular the communication means12. Thus, the wireless optical communication network100can be configured to transfer the wireless optical signal141from the base station5to the participant apparatus10and/or to transfer the wireless optical signal142from the participant apparatus10to the base station5, i.e., to communicate or to transmit the same. The participant apparatus10includes deflection means16configured to deflect at least part of the wireless optical signal for wireless optical communication, i.e., the wireless optical signal141and/or142, such that the wireless optical signal is deflected between a first direction181between the deflection means16and the communication means12and a second direction182between the deflection means16and the base station5. 
Here, deflection takes place such that the direction182runs along an axis of a spatially established communication channel32. The communication channel32can be described such that the same includes the spatial area illuminated or irradiated by the wireless optical signal141or142, i.e., the spatial area in which optical power serving the communication between the base station5and the participant apparatus10is present. The deflection means16is arranged along the communication channel32, i.e., along an axis of the communication channel. The communication means12is arranged off the axis or the communication channel32, i.e., offset or laterally offset to that part of the communication channel32running along the direction182. The offset can be effected by the deflection means16, such that further participant apparatuses can be placed or arranged in the further course of the (possibly deflected or not deflected) direction182, wherein it is advantageous to provide a further communication channel or to couple out merely part of the optical light power of the wireless optical signal141with the deflection means16. Other advantageous configurations, also with respect to the wireless optical signal142, are also described herein and can be easily combined. The deflection of an optical path or course of the wireless optical signal141and/or142allows a movement of the participant apparatus10together with the deflection means16along the direction182without interrupting the communication between the participant apparatus10and the base station5. Alternatively or additionally, a movement of the communication means12relative to the beam deflection means16and/or the base station5along direction181is possible without interrupting such a communication. Thus, the base station5and/or the communication means12can have an optical interface for transmitting and/or receiving wireless optical signals. Such interfaces can have an advantageous direction along which transmitting and/or receiving of wireless optical signals is possible with little attenuation. These directions can, for example, completely or partly influence or determine the directions181and/or182. The deflection means16can be formed reflectively such that the wireless optical signal141and/or142is completely reflected or deflected. Alternatively, it is also possible to configure the deflection means16such that part of a wireless optical signal141or142transmits through the deflection means16, which allows multiple communication. This is possible irrespective of whether the communication means12is configured for transmitting and/or receiving wireless optical signals. The wireless optical signals141and142have a certain spatial extension perpendicular to their propagation direction. Here, the same can be emitted in a spatially overlapping manner. Alternatively, it is possible that the wireless optical signals141and142at least partly differ spatially, i.e., run spatially separated from one another. For this, individual spatially spaced apart beams can be used such that, for example, a channel from the base station5to the communication means12or the other way round, exemplarily referred to as forward channel, runs spatially separated from a beam of the back channel running in the opposite direction. According to an embodiment, the base station can transmit and/or receive two or several wireless optical signals in different spatially separated beams. 
This means differing parallel beams can be provided for reception, differing parallel beams can be provided for transmission or a combination thereof can be provided. According to an embodiment, the deflection means16can be stationary with respect to the communication means12, i.e., the communication means12and the deflection means16can be moved together. The wireless optical signal141and/or142can define a spatial optical communication channel along which, for example, the mobile participant apparatus10is movable. In particular, this applies to the part of the optical paths along the direction182, i.e., between the base station and the co-moved deflection means16. According to embodiments, which can be implemented as an alternative to deflection means16located stationary with respect to the communication means, the communication means12is moveable with respect to the deflection means16, namely along a deflection direction acting on the optical signal141or142by the deflection means16, i.e., along the direction181. With reference to the base station, an inclination angle or tilt angle of the deflection means16can influence or determine the direction181in which the optical signals are deflected, such that the direction181can be referred to as deflection direction. In the context of the embodiments described herein, the base station5is arranged stationary in space, wherein one or several participant apparatuses of the wireless optical communication network can be configured to move with respect to the base station. FIG.2shows a schematic block diagram of a wireless optical communication network200according to an embodiment. The wireless optical communication network200can comprise one or several participant apparatuses10. As an example, three participant apparatuses101,102and103are illustrated, wherein any other arbitrary number of at least 1, at least 2, at least 3, at least 5 or more, for example, 7, 8 or 10 or more can be implemented. The deflection means161,162and163as well as the deflection means of possible further participant apparatuses comprise a line of sight to the base station5, which is, at most, obstructed by at least partly transparent objects, so that a straight or deflected line of sight is obtained between the participants via which the participants can exchange the optical signals. Exemplarily, the participant apparatuses10are part of a crane system and configured as trolleys movable along the direction182, i.e., parallel thereto. For example, the deflection means161,162and163of the participant apparatuses101,102or103are configured as beam splitters, this means a respective portion141a,141b or141c of the wireless optical signal141emitted exemplarily by the base station5is coupled out when impinging on the respective beam splitter elements161,162or163, while a remaining portion141′,141″ and141′″ can pass or traverse the respective beam splitter element161to163in order to form the basis for subsequent coupling-out. Both the portions141a,141b and141c as well as the remaining portions141′,141″ and141′″ can have identical information content. Each of the deflection means161,162and163can be configured to couple out a portion of the optical light power, optionally by considering a partial wavelength range and/or a polarization. 
As long as the wireless optical communication network provides for the fact that the coupled-out portion is not exclusively allocated to the participant apparatus but also further participant apparatuses are to receive this portion or are to couple out a portion thereof, it can be advantageous that the deflection means16is configured such that a portion of at least 0.1% and at most 20%, at least 0.5% and at most 15% and advantageously at least 1% and at most 10% of a light power of a wireless optical signal received from the base station is coupled out. While coupling out less than 1% is possible but technically difficult, coupling out more than 4% can be disadvantageous for a large number of communication participants, as long as energy-saving signal generation is chosen. Depending on the number of participants, an optimum of the coupled-out light power can result at approximately 2% to 4% per beam splitter. The term “received from/from the direction of the base station” relates to both the direct reception of the spatially first participant apparatus101as well as to the participant apparatuses102,103, . . . behind the same, which receive the transmitted portion. A participant apparatus closing the communication channel32or a participant apparatus arranged spatially last, such as the participant apparatus103, can also effect non-transparent beam deflection instead of a beam splitter, for example by using a mirror. The deflection elements161,162and163can be arranged stationary via holding elements221or222or223of the respective participant apparatus101,102or103with respect to the communication means121,122or123. The communication signal141can be emitted from the base station5, for example in parallel to an axis24, which can also be expressed such that a beam26of the wireless optical signal141can have a center beam running along the axis24. This includes both divergent, focused and collimated beams of the base station5, wherein the explanations also apply to the optical signal142and its parts. It is possible but not needed that the wireless optical signal141as well as its remaining portions141′,141″ and141′″ are spatially parallel to one another and/or without any offset to one another. In that way, it is possible that the deflection elements or beam splitter elements effect a respective offset281,282or283by refractions or deflections at the respective interfaces of which the deflection means161,162and163have two, for example. The respective offset281,282and/or283can also take place by a respectively large configuration of a spatial communication channel32, which can be influenced, for example, by the spatial area perpendicular to the axis24in which the deflection means161,162and163together or each allow a deflection of the wireless optical signal or a remaining portion thereof. Further, embodiments provide for a compensation of an offset by rotating, for example, a subsequent participant apparatus, such as the participant apparatus102, by 180° around the directional axis182with respect to the participant apparatus101, such that an offset282having an effect on the participant apparatus102can have an opposite effect on the offset281, which all in all can allow at least partial compensation.
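The 2% to 4% optimum can be reproduced with a short power-budget estimate. Assuming every beam splitter couples out the same fraction r and ignoring all other optical losses, the k-th trolley receives P0*(1-r)^(k-1)*r, and the ratio that maximizes the power at the last, worst-placed trolley is r = 1/n for n trolleys, i.e., 4% for 25 trolleys and 2% for 50. The sketch below is illustrative only and makes exactly these simplifying assumptions.

```python
# A minimal power-budget sketch for cascaded beam splitters, assuming each trolley
# couples out the same fraction r and all other optical losses are neglected.

def coupled_power(p0: float, r: float, k: int) -> float:
    """Optical power coupled out at the k-th trolley (k = 1 is closest to the base station)."""
    return p0 * (1.0 - r) ** (k - 1) * r

def best_ratio(n_trolleys: int) -> float:
    """Coupling ratio that maximizes the power reaching the last (worst-case) trolley.

    Maximizing p0 * (1 - r)**(n - 1) * r over r gives r = 1/n.
    """
    return 1.0 / n_trolleys

if __name__ == "__main__":
    p0 = 1.0  # normalized transmit power
    for n in (10, 25, 50):
        r = best_ratio(n)
        print(f"{n} trolleys: optimum ratio {r:.1%}, "
              f"last trolley receives {coupled_power(p0, r, n):.2%} of the transmit power")
```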
With reference to the wireless optical signal141, the participant apparatuses101,102and103can be connected in series, wherein each of the participant apparatuses101,102and103can be referred to as belonging to a plurality or group of participant apparatuses, which can also be expressed such that the plurality of participant apparatuses includes the respective participant apparatus. As illustrated inFIG.2, the base station5can emit the wireless optical signal141as transmit signal, which means the same can act as transmitting means. Each of the participant apparatuses101,102and103can be configured to receive at least part of the transmit signal141emitted by the base station. This takes place in an optically passive way; this means that, owing to the optical coupling-out, re-transmitting the signal by one, several or all of the participant apparatuses10(i.e., receiving a signal, amplifying the same, possibly evaluating the same and actively transmitting the same again) can be dispensed with. Thereby, simple participant apparatuses having low electric power consumption can be implemented. In other words,FIG.2shows a beam splitter based Li-Fi system in a linear communication scenario, such as in a unidirectional communication scenario. Here,FIG.2represents a simple realization of the linear communication scenario based on Li-Fi and beam splitters. Electromagnetic radiation, for example, of the ultraviolet, visible or infrared range, can be used as communication wavelength. In the context of embodiments described herein, this radiation is described as light and includes at least the stated wavelength ranges or parts thereof. The system can allow both unidirectional as well as bidirectional data transmission. In the illustrated unidirectional data transmission, one or several, basically any number of participant apparatuses/trolleys can move along the axis24. The trolleys have, for example, a single spatial degree of freedom with respect to their movement: The same can move forward and backward on the axis24, which means along the direction182. Thus, the scenario can be referred to as linear communication scenario. At any point in time, the trolleys10are arbitrarily distributed along the axis24, i.e., their distance to one another and to the base station5can be arbitrary. The order of the arrangement of the trolleys10along the axis24can also be arbitrary but can also be fixed. The communication channel32can be formed along the axis24. A spatial area along the axis24, where the data transfer takes place or is enabled, can be considered as communication channel. In unidirectional operation, the communication channel can be completely filled by a light beam; in bidirectional operation, the communication channel can be composed of one or several respective beams from the forward and backward paths. The light beam can be characterized by a certain divergence, wherein the divergence can also be zero. A beam diameter can be established on the transmitter side of the wireless optical signal141and/or142ofFIG.1, in a range of at least 1 mm and at most 250 mm, of at least 5 mm and at most 100 mm, or of at least 10 mm and at most 50 mm, wherein the term diameter does not limit the beam shape to round configurations but also includes other shapes such as polygons, ellipses or free forms. The beam divergence can be configured such that a motion tolerance or adjustment tolerance is possibly compensated and still sufficient optical power reaches the receiver. 
For example, the divergence can be less than or equal to 3°, less than or equal to 1° or less than or equal to 0.1°, which describes an expansion of the optical path across a beam length. InFIG.2, the light beam is exemplarily emitted by the base station5. The communication channel32can be configured without fixed spatial limitation. Optionally, the same can be limited by non-transparent structures, for example a wall. Each of the participant apparatuses101to103can communicate with the base station5via this communication channel32. For this, each trolley has, for example, a beam splitter161,162or163reaching into the communication channel32. The beam splitter couples light out of the communication channel to receive the signal or can couple light in to transmit a signal. Coupling out can, for example, be performed in the plane perpendicular to the axis24as illustrated inFIG.2. However, the same can be in any other plane. As the beam splitter16is mounted on the trolley10via one or several holders22, the same moves together with the same along the axis24. The beam splitter16can have any division ratio, this means any ratio of the power that is coupled out. For a larger number of trolleys, for example, a number of more than five participant apparatuses, it can be useful that significantly more light is transmitted than is coupled out such that each of the trolleys can receive a sufficiently large portion of optical signal power in order to be able to detect the signal without error. WhileFIG.2is illustrated such that the participant apparatuses101,102and103can move linearly along a straight axis24, it is alternatively or additionally also possible that the participant apparatuses101,102and/or103can move along one or several alternative or additional directions in space. For this, for example, the base station5may emit the signal141not only along a single straight light beam but, for example, as continuous or discrete light fan and/or in a circumferential manner, such as a circular segment or circle, such that any, for example one-dimensional or two-dimensional, movement within the light fan is possible, for example also by deflecting at least one of the fan-like emitted discrete light beams by the deflection means to prevent interruption of the communication or to tolerate interruption of the communication for a certain time. Based onFIGS.3a,3band3c, possible types of implementation of the deflection means16will be described exemplarily.FIG.3ashows a participant apparatus10ain a schematic perspective view. Here, the participant apparatus10acorresponds exemplarily to the configuration of the participant apparatus according toFIG.2. The deflection means16acan comprise a beam splitter including, for example, a beam splitter plate element. Two oppositely arranged main sides341and342can be configured such that one or both of the main sides result in Fresnel reflections. The deflection means can, for example, be formed in a transparent manner apart from the possibly important Fresnel reflections. This means that part of the light is transmitted and part is reflected between the material of the deflection means16aand the surrounding medium due to the refractive index difference. Here, the reflected portion is the signal portion141acoupled out to the communication means12or the signal portion141reflected to the base station. The reflectance can be specifically adjusted by the polarization direction of light, as the Fresnel reflections are different for perpendicular/parallel polarized light. 
In addition, one or both main sides341or342can have a reflective coating or anti-reflective coating. With such a coating, it is possible to adjust the beam splitter ratio over a wide range, for example to couple out only 1% of the light power, a different amount to be adjusted, or even more than 20%. Further, the reflected part depends on the angle of incidence of the signal141or141aon the deflection element16a. If, for example, the beam splitter is not arranged at a 45° angle with respect to the impinging signal, for example141, but at a higher angle, e.g., 60°, 70°, 80°, the reflected portion of the light can be increased. The angle also influences how strongly perpendicular/parallel polarized light is reflected or transmitted. This means that the portion of the reflected light can also be adjusted via the angle of incidence in connection with a defined polarization of the light. In the latter case, an additional coating could be omitted, for example. Embodiments provide that a tilt angle of the deflection means is in a range of at least 10° and at most 80°, of at least 20° and at most 70°, or of at least 40° and at most 50°, for example, 45°. For example, for normal glass, the reflectance is approximately 4% (i.e. 96% of the light is transmitted). By an anti-reflective coating, the reflectance can be lowered further, for example to 1.4%. Here, for example, magnesium fluoride (MgF2) is used. Irrespective thereof, other light-influencing characteristics can still be implemented, such as surfaces for filtering individual spectral ranges (for example, dichroic mirrors), curved surfaces for collimating or scattering. The beam splitter of the deflection means16acan effect the mentioned offset28along the direction181by performing refraction on main sides or main surfaces341and342that are arranged opposite to one another. An extent of the offset28can at least be influenced by a dimension or thickness of the beam splitter element. This can have the effect that the position of the center beam of the beam changes within the communication channel32in each coupling-out by a deflection means16a. FIG.3bshows a schematic perspective view of a participant apparatus10baccording to an embodiment, wherein the deflection means16bis configured as a combination of two prisms361and362. This allows entry and/or exit of the wireless optical signal or portions thereof at perpendicular areas such that an offset28′ along the direction181can be reduced compared to the configuration inFIG.3a, as the extent is merely influenced by a distance between facing interfaces of the prisms361and362. Both the participant apparatus10aand the participant apparatus10bare configured with the respective deflection means16aor16bto couple out a portion of the transmit signal141or the portion remaining therefrom and to deflect the same in the direction of the communication means12. A respective remaining part141′ passes the deflection means. Coupling out according to the deflection means16aand16bcan be based, for example, on a polarization of the coupled-out part, for example, in that a perpendicularly polarized or transversely polarized or parallel polarized portion is coupled out and other portions pass the deflection means. The reflectance of the deflection means16bcan also be influenced, for example, by the size of the gap42between the prisms361and362and via the material (air, transparent plastic, adhesive or the same).
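Both effects discussed above, the angle- and polarization-dependent reflectance at the splitter surfaces and the lateral offset28caused by refraction in a tilted plate, follow from standard textbook formulas (the Fresnel equations and the plane-parallel-plate displacement). The sketch below evaluates them numerically for an assumed glass index of n = 1.5; at near-normal incidence it reproduces the approximately 4% reflectance quoted above, and at larger angles it shows the growing difference between s- and p-polarized light.

```python
# A hedged numerical sketch using standard optics formulas; n = 1.5 is an assumption.
import math

def fresnel_reflectance(theta_i_deg: float, n1: float = 1.0, n2: float = 1.5):
    """Return (R_s, R_p), the power reflectances at a single air/glass interface."""
    ti = math.radians(theta_i_deg)
    tt = math.asin(n1 * math.sin(ti) / n2)  # Snell's law
    rs = (n1 * math.cos(ti) - n2 * math.cos(tt)) / (n1 * math.cos(ti) + n2 * math.cos(tt))
    rp = (n1 * math.cos(tt) - n2 * math.cos(ti)) / (n1 * math.cos(tt) + n2 * math.cos(ti))
    return rs ** 2, rp ** 2

def lateral_offset_mm(thickness_mm: float, theta_i_deg: float, n: float = 1.5) -> float:
    """Lateral displacement of a beam passing through a tilted plane-parallel plate."""
    ti = math.radians(theta_i_deg)
    return thickness_mm * math.sin(ti) * (1 - math.cos(ti) / math.sqrt(n ** 2 - math.sin(ti) ** 2))

if __name__ == "__main__":
    for angle in (0.001, 45, 60, 70, 80):
        rs, rp = fresnel_reflectance(angle)
        print(f"{angle:>6}deg: R_s = {rs:.1%}, R_p = {rp:.1%}")
    # offset for a 2 mm thick splitter plate tilted by 45 degrees, roughly 0.66 mm
    print(f"offset at 45deg, 2 mm plate: {lateral_offset_mm(2.0, 45):.2f} mm")
```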
FIG.3cshows a schematic perspective view of two participant apparatuses101and102, each comprising reflective beam-deflecting means16c1or16c2, for example, as deflecting mirrors. The same are formed such that the deflecting means16c1and16c2each couple out only an allocated spatial area381or382of the transmit signal141along one or several directions perpendicular to a course of the communication channel32, for example perpendicular to the direction182, i.e., approximately perpendicular to the direction181or perpendicular thereto, while other spatial portions382or383can pass the deflecting means16to reach participant apparatuses behind the same. Thus, the deflecting means16ccan be configured to couple out the respective part based on a spatial position of the deflecting means within a course of the wireless optical signal141in parallel to the second direction, wherein the spatial positioning according toFIG.3ccan easily be combined with the configuration according toFIG.3aand/orFIG.3bto supply a large number of participant apparatuses with optical signals and/or to direct a high number of wireless optical data signals from a respective number of participant apparatuses to the base station. The shown spatial multiple use, i.e., merely partly coupling out the optical signal along a direction perpendicular to the axis24, can relate to one or several directions. If the axis24is considered, for example, as being parallel to an x-direction, partial coupling out can take place along the y-direction, with large or complete coupling-out along the z-direction arranged perpendicular thereto in the Cartesian coordinate system, or vice versa. Alternatively, it is also possible to configure the spatial partial coupling-out such that only parts of the optical signal141are coupled out both along the y-direction as well as along the z-direction such that several deflecting elements of different participant apparatuses can be arranged along both respective directions. In other words, the physical principle of coupling out depends on the specific configuration of the beam splitter/deflection means. The following possible exemplary realizations result, which are also illustrated inFIGS.3a,3band3c: 1. The beam splitter is a possibly simple disk rotated, for example, by 45° to the optical axis, as illustrated inFIGS.2and3. The disk can consist of a material transparent with respect to the communication wavelength or can include the same. Here, the coupling-out can be based on Fresnel reflections at the front and rear341,342of the disk. Exemplarily, this can relate to: the polarization of the light in the light beam (p-polarized light is reflected to a different degree than s-polarized light, for example); the refractive index of the beam splitter and the surrounding medium, such as air, vacuum, water or the same; the angle at which light impinges on the disk (an angle of >60°, measured to the surface normal, increases the Fresnel reflections); and a coating on the beam splitter, such as a reflective coating and/or an anti-reflective coating for a specific wavelength range on the front341and/or the rear342. 2. Exemplarily, the beam splitter can have a cuboid shape, as illustrated inFIG.3b, in that the same is composed of, for example, two prisms361and362exemplarily having a triangular base, which are connected via a connecting layer42. The connecting layer42can include a transparent adhesive material/adhesive, but can also include another solid or fluidic material, such as air. 
Here, it is advantageous that the beam offset28/28′, which can result during each coupling out, can be reduced with respect to the formation as disk according toFIG.3a. However, the configuration of the beam splitter is here spatially larger. The coupling-out ratio of the optical light power, i.e., the ratio between the coupled-out portion141aand the transmitted part141′, can result from or can at least be based on the thickness and the material of the connecting layer42or the ratio of the refractive indices of prism material with respect to the connecting material, this means a material of the prisms36and the material of the connecting layer42established for connecting the prisms361and362. Alternatively, and while considering the polarization, the beam splitter16bcan also be configured in the form of Glan-Taylor prisms or Glan-Foucault prisms. 3. The beam splitter can be configured as described in point 1 or 2, and can additionally be configured as a mirror, for example, a dichroic mirror, i.e., the same selectively couples out a wavelength or a wavelength range. Thus, it is possible that not all trolleys/participant apparatuses receive all data, but only those that are intended for the specific trolley/trolleys. 4. The beam splitter can have a spatial effect, i.e., the same only has an effect on part of the cross section of the beam, as described inFIG.3c. The same can be configured, for example, to be so small that the same only has an effect on a small part of the beam cross section and couples out the same or part of the same. This beam splitter16cof each participant apparatus/trolley couples out another part of the beam and lets the rest pass completely. The beam offset during coupling-out can thus be prevented. In such embodiments, it can be advantageous when the beam splitter has a high reflectance, but this is optional. The beam splitter concepts can be combined and/or used in all embodiments described herein. In other words,FIGS.3ato3cshow implementation variations of the beam splitter. The embodiments according toFIGS.2,3a,3band3cdescribe a configuration of wireless optical communication systems such that the base station is established to transmit a wireless optical signal, which is transferred to one or several participant apparatuses. Alternatively or additionally, it is possible that one or several participant apparatuses of the wireless optical communication network are established to transfer a wireless optical signal to the base station. Here, the participant apparatuses of communication networks can be formed in the same way or differently, this means there is the option that some participant apparatuses are established for unidirectional communication operation, which can differ among the participant apparatuses, while other participant apparatuses are established for bidirectional communication operation. FIG.4shows a schematic block diagram of a wireless optical communication network according to an embodiment, wherein a number of participant apparatuses101and102is arranged, wherein a number thereof can be arbitrary as described above. The participant apparatuses101and102are each established for bidirectional communication, this means the communication means121and122can each comprise a transmitting interface441or442for emitting wireless optical signals142aor142b. Additionally, the communication means121and122can comprise receiving means461or462for receiving the coupled-out portions141aor141b. 
The communication means12can be established for full-duplex operation or half-duplex operation. While half-duplex operation can mean alternating transmitting and receiving, full-duplex operation can mean simultaneous transmitting and receiving of wireless optical signals from a communication means12to the base station5or from all communication means12simultaneously to the base station. Corresponding to the communication means121and122, the base station5can comprise a transmitting interface443and a receiving interface463to transmit the wireless optical signal141in a beam261or to receive the wireless optical signal142in a beam262, wherein the beams261and262can be spatially separated or overlapping. Here, the wireless optical signal142can be an optical combination or overlap of the wireless optical signals142aand142bemitted by the participant apparatuses101and102. Thus, the wireless optical communication network400can be configured such that the combined overlapping wireless optical signal142is not a digital or electronic combination of the signals of the participant apparatuses101and102, but an optical combination or overlap. Thus, each of the signals142aand142bcan be part of the combined or overlapping receive signal142that is received by the base station5. In that way, the combination can take place in the optical domain instead of the electrical domain. After receiver-side conversion of the signal into the electrical domain, the signal can at first also be present in a combined manner. Individual signals can be separated from one another by means of de-multiplexing at the base station. For this, the same deflection means161and162can be used, which are also used for coupling out parts141aand141b, which means the deflection means161and162can be used bidirectionally. Optical paths of the signals141and142can be formed in the communication channel32spatially separated or spatially completely or partly overlapping. Thus, it is intended that the participant apparatuses receive the wireless optical signals by means of the coupled-out portion from the base station, wherein part of the wireless optical signal is coupled out with the deflection means161or162and a respectively remaining part141′ or141″ passes the deflection means161or162. In the transmitting case of the participant apparatuses, the participant apparatus102can emit the wireless optical partial signal142band direct the same with the deflection means162in the direction of the base station5such that the wireless optical partial signal142bimpinges on the deflection means161of the participant apparatus101and passes the same in the direction of the base station. Depending on the synchronization between the participant apparatuses101and102, the portion142acan be optically combined with the portion142bor can be transmitted at a different time. Thus, embodiments relate to the fact that the participant apparatuses each emit partial signals142aand142bthat are deflected with the deflection means161or162in the direction of the base station such that the optical partial signals142aand142beach form part of the combined wireless optical signal142.
The different participant apparatuses101and102of the wireless optical communication network400or of a different wireless optical communication network described herein can each receive the same wireless optical signal141or transmit the same wireless optical signal142, at least regarding the characteristics of the wireless optical signal142, individually, in groups or globally, i.e., for each participant apparatus. A differentiation between individual participant apparatuses or groups thereof can be made by allocating a wavelength of the wireless optical signal, a frequency in the base band, a polarization of the wireless optical signal or a combination thereof, which is clearly allocated to the participant apparatus or the group thereof. In other words, both when using the wireless optical signal141as well as when using the wireless optical signal142, several participants can share the respective beam, i.e., the optical power and/or, at least in regions, the spatial area or the spatial course. In other words,FIG.4shows a beam splitter based Li-Fi system in a linear communication scenario (bidirectional) and represents exemplarily a beam from the base station to one of the trolleys via the coupled-out portion141a. The continuing beam, the portion141′, indicates that the beam can propagate further along the communication channel32. Additionally, a beam142a/142is illustrated for the inverse direction, i.e., trolley to base station. Thus, the described system can use the described optical channel also for the back channel to allow bidirectional data transmission. Both the base station and the trolleys can be configured in the following ways: 1. The base station consists of a transmitter and the trolleys of one receiver each; 2. The base station consists of a receiver and the trolleys of one transmitter each; 3. The base station and each trolley consist of both a transmitter and a receiver. 4. Combinations thereof. Variations 1 and 2 can be used for unidirectional communication as described inFIG.2. However, variation 3 allows bidirectional communication as described inFIG.4. The transmitter44can include at least one emitter that is configured to emit the wireless optical signal and optionally includes optics for beamforming, for example a lens for collimation, a Köhler integrator or the same. The receiver46includes at least one detector for receiving the light signal and optionally at least one optical element, for example a lens for focusing the light beam on the detector. In bidirectional operation, the communication channel32is composed of both communication directions. Both beams can be spatially overlapping. Bidirectional communication can take place in a half-duplex or full-duplex operation method. To allow full duplex operation, a multiple access mechanism can be implemented. Wireless optical communication can be realized by using at least one of a frequency-division multiple access (FDMA) in the base band and/or in the carrier spectrum, a time-division multiple access (TDMA), carrier sense multiple access (CSMA), code-division multiple access (CDMA), space-division multiple access (SDMA) or the same. Even in half duplex operation, such a multiple access mechanism can be implemented, for example to increase data security. For example, the beam splitters16of the trolleys can be configured as dichroic mirrors that only couple out or couple in a specific wavelength of the light.
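One simple way to realize such a multiple access mechanism is a static allocation per trolley, for example a disjoint baseband subcarrier (FDMA) or, when dichroic beam splitters are used, a dedicated optical wavelength per trolley. The allocation sketch below is illustrative only; the subcarrier spacing and wavelengths are made-up numbers and do not come from the description.

```python
# A hedged allocation sketch: static FDMA subcarriers and, optionally, per-trolley
# wavelengths for dichroic coupling-out. All numeric values are assumptions.

def fdma_plan(n_trolleys: int, first_subcarrier_hz: float = 1e6, spacing_hz: float = 5e6) -> dict:
    """Assign each trolley a disjoint baseband subcarrier so full-duplex uplinks do not collide."""
    return {f"trolley_{k}": first_subcarrier_hz + k * spacing_hz for k in range(n_trolleys)}

def wdm_plan(n_trolleys: int, first_nm: float = 850.0, spacing_nm: float = 10.0) -> dict:
    """Assign each trolley its own wavelength, matching a dichroic coupling-out filter."""
    return {f"trolley_{k}": first_nm + k * spacing_nm for k in range(n_trolleys)}

if __name__ == "__main__":
    print(fdma_plan(3))  # subcarrier offsets in Hz, one per trolley
    print(wdm_plan(3))   # optical wavelengths in nm, one per trolley
```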
FIG.5shows a schematic block diagram of a wireless optical communication network according to an embodiment, wherein a beam-guiding or beam-deflecting element48is arranged to spatially direct or deflect the communication channel32. This allows any orientation of the direction181in space, for example parallel to the direction182. The wireless optical communication network can comprise one or several optional beam-guiding elements48such that any non-straight course of the communication channel32can be established. The beam-guiding element48can include a reflector or mirror or can consist thereof. Alternatively or additionally, other beam-guiding elements can be arranged, such as optical fibers or optical waveguides that are arranged to couple in an optical signal on a first side, to deflect the optical signal with respect to its direction along the course of the optical fiber, and to output the same on a second side along the desired direction. By arranging one or several beam-guiding elements, a course of the communication channel32can be changed, which means, with reference toFIG.1, that the direction181is variable across its course. The beam-guiding element can be spatially moved by an actuator and/or can be variable with respect to its beam-guiding characteristics, such as to change a direction of beam guiding by means of a translational and/or rotational movement and/or to change a transmitted or filtered-out wavelength range or polarization or the like over time.

In other words, some embodiments provide for the communication channel extending along an axis that corresponds to a straight line in space, such as described in the context ofFIG.2. According to further embodiments, it is also possible that the communication channel comprises curves or bends, as illustrated inFIG.5. Thereby, the wireless optical communication system can still be configured in a linear manner in that the participant apparatuses/trolleys only use one spatial degree of freedom, for example forward and backward on the axis of movement. The curvature/curve/bend of the communication channel32can be obtained by arranging one or several reflectors or optical fibers48. For example, the wireless optical signal could be introduced into an optical fiber cable, be deflected and transferred again into a free medium. The communication capability of the participant apparatuses/trolleys could be provided within the curve/curvature/bend, but this is not needed, for example, in the case of an optical fiber. However, depending on the arrangement, the same can be implemented.FIG.5shows a realization of a curve by means of a mirror with the example of a 90° curve.

The above-described embodiments relate to a base station emitting the wireless optical signal141along one direction and/or receiving the wireless optical signal142from one direction. Other embodiments provide for the base station operating in several directions, wherein this can be adjusted individually for the transmitting case and/or the receiving case.
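The effect of a reflective beam-guiding element, such as the mirror realizing the 90° curve, can be illustrated with the standard law of reflection. The following sketch is purely illustrative and not part of the described embodiments; the chosen propagation direction and mirror normal are assumptions.

```python
# Minimal geometry sketch (illustrative only): the new propagation direction after
# a planar beam-guiding mirror, using the standard reflection law r = d - 2 (d . n) n.
import numpy as np

def reflect(direction, normal):
    """Reflect a unit propagation vector at a planar mirror with the given normal."""
    d = np.asarray(direction, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# A 90-degree bend as in the mirror example: a beam travelling along +x hits a
# mirror whose normal bisects +x and +y and leaves along -y.
incoming = np.array([1.0, 0.0, 0.0])
mirror_normal = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
print(reflect(incoming, mirror_normal))   # -> approximately [0., -1., 0.]
```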
FIG.6ashows a schematic perspective illustration of a wireless optical communication network600according to an embodiment, wherein the base station5is configured to emit the wireless optical signal141, wherein the explanations also apply for the receiving case without any limitations. The participant apparatuses101and102are exemplarily arranged on different sides of the base station5and are arranged such that the communication means121or122are moveable with respect to the base station5. With respect to the wireless optical signal141, a deflection element52is arranged, for example as a deflection mirror or prism structure, such that a first portion141-1is deflected in a first direction and a second portion141-2of the wireless optical signal141is deflected in another, different direction. The deflection element or beam-guiding element52can hence also be used for beam splitting. Exemplarily, the directions182aand182bobtained thereby are parallel to one another, such that the communication channels32aand32bcan also propagate parallel to one another but, starting from the base station5, in different directions, for example opposite to one another in space. Basically, any combination of directions is possible with the deflection element52. While the division of the wireless optical signal141into two directions182aand182bis illustrated, a different number of directions can be obtained, for example a single one as illustrated inFIG.5or more than two, for example by arranging additional areas in the beam-guiding element or deflection element or means for beam splitting52.

The means for beam splitting52can be configured to obtain the wireless optical signal141-1and the wireless optical signal141-2by beam splitting from a common source signal emitted by the base station, and to deflect the obtained portions in different, e.g., opposite parallel directions. Alternatively or additionally, the base station5can include several wireless emitters that are configured to provide differing signals such that the wireless optical signal141-1is generated by a first emitter and the wireless optical signal141-2is generated by a different emitter. If the wireless optical signal141is provided, for example, by a single emitter, both portions141-1and141-2can have the same information content and can hence be considered as same or identical parts of the signal, which is divided along the directions182aand182b, such that the wireless optical signal141-1and the wireless optical signal141-2are the same, for example. However, it is also possible to provide two or several emitters, such that the portions141-1and141-2are generated with different information, light powers, wavelengths or other signal characteristics, such that the wireless optical signals propagating in the communication channels32aand32bdiffer from one another with respect to at least one signal characteristic.

The shown configuration, two-way communication with respect to the base station, allows a further degree of freedom in supplying participant apparatuses with wireless optical signals. Alternatively or additionally, in contrast to a one-sided arrangement, where the base station is arranged at one end of the communication channel and the participant apparatuses are arranged along one side or direction starting therefrom, a simple or error-tolerant configuration of the wireless optical communication network can be obtained. For a given length of the overall communication channel, for example 100 m, wherein any other value can be implemented, the one-sided arrangement makes certain demands regarding the precision of the adjustment and/or the optics used for the wireless optical signal. These demands can be relaxed by dividing the communication channel into two subsections, for example with a symmetrical division of 50%/50%, i.e., half by half, approximately 2×50 m, but also with an asymmetrical division, such as 90%/10%, 70%/30%, 60%/40% or values in between.
Each subchannel is then respectively shorter, such that effects like divergence might have less impact.

FIG.6bshows a schematic perspective view of a wireless optical communication network600′ according to a further embodiment, wherein the communication channels32aand32bare additionally deflected, compared toFIG.6a, by arbitrarily adjustable and optional beam-deflecting elements481and482. Basically, any spatial direction of the communication channel(s) can be adjusted. In the further course, the communication channels32aand/or32bcan also be deflected again. The deflection element52and/or the beam-deflecting elements481and/or482can also be part of the base station5.

FIG.6cshows a schematic perspective view of a wireless optical communication system600″, wherein the base station5is configured to emit the parts141-1and141-2along different directions182aand182b. This can be obtained, for example, by integrating the deflection element52ofFIG.6ain a housing of the base station5and/or by using two individual optical emitters or signal sources. Here, a different, in particular higher, number of emitters can also be provided to obtain a higher number of directions.

In other words, embodiments relate to multiple configurations. It is possible to form several communication channels or a base station with several transmitters and/or receivers, as illustrated inFIGS.6a,6band6c, by possibly stationary beam splitters48and/or52. The reflective area of the beam splitters can be planar, but can also have a curvature, for example to operate as a Köhler integrator. In that way, placement tolerances can be compensated or the beam can be formed or deflected. If the base station has several transmitters and/or receivers as exemplarily shown inFIGS.6a,6band6c, the same can form several communication channels in any spatial directions. The participant apparatuses/trolleys can move along the same and at the same time maintain wireless optical communication. The stationary beam splitters ofFIG.6bcan be used to allow the communication of a base station with trolleys on different linear communication channels. A base station having several transmitters and/or receivers according toFIG.6callows the formation of several linear communication channels on which different trolleys move.

A signal source of the wireless optical communication networks described herein can be configured to emit any light power. For example, signal sources in the participant apparatuses and/or the base station are configured such that an optical signal power of at least 1 mW and at most 100 W, at least 50 mW and at most 1 W or at least 90 mW and at most 400 W, approximately 100 mW, is provided to a receiver of the wireless optical signal. This means that a loss of optical power across the communication channel is taken into account in order to provide the stated optical powers to the receiver and thus a high receiving quality.

The wireless optical communication networks described herein can map any scenarios. Particularly suitable are industrial scenarios where harsh environmental conditions can prevail. Some of the wireless optical communication networks described herein are described in the context of participant apparatuses established as trolleys. Such wireless optical communication networks can comprise, for example, a rail area, for example in a traverse, a crane or other systems where one or several elements move to and fro.
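The influence of dividing the communication channel into shorter subsections on divergence and on the optical power available at a receiver can be estimated with a simple geometric link budget. The following sketch is purely illustrative and not part of the described embodiments: the transmit power, divergence angle, initial beam radius and receiver aperture are assumed values, and a uniformly filled (top-hat) beam is assumed for simplicity.

```python
# Minimal link-budget sketch (all numbers are illustrative assumptions): received
# optical power at the end of a channel subsection when the beam diverges,
# compared against the power window named above (1 mW .. 100 W at the receiver).
import math

def received_power_w(p_tx_w, divergence_rad, length_m, beam_radius0_m, rx_radius_m):
    """Geometric coupling of a uniformly filled, diverging beam into a circular aperture."""
    beam_radius = beam_radius0_m + length_m * math.tan(divergence_rad / 2.0)
    captured = min(1.0, (rx_radius_m / beam_radius) ** 2)   # fraction of the beam area captured
    return p_tx_w * captured

if __name__ == "__main__":
    # Splitting an assumed 100 m channel into two 50 m subsections roughly halves
    # the spot growth at the farthest receiver compared to a single 100 m section.
    for length in (100.0, 50.0):
        p_rx = received_power_w(p_tx_w=0.5, divergence_rad=2e-3,
                                length_m=length, beam_radius0_m=0.01, rx_radius_m=0.02)
        print(length, p_rx, 1e-3 <= p_rx <= 100.0)   # within the stated power window?
```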
Further, embodiments relate to a participant apparatus, such as the participant apparatus10. The same comprises communication means for transferring a wireless optical signal between the participant apparatus and the communication partner. Here, the transfer relates to transmitting and/or receiving the wireless optical signal or different wireless optical signals. Further, the apparatus comprises a deflection means that is configured to deflect at least part of the wireless optical signal with respect to a direction between the deflection means and the communication means. While in the receiving case coupling out of merely a part can be provided, in the transmitting case it can be possible or even advantageous to deflect the entire wireless optical signal provided by the communication means in the direction of the communication partner. Here, the deflection means can be stationary with respect to the communication means. Deflection towards a direction or communication partner can take place while considering possible further beam-deflecting or reflecting elements. Thus, for example,FIG.6bcan also be understood such that an optical path is directed from a participant apparatus in the direction of the base station as long as the participant apparatuses101and/or102are established for transmitting wireless optical signals.

Embodiments described herein relate to a communication solution for a possibly linear communication scenario that uses optical wireless communication (OWC or light fidelity, Li-Fi). In contrast to optical fiber communication, no optical fiber is used, even though embodiments can use the same for deflecting the communication channel. A spatially well-defined communication channel is formed by a medium, such as air, water or the like, such that different systems at the same location do not interfere with each other, since their channels do not overlap, i.e., the same can be separated spatially and/or in frequency and/or code or the like. Obtainable data rates can range from a few bit/s up to several tens of Gbit/s or more. One advantage of this concept is the fact that multipath propagation can essentially be prevented by well-defined beam guidance that can be obtained by respective configuration of the transmitters. If the base station has, for example, several transmitters distributed along the linear axis, the same can be synchronized, which, however, would result in a reduction of the maximum data rate. This problem can be prevented with the embodiments described herein by preventing multipath propagation.

Compared to data light barriers, embodiments do not only allow the communication between two participants but the communication between a base station and basically any number of mobile participants, which are also referred to as trolleys herein. Here, other than described in EP 2 903 407 A1 or US 2013/094927 A, embodiments can be configured without a so-called daisy-chain configuration, which is based on receiving a signal, optionally evaluating the same and generating it again for further participants. Embodiments allow the reception of the same wireless optical signal by the usage of beam splitters or deflection elements. Other than in apparatuses described, for example, in DE 10 2007 041 927 A1 or DE 28 46 526 A1, the wireless optical signal is here transmitted via a free medium, such as air, water or vacuum.
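The difference to a daisy-chain configuration, in which each participant receives and regenerates the signal for the following participants, can be estimated with a rough delay comparison. The following sketch is purely illustrative and not part of the described embodiments; the number of trolleys, their spacing and the per-hop regeneration delay are assumed values.

```python
# Rough back-of-the-envelope sketch (assumed numbers): end-to-end delay to the last
# of N participants for a daisy-chain configuration, where every participant
# regenerates the signal, versus a shared beam distributed via deflection elements.
C = 299_792_458.0          # speed of light in vacuum, m/s (free-medium propagation)

def daisy_chain_delay(n_participants, hop_length_m, regen_delay_s):
    """Delay until the last participant: N propagation hops plus N-1 regenerations."""
    return n_participants * hop_length_m / C + (n_participants - 1) * regen_delay_s

def shared_beam_delay(total_length_m):
    """Delay until the farthest participant: pure propagation along the shared beam."""
    return total_length_m / C

if __name__ == "__main__":
    n, hop = 10, 10.0                        # 10 trolleys spaced 10 m apart (assumed)
    print(daisy_chain_delay(n, hop, 1e-6))   # ~9.3 microseconds, dominated by regeneration
    print(shared_beam_delay(n * hop))        # ~0.33 microseconds, propagation only
```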
Although some aspects have been described in the context of an apparatus, it is obvious that these aspects also represent a description of the corresponding method, such that a block or device of an apparatus also corresponds to a respective method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or detail or feature of a corresponding apparatus.

While this invention has been described in terms of several advantageous embodiments, there are alterations, permutations, and equivalents, which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.