69,820
I'd like to know what "Alternative Function" refers to in the context of the IO ports of a microcontroller. I don't need to know how to activate it when connecting to a peripheral, but I'd like to know what exactly it is and why we'd need it.
Many pins of your microcontroller have different functions. The 'normal' function would refer to GPIO, General Purpose Input/Output . In that case, you can use these pins directly by writing to and reading from the relevant registers. 'Alternate' functions would refer to other functions, which may include I2C, SPI, USART, CCP, PWM, Clock, ADC, etc... How you control the pins when in an alternate function depends on the peripheral, but it generally comes down to writing to and reading from special function registers (SFRs); the peripheral takes care of the rest. Which function is standard after a RESET depends on the part (it is not always GPIO!), and you can find that in the relevant datasheet. Most of the time, you can select the function you want to use on-the-fly, so you can switch between peripherals. By using one pin for several peripherals, you can make microcontrollers with many more features. However, because most of the time you want that peripheral on that pin all the time (and don't want to switch functions on-the-fly), you can't use all peripherals in one program, or at least not at the same time. On the other hand, that is rarely needed anyway. As Connor points out , 'alternate function' can refer to something else as well, in just a slightly different context: here it isn't about what function you put on a pin, but about what pin you use for a function. This is called Peripheral Pin Select, and basically means you can select which pin your peripheral is using. You could, for example, do RS232 over RA1 and RA2 or over RB1 and RB2. See Connor's answer for a more detailed description (and upvote him for this).
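To make this concrete, here is a minimal sketch in C of what selecting an alternate function can look like. The register layout below follows the STM32 convention (MODER/AFR as found in CMSIS headers) rather than the ATtiny's, purely as an illustration; the pin and AF numbers are assumptions, so check your own part's datasheet for the real mechanism.
GPIOA->MODER &= ~(0x3 << (2 * 2));   /* clear the mode bits for pin PA2          */
GPIOA->MODER |=  (0x2 << (2 * 2));   /* 0b10 = alternate-function mode           */
GPIOA->AFR[0] &= ~(0xF << (4 * 2));  /* clear the AF selection for PA2           */
GPIOA->AFR[0] |=  (0x7 << (4 * 2));  /* e.g. AF7 routes a USART TX onto the pin  */
/* From here on the peripheral drives PA2; the GPIO output register
   has no effect until the pin is switched back to GPIO mode. */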
{ "source": [ "https://electronics.stackexchange.com/questions/69820", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/16152/" ] }
70,009
I am looking at this: http://learn.adafruit.com/photocells/using-a-photocell It's connecting VCC <-> LDR <-> resistor <-> GND , and the analog input is between the LDR and the resistor. I understand that a resistor may be necessary to control the current (in case the LDR resistance is very low). But why can't I just do the following: VCC <-> LDR <-> resistor <-> Analog Input and forget about the pull-down?
When working with digital circuitry that senses an analog voltage, for example a microcontroller, or let's say an Arduino, you are measuring voltages. However, without current, voltage cannot be present. For a voltage to appear across a component, there needs to be a current flowing through it. According to Ohm's law, \$V=I*R\$; when \$I=0\$, the equation becomes \$V=0*R=0\$. Thus, no voltage will be present, and the microcontroller will not be able to measure anything. Proper way of sensor connection Check out the schematic below. First, have a look at the left side, a proper LDR connection with a proper pull-down resistor. A current will flow through R2 and create a voltage drop across it. \$V_{analog}=I_{sensor}*R_2\$, where \$I_{sensor}\$ is determined by the total resistance of the sensor and \$R_2\$. Since the LDR's resistance changes with the light, the current, and hence the voltage, will change. You may have noticed that there is a resistor I drew at the input. This is called the input resistance, or impedance, of the microcontroller and is generally very big, such as \$10M\Omega\$. In this configuration the input resistance and \$R_2\$ are connected in parallel, so their effective resistance is going to be \$R_{total}=\left(\dfrac{1}{R_2}+\dfrac{1}{R_{in}}\right)^{-1}\approx9990\Omega\$, which is almost equal to \$10k\Omega\$. So, there will be no significant change. The voltage AnalogValue is then \$V_{analog}=I_{sensor}*10k\$, where \$I_{sensor}\$ is \$\dfrac{V_{cc}}{R_1+10k}\$. Let's say our sensor \$R_1\$ is 10k at the current lighting condition, and \$V_{cc}=5V\$: \$I_{sensor}=\dfrac{V_{cc}}{R_1+10k}=\dfrac{5}{10k+10k}=250\mu A\$, so \$V_{analog}=2.5V\$. What if there was no pull-down resistor? If there was no pull-down resistor, the configuration would be as shown in the diagram below. The sensor current \$I_{sensor}\$ would be the same as \$I_{input}\$, since all the currents along a single path are the same. Our microcontroller measures AnalogValue , the voltage on the pin. Let's calculate the values for this scenario, too. We know that \$I_{sensor}=I_{input}\$; assuming again that the LDR is \$10k\$, AnalogValue is calculated as follows: \$V_{analog}=I_{sensor}*R_{in}\$, where \$I_{sensor}=\dfrac{V_{cc}}{R_1+R_{in}}=\dfrac{5}{10k+10M}\approx500*10^{-9}=500nA\$. Thus, \$V_{analog}=I_{sensor}*10M\approx(500*10^{-9})(10*10^6)\approx5V\$. As you can see, since almost no current flows, there is almost no voltage dropped across the sensor, and even though we read 2.5V in the previous proper configuration, we read 5V with the same light, i.e. when \$R_1=10k\$. This configuration will not work.
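If you then want to turn the ADC reading back into an LDR resistance in firmware, a minimal Arduino-style C sketch could look like this. The pin choice (A0) and the 10k value for R2 are assumptions matching the example above.
void setup() {
    Serial.begin(9600);
}

void loop() {
    const float VCC = 5.0, R2 = 10000.0;   // divider values assumed above
    int raw = analogRead(A0);              // 0..1023 on a 5 V Arduino
    float v = raw * VCC / 1023.0;          // voltage at the divider tap
    if (raw > 0) {
        float r_ldr = R2 * (VCC - v) / v;  // solve the divider for the LDR
        Serial.println(r_ldr);             // ohms; lower means more light
    }
    delay(100);
}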
{ "source": [ "https://electronics.stackexchange.com/questions/70009", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/24178/" ] }
70,735
I have been a hobbyist solderer for about 6 years now, and I'm quite proficient. I have always used lead solder because my experience with lead-free solder is awful. But I'm going to college in a few months and I just upgraded to a very nice digital soldering gun because I am making and selling aviation cables online. I am hoping to expand my little cable business in college, and if I am going to be soldering often I would like to get away from lead solder. Are there lead-free solders that can be used as easily as lead solder, and if so, is there really any health benefit from lead-free solder? Are the fumes from solder toxic, and does lead-free solder solve that problem?
Use leaded solder if you can. It is easier to work with, requires lower temperatures, and there are fewer quality issues with the joints. The only reason to use lead-free solder is if it is not allowed in your jurisdiction, or you want to sell soldered goods someplace (like Europe) where leaded solder is forbidden for practical purposes. No, lead in solder doesn't pose more of a health risk to you when soldering. The vapor pressure of lead is so low that there just aren't significant numbers of lead molecules in the air as a result of soldering. The predominant health danger from soldering is inhaling the vaporized flux. This is made more dangerous by lead-free solder, since the temperature required for a good joint is higher. Even that is a small issue compared to different types of fluxes. If you are worried about this, use a fume extractor. In any case, avoid breathing the immediate vapors from soldering, whether leaded or lead-free and regardless of the type of flux.
{ "source": [ "https://electronics.stackexchange.com/questions/70735", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10423/" ] }
72,263
This question might be too localized, but I'll try. Is it possible to replace a variable resistor by a MOSFET, under the conditions shown in the following schematic? If yes, can someone propose a MOSFET type or the required MOSFET parameters? simulate this circuit – Schematic created using CircuitLab Update What I am actually trying to accomplish is to replace R2a by something simple that I can control with a microcontroller (DAC). I am hacking an existing device and cannot replace the resistor R1.
Yes, BUT: Technically the MOSFET can operate as a variable resistor, but there are two main issues: (1) In the ohmic region (which is quite narrow, in terms of output voltage) the linearity is poor, and it also depends on input voltage. It won't be very easy to tune it to behave like a proper resistor. (2) A MOSFET's output resistance is usually not an accurate value, and it will be hard to get the exact value from the datasheet. What you can do is measure it for various input and output voltages, and create a table with the values. But if you don't need it to be accurate, you can use the graphs in the datasheet. Another option is to use an integrated voltage-controlled resistor (VCR).
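For reference, the standard square-law model makes the first point explicit. In the ohmic (triode) region, \$I_D = k\left[(V_{GS}-V_{th})V_{DS}-\tfrac{1}{2}V_{DS}^2\right]\$, so for small \$V_{DS}\$ the effective resistance is approximately $$ r_{DS} \approx \frac{1}{k(V_{GS}-V_{th})} $$ This shows both the gate-voltage control (useful here, since the DAC sets \$V_{GS}\$) and the nonlinearity that appears once \$V_{DS}\$ is no longer small. The device constant \$k\$ also varies considerably from part to part, which is the second point above.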
{ "source": [ "https://electronics.stackexchange.com/questions/72263", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/22758/" ] }
72,582
Why do the drain and source terminals of a MOSFET function differently while their physical structure is similar/symmetrical? This is a MOSFET: You can see that the drain and source are similar. So why do I need to connect one of them to VCC and the other to GND ?
Myth: manufacturers conspire to put internal diodes in discrete components so only IC designers can do neat things with 4-terminal MOSFETs. Truth: 4-terminal MOSFETs aren't very useful. Any P-N junction is a diode (among other ways to make diodes). A MOSFET has two of them, right here: That big chunk of P-doped silicon is the body or the substrate . Considering these diodes, one can see it's pretty important that the body is always at a lower voltage than the source or the drain. Otherwise, you forward-bias the diodes, and that's probably not what you wanted. But wait, it gets worse! A BJT is a three-layer sandwich of NPN materials, right? A MOSFET also contains a BJT: If the drain current is high, then the voltage across the channel between the source and the drain can also be high, because \$R_{DS(on)}\$ is non-zero. If it's high enough to forward-bias the body-source diode, you don't have a MOSFET anymore: you have a BJT. That's also not what you wanted. In CMOS devices, it gets even worse. In CMOS, you have PNPN structures, which make a parasitic thyristor. This is what causes latchup . Solution: short the body to the source. This shorts the base-emitter junction of the parasitic BJT, holding it firmly off. Ideally you don't do this through external leads, because then the "short" would also have high parasitic inductance and resistance, making the "holding off" of the parasitic BJT not so strong. Instead, you short them right at the die. This is why MOSFETs aren't symmetrical. It may be that some designs otherwise are symmetrical, but to make a MOSFET that behaves reliably like a MOSFET, you have to short one of those N regions to the body. Whichever one you do that to is now the source, and the diode you didn't short out is the "body diode". This isn't anything specific to discrete transistors, really. If you do have a 4-terminal MOSFET, then you need to make sure that the body is always at the lowest voltage (or highest, for P-channel devices). In ICs, the body is the substrate for the whole IC, and it's usually connected to ground. If the body is at a lower voltage than the source, then you must consider the body effect . If you take a look at a CMOS circuit where there's a source not connected to ground (like the NAND gate below), it doesn't really matter, because if B is high, then the lower-most transistor is on, and the one above it actually does have its source connected to ground. Or, B is low, and the output is high, and there isn't any current in the lower two transistors.
{ "source": [ "https://electronics.stackexchange.com/questions/72582", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5271/" ] }
72,652
Let's say I wanted to make a widget containing a relay to switch between two antennas. There is a coax transmission line coming in from the transmitter, and two going out, each to a separate antenna. Inside is a relay which switches the center conductor, and the shields terminate into a metal enclosure around the relay: simulate this circuit – Schematic created using CircuitLab Let's further say that this is operating at HF, so the enclosure is very small relative to the minimum wavelength this device will encounter in operation. At point A, there is an impedance discontinuity. The coax was \$50\Omega\$, but inside, it will be something else. At point B there's another discontinuity, as we transition back to \$50\Omega\$. So, there must be some wave reflection happening here. What effect would this have on the transmitter? Would it result in a horrible SWR, or no? Why?
Probably very little effect at all as long as the dimensions are small. Coming from the left hand side, there will be a reflection from point 'A' followed closely by an (almost) equal and opposite reflection from 'B'. As long as the distance from 'A' to 'B' is small, these reflections will effectively cancel out. As an example, let's say the impedance inside the switch is 100Ω. The reflection coefficient at 'A' will be 0.333 and at 'B' it will be -0.333. If the enclosure width is say 200mm, the time between these reflections will be around 1ns (very small at HF). Reflections will continue to 'bounce' between 'A' and 'B' and each time there will be some energy coupled into the transmission line, but these will occur 2ns apart and will be attenuated each time due to internal losses. We can draw a reflection diagram showing the effect of a unit step travelling down the line. The vertical axis represents time and the horizontal axis distance. With the example figures, there will be some overshoot at the transmitter lasting a few nanoseconds. Please excuse the amateurish diagram! Edit: Following supercat's suggestion, I have added another sketch showing the resultant waveforms at the source and load. The step width is the round-trip time across the switch and back. However, whilst this kind of diagram is useful to gain an insight into what is going on, trying to calculate the actual overshoot amplitude is not too helpful. Effects such as finite rise and fall times, multiple reflections inside the switch (e.g., each side of the relay contact) and other effects will mostly smooth the theoretical transitions. I have not even addressed line attenuation and other losses, nor have I estimated the actual impedance of the relay switch, which would be non-trivial. At best you can only estimate a worst-case scenario.
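The reflection coefficients quoted above come from the standard mismatch formula; with the assumed \$100\Omega\$ impedance inside the switch: $$ \Gamma = \frac{Z_2 - Z_1}{Z_2 + Z_1}, \qquad \Gamma_A = \frac{100-50}{100+50} = +\frac{1}{3}, \qquad \Gamma_B = \frac{50-100}{50+100} = -\frac{1}{3} $$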
{ "source": [ "https://electronics.stackexchange.com/questions/72652", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17608/" ] }
72,908
In the following pinout diagram for an ATtiny26 microcontroller, a 20-pin IC: The VCC/AVCC and GND pins aren't aligned. Surely it would be easier for PCB design to connect these by going straight across rather than having to cross (requiring vias, a second layer, or complex routing). Why would these pins be arranged this way?
One very good reason, as I learned myself from a recent prototype, is surviving a reversal of the IC's physical orientation in a circuit. I plugged a through-hole version of this microcontroller into a socket backwards, and spent about an hour with an oscilloscope trying to determine why pins were not behaving as expected. When I discovered the IC was in backwards (and recovered from the desire to shoot myself), I realized I was thankful that a polarity reversal hadn't rendered the IC useless. With the pins backwards in this arrangement, the chip actually receives VCC and GND correctly in both directions. ICs with VCC on pin 1 and GND on the opposite corner heat up and generally fail very quickly when inserted backwards.
{ "source": [ "https://electronics.stackexchange.com/questions/72908", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2028/" ] }
73,138
1 Joule is defined as 1 Watt of power working for 1 second. When the electric bill comes, it says "you've used 5000 Watts, therefore pay us $100". However, from what I understand, if I turn on a 5000 Watt oven for even three seconds, you can say I used 5000 watts. However, for the purpose of measuring how much energy I used, wouldn't it be more correct to say I used 15,000 Joules? Why then does the electric company measure in watts when really they are charging you based on a combination of the power used and the amount of time you used it for?
No, the electric bill does NOT say "you have used 5000 Watts". Look at it more closely. It says that you used 5000 kilowatt-hours . A kilowatt-hour is one kilowatt (1000 Watts) for one hour. That is a measure of energy, and is the same as charging for Joules. One kilowatt-hour equals 3.6 megajoules. Or put another way, they do charge by the Joule, just that they express it in more relevant units for most homeowners.
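The conversion is just unit arithmetic: $$ 1\,\text{kWh} = 1000\,\text{W} \times 3600\,\text{s} = 3.6\times10^{6}\,\text{J} $$ and for the oven example in the question, \$5000\,\text{W} \times 3\,\text{s} = 15000\,\text{J} \approx 0.0042\,\text{kWh}\$.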
{ "source": [ "https://electronics.stackexchange.com/questions/73138", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/20419/" ] }
73,295
I heard that D+ and D- are differential signals, does it matter if I swap them when connecting a USB device to the computer?
Summary When entering and exiting the idle state, the polarity is important and swapping the D+ and D- lines will cause problems. Data Transmission USB data is NRZI-coded such that "One" is represented by no change in physical level, and "Zero" is represented by a change in physical level (see figure below). Therefore, inverting the signal (for example, by swapping D+ and D-) results in no functional change during data transmission. But there may be problems before and after data transmission which can kill communication with the device. Exiting Idle State The host includes 15 kΩ pull-down resistors on each data line. When no device is connected, this pulls both data lines low into the so-called "single-ended zero" state (SE0 in the USB documentation), and indicates a reset or disconnected connection. A USB device pulls one of the data lines high with a 1.5 kΩ resistor. This overpowers one of the pull-down resistors in the host and leaves the data lines in an idle state called "J". For USB 1.x, the choice of data line indicates what signal rates the device is capable of; full-bandwidth devices pull D+ high, while low-bandwidth devices pull D− high. While the data is NRZI-encoded, the synchronization sequence and EoP are defined in terms of fixed states (J/K/SE0). When D+ and D- are switched, the J state is switched with K, and SE0 is still SE0 (both lines low). So the sync sequence and EoP will become incorrect on inversion. In USB 1.x, if D+ and D- are swapped, a full-bandwidth device gets recognized as low-bandwidth and vice-versa. So the device will not even communicate at the same speed as the host. Entering Idle State A USB packet's end, called EOP (end-of-packet), is indicated by the transmitter driving 2 bit times of SE0 (D+ and D− both driven low) and 1 bit time of J state. After this, the transmitter ceases to drive the D+/D− lines and the aforementioned pull-up resistor holds them in the J (idle) state. With a D+/D- swapped driver, the host will see the sequence (SE0, SE0, K) instead of the correct (SE0, SE0, J). The host might then fail to recognize the end of packet, which would cause problems. Conclusion If the device and host adhere strictly to USB specifications, swapping the D+ and D- pins will result in a failure. It's conceivable that the designer of the host foresaw such a failure mode, and built in compatibility for it. But whether or not such a swapped cable would be functional in practice, it certainly would not adhere to the specifications. Another member, Andrew Kohlsmith, experienced this when the pins of a USB hub were accidentally swapped. The problem manifested itself as connected devices not showing up. The USB device would show it was powered but it was not recognized at all by the computer on the upstream side of the hub (which was wired correctly to the host). Source: wikipedia Edit: thank you to those who commented. I added emphasis and details from your helpful notes.
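To see why the data bits themselves survive a D+/D− swap, here is a small, self-contained C demonstration of the NRZI property. It is a toy model only: it ignores bit stuffing and the sync/EOP bus states that, as explained above, are what actually break.
#include <stdio.h>

int main(void) {
    /* NRZI: a 0 bit toggles the line level, a 1 bit leaves it alone. */
    int bits[8] = {1, 0, 0, 1, 1, 1, 0, 1};  // payload bits to send
    int levels[9];
    levels[0] = 1;  // idle "J" state before the packet
    for (int i = 0; i < 8; i++)
        levels[i + 1] = bits[i] ? levels[i] : !levels[i];

    for (int pass = 0; pass < 2; pass++) {
        /* Decode: a bit is 1 exactly when the level did not change. */
        for (int i = 0; i < 8; i++)
            putchar('0' + (levels[i + 1] == levels[i]));
        putchar('\n');
        /* Invert every level, which is what swapping D+ and D- does. */
        for (int i = 0; i < 9; i++)
            levels[i] = !levels[i];
    }
    return 0;  // prints the same bit pattern twice: data survives the swap
}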
{ "source": [ "https://electronics.stackexchange.com/questions/73295", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/25239/" ] }
73,467
I'm familiar with bulk, tape and reel, cut tape, and tube, but what is "AMMO?" Source: Fairchild 2N3904 datasheet, page 2
After some additional searching, I found this article, "Packaging of Electronic Components": "Ammo pack is similar to cut tape. The ammo pack is a continuous strip of cut tape to a predetermined quantity. However, the cut strip is then placed into its own manufacturer box for safe keeping. Please see below picture for your reference."
{ "source": [ "https://electronics.stackexchange.com/questions/73467", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2028/" ] }
73,689
I have never understood the input and output impedances of an op-amp. If anyone can explain what these two terms mean for an op-amp, I'd greatly appreciate it. Thank you! http://www.eecs.tufts.edu/~dsculley/tutorial/opamps/opamps5.html
The short answer: input impedance is "high" (ideally infinite). Output impedance is "low" (ideally zero). But what does this mean, and why is that useful? Impedance is the relationship between voltage and current. It's a combination of resistance (frequency-independent, resistors) and reactance (frequency-dependent, inductors and capacitors). To simplify the discussion, let's just assume that all our impedances are purely resistive, so impedance = resistance. You already know that resistance relates voltage and current by Ohm's law: $$ E = IR $$ or maybe $$ R = \frac{E}{I} $$ That is, one ohm means that for each volt, you get one ampere. We know that if we have a resistor of \$100\Omega\$, and we have a current of \$1A\$, then the voltage must be \$100V\$. The concepts of "input" and "output" impedance are very nearly the same thing, except we are concerned only with the relative change in voltage and current. That is: $$ R = \frac{\partial E}{\partial I} $$ If we are talking about the input impedance of an op-amp, we are talking about how much more current will flow when voltage is increased (or how much less current will flow, when voltage is decreased). So say the input to an op-amp was \$1V\$, and you measured the current required from the signal source to develop this voltage to be \$1\mu A\$. Then you changed the source such that \$3V\$ appeared at the op-amp, and the current was now \$2\mu A\$. You can then calculate the input impedance of the op-amp as: $$ \frac{(3V-1V)}{2\mu A-1\mu A} = 2 M\Omega$$ Typically, a very high input impedance of op-amps is desirable because that means very little current is required from the source to make a voltage. That is, an op-amp doesn't look much different from an open circuit, where it takes no current to make a voltage, because the impedance of an open circuit is infinite. Output impedance is the same thing, but now we are talking about how much the apparent voltage of the source changes as it is required to supply more current. You've probably observed that a battery under load has a lower voltage than the same battery not under load. This is source impedance in action. Say you set your op-amp to output 5V, and you measure the voltage with an open circuit 1 . The current will be \$0A\$ (because the circuit is open) and the voltage you measure will be 5V. Now, you connect a resistor to the output, such that the current at the output of the op-amp is \$50mA\$. You measure the voltage across this resistor and find it to be \$4.99V\$. You can then calculate the output impedance of the op-amp as: $$ - \frac{5V - 4.99V}{0mA - 50mA} = 0.2\Omega $$ You will note that I changed the sign of the result. It will make sense why, later. This low source impedance means the op-amp can supply (or sink) a lot of current without the voltage changing much. There are some observations to be made here. The input impedance of the op-amp looks like the load impedance to whatever is providing the signal to the op-amp. The output impedance of the op-amp looks like the source impedance to whatever is receiving the signal from the op-amp. A source driving a load with a relatively low load impedance is said to be heavily loaded , and a voltage signal will require a high current. To the extent that the source impedance is low, the source will be able to supply that current without the voltage sagging. If you want to minimize voltage sagging, then the source impedance should be much less than the load impedance. This is called impedance bridging .
It's a common thing to do, because we commonly represent signals as voltages, and we want to transfer these voltages unchanged from one stage to the next. A high load impedance also means there won't be much current, which also means less power. The ideal op-amp has infinite input impedance and zero output impedance because it's easy to make the input impedance lower (put a resistor in parallel) or the source impedance higher (put a resistor in series). It's not so easy to go the other way; you need something that can amplify. An op-amp as a voltage follower is one way to transform a high source impedance into a low source impedance. Lastly, Thévenin's theorem says that we can transform just about any linear electrical network into a voltage source and a resistor: In fact, "source impedance" can be defined as the Thévenin equivalent resistance, \$R_{th}\$ here. It works for loads also. But unless you already know Thévenin's theorem, that's not a useful thing to say. However, understanding what source and load impedances are, Thévenin's theorem means you can calculate an impedance for linear networks, regardless of complexity. 1: this isn't actually possible, because you must connect both leads of your voltmeter to the circuit, thus completing it! But, your voltmeter has a very high impedance, so it's close enough to an open circuit that we can consider it such.
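Tying the impedance-bridging rule to numbers (values assumed purely for illustration): a source with \$R_{source}=0.2\Omega\$ driving a load with \$R_{load}=10k\Omega\$ forms a voltage divider, so $$ V_{load} = V_{source}\frac{R_{load}}{R_{source}+R_{load}} = V_{source}\frac{10000}{10000.2} \approx 0.99998\,V_{source} $$ and virtually none of the signal is lost. Swap the two resistances and almost all of it would be.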
{ "source": [ "https://electronics.stackexchange.com/questions/73689", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/24607/" ] }
73,857
I am looking to detect 12+V from a car wire using an Arduino. I have found the following schematic: I know how crazy automotive voltage can get, so I just want to make sure the schematic I found above will accommodate the crazy random currents that the car could produce. Also, wouldn't I need some type of heat sink, given that I am stepping down +12V to 5V or less? That, in my mind, would produce a pretty good amount of heat.
Knowing that all sorts of weird stuff can happen in automotive power circuits, and not being especially knowledgeable in those systems, I'd err on the side of caution and use an opto-isolator . simulate this circuit – Schematic created using CircuitLab
Pin 1 = Car 12V (through R1)
Pin 2 = Car Ground
Pin 3 = NC
Pin 4 = Arduino Ground
Pin 5 = Arduino 5V (through R2)
Pin 6 = NC
With this scheme, your Arduino and the car aren't connected electrically at all. At worst, the optoisolator is destroyed, and you can replace it for less than a dollar. Put it in a socket and you won't even need a soldering iron to perform the repair. R1 was selected such that input voltage transients up to 120V won't exceed the maximum forward current of U1. D1 avoids exceeding the maximum reverse voltage of U1 if the input voltage is inverted. The value of R2 isn't especially critical, so it might as well be the same value as R1. You won't need any heat sink. Heat is the result of electrical energy being converted to thermal energy, and power is the rate of that conversion. Power \$P\$ in an electrical system is the product of current \$I\$ and voltage \$E\$: $$ P = I E $$ So, the voltage itself doesn't make heat: it also depends on how much current is flowing. In both these circuits, the current is low enough that the power is small and no heatsink is required.
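On the firmware side, a minimal Arduino-style C sketch for reading the isolated signal might look like this. The choice of digital pin 2 and the active-low logic (the phototransistor pulling the R2 node low when 12V is present) are assumptions based on the wiring described above.
const int SENSE_PIN = 2;  // assumed: junction of R2 and opto pin 5 wired here

void setup() {
    pinMode(SENSE_PIN, INPUT);  // R2 already pulls this node up to 5V
    Serial.begin(9600);
}

void loop() {
    // 12V present -> LED lit -> phototransistor conducts -> node pulled low
    if (digitalRead(SENSE_PIN) == LOW) {
        Serial.println("12V detected");
    }
    delay(100);
}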
{ "source": [ "https://electronics.stackexchange.com/questions/73857", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/11339/" ] }
73,991
My professor always insists that I provide power to an oscilloscope through an isolation transformer. What is the necessity of this? What is the risk if I don't connect it?
You should never float a scope with an isolation transformer! This is reckless and dangerous advice from your professor, and he/she needs a reality check. The accepted procedure for doing work that requires isolation is to ISOLATE THE UNIT UNDER TEST, NOT THE TEST EQUIPMENT. Why? It's much easier to remember that the unit under test is what's unsafe and needs cautious handling, not your oscilloscope If you hook a communication cable up to your floating scope (USB, GPIB, RS232), guess what - it's NO LONGER FLOATING. (All of these cables have earth-referenced returns) As soon as you connect that floating scope return to a potential, all of the exposed metal on the scope is now at that potential. Major shock hazard. If you cannot float the unit under test, use an isolated differential probe to do your measurements, and keep both the UUT and scope earthed. No measurement is worth the safety risk. A battery-operated scope may seem like a good idea in this circumstance, but only if it has dedicated isolated inputs. A battery-operated ordinary scope with non-isolated inputs will still suffer the problem of the exposed metal floating up to whatever potential you connect the ground to. That's why all of the manuals for the battery operated scopes clearly say "This scope must always be earthed, even if you're running off the battery" - if you choose to ignore this, it's at your own risk. A scope with dedicated isolated inputs should still be earthed as a good practice. It's essentially the equivalent of using external isolated differential probes with an ordinary scope. I work full-time in power electronics and have tens of thousands of dollars of lab equipment at my bench. If anyone is caught floating their scope, the float is immediately corrected by the test engineering team, the means of float is seized (most often this is a line cord with the ground prong removed) - disciplinary action is a possibility. Numerous senior/principal engineers have fried their PCs and their entire set of GPIB-connected bench instruments by trying to float the test equipment and forgetting about the GPIB interface. (No one has died yet - thankfully)
{ "source": [ "https://electronics.stackexchange.com/questions/73991", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/22885/" ] }
73,998
I don't understand how antennas radiate a signal. I understand the basics of antennas (wavelength, electron E field, etc.), but I simply don't understand how a current can go through a wire that doesn't have a negative pole. Can you please explain that to me?
I'm guessing you don't understand how current can flow if there is no complete circuit. Let's take a simple quarter-wave dipole as an example: simulate this circuit – Schematic created using CircuitLab How can any current flow, since there is no complete circuit from "-" to "+" of V1? Consider this: relative to the speed at which the waves in the electromagnetic fields propagate, the dipole is long. It's true that current can't flow, but it doesn't know that until it gets to the end of the wire. As the current approaches the end of the wire but has no place to go, the charges pile up until they are pushed back in the other direction. By the time it's back, it's travelled \$\lambda/2\$ or experienced a \$180^\circ\$ phase shift. The voltage at V1 has also changed by this point, and so the current is constructively adding to the new currents being produced by V1. If it were not for some of this energy being lost as radiation, the energy in this antenna would grow without bound. Why the energy radiates is complicated. The long answer is " Maxwell's equations ". If you don't want to understand all the gritty details of that math, then here's a simple, incomplete understanding: the current in an antenna is associated with a magnetic field, and the voltage is associated with an electric field. An antenna is an arrangement such that at some distance away from the antenna (the far field ) these two fields are mutually perpendicular and in phase, and what you get is a self-propagating wave like this: Red is the electric (E) field, and blue is the magnetic (B) field. This is the sort of wave that would be emitted by a dipole aligned with the Z axis.
{ "source": [ "https://electronics.stackexchange.com/questions/73998", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/25686/" ] }
74,061
I'm very new to electronics and I am going through what must be a common difficulty in grasping voltage, current, and resistance. I'll restrict my question to current as I suspect understanding that piece may shed light on voltage and resistance. I've read a few questions here: Help with understanding Current, Voltage and Resistance Understanding voltage and current And they helped a bit but I'm still struggling. One specific part that's difficult for me to resolve mentally is that I am reading about the basic units of measurement, but I'm not entirely sure what is being measured. For example, a pound is measuring the force of gravity pulling on a collection of atoms. A gallon is the amount of liquid that can occupy a fixed amount of space. Electricity... I get lost on the details of what's being observed. Many units of measurement are a fixed quantity of something that does not change (unless acted upon). For example: 1 Gallon of milk 16 ounces of beef 30 cubic liters of air That doesn't seem to make sense with something like current that is measuring electrons constantly in motion. Alternatively we perform measurements of something as it changes over time: 35 miles per hour 128 kilobits per second 5,000 gallons per minute When it comes to current, we just say "amps", not "amps per something ". Well, I get that "amps" measure the flow of electrons, but what exactly does that "flow" mean? Is it the number of electrons (or the number of something else) passing through a location on a circuit in a second (or some other unit) of time? When I touch the leads of my multimeter to a wire, what exactly is it "looking at"? I've read that volts are a measure of potential energy related to joules and coulombs ( http://www.allaboutcircuits.com/vol_1/chpt_2/1.html ) (more confusion but that's fine) and I believe that coulombs are measured per second. Does the per-second carry over to amps as well? The only other thing I can think of is that amps might be more like pressure where you're measuring pounds per square inch . I know electricity is electricity and no analogy is perfect. I'm trying to understand electricity for what it is, I'm just not sure how these measurements are actually made. Perhaps I'm overthinking, but any deeper insight would be great. (If this has already been explained to death I apologize, I may not know the best search term to use.) Man, as someone new to this site I'm so blown away that so many people took so much time to help me understand this. Like a lot of things I think it's going to take time and a lot more reading / experience to "sink in" but all of the answers were so helpful. I'm marking the "amps include time" answer as the one that helped me the most because it answered the core of my question "amps per what ?". I'm picturing "amps" kind of like " knots " in the sense that the quantities are part of the definition of the word as opposed to being explicitly stated as they would be in another unit like "miles per hour ". Not a perfect analogy but at least it helps me understand where all the hard numbers went.
Amps includes time... Amps = Coulombs per second. That says more simply that... Current = amount of charge per time interval. It's a flow rate metric. Like water... liters (volume --> amount) per minute (time). In more depth In practical terms, the ampere is a measure of the amount of electric charge passing a point in an electric circuit per unit time, with 6.241 × 10^18 electrons, or one coulomb per second, constituting one ampere. -- Wikipedia Article Probing When I touch the leads of my multimeter to a wire, what exactly is it "looking at"? If you are in the voltage measurement mode, you are effectively measuring the "pressure" between the two leads -- the degree to which charges in one lead seek to reach the other (but can't). The reason the charge gradient can't be neutralized depends on the circuit. In a capacitor, for example, a barrier of some kind prevents it. The existence of a voltage between two points requires that such a gradient exists. If you are in a current measurement mode, the leads are installed in the current path (in series with it) and the meter is measuring how much charge flows through them in unit time (it actually does this indirectly by applying Ohm's law). Further reading Bodanis, David (2005), Electric Universe, New York: Three Rivers Press, ISBN 978-0-307-33598-2
{ "source": [ "https://electronics.stackexchange.com/questions/74061", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/25694/" ] }
74,179
In Eagle, I would like to include SN74HC595 shift registers in my project. I found a library online a few months ago (can't find source anymore) that has this part. The symbol looks like this: However, I noticed the symbol does not have power and ground pins, which is contrary to all of the other devices I've seen and created before. I had no idea how I would actually connect it to the same power supply I'm using for all other components in my schematic. So I opened up the device in the library, and it looks like this: It seems there's a second symbol (on the left) that goes only to VCC and GND, and those are assigned to the relevant power and GND pads on the package. My issue is that my power rails on my schematic are labeled '5V' and 'GND'. I've read that I can 'simply' rename the 'VCC' pin to '5V' and that it will automatically connect to any '5V' net in whatever schematic I put it in. This, however, didn't work. I've also consulted Eagle's awful product support pages to no avail. How do I hook this component up to my '5V' line? [ note: GND hooks up automatically to the 'GND' net in my schematic ]
"Invoke" command resolves this. I've got only Eagle 5.11.0 in front of me at the moment. But, this haven't changed in 6.3.0 @ScottSeidman had beat me to the answer, while I was annotating the screenshot.
{ "source": [ "https://electronics.stackexchange.com/questions/74179", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17082/" ] }
74,244
Learning about PCB design for power supplies, I frequently see boards with routed gaps to separate low and high voltage sections of the layout. Why go to the trouble of routing an air gap when etching the copper away should create the same level of isolation? Is the breakdown voltage of air much higher than FR4? I assume that such gaps are used to avoid situations where copper may not be etched away perfectly.
High voltage PCB design / High Voltage PCB design for arc prevention
A few reasons why:
- When arc-over occurs, it could cause carbonization (a.k.a. "burning") on the PCB surface. This could result in a permanent short. This is also irreversible damage, whereas arc-over in air isn't (unless something else goes wrong). This would be especially bad if a single high-voltage spike created a permanent short; then any "low-level" voltage source would still have a low impedance path available.
- You have the option of installing a high-dielectric-strength shield (something much better than FR4/soldermask, and better than air).
- Dust/dirt can accumulate on a board surface, reducing dielectric strength. Not as much of a problem (though it still could be a problem) if that surface just isn't there. In the second link, they did some experiments where humidity had a drastic effect on the breakdown voltage of the soldermask, and a smaller (though potentially still significant) effect on a slot. Their best result was from removing soldermask and cutting a slot (no significant performance hit).
- Any inadvertent creepage mistakes will be removed by the router, though really this should be caught in the design stage, especially with modern CAD. The PCB might not work right if tracks have unexpected open circuits, and making a high-current track smaller could cause other issues :P
- Required air clearance seems to be smaller than the required surface creepage distance. A quick look at some creepage/clearance tables (clearance: table III; creepage: table IV) seems to confirm that creepage distance > clearance distance, especially with higher pollution degrees. Pollution degree is a measure of how the environment could affect your PCB. See: Design for Dust . Description of the various pollution degrees (table 1):
1. No pollution or only dry, nonconductive pollution, which has no influence on safety. You can achieve pollution degree 1 through encapsulation or the use of hermetically sealed components or through conformal coating of PCBs.
2. Nonconductive pollution where occasional temporary condensation can occur. This is the most common environment and generally is required for products used in homes, offices, and laboratories.
3. Conductive pollution or dry nonconductive pollution, which could become conductive due to expected condensation. This generally applies to industrial environments. You can use ingress protection (IP) enclosures to achieve pollution degree 3.
4. Pollution that generates persistent conductivity, such as by rain, snow, or conductive dust. This category applies to outdoor environments and is not applicable when the product standard specifies indoor use.
{ "source": [ "https://electronics.stackexchange.com/questions/74244", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2028/" ] }
74,277
I am going through a Verilog test case, and I found this statement: assign XYZ = PQR_AR[44*8 +: 64]; What is the "+:" operator known as? I tried to find this on google, but I didn't get any relevant answer.
That syntax is called an indexed part-select . The first term is the bit offset and the second term is the width. It allows you to specify a variable for the offset, but the width must be constant. Example from the SystemVerilog 2012 LRM:
logic [31: 0] a_vect;
logic [0 :31] b_vect;
logic [63: 0] dword;
integer sel;
a_vect[ 0 +: 8]   // == a_vect[ 7 : 0]
a_vect[15 -: 8]   // == a_vect[15 : 8]
b_vect[ 0 +: 8]   // == b_vect[0 : 7]
b_vect[15 -: 8]   // == b_vect[8 :15]
dword[8*sel +: 8] // variable part-select with fixed width
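Applied to the expression in the question: PQR_AR[44*8 +: 64] starts at bit offset 44*8 = 352 and spans 64 bits, so (assuming PQR_AR is declared with a descending range such as [511:0]) it is equivalent to PQR_AR[415:352].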
{ "source": [ "https://electronics.stackexchange.com/questions/74277", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/24170/" ] }
74,459
I've seen warnings that it's bad to mix new and old batteries -- why? Is it a matter of a battery's age or a battery's voltage/current (remaining)?
A simple model of a battery is a chemical reaction which produces a constant voltage. But, this chemical reaction takes time. A simple model of the limited speed of that reaction in electrical terms is a series resistance: simulate this circuit – Schematic created using CircuitLab When the battery is fresh, R1 is small. As the chemical energy is depleted, R1 gets bigger. Why this happens is complicated, and I'm not a chemist, so I can't tell you in detail, but it has to do with the reactants being used up, and the battery plates getting covered in cruft, and so on. This resistance, even though it's a combination of electrical and chemical effects, isn't exempt from the laws of physics. It still experiences a loss according to Joule's law: $$ P = I^2 R $$ This loss of electrical energy must be accompanied by a gain of thermal energy. If you aren't mixing batteries, then as the batteries become dead, all their resistances rise about the same; while \$R\$ goes up, the increasing \$R\$ also limits the maximum current \$I\$ that the batteries can supply. Most batteries 1 are designed to be safe under any of these conditions. However, if you mix fresh and dead batteries, then you have a fresh battery, which can deliver a large current, driving that current into a dead battery, which has a high resistance. This results in excessive heat in the dead battery, which may then be damaged or fail, perhaps spectacularly. 1: but certainly not all batteries. Lithium-ion batteries, somewhat infamously, are not safe when shorted.
{ "source": [ "https://electronics.stackexchange.com/questions/74459", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8322/" ] }
74,789
On some PCB designs, specific traces are routed in curious ways. This probably has to do with high frequency design considerations and general signal behavior that I am not familiar with. Let's take this PCB (somewhere from the web) as an example. It shows part of a PCIe card with SATA routing and DDR2 RAM: I highlighted 4 areas that qualify as unusual trace layout (from my perspective). What are those shapes supposed to achieve? How do designers come up with what pattern is required? Another example of wave shaped, antenna like routing. This is fairly rare. But obviously the designer deliberately avoided 45° traces. Why? Curves again and a single "pulse" within the trace. How can this have any significant effect? So what are the use cases and benefits of this techniques? I want to be able to take those into consideration when doing future PCB designs.
1) Equalisation of length of pairs of traces (from Board Design Resource Center)
2) Delay, e.g. of a clock, for timing purposes? (See also Adding delay intentionally)
3) Reduce signal reflections due to discontinuities in trace width? (From Circuit Board Layout Techniques; see also How should I lay out timing matched traces?)
{ "source": [ "https://electronics.stackexchange.com/questions/74789", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/16051/" ] }
74,840
I'm trying to wrap my head around the watchdog timer on the ATtinyX5 series. Things I've read made it seem like you could use it to make the program do something specific every N seconds, but never really showed how. Others made it seem like it would only reset the chip unless something in code resets its count in the meantime (which seems to be the "normal" usage). Is there any way of using the WDT like you would TIMER1_COMPA_vect or similar? I noticed that it has a 1-second timeout mode and I would really love to be able to use that to make something happen every second in my code (and preferably sleep in between). Thoughts? Update: Since it was asked, what I'm referring to is section 8.4 of the ATtinyX5 datasheet . Not that I fully understand it, which is my problem.
You most certainly can. According to the datasheet, the watchdog timer can be set up to reset the MCU or cause an interrupt when it triggers. It seems you are more interested in the interrupt possibility. The WDT is actually easier to set up than a normal Timer, for the same reason it is less useful: fewer options. It runs on an internally calibrated 128kHz clock, meaning its timing is not affected by the main clock speed of the MCU. It can also continue to run during the deepest sleep modes to provide a wake-up source. I will go over a couple of the datasheet examples as well as some code I have used (in C).

Included Files and Definitions

To start, you will probably want to include the following two header files for things to work:

#include <avr/wdt.h>    // Supplied Watch Dog Timer Macros
#include <avr/sleep.h>  // Supplied AVR Sleep Macros

Also, I use the macro _BV(BIT), which is defined in one of the standard AVR headers as the following (which might be more familiar to you):

#define _BV(BIT) (1<<BIT)

Beginning of Code

When the MCU is first started, you would typically initialize the I/O, set up timers, etc. Somewhere here is a good time to make sure the WDT didn't cause a reset, because it could do it again, keeping your program in an unstable loop.

if(MCUSR & _BV(WDRF)){                 // If a reset was caused by the Watchdog Timer...
    MCUSR &= ~_BV(WDRF);               // Clear the WDT reset flag
    WDTCSR |= (_BV(WDCE) | _BV(WDE));  // Enable the WD Change Bit
    WDTCSR = 0x00;                     // Disable the WDT
}

WDT Setup

Then, after you have set up the rest of the chip, redo the WDT. Setting up the WDT requires a "timed sequence," but it is really easy to do...

// Set up Watch Dog Timer for Inactivity
WDTCSR |= (_BV(WDCE) | _BV(WDE));  // Enable the WD Change Bit
WDTCSR =  _BV(WDIE) |              // Enable WDT Interrupt
          _BV(WDP2) | _BV(WDP1);   // Set Timeout to ~1 second

Of course, your interrupts should be disabled during this code. Be sure to re-enable them afterwards!

cli();  // Disable the Interrupts
sei();  // Enable the Interrupts

WDT Interrupt Service Routine

The next thing to worry about is handling the WDT ISR. This is done as such:

ISR(WDT_vect)
{
    sleep_disable();  // Disable Sleep on Wakeup
    // Your code goes here...
    // Whatever needs to happen every 1 second
    sleep_enable();   // Enable Sleep Mode
}

MCU Sleep

Rather than put the MCU to sleep inside of the WDT ISR, I recommend simply enabling the sleep mode at the end of the ISR, then have the MAIN program put the MCU to sleep. That way, the program is actually leaving the ISR before it goes to sleep, and it will wake up and go directly back into the WDT ISR.

// Enable Sleep Mode for Power Down
set_sleep_mode(SLEEP_MODE_PWR_DOWN);  // Set Sleep Mode: Power Down
sleep_enable();                       // Enable Sleep Mode
sei();                                // Enable Interrupts

/****************************
 * Enter Main Program Loop  *
 ****************************/
for(;;)
{
    if (MCUCR & _BV(SE)){     // If Sleep is Enabled...
        cli();                // Disable Interrupts
        sleep_bod_disable();  // Disable BOD
        sei();                // Enable Interrupts
        sleep_cpu();          // Go to Sleep

        /*****************************
         * Sleep Until WDT Times Out *
         * -> Go to WDT ISR          *
         *****************************/
    }
}
{ "source": [ "https://electronics.stackexchange.com/questions/74840", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/13667/" ] }
74,956
This may seem like a very simple question, but I've searched all over the place and haven't found an answer. When jumping a car, we connect the + end of the charged battery to the + end of the dead battery, and the - end of the charged battery to the chassis or other metal part of the car. I always thought that you need a closed circuit for current to flow. But this circuit appears to be open: we are connecting the - end of the charged battery to the ground! Thus, how can any circuit connected to ground have a current? I believe another way to ask this question is: will jump starting a car still work if I connect the - end of the charged battery to a third (powered-off) car, instead of to the chassis of the car with the dead battery? If so, why? (I've heard people say that jump starting a car only works because the chassis is connected to the electrical components of the vehicle, thus providing a closed circuit since the battery is also connected to the electrical components of the vehicle).
"Ground" is just a code word which, in this case, refers to the "current return common" circuit node. There is a complete circuit because everything electrical in the car, such as the starter motor, also connects to ground in order to return current to the minus terminal of the battery through the ground. The car's chassis is used for this return network, and so the entire chassis is an extension of the minus terminal of the battery. During jump-starting, we connect the boosting battery to ground rather than to the dead battery's - terminal for the simple reason that this provides a more direct return path to the good battery which is powering the dead car: the return current does not have to travel through the dead battery's minus terminal hookup cable and then to the jumper cable, but can go directly from the chassis ground to the jumper cable. A more direct return path allows for better current flow and less voltage drop, like plugging a big appliance directly into an outlet, rather than via an extension cord. In case you're also wondering why the plus jumper connections are made first, then the minuses. This is because there is no harm done if you leave the minus jumper dangling in the chassis of the car. Anything it accidentally touches is likely to be ground. If you connect both alligator clips on one end before connecting the other end, the other end is now live and you can accidentally touch the clips together to create a short circuit. If you connect the minuses/grounds first and then go to connect one of the pluses, you can create a short circuit, because the opposite side plus is probably dangling and touching something that is grounded.
{ "source": [ "https://electronics.stackexchange.com/questions/74956", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/26037/" ] }
75,057
I'd like to know if soldering two wires directly onto a NiMH battery is considered safe or not. My fear is that the battery would explode (right in my face) because of excessive heat caused by the soldering iron. Another possibility would be the battery slowly inflating and then spreading toxic fumes (or corrosive materials) through a hole (like a capacitor under excessive voltage). The battery I want to use is made of 10 units of 1.2V (thus providing 12V).
It's probably safe enough from your point of view, but not from the battery's. You really shouldn't solder to batteries unless they explicitly have solder tabs for that purpose. Most batteries, and NiMH are no exception, are damaged by soldering temperatures. The way to make a permanent connection to a battery that doesn't have solder tabs is to use spot welding. This presses the battery terminal and the contact together, then zaps them by discharging a capacitor thru this connection. That heats the two parts enough for a little metal to melt and bond. However, the zap is very short and localized, so the total energy is low and high temperatures diffuse well before they get to sensitive parts of the battery. Note that no solder is evident in the picture you show. That is because the tabs were spot welded, not soldered.
{ "source": [ "https://electronics.stackexchange.com/questions/75057", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9865/" ] }
75,140
Consider this simple sketch of a circuit, a current source: I'm not sure how to calculate the power dissipation across the transistor. I'm taking a class in electronics and have the following equation in my notes (not sure if it helps): $$P = P_{CE} + P_{BE} + P_{base-resistor}$$ So the power dissipation is the power dissipation across the collector and emitter, the power dissipation across the base and emitter and a mystery factor \$P_{base-resistor}\$ . Note that the β of the transistor in this example was set to 50. I'm quite confused overall and the many questions here on transistors have been very helpful.
Power isn't "across" something. Power is the voltage across something times the current going through it. Since the small amount of current going into the base is irrelevant in power dissipation, calculate the C-E voltage and the collector current. The power dissipated by the transistor will be the product of those two. Let's take a quick stab at this making some simplifying assumptions. We'll say the gain is infinite and the B-E drop is 700 mV. The R1-R2 divider sets the base at 1.6 V, which means the emitter is at 900 mV. R4 therefore sets the E and C current to 900 µA. The worst case power dissipation in Q1 is when R3 is 0 so that the collector is at 20 V. With 19.1 V accross the transistor and 900 µA through it, it is dissipating 17 mW. That's not enough to notice the extra warmth when putting your finger on it, even with a small case like SOT-23.
{ "source": [ "https://electronics.stackexchange.com/questions/75140", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/25819/" ] }
75,151
There are some easy concepts that I just can't get straight in my mind. I'm afraid I've been studying these things for two years of my engineering degree but they still bother me. The capacitor is one of them. Can someone explain? What does a capacitor do? Does it store charges? If so, how does it do that? I have searched on Google and Yahoo but didn't find anything helpful there (for me). So I will be glad if I get my problem solved here. P.S. I hope that the question will not be off-topic again, as always seems to happen, with people not suggesting where to go instead. It's a real sad thing.
If by charges you mean electric charges , then no, a capacitor does not store charges. This is a common misconception, maybe due to the multiple meanings of the word charge . When some charge goes in one terminal of a capacitor, an equal amount of charge leaves the other. So, the total charge in the capacitor is constant. What capacitors store is energy . Specifically, they store it in an electric field. All the electrons are attracted to all the protons. At equilibrium, there are equal numbers of protons and electrons on each plate of the capacitor, and there is no stored energy, and no voltage across the capacitor. But, if you connect the capacitor to something like a battery, then some of the electrons will be pulled away from one plate, and an equal number of electrons will be pushed on to the other plate. Now one plate has a net negative charge, and the other has a net positive charge. This results in a difference in electrical potential between the plates, and an increasingly strong electric field as more charges are separated. The electric field exerts a force on the charges which attempts to return the capacitor back to equilibrium, with balanced charges on each plate. As long as the capacitor remains connected to the battery, this force is balanced by force of the battery, and the imbalance remains. If the battery is removed, and we leave the circuit open, the charges can't move, so the charge imbalance remains. The field is still applying a force to the charges, but they can't move, like a ball at the top of a hill, or a spring held under tension. The energy stored in the capacitor remains. If the capacitor terminals are connected with a resistor, then the charges can move, so there is a current. The energy that was stored in the capacitor is converted to heat in the resistor, the voltage decreases, the charges become less imbalanced, and the field weakens. Further reading: CAPACITOR COMPLAINTS (1996 William J. Beaty)
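For reference, the two standard relations behind this picture are \$Q = CV\$ and $$ E = \tfrac{1}{2}CV^2 $$ where \$Q\$ is the amount of charge moved from one plate to the other (not a net charge sitting in the capacitor), \$C\$ the capacitance, and \$E\$ the stored energy.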
{ "source": [ "https://electronics.stackexchange.com/questions/75151", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/25997/" ] }
75,216
Is the explanation that signals take longer to propagate through digital equipment? For instance, software synthesis is very slow compared to hardware synthesis.
I assume you are not alluding to a deeper philosophical discussion about information, power and entropy, but you are just interested in the practical aspects. Very simply put, digital circuits need to measure input, digitize it, run it through some kind of processing and then transform the output into an electrical signal again. Digital circuits cannot directly manipulate analogue electrical signals. You inherently have extra latency because of signal conversion. You can stop reading here if this answered your question. From a more philosophical/physical point of view, in almost all circuits you are actually not trying to manipulate electrical energy (that is what power electronics does), but you are trying to manipulate information. In this case, technically it is not at all true that analogue is faster than digital. Why? Well, analogue signal paths are nonorthogonal information processors: there is no such thing as a perfect opamp or a perfect buffer, everything has parasitic effects that you need to filter or otherwise get rid of. Especially at very high speeds, it becomes a real problem even to build a wire that reliably transfers a voltage. Digital processing decouples the electrical aspect from the information: after it has digitized its inputs, the signal exists as a very pure form of information. You can then manipulate the information without having to think about the electrical nature of it, and only in the end stages you need to convert it back to an analogue state. Even though you are penalized with two conversion stages, in between your ADC and DAC you can employ many processing tricks to speed up processing speed and usually vastly surpass the performance of any purely analogue signal processor. A great example for this is the revolution of digital modems in cell phones, which now operate at very near the theoretical limit of information processing (tens of pJ/bit energy requirements), whereas not very long ago purely analogue GSM modems required orders of magnitude more silicon area and I think 5 or 6 orders of magnitude more processing energy.
{ "source": [ "https://electronics.stackexchange.com/questions/75216", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10456/" ] }
75,228
This is my first post here on Electronics Stack Exchange. I am a hobbyist in electronics, and a professional in programming. I am working on an inductor circuit to heat a workpiece. I have a working setup @12VAC. In short I have the following elements in the circuit:

A microcontroller to generate pulses with a duty cycle of 50%, with its own power supply, sharing ground with the transformer powering the solenoid.
Two MOSFETs (100 amperes continuous drain current, 150Vds) on the low side to switch the direction of the current.
A 3570 nH solenoid of 11 turns, ~5cm diameter, made of copper pipe with 1 cm diameter. (Planning to apply watercooling through the coil some time later on.)
A 230VAC to 12VAC transformer that can deliver up to 35 amperes peak, or 20 amperes for a while.
A MOSFET driver (TC4428A) to drive the gates of the MOSFETs.
A 10K resistor on each MOSFET's gate to source.
A 1000pF ceramic capacitor on each MOSFET's gate to source (to reduce some ringing on the gates). Vpkpk is ~17Volts on the gates.

simulate this circuit – Schematic created using CircuitLab

The circuit shorts when I want to apply 48VAC to the circuit using a welding machine. The MOSFETs should be able to handle it (48VAC = ~68VDC * 2 = ~136Vpkpk). Nothing explodes, the MOSFETs are in one piece, but the resistances between the pins of the MOSFETs (gate, source, drain <-> gate, source, drain) are all 0 or very low (<20Ohms). So they broke down. What caused my MOSFETs to break down? It is hard to examine the circuit when components die. My equipment consists solely of an oscilloscope and a multimeter.

Ringing on gates without C2 and C3, while the solenoid was not powered, sharing common ground with the transformer: the wires from the MCU to the TC4428A driver are, say, 5cm. From the driver to the gates, the wires are ~15cm. Does this cause ringing? Thick ~2mm wires were used from the TC4428A driver to the gates.
Snubbed ringing on gates with C2 and C3, while the solenoid was not powered, sharing common ground: looks much better than the first picture.
Ringing on gates while the solenoid was powered: why is the ringing increased when the solenoid is powered on, and how can I prevent/minimize it while maintaining switching speed?
Measurement on source to drain with the workpiece in the solenoid @ ~150Khz: shown in the last picture. If the signal were clean, it would yield a Vpkpk of ~41 volts, but due to the spikes it is around ~63 volts. Would the latter 150% over/undershoot of Vpkpk be the problem? Would this result in a (48VAC => 68Vmax => 136Vpkpk * 150% = ) ~203Vpkpk? How would I reduce the noise on the waves measured on the source -> drain?

EDIT: Here I disconnected one MOSFET's gate from the driver. CH1 is the gate, CH2 is the drain of the MOSFET that was still connected. Now both waves look fine. No/minimal current was flowing here. When I do connect both MOSFETs to the driver, and measure the resistance between the two gates, it says 24.2K ohm. Could it be that if one MOSFET is turned off by the TC4428A driver, it somehow still picks up a signal from the other MOSFET's gate when that one is turned on by the driver? Is it a meaningful idea to put a diode like so, driver --->|---- gate, to make sure there is no noise? Preferably a diode with a low voltage drop of course.
From the driver to the gates, the wires are ~15cm. Does this cause ringing? Almost certainly, and it's a fair bet that this is destroying your MOSFETs, by one or more of these mechanisms:

exceeding \$V_{G(max)}\$ even for the briefest instant
exceeding \$V_{DS(max)}\$
simple overheating due to slow switching and unintended conduction

#3 should be pretty obvious when it occurs, but the other two can be hard to see, since they are transient conditions that may be too brief to be visible on the scope. C2 and C3 are not decreasing the ringing. You get ringing on the gates because the capacitance of the MOSFET gate (and C2, C3 which add to it) plus the inductance formed by the loop of wire through the driver and the MOSFET gate-source form an LC circuit. The ringing is caused by energy bouncing between this capacitance and inductance. You should put the driver absolutely as close to the MOSFETs as possible. 1cm is already getting to be too long. Not only does the inductance created by the long trace to the gate cause ringing, but it limits your switching speed, which means more losses in the transistors. This is because the rate of change of current is limited by inductance: $$ \frac{v}{L} = \frac{di}{dt} $$ Since \$v\$ is the voltage supplied by the gate driver and you can't make that any bigger, the time it takes to increase the current from nothing to something is limited by the inductance \$L\$. You want the current to be as much as possible, as soon as possible, so that you can switch that transistor fast. In addition to putting the gate driver close to the MOSFETs, you want to minimize the loop area of the path the current through the gate must take: simulate this circuit – Schematic created using CircuitLab The inductance is proportional to the area illustrated. The inductance limits the switching speed, and it also limits how well the gate driver can hold the MOSFET off. As the drain voltage on the MOSFET that just turned off changes (due to the other MOSFET turning on, and the mutual inductance of the coils), the gate driver must source or sink current as the internal capacitances of the MOSFET charge or discharge. Here's an illustration from International Rectifier - Power MOSFET Basics : In your case, if the gate traces are long, then \$R_G\$ is also an inductor. Since the inductor limits \$di/dt\$, the gate driver can only respond so quickly to these currents, and then there is significant ringing and overshoot in the resonance between the gate trace inductance and the MOSFET's capacitance. Your C2 and C3 just serve to change the frequency of this resonance. As the gate voltage is ringing, it sometimes crosses over \$V_{th}\$ of your MOSFETs, and one begins to conduct a little when it should be off. This changes the current and voltage of the connected inductor, which is coupled to the other inductor, which introduces these capacitive currents in the other MOSFET, which can only exacerbate the problem. But, when the coils aren't powered, then the drain voltage is at 0V regardless of the transistor switching, and these capacitive currents (and consequently, the total gate charge that must be moved to switch the transistor) are much less, so you see much less ringing. This inductance can also be coupled magnetically to other inductances, like your solenoid coils. As the magnetic flux through the loop changes, a voltage is induced ( Faraday's law of induction ). Minimize the inductance, and you will minimize this voltage. Get rid of C2 and C3.
If you still need to reduce ringing after improving your layout, do that by adding a resistor in series with the gate, between the gate and the gate driver. This will absorb the energy bouncing around which causes the ringing. Of course, it will also limit the gate current, and thus your switching speed, so you don't want this resistance to be any larger than absolutely necessary. You can also bypass the added resistor with a diode, or with a transistor, to allow for turn-off to be faster than turn-on. So, one of these options (but only if necessary; it's much preferred to simply eliminate the source of the ringing): simulate this circuit Especially in the last case with Q3, you have essentially implemented half of a gate driver, so the same concerns of keeping the trace short and the loop area small apply.
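To get a feel for the numbers (these values are rough assumptions for illustration, not measurements of your board; wire inductance is taken as very roughly 1 nH per mm): a 15 cm gate lead is around 150 nH, and together with your added 1000 pF gate capacitors the gate loop resonates near $$f = \frac{1}{2\pi\sqrt{LC}} = \frac{1}{2\pi\sqrt{150\,\mathrm{nH}\cdot 1000\,\mathrm{pF}}} \approx 13\,\mathrm{MHz}$$ which is the right order of magnitude for the ringing in your scope shots. If a series gate resistor turns out to be needed, critical damping of that particular resonance would want roughly \$R = 2\sqrt{L/C} \approx 25\,\Omega\$.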
{ "source": [ "https://electronics.stackexchange.com/questions/75228", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/26135/" ] }
75,368
Hmm, this seems to be just another question on line impedances. I understand that when we say "transmission line" effects we talk about things like crosstalk, reflections and ringing (I guess that is just about it). These effects are not present at low frequencies, where the PCB trace behaves like an "ideal" transmission medium, more like we expect a wire to behave in our early school days. I also understand that the 50 ohm value comes not from the line resistance, which is going to be very small and less than 1 ohm. This value comes from the ratio of L and C on the line. Changing C by changing the trace height above the ground plane, or changing L by changing the trace width, will change the impedance of the line. We all know that the reactance of L and C is dependent on the signal frequency as well. Now my questions:

Why should we not call this line reactance rather than line impedance?
How can it be just 50 ohm? It has to be signal frequency dependent, right? E.g. 50 ohm at 1 MHz.
Will the world end if I choose to do a 100 ohm or 25 ohm trace instead? I know that while we like to say 50 ohm as a magic number, it will be within some range around 50 ohm and not 50.0000 ohm exactly.
Is there any time when the actual resistance of a PCB trace may matter?
Let's look at the formula and equivalent circuit for a transmission line.

(1) Impedance rather than reactance. Reactance refers to the opposition to the change in current (of an inductor) or voltage (for a capacitor) - single components. The transmission line has \$R,L\$ and \$C\$ components - impedance is the ratio of voltage phasor to current phasor.

(2) It is \$50\Omega\$ because the ratio of inductance to capacitance per unit length produces that value. As \$R << j\omega L\$ and \$G \to 0\$, these values can be ignored and so the expression reduces to \$\sqrt{L/C}\$ (frequency independent).

(3) Nope, but it's generally a good idea to keep things as standard as possible. You may find it difficult to find a suitable connector for your \$167\Omega\$ transmission line. There's also a lot of information available for designing standard transmission lines on PCBs, etc. The magic number in my book is 376.73031... the impedance of free space. Now without that one we'd live in a different universe.

(4) Going back to the formula. At low frequencies \$R\$ may be significant (as the reactance of the inductor will be small). At very high frequencies the dielectric losses may become significant.
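For a concrete example of point (2), with typical (assumed, ballpark) per-unit-length values for RG-58 coax of about \$L = 250\,\mathrm{nH/m}\$ and \$C = 100\,\mathrm{pF/m}\$: $$Z_0 = \sqrt{\frac{L}{C}} = \sqrt{\frac{250\times10^{-9}}{100\times10^{-12}}} = 50\,\Omega$$ Note that the per-metre parts cancel, which is another way of seeing why the characteristic impedance doesn't depend on the length of the line.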
{ "source": [ "https://electronics.stackexchange.com/questions/75368", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/20711/" ] }
75,448
How would one go about using a 12 V DC power source to power something which needs 4.5 V DC using resistors? Is there a way to determine how much adding a resistor would drop the voltage?
The short answer is "don't do that." The voltage dropped by a resistor is given by Ohm's Law: V = I R. So if you know exactly how much current your device will draw, you could choose a resistor to drop exactly 7.5 V, and leave 4.5 V for your device, when that current is run through it. But if the current through your device is changing, or if you want to make more than one system and not every device is exactly alike in current draw, you can't consistently get 4.5 V at the device using just a resistor. Your other options include:

A linear regulator. This is basically a variable resistor that will adjust its value to keep the output where you want it. This is probably only a good solution if your device draws very little power (maybe up to 100 mA).
A shunt regulator. This means using a resistor to drop the voltage like you are suggesting, but then adding an extra device in parallel with the load to control the voltage. The shunt regulator will adjust its current (within limits) to keep the current through the resistor correct to maintain the desired output voltage.
A switching regulator. This uses some tricks to generate your desired output voltage with much better power efficiency than a linear regulator. This is probably the best choice if your device needs more than 10 or 20 mA of current.
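To see why the pure-resistor approach is so fragile, run the numbers with an assumed (purely illustrative) load of 50 mA: $$R = \frac{12\,\mathrm{V} - 4.5\,\mathrm{V}}{50\,\mathrm{mA}} = 150\,\Omega$$ If the device's draw then falls to 25 mA, the resistor only drops \$25\,\mathrm{mA}\times150\,\Omega = 3.75\,\mathrm{V}\$, and the device sees 8.25 V, nearly double its rating. That sensitivity to load current is exactly what the regulator options above avoid.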
{ "source": [ "https://electronics.stackexchange.com/questions/75448", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/26004/" ] }
75,707
I'm doing a project and I'm using an Arduino to prototype. I have to use 10 pushbuttons (along with more things) and I don't have enough pins. One solution I could think of is to use the analog pins and use each of them for two push buttons, something like this: simulate this circuit – Schematic created using CircuitLab That way I can read the pin A0 and know which of them is being pushed by checking whether the voltage is 5V or half of that. Is this a good idea? The different push buttons are NEVER supposed to be pushed at the same time, which is the only problem I can think of. Are there better ways?
Why waste multiple analog pins for two switches each, when you could do any number of buttons on a single analog pin? There are two ways of doing it: one is in series, the other is in parallel. This is how some car steering wheel audio controls are wired, and how some of the older iPod inline controllers work. Depending on the resistors you use, on whether you need multiple buttons pressed at the same time, and on how sensitive your analog input is, you could have all 10 buttons on a single pin.
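A minimal sketch of what the readout side might look like; the threshold values below are placeholders that depend entirely on the resistor chain you build, so for a real ladder they would have to be calculated or simply measured by printing analogRead() values:

const int LADDER_PIN = A0;

// Assumed ADC thresholds (0..1023) separating the buttons; calibrate
// these against your actual resistor values.
const int thresholds[] = {100, 300, 500, 700, 900};
const int numButtons = 5;

// Returns 0 when no button is pressed, otherwise a 1-based button index.
// Assumes the pin idles near Vcc (reading ~1023) when nothing is pressed.
int readButton() {
  int v = analogRead(LADDER_PIN);
  for (int i = 0; i < numButtons; i++) {
    if (v < thresholds[i]) {
      return i + 1;
    }
  }
  return 0;
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  int b = readButton();
  if (b != 0) {
    Serial.print("button ");
    Serial.println(b);
  }
  delay(20);  // crude debounce
}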
{ "source": [ "https://electronics.stackexchange.com/questions/75707", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/23444/" ] }
76,367
I'm doing a simple lab (I'm a hobby EE) to reinforce my Ohm's law math and learn a little about how to do proper measurements with a multimeter. I have a simple circuit with a 2.2k ohm resistor connected in series with an LED. Everything works fine up to the point I go to calculate the voltage drop across the resistor and LED. My initial calculations only accounted for the 2.2k ohm resistor. As such I got the full voltage dropped across the resistor. However, when I measured the circuit for real I found the result to be nearly half of the input voltage, which would indicate to me that either (1) my math is wrong, or (2) there's resistance left unaccounted for. The only thing left to account for is the LED. What is the best method for determining the resistance of a simple LED? I tried doing what I do with resistors (hold it up to the probes with my fingers) but I don't get a proper reading. Is there a technique I'm missing here?
LEDs aren't best modeled as a pure resistor. As noted in some other answers, real LEDs do have resistance, but often that's not the primary concern when modeling a diode. An LED's current/voltage relationship graph: Now this behavior is quite difficult to calculate by hand (especially for complicated circuits), but there is a good "approximation" which splits the diode into 3 discrete modes of operation:

If the voltage across the diode is greater than Vd, the diode behaves like a constant voltage drop (i.e. it will allow whatever current through to maintain V = Vd).
If the voltage is less than Vd but greater than the breakdown voltage Vbr, the diode doesn't conduct.
If the reverse bias voltage is above the breakdown voltage Vbr, the diode again becomes conducting, and will allow whatever current through to maintain V = Vbr.

So let's suppose we have some circuit: simulate this circuit – Schematic created using CircuitLab First, we're going to assume that VS > Vd. That means the voltage across R is VR = VS - Vd. Using Ohm's law, we can tell that the current flowing through R (and thus D) is: \begin{equation} I = \frac{V_R}{R} \end{equation} Let's plug some numbers in. Say VS = 5V, R = 2.2k, Vd = 2V (a typical red LED). \begin{equation} V_R = 5V - 2V = 3V\\ I = \frac{3V}{2.2k\Omega} = 1.36 mA \end{equation} Ok, what if VS = 1V, R = 2.2k, and Vd = 2V? This time, VS < Vd, and the diode doesn't conduct. There's no current flowing through R, so VR = 0V. That means VD = VS = 1V (here, VD is the actual voltage across D, whereas Vd is the saturation voltage drop of the diode).
{ "source": [ "https://electronics.stackexchange.com/questions/76367", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/26535/" ] }
76,376
A "strong" pull (up/down) resistor would be one of a relatively low value, while a "weak" one would be of a relatively high value. For example, a pull-down resistor would be used to keep an I/O pin low, but a button connected from that pin to V CC would bring it high when pressed, because more current flows from V CC to the pin than from the pin to GND. In that situation, it seems any value of resistor could be used to keep the pin low, and a button press would always "override" it. What, then, would determine if the pull-down resistor is strong or weak? Does "strong" vs "weak" only apply when one such resistor is being compared to other resistances in the circuit, such as an internal pull-down resistor?
Strong means low resistance. Weak means high resistance. Of course low and high are relative terms, and so are strong and weak. The reference for this relationship must be inferred from context.

A strong or low resistance pull-up/down resistor is good because the time constant formed with the load capacitance (often, the input gate capacitance, and the PCB trace capacitance) is small, so rise/fall times will be short.
A strong pull-up/down resistor is good because noise currents from unintended coupling and EMI will result in smaller noise voltages. (Think about Ohm's law.)
A weak or high resistance pull-up/down resistor is good because it will not require much current from the driving circuitry to work against the resistor. Batteries will thus last longer, and parts can be smaller and don't get as hot.

Of course, you usually want all of these things, but a resistor can't be both.
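To make the trade-off concrete (with assumed, illustrative numbers): suppose a 5 V pull-up drives a node with about 20 pF of combined gate and trace capacitance. A weak 100 kΩ pull-up gives \$\tau = RC = 2\,\mathrm{\mu s}\$, so the node takes several microseconds to rise, but holding the line low costs only \$5\,\mathrm{V}/100\,\mathrm{k\Omega} = 50\,\mathrm{\mu A}\$. A strong 1 kΩ pull-up gives \$\tau = 20\,\mathrm{ns}\$, but holding the line low now wastes 5 mA.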
{ "source": [ "https://electronics.stackexchange.com/questions/76376", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2028/" ] }
76,402
I am just getting started on PCB design (for fun) and came across this term called thermal relief. It increases thermal resistance so the components can be soldered easily. But according to what I have learned, thermal and electrical resistance are always connected. So does thermal relief in any way increase the electrical resistance also? If not, what is the mistake I am making? This may sound silly but I cannot get it out of my mind.
A thermal relief pad is essentially a pad which has fewer copper connections to a plane (such as a ground plane). A normal pad would simply be connected in all directions, with the solder mask exposing the area to be soldered. However the copper plane then serves as a giant heatsink which can make soldering difficult, because it requires that you keep the iron on the pad longer and risk damaging the component. By reducing the copper connections, you limit the amount of heat transmission to the plane. It follows of course, that with reduced copper conduction paths, you also have greater electrical resistance. The increase in resistance is marginal compared to the reduction in thermal conductivity. This should not be a concern unless the pad is carrying high current such that the four traces (on a standard thermal relief) together are insufficient to carry the current; or if it is for high frequency signals where the thermal relief may cause unwanted inductance. Just to show a visual on normal vs thermal relief pads: The pad at left is connected to the copper plane (green) in all directions whereas the pad at right has had copper etched away such that only four "traces" connect it to the plane. Just for fun, I used a trace resistance calculator to estimate what the electrical resistance difference might actually be. Consider the thermal relief pad. If we assume the four "traces" to be 10 mil wide (0.010") and approximately 10 mil in length from the pad to the plane, then each of them has a resistance of about 486μΩ. The four "resistors" in parallel would give us a total resistance of : $$R_{total} = \frac{1}{\frac{1}{486\mu\Omega} \cdot 4} = \frac{486\mu\Omega}{4} = 121.5 \mu \Omega$$ If we approximate one empty space created by the thermal relief to have the equivalent of about three such traces, giving us 16 in total: $$R_{total} = \frac{486\mu\Omega}{16} = 30.375 \mu \Omega$$ Remember these values are micro ohms or \$0.0001215\$ and \$0.000030375\$ ohms, respectively. So by rough estimate, the difference in electrical resistance between our two hypothetical pads is a mere 91.125μΩ. The thermal properties, on the other hand, are significantly different. I don't know thermal conductivity formulas very well, so I won't try to calculate it. But I can tell you from experience that soldering one versus the other is highly noticeable. Values calculated assuming a 1 oz copper layer.
{ "source": [ "https://electronics.stackexchange.com/questions/76402", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/26543/" ] }
76,406
Why does a typical PCB always have rounded tracks? What harm can a sharp-edged PCB track cause? Please explain!
This is a great question, because the default answer is usually wrong for 99% of applications. The default answer is: to avoid reflections and other problems with high frequency signals. The default answer assumes that you are dealing with very high frequency signals - signals with a wavelength that is small enough to fit a couple of times in your trace. When you regard such a signal as a wave, when it hits the end of a trace or a 90 degree corner, it gets reflected back and causes destructive interference with itself, attenuating the signal. However, almost all signals you will ever route through a PCB are either DC or - in terms of these kinds of problems - very low frequency. Even 1MHz is a very low frequency and you will not run into these kinds of problems. It's 100+ MHz that starts running into routing problems. A great example of signals that benefit from clean layout in this respect are serial buses: PCIe, USB 2.0+, etc. This does not mean that it's good practice to make sharp corners all over the place. There are a couple of reasons why you want even DC signals and basically all your routing to have nice 45 degree angles or rounded corners:

First of all, board area use. 90 degree angles, or worse still, more-than-90-degree sharp corners, will always cause longer traces (higher impedance, more copper use) than traces that snake around obstacles. And often, your board size is limited, so you want to use as much area as possible for the actual components, not the traces in between.
Cleanliness. A clean, nice-looking board layout is easier to optimize, transfer and troubleshoot.
Manufacturability. This is much less of a concern than in the past, but still something to consider if you plan on prototyping this on a hand-etched or milled PCB. Sharp corners tend to come loose when milled or get under-etched when using crude manual etching methods. Flowing lines are easier to produce.

However, if you know what you are doing, don't hesitate to use sharp corners when you need them. As always: strict rules are for beginners and dumb people; once you know what you are doing, you know when it's alright to deviate from the rules.
{ "source": [ "https://electronics.stackexchange.com/questions/76406", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/26541/" ] }
76,726
If the neutral wire carries current, why do many people believe that it's safe? I've heard "You can touch the neutral wire/bar in the breaker box and not get shocked. Only the hot can hurt you." If the circuit is complete and current is flowing, can't you receive a shock?
The neutral is NOT safe to touch. When everything is working correctly, it should be at most a few volts from ground. However, and this is the big gotcha, if there is a break in the neutral line between where you are and where it is connected back to ground, it can be driven to the full line voltage. Basically in that case you are connected to the hot line via any appliances that happen to be on in that part of the circuit. Those can easily pass the few mA it takes to kill you. This is not supposed to happen, but since failures can be lethal and costly, an extra layer of safety is built into the protocol and rules. It is irresponsible and needlessly risky to consider the neutral line safe. This is why modern appliances either have two prongs and everything is insulated from the user, or three prongs and anything conductive the user can touch is connected to ground. In some past cases there have been appliances with polarized 2-prong plugs, but those are seriously frowned upon today. I don't think you're going to get UL approval for such a device unless it is fully insulated, in which case you shouldn't need a polarized plug.
{ "source": [ "https://electronics.stackexchange.com/questions/76726", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/24418/" ] }
76,773
I recently cannibalized an old thermostat for parts, and found a strange looking component with no markings at all. Here's a picture (since I can't figure out how to describe it): My first guess was that this was some sort of resistor, but I can't seem to measure any resistance across it with my multimeter... I'm very much an electronics newbie so please excuse me if the answer is obvious :)
It's a resistive hygrometer (or simply put: a humidity sensor). The resistance across the contacts varies depending on the relative humidity of the air it is suspended in.
{ "source": [ "https://electronics.stackexchange.com/questions/76773", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/26671/" ] }
76,790
I've googled almost 100 times, unable to find what transimpedance really is. Every search displayed results about transimpedance amplifiers, but didn't explain the term transimpedance.
Impedance means a circuit element that produces a voltage when a current is applied. For example, if you apply 1 A to a 1 Ohm resistor, 1 V will be generated across the resistor. Transimpedance applies to a 2-port (or n-port) device, rather than a simple single-branch circuit element, and it means the two-port produces a voltage when a current is applied, but the voltage appears on a different port than the current was applied to. Describing a device with a transimpedance means you're describing the device as a CCVS (current-controlled voltage source). Like an ordinary impedance, the units of a transimpedance are ohms (V/A). As an example, a transimpedance amplifier produces a voltage at its output when a current is provided to its input.
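A common concrete case (component values here are illustrative, not from any particular datasheet): a photodiode feeding an op-amp transimpedance stage with feedback resistor \$R_f\$ produces $$V_{out} = -R_f \cdot I_{in}$$ so with \$R_f = 1\,\mathrm{M\Omega}\$, a 1 µA photocurrent yields −1 V at the output, and the stage's transimpedance is \$10^6\$ V/A, i.e. 1 MΩ.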
{ "source": [ "https://electronics.stackexchange.com/questions/76790", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/22865/" ] }
77,502
Very simply, I am controlling servos (9g micro servos) based on some data read in from elsewhere. Everything works fine except that the servos will constantly "shake." That is, they vibrate back and forth with very subtle movements (with intermittent movements of 1/2 -> 1cm or so). I tried correcting this issue in software by doing something like:

do {
  delay(DTIME);
  positionServo();
  lcd.clear();
  lcd.setCursor(0,0);
  lcd.print("X position: ");
  lcd.print(xRead);
  lcd.setCursor(0,1);
  lcd.print("Y position: ");
  lcd.print(yRead);
} while( readChange() );  // while there has been change

where the do-while is necessary to initialize the variables that store the mapped servo value (using the Arduino Servo library). The readChange() function is defined as:

int readChange(){
  int x_Temp, y_Temp;
  x_Temp = map(analogRead(x_axisReadPin), 0, 1023, 0, 179);
  y_Temp = map(analogRead(y_axisReadPin), 0, 1023, 0, 179);
  if( abs(x_Temp - xRead) < DEG && abs(y_Temp - yRead) < DEG )
    return 0;  // no change
  else
    return 1;  // change
}

where xRead is the value that was initialized (the first, mapped servo output). This really is not a good approach. It requires that BOTH values must not have changed by a factor of DEG (~10 degrees, or ~0.28V in my case). If I write the function such that either one must be less than DEG, then what if I am only changing one servo at a time? So there is a dilemma. Is this simply a property of servos (perhaps cheap ones?) or is there a workaround? It would be much simpler to include a pastie link. Here is the full code. I have attached two servos together with a laser pointer to allow for two degrees of freedom (X, Y). There are options, based on the state of several buttons, to control the servos in various ways. The first is "Motion", where I have two photoresistors that, based on the amount of light exposure, affect the position of the servos. I have not yet implemented the code to control the servos by an Xbox controller. The third option is just randomized movement.
When using the Servo library on an Arduino, a common source of servo buzz is that the interrupt-driven servo routines don't actually give a very stable output pulse. Because the AVR takes interrupts for servicing the millis() clock and other things in the Arduino runtime, the jitter in the Servo library is on the order of several microseconds, which translates to a lot of movement in the servo. The fix for this is to write your own pulse. Something like this:

cli();                            // disable interrupts so nothing disturbs the timing
unsigned long start = micros();   // micros() returns an unsigned long
digitalWrite(PIN, HIGH);
while (micros() - start < duration)
  ;                               // busy-wait for the pulse width, in microseconds
digitalWrite(PIN, LOW);
sei();                            // re-enable interrupts

This will turn off other interrupts, and generate a much cleaner PWM pulse. However, it will make the millis() timer miss some clock ticks. (The micros() function may be called something else -- I forget exactly what.) In general, for timing critical code, you want to get rid of the Arduino runtime entirely, and write your own using the avr-gcc compiler and avr-libc library that powers the Arduino environment. Then you can set up a timer to tick 4 times per microsecond, or even 16 times per microsecond, and get a much better resolution in your PWM. Another cause of buzz in servos is cheap servos with cheap sensors, where the sensors are noisy, or when the exact position requested with the pulse can't actually be encoded by the sensor. The servo will see "move to position 1822" and try to do it, but ends up with the sensor reading 1823. The servo will then say "move back a little bit" and it ends up with the sensor reading 1821. Repeat! The fix for this is to use high-quality servos. Ideally, not hobby servos at all, but real servos with optical or magnetic absolute encoders. Finally, if the servos don't get enough power, or if you try to drive their power from the 5V rail on the Arduino, this will generate voltage-sag-induced buzz in the servos, as suggested above. You may be able to fix it with large electrolytic capacitors (which are a good idea for general filtering anyway) but you more likely want to make sure your servo power source can actually deliver several amps of current at the servo voltage.
{ "source": [ "https://electronics.stackexchange.com/questions/77502", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/26864/" ] }
77,516
Suppose I would like to insert data-cables of varying diameters -- e.g., a cable of 5 mm diameter -- into the 6 mm diameter hole of a plastic enclosure. The wires within the cable are terminated via soldering to a PCB inside the enclosure. What methods are used in the industry to ensure that pulling the cable won't make it slide in and out of the enclosure (thus preventing damage to the wire connections to the PCB inside)? Some options that I have considered: Two small lengths of thick heat shrink tubing placed around the cable, both just inside and just outside the wall of the enclosure. If the tubing is wide enough, then it will block the cable from sliding. This could work but may have to use too many layers of tubing and also the fit just by friction alone may not be strong enough. Apply a thick layer of rubber-compatible adhesive in a circle around the cable, both just inside and just outside the wall of the enclosure. The glue blob would act as sort of a bolt/washer. This is too messy in practice, and probably not usable professionally. Use rubber-and-steel-compatible adhesive to place two bolts around the cable, one just inside and one just outside the wall of the enclosure. The problem with this is that it is hard to find an adhesive that bonds well to both rubber and steel.
There are a few industry approaches to this. The first is molded cables. The cables themselves have strain reliefs molded to fit a given entry point, either by custom moulding or with off-the-shelf reliefs that are chemically welded/bonded to the cable. Not just glued, but welded together. The second is entry points designed to hold the cable. The cable is bent in a Z or U shape around posts to hold it in place. The strength of the cable is used to prevent it from being pulled out. Similarly, but less often seen now in the days of cheap molding or DIY kits, is this: the cable is screwed into a holder which is prevented from moving in OR out by the case and screw posts. Both of those options are a bit out of an individual's reach. The third is through the use of Cord Grips or Cable Glands, also known as grommets, especially if a watertight fit is needed. They are screwed on, the cable is passed through, then the grip part is tightened. These prevent the cable from moving in or out, as well as sealing the hole. Most can accommodate cables at least 80% of the size of the opening. Any smaller and they basically won't do the job. Other options include cable fasteners or holders. These go around the cable and are screwed or bolted down (or use plastic press fits). These can be screwed into a PCB, for example. Cable grommets are a fairly hacky way of doing it, as they are not designed to hold onto the cable. Instead they are designed to prevent the cable from being cut or damaged on a sharp or thin edge. But they can do in a pinch. As can tying a knot, though that mainly prevents pull-outs and might not be ideal for digital signals. Pushing a cable in doesn't happen too often, so you might not worry about that. Similar to the second method is using two or three holes in a PCB to push a cable through (up, down, up), then pulling it tight. This moves the point of pressure away from the solder point and onto the cable and jacket. The other industry method is avoiding all this in the first place, by using panel mounted connectors (or board mounted connectors like Dell does for power plugs, yuck).
{ "source": [ "https://electronics.stackexchange.com/questions/77516", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/13613/" ] }
77,629
I need to synchronize two micro-controllers so that they can measure the speed of propagating waves. The time delay measurements need to have microsecond accuracy (error less than 1/2 of a microsecond). I have two micro-controllers ( ATmega328 ) which use a 12MHz crystal. They are both equipped with Bluetooth transceivers. The Bluetooth transceivers send and receive packets with a jitter of ~15 milliseconds. I hope to synchronize the micro-controllers using the Bluetooth transceivers, or some other creative method. I have tried synchronizing them by touching them together, but I need them to stay synchronized for about 10 minutes, and their clocks drifted too fast. Maybe if it was possible to accurately predict the clock drift, this method would work. How should I go about achieving this synchronization?
I don't mean to rain on your wireless parade. You've run into a tough but unexpected requirement. Something like that warrants re-evaluation of the whole system design. 1st thing that comes to mind is to clock both units off one oscillator. You have Bluetooth communication, which hints that the range is on the order of 10m. You could connect your units with RG174 coax cable or an optical fiber, which would carry the clock. 2nd, there are precision oscillators. In order of increasing precision and cost: TCXO (temperature compensated crystal oscillator), 1 to 3 ppm drift, typically. OCXO (oven controlled crystal oscillator), drift on the order of 0.02ppm. Some OCXOs have drift down to 0.0001 ppm. Atomic clock ( Rubidium standard , for example). I'm mentioning the atomic clock mostly to give a frame of reference. More on that here . 3rd, a precision oscillator trained with GPS. Every GPS satellite has several atomic clocks on board. Usually, there are plenty of GPS satellites in view. GPS is used for precision timing a lot (a less known usage compared to sat nav). Most GPS receivers have a 1PPS output (one pulse per second), which provides timing accurate to 50ns. To have a 0.5μs drift over 600s (10min), your clock (the 12MHz clock in your present design) should have drift less than 0.0008ppm. But if you can correct the timing error every so often from a low drift external source, the requirement for the drift in the clock can be more relaxed. If you can correct every second, then your clock could have a 0.5ppm drift.
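The 0.0008 ppm figure is just the error budget divided by the measurement interval: $$\frac{0.5\times10^{-6}\,\mathrm{s}}{600\,\mathrm{s}} \approx 8.3\times10^{-10} \approx 0.0008\,\mathrm{ppm}$$ Likewise, re-synchronizing every second (e.g. on a GPS 1PPS edge) relaxes the budget to 0.5 µs per 1 s, i.e. the 0.5 ppm figure above.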
{ "source": [ "https://electronics.stackexchange.com/questions/77629", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/18637/" ] }
78,255
This question is related to project planning and minimizing risks in the future. Say that Company X builds this clever device (I do not own a company - I am just curious). The clever device takes advantage of a micro controller unit (MCU) as the central most important component. This MCU is manufactured by Company Y. Today, in 2013, how does one ensure that Brand Y and model is still produced X number of years in the future, either by Company Y or someone else? Are there currently any specific brands+model families (or just general architectures), that one can rely on being available into the (un)foreseeable future? Any brands/model-families/architectures known to be uncertain? I guess Intel and Atmel must be producing certain model families, that are quite certain to remain in production for a number of years/decades. But which model or architectures exactly?
Although this is not strictly an electronic design question, it is important to most design engineers. Component sourcing is one of our biggest headaches, and most companies are smart in letting a separate person deal with it instead of causing severe depression and anxiety in the engineers. There are three ways to combat this, aimed at three different tiers of products:

1. Not-terribly-hard-to-make products should just be adaptable. Say you are making a custom board with an expected 100-1000 units production per year. Just design in whatever you want, and when you get a product change notification from the manufacturer that one of the parts is going out of production: use another component and just eat the engineering hours. Terrible as this may sound, this is often economically the best idea in this respect. Even large production runs work well with this model; just produce a new variant of your product that is functionally the same. This is being done in the consumer and professional space all the time.

2. Small-run, specialist products that took a lot of man hours to make, for instance specialist scientific tools. The best course of action is to do a good estimate of your required components during the service life of the product and buy twice as many components as you will ever need. Cost is rarely a factor, so even though this will cost you quite some money in advance, as well as space to safely store it all, this will be alright. Don't underestimate storage costs: the parts need very specific, tightly controlled atmospheric conditions, especially to ensure solderability.

3. Medium to large run, long term support products. Here, you will want to get a direct line to the manufacturer of your chosen product and ask them to either (a) produce a special version for you with a specified service time or (b) when the PCN goes out, ask them to make those chips specially for you. All MCU companies do this last bit. If you want at least 10.000 chips, even ones that have gone out of production for 20 years, they will happily make them for you - at a nominal fee. However, this is only possible if you need at least in the order of 10 000 units, often even at least 100 000.

Very few companies guarantee any kind of long term support on their components. Even so-called 'design for long term use' automotive parts from Microchip are only guaranteed production parts for 10 years, which is nothing compared to the lifetime of some specialist gear. You will always need to check in directly with manufacturers to ensure availability in the long term.
{ "source": [ "https://electronics.stackexchange.com/questions/78255", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/27219/" ] }
78,366
Another form of the question is: will joining two diodes with wires (pn-np) make a transistor-equivalent? I read that they are not equivalent, but why?
Many people think that the answer to this question is related to the width of the Base region in BJT transistors - this is incorrect. The answer got quite long. You can read starting from the "Tricky Question" section if you want the bottom line. I believe that you were led to ask this question by something like this picture: This is a standard practice when teaching the basics of the BJT, but it can confuse someone not familiar with semiconductor theory in detail. In order to answer your question at an acceptable level, I need to assume that you're familiar with the principles of operation of the PN diode. This reference contains a detailed discussion of PN junctions. The answer concerns an NPN transistor, but it also applies to PNP transistors after an appropriate change of polarities. NPN in forward-active mode of operation: The most "useful" mode of operation of a BJT transistor is called "forward-active". An NPN is in forward-active mode when: the Base-Emitter junction is forward biased (usually at \$V_{BE}\approx 0.6V\$) and the Base-Collector junction is reverse biased (\$V_{CB}>0\$). Due to the Base-Emitter junction being forward biased, there is an injection of electrons from the Emitter into the Base (\$I_{E_n}\$ in the image above), and a simultaneous injection of holes from the Base into the Emitter (\$I_{B1}=I_{E_p}\$ in the image above). The Emitter region (\$n^{++}\$) is much more heavily doped than the Base region (\$p\$), therefore the current due to electrons injected into the Base is much higher than the current due to holes injected into the Emitter. Note that the holes injected into the Emitter are supplied from the Base electrode (Base current), whereas the electrons injected into the Base are supplied from the Emitter electrode (Emitter current). The ratio between these currents is what makes the BJT a current amplifying device - a small current at the Base terminal can cause a much higher current at the Emitter terminal. The conventional current amplification is defined as the Collector-to-Base current ratio, but it is the ratio between the above currents which makes any current amplification possible. Due to the injection of a huge amount of electrons from the Emitter, electrons tend to diffuse through the Base to the reverse-biased Base-Collector junction. Once an electron reaches there, it is swept across the Collector-Base depletion region and is injected into the Collector, thus contributing to the Collector's current (\$I_C\$ in the image above). Now, if all these electrons injected from the Emitter could diffuse to the reverse-biased Base-Collector junction without being subject to other effects, the width of the Base region would be of no importance at all. However, there is recombination going on in the Base. In the recombination process the injected electrons meet holes and "neutralize" each other. The injected electron is "lost" in this process and will not contribute to the current at the Collector terminal. But wait, charge conservation requires that the hole which recombined with the injected electron be supplied from somewhere, right? It turns out that the recombining holes are also supplied from the Base terminal (\$I_{B2}\$ in the image above), thus increasing the Base's current and decreasing the Emitter-to-Base current ratio (which represents the transistor's current gain, remember?). The above means that the more electrons recombine during diffusion through the Base region, the lower the current gain of the transistor. It is up to the manufacturer to minimize the recombination in order to provide a functional transistor.
There are many factors that affect recombination rates, but one of the most important is the Base's width. It is evident that the wider the Base, the more time it will take the injected electron to diffuse through the Base, and the higher the chance that it will meet a hole and recombine. Manufacturers tend to make BJTs with very short Bases. So, why can't two PN diodes back to back function as a single NPN: The above discussion explained why the Base must be short. PN diodes (usually) don't have such short regions, therefore the recombination rate will be very high and the current gain will be approximately unity. What does this mean? It means that the current at the "Emitter" terminal will be equal to the current at the "Base" terminal, and the current at the "Collector" will be zero: simulate this circuit – Schematic created using CircuitLab The diodes are functioning as standalone devices, not a single BJT! Tricky question: To various degrees of accuracy, many people can answer your initial question as I did. However, the more interesting question is this: if we make the \$p\$ sides of both diodes very short, such that the sum of their widths is no wider than the Base region of an NPN transistor, will the diodes function as a transistor? This question is more difficult to answer because the straightforward answer of "no, the Base of a BJT is very short" is not applicable anymore. It turns out that this approach will not make two diodes similar in behavior to a single NPN transistor. The reason is that at the metal contact of the diode, where metal and semiconductor are in touch, all the excess electrons "recombine" with the "holes" supplied by the contact. It is not the usual recombination, as metals don't have holes, but the fine distinction is not that important - once the electrons enter the metal, no transistor functionality can be achieved. An alternative way of comprehending the above point is to realize that the Collector-Base diode is reverse biased, but still conducting a high current. This mode of operation can not be achieved with standalone PN diodes, which conduct negligible currents under reverse bias. The reason for this restriction is the same - excess electrons from the P side of the forward biased diode can not be swept to the P side of the reverse biased diode through the metal wire in the "BJT like diode configuration". Instead, they are swept to the power supply providing a voltage bias to the common terminal of the diodes. There was a follow-up question which asked to provide a more rigorous reasoning for the above two paragraphs. The answer concerns metal-semiconductor interfaces and can be found here . What the above means is that the discussion of the width of the Base region is related to the discussion of the effectiveness of BJT transistors, and is completely irrelevant to the discussion of two back-to-back PN diodes as a substitute for a BJT. Summary: Two back-to-back PN diodes can't function as a single BJT because transistor functionality requires a semiconductor-only Base region. Once metal is introduced in this path (which is what two back-to-back diodes represent), no BJT functionality is possible.
{ "source": [ "https://electronics.stackexchange.com/questions/78366", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/23564/" ] }
78,448
I get confused on the low-level physics of electricity from time to time. It came up in "Which way does electricity power a circuit," and I don't totally get it. How fast does electricity flow? Is the speed of an electron different in, say, a resistor than in a wire? Does it matter? Or are the effects of the electron the only important thing, with lower levels of abstraction not useful in practice? I know there are already materials on this topic, and I have read some of them. I think having the question on this site might inspire some interesting answers to the age-old question. Bonus points for:

Identifying and clearing up common misconceptions
Explaining in a way that someone with a high school diploma could understand, without oversimplifying it so much that it's incorrect
How fast does electricity flow? This is a good question, because it seems like a simple enough question, but usually it indicates some underlying misconceptions. The first difficulty in answering the question is knowing what is meant by electricity. Do you mean:

How fast do changes in electrical fields propagate? or...
How fast do electrical charge carriers move?

Usually, people asking this question actually care about the former, but are thinking about the latter. However, not having a clear understanding of the difference, their underlying concern actually can't be addressed without stepping back and addressing the underlying misconceptions which lead to the question. Understand this: there are forces, and there are things that transmit forces, and they are not the same thing. Here's an example: I'm holding one end of a rope, and you are holding the other end. When I want to get your attention, I tug the rope. There is the rope, and there is the tug. The tug travels as a wave of force down the rope at the speed of sound in the rope. The rope itself will move at some other speed. Say I have two lookout towers, and when I see the approaching invaders, I shout to the other tower. Sound will travel as waves in the air at the speed of sound. How fast are the molecules in air moving? Do you care? Some people won't let this go until the motion of the molecules is actually explained, even though it's usually not relevant to their concerns. So here's the answer: the molecules are flying around in all random directions, all the time. They fly around because they have non-zero temperature. Some are very fast. Some are very slow. They bump into each other all the time. It's very random. When you shout, your vocal tract compresses (and rarefies, as your vocal cords vibrate) some of the air. The molecules in this compressed region want to move to a region with less pressure, so they do. But now this nearby region has too much air, and is a little more compressed than the air around it, so the compressed region expands outward a little more. This wave of compression moves through the air at the speed of sound. All of this happens superimposed on the random motion of the molecules previously mentioned. It's unlikely that the same molecules that were in your vocal tract will be the ones that vibrate in the listener's ear. If you watch individual molecules, you will observe them going in all directions. Only if you observe a lot of them will you notice that slightly more went in one direction versus another. It is true for all things we would call "sound" that the random motion of the molecules due to thermal noise is much more than their motion due to sound. When the "sound" becomes the more relevant motion, we tend to call it not "sound" but rather an "explosion". The situation with electricity is not much different. A metal conductor is full of electrons that are free to wander around the entire circuit in random directions, and they do, simply because they are warm. Things in our circuits make waves in this sea of electrons, and these waves propagate at the speed of light [1]. At the currents we typically encounter in circuits, most of the electron motion is due to thermal noise. So now we can answer the questions: How fast do changes in electrical fields propagate? At the speed of light in the medium in which they are propagating. For most cables, this is in the neighborhood of 60% to 90% of the speed of light in a vacuum. How fast do electrical charge carriers move?
The velocities of individual charge carriers are random. If you take the average of all these velocities, you can get some velocity that depends on the charge carrier density, and the current, and the conductor's cross-sectional area, and it's typically less than a few millimetres per second in a copper wire. Above that, resistive losses become high in ordinary metals and people tend to make the wires bigger instead of forcing the charges to move faster. Further reading: Speed of Electricity Flow by Bill Beaty [1]: The speed of light depends on the material in which the light is propagating, just as with sound. See Wave propagation speed .
{ "source": [ "https://electronics.stackexchange.com/questions/78448", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/13400/" ] }
78,638
In AC analysis, \$s=j\omega\$ when we deal with \$sL\$ or \$1/sC\$. But for a Laplace transform, \$s=\sigma+j\omega\$. Sorry for being ambiguous, but I would like to connect the questions below:

Why is sigma equal to zero?
Is the neper frequency connected to this?
Is sigma equal to zero because the input signal is a sinusoid of constant \$\pm V_{max}\$?
Of course, \$s = \sigma + j\omega\$, by definition. What's happening is that \$\sigma\$ is being ignored because it is assumed to be zero. The reason for it is that we are looking at the response of the system to periodic (and thus non-decaying) sinusoidal signals, whereby Laplace conveniently reduces to Fourier along the imaginary axis. The real axis in the Laplace domain represents exponential decay/growth factors that pure signals do not have, and which Fourier does not model.
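One way to see it is to expand the Laplace kernel: $$e^{st} = e^{(\sigma + j\omega)t} = e^{\sigma t}(\cos\omega t + j\sin\omega t)$$ The \$e^{\sigma t}\$ factor is a decaying (\$\sigma < 0\$) or growing (\$\sigma > 0\$) envelope. A steady sinusoid has no envelope, so \$\sigma = 0\$, \$s = j\omega\$, and \$sL\$ and \$1/sC\$ reduce to the familiar \$j\omega L\$ and \$1/j\omega C\$.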
{ "source": [ "https://electronics.stackexchange.com/questions/78638", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/23564/" ] }
78,920
Having no prior experience with scopes, it seems strange to me that when the probe is not measuring anything (~ not connected to a circuit) it measures a small 50Hz (~ my mains are running at 230V 50Hz) signal instead of some random noise. Is this normal behaviour (my scope is a Rigol DS1052E)?
Yes, that's normal. Due to its high impedance the probe acts as an antenna for the 50Hz field from the mains which fills the space surrounding the wiring (i.e. any room in your house). You'll notice that touching the probe shows an even stronger signal, indicating that your body is an even better antenna.
{ "source": [ "https://electronics.stackexchange.com/questions/78920", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/27473/" ] }
79,145
In an answer by The Photon he mentions 'Manhattan routing' in regards to PCB design. I haven't found a lot of relevant information about this term on the internet; therefore the question: What is Manhattan routing?
Manhattan routing is a PCB routing strategy. You use one dedicated layer for horizontal tracks and another layer for vertical tracks. No horizontal tracks are allowed on the vertical layer, and no vertical tracks are used on the horizontal layer. This means that most connections will go through a via, but this strategy can provide surprisingly dense boards with little routing effort.
{ "source": [ "https://electronics.stackexchange.com/questions/79145", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4906/" ] }
79,279
I know mAh tells how many milliamperes a battery can deliver in an hour. But does that also tell how many hours the battery would last? Sorry but I don't really get it. If we're talking about a water tank, to my impression, mAh is like how big the faucet is and not how much water there is in the tank. I'm really confused as to why we measure battery capacity in mAh if my understanding about it is correct.
mAh (or mA·h) is not how many milliamperes a battery can deliver in an hour. That would be mA/h. Current, measured in amperes, is already a rate of stuff. Specifically, one ampere is one coulomb per second. So, if current is like speed, then mA/h is like acceleration, and mAh is like distance. Rather, mAh is a unit of charge . It is what you get when you multiply current by time. By multiplying by time, the "per time" part of the ampere is cancelled, and you get back to charge. If an ampere is a coulomb per second, then: $$ \require{cancel} 1~\mathrm{mAh} = 1\cdot10^{-3}~\mathrm{\frac{C}{s}h} $$ and by dimensional analysis : $$ \require{cancel} \frac{1\cdot10^{-3}~\mathrm{C\cancel{h}}}{\cancel{\mathrm{s}}} \frac{60\cancel{\mathrm{s}}}{1\cancel{\mathrm{min}}} \frac{60\cancel{\mathrm{min}}}{1\cancel{\mathrm{h}}} = 3.6~\mathrm{C}$$ For example, if you draw 1 mA for 1 hour from a battery, you have used 1 mA · 1 h = 1 mAh of charge. If you draw 2 mA for 5 hours, you have used 2 mA · 5 h = 10 mAh. You can approximate how long a battery will last by dividing its total charge (in mAh) by your nominal load current (in mA). Say you have an 1800 mAh battery, and you connect it to a 20 mA load: $$ \require{cancel} \frac{1800~\mathrm{mA\cdot h}}{20~\mathrm{mA}} = \frac{1800\cancel{\mathrm{mA}}\cdot\mathrm{h}}{20\cancel{\mathrm{mA}}} = 90~\mathrm{h} $$ This is an approximation because: The charge capacity (the number measured in mAh) is determined by measuring how much charge can be removed from the battery before voltage drops to some arbitrarily selected level where the battery is considered "discharged". This may or may not be the threshold at which your circuit no longer functions. Battery manufacturers, wanting to make their batteries seem as good as possible, typically select a very low threshold voltage. Assuming you are considering charge available only down to some voltage threshold, the actual charge available from the battery depends on temperature, and the rate at which you discharge it. Lower temperatures slow the chemical reaction in the battery, making it harder to extract charge. Higher rates of discharge increase losses in the battery, decreasing the voltage, thus hitting the "discharged" voltage threshold limit sooner. The electric potential difference provided by the chemicals in the battery is actually constant; what makes the voltage decrease is the depletion of the chemicals around the electrodes and degradation of the electrodes and electrolyte. This is why battery voltage can recover after a period without use . So, the point at which the threshold voltage is reached can actually be quite complex to determine. If you can find a good datasheet for your battery, it may give some insight into the parameters under which these calculations were made.
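As a minimal code sketch of the runtime estimate above (a first-order approximation that deliberately ignores the temperature and discharge-rate effects just listed):

```c
#include <stdio.h>

/* First-order battery life estimate: mAh / mA = hours.
   Real batteries deliver less at high loads or low temperatures. */
double runtime_hours(double capacity_mAh, double load_mA)
{
    return capacity_mAh / load_mA;
}

int main(void)
{
    printf("%.1f h\n", runtime_hours(1800.0, 20.0)); /* prints "90.0 h" */
    return 0;
}
```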
{ "source": [ "https://electronics.stackexchange.com/questions/79279", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/27593/" ] }
79,373
I want to control an LED's brightness with PWM (via a BJT). What frequency of PWM should I choose?
For a question like this, you will probably get as many answers as there are people interested in answering. Here is my answer: It depends . Here are some of the limiting factors, first the lower limits: Persistence of vision: Different people are differently sensitive to flicker in a light source. Some would notice flicker even at 100 Hz, others perhaps not even at as low as 10 Hz. Motion of the light source relative to the eye makes flicker more discernible, scaling up with the speed of the motion. Human vision sensitivity at low intensity of light - both ambient and source intensity. At very low intensity, the eye is much more sensitive to any change in intensity. So an LED operated at low duty cycle / low current and in a dark environment would require a higher minimum PWM frequency. Now the upper limits: LED turn-on characteristics: An LED cannot be toggled at arbitrarily high frequency; once the pulse duration approaches the turn-on time, the LED never really turns on fully, hence linearity of PWM control is lost to begin with, and at higher frequency / shorter pulses, eventually the LED just stays dim or off. PWM provider capabilities: Your microcontroller will have its own maximum PWM rate, which sets a hard limit. Switching losses: Any switching system, MOSFET based, BJT based, or other, suffers switching losses of power as the switching rate increases. At some point this becomes significant both in terms of heating of the switching device, and efficiency of illumination. Thus, depending on these parameters, and any others affecting your specific requirement, the correct answer could be anywhere in the 50 Hz to a few dozen kHz range.
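On the "PWM provider capabilities" point: for a typical timer-based PWM peripheral, achievable frequency trades off directly against duty-cycle resolution. A small illustrative sketch (assuming an N-bit up-counting timer with no prescaler; real peripherals vary):

```c
/* PWM frequency of an N-bit up-counting timer: f_pwm = f_timer / 2^N. */
double pwm_frequency(double f_timer_hz, unsigned resolution_bits)
{
    return f_timer_hz / (double)(1UL << resolution_bits);
}
/* e.g. a 16 MHz timer gives 62.5 kHz at 8-bit resolution,
   but only ~15.6 kHz at 10-bit resolution. */
```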
{ "source": [ "https://electronics.stackexchange.com/questions/79373", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }
79,447
Could you use a hot wire (120 V, 60 Hz) from Earth and a copper rod inserted into another extraterrestrial body (Mars) to complete a circuit? Assume impedance isn't something to worry about (pretend Mars is only a mile away or something). I've asked this before in other places and got some really conflicting answers.
It depends what the voltage between Mars and Earth is, which we don't know. It is unlikely that this voltage is 0, and it could be enormous. Your circuit would receive a voltage of roughly the same magnitude, resulting in what would probably be a spectacular failure of your interplanetary toaster. Clearing the air The idea, "for current to flow, there must be a connection to earth," is a common one, and it is totally false . If the misconception were true, circuits on airplanes and satellites wouldn't work since there is no connection to earth. It's quite obvious that such circuits do work, and that a connection to earth is totally unnecessary for some circuits to function properly. The misconception arises from the concept of grounding , but remember that ground is simply a reference voltage. It doesn't necessarily need to be Earth. It could be Mars or any other electrical potential. Second, for current to flow, a circuit does not need to make a physically closed loop . If point A is fixed at \$ 0V \$ and point B is fixed at \$ V_b \$ , they don't need to be physically connected for this property to be true. When we connect a resistor between point A and point B, the current from B to A will be \$I_{BA} = V_b/R \$ , according to Ohm's law . It is sometimes helpful to symbolically wire every component to the ground reference voltage. This way we can think of ground connected to point A, and connected to a voltage source which connects to point B. In this case, the two points still don't have to be physically connected, but may be considered to be symbolically connected. In this framework, think of Earth as point A, with our reference voltage of \$ 0V \$ , and Mars as point B with some unknown voltage, \$ V_{mars} \$ . Hooking up with Mars Let's say we can make nearly ideal electrical connections to anywhere (suppose we have a handy little portal). The physical laws governing electricity are the same everywhere in the universe. So what happens when you hook up your circuit with your ground in the Martian crust using this portal? Actually, it depends what \$ V_{mars} \$ is: Case 1 , \$ V_{mars} \approx V_{earth} = 0V \$ In this case your circuit works perfectly normally. Your circuit has no idea that it is connected to Mars and not Earth since they have approximately the same electrical potential. Case 2 , \$ V_{mars} \gg V_{earth} = 0V \$ or \$ V_{mars} \ll V_{earth} = 0V \$ In the case that the voltage of Mars differs significantly from the voltage of Earth, current will certainly still flow, but your circuit might not behave how you expect. It might blow a fuse, arc weld everything in the vicinity or simply vaporize our brave little toaster, depending on just how huge \$ V_{mars} \$ is. Voltage between Mars and Earth We don't really know what \$ V_{mars} \$ would be, since we don't know the net charge of Earth or Mars. There is a paper, " Discussion on the Earth's net electric charge " which gives us some clues: ...Integrated over a sufficiently long time, the net current to or from earth must be zero. If it were not, the potential of the earth would build up to such a magnitude that no force could "shoot" more charges up the potential slope- and once this state is reached, the net current would indeed be zero. This is a dynamic equilibrium. The problem of a net charge on the solid (and liquid) earth (i.e.
the globe) can hardly be answered by starting from the fact that current to that body is zero (always or in the average over a long time); not even within the framework of the "classical picture of atmospheric electricity". There does not seem to be a practical method to measure it. Basically, we can assume that Earth and Mars have each reached their respective equilibrium charges, but we have no way of knowing whether these charges are net positive, neutral, negative, or what their magnitudes are. Since we don't know the net charges of the respective planets, we can't estimate the voltage between them.
{ "source": [ "https://electronics.stackexchange.com/questions/79447", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/25074/" ] }
79,498
I think I understand the operating principles of a brushless motor and a stepper motor, but I'm a little confused about the difference. Is a brushless DC motor a very basic stepper motor? With proper controls, could a brushless DC motor be operated as a stepper motor? If not, how do they differ? For an electronics newbie, can someone highlight the similarities and differences between stepper motors and DC brushless motors?
The two are largely the same, fundamentally. However, they differ in intended application. A stepper motor is intended to be operated in, well, steps. A BLDC motor is intended to be operated to provide smooth motion. Since stepper motors are used for motion control, repeatability of the steps is desirable. That is, if you start at one step, then move to another, then back to the first, it should ideally return to exactly where it was previously. Various things can mess this up; slop in the bearings, friction, etc. BLDC motors are optimized for smooth torque between steps, not repeatability. Stepper motors are designed to maximize holding torque , the stepper's ability to hold the mechanical load at one of the steps. This is accomplished by keeping the winding current high even though the rotor is aligned with the stator. This wastes a lot of energy, because it generates no torque unless the load tries to turn out of position, but it does avoid the need for any feedback mechanism. On the other hand, BLDCs are typically operated with the rotor lagging the stator so that the applied current always generates maximum torque, which is what a brushed motor would do. If less torque is desired, then the current is decreased. This is more efficient, but one must sense the position of the load to know how much torque to apply. Consequently, stepper motors are usually bigger to accommodate the additional heat of operating the motor at maximum current all the time. Also, for most applications, people expect a stepper to be capable of small steps for precise motion control. This means a large number of magnetic poles. A stepper motor typically has hundreds of steps per revolution. A BLDC will usually have far fewer. For example, recently I was playing with a BLDC from a hard drive, and it has four "steps" per revolution. Stepper motors are usually designed for maximum holding torque first, and speed second. This usually means windings of very many turns, which creates a stronger magnetic field, and thus more torque, per unit of current. However, this comes at the expense of increased back-EMF, thus reducing the speed per unit voltage. Also, stepper motors are usually driven by two phases 90 degrees apart, while BLDCs typically have three phases, 120 degrees apart (though there are exceptions in both cases): stepper motor BLDC Despite these differences, a stepper can be operated like a BLDC, or a BLDC like a stepper. However, given the conflicting design intentions, the result is likely to be less than optimal.
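To illustrate the phase relationship mentioned above, here is a small sketch of the commanded winding currents for each drive scheme as a function of electrical angle (an idealized sinusoidal drive; real drivers may use full steps, microstepping or trapezoidal commutation instead):

```c
#include <math.h>

/* Two stepper phases, 90 electrical degrees apart. */
void stepper_phases(double theta, double *a, double *b)
{
    *a = cos(theta);
    *b = sin(theta);
}

/* Three BLDC phases, 120 electrical degrees apart. */
void bldc_phases(double theta, double *u, double *v, double *w)
{
    *u = sin(theta);
    *v = sin(theta - 2.0 * M_PI / 3.0);
    *w = sin(theta + 2.0 * M_PI / 3.0);
}
```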
{ "source": [ "https://electronics.stackexchange.com/questions/79498", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/27675/" ] }
79,752
Can someone explain what "common-mode" noise is, and how it can be problematic? I understand "noise" on a signal in general. If I have a "noisy" +5V rail on a circuit board, I'm not going to be getting a constant value of +5, it will be bouncing around above and below that nominal value... ... but still relative to circuit COM . My very vague understanding of "common-mode" noise is that it is where both sides are varying equally together . (This is where my understanding breaks down) That is, the pair is bouncing around with respect to ... ... to what? Earth ground?
What is common-mode noise? Practically all integrated circuits (and circuits in general) have a pin named "ground" or "GND", or the datasheet says things like "connect VSS to ground". When transmitting data "a long distance", the wires act as antennas and can easily pick up a few volts of noise, and also radiate noise. So, for example, an output pin on a chip in one box may transmit a "0" as about 0.5 V and transmit a "1" bit as about 2.5 volt, measured relative to the ground pin of that same "line driver" chip. At a distant location, the other end of the wire is often connected to a pin on a "line receiver" chip. Because of noise, the voltage on that input pin, measured relative to the ground pin of that same line receiver, might often be anywhere in the range -1.5 V to +2.5 V when the transmitter is trying to send a "0", and anywhere in the range 0.5 V to 4.5 V when the transmitter is trying to send a "1". So how can the receiver possibly know whether the transmitter is trying to send a 1 or a 0, when it gets a voltage like 0.9 or 2.2 ? Because of this, data transmitted over long distances is often sent using differential signaling over a balanced pair , often a twisted pair . In particular, USB, CANbus, and MIDI cables include a single twisted pair for data; "2-line" telephones and FireWire use two twisted pairs; CAT5e Ethernet cables include four twisted pairs; other systems use even more pairs. Often (but not always), there is some other "ground wire" in the same bundle of cables. We label one of these wires "plus" or "positive" or "+" or "p", and the other wire "minus" or "-" or "negative" or "n". So when I want to transmit a "CLK" and a "MOSI" signal from one place to another, my cable has 4 wires labeled pCLK, nCLK, pMOSI, nMOSI. The common mode voltage of CLK is the average of the two CLK wires, (pCLK + nCLK)/2, measured at the receiver -- relative to the GND pin of that receiver. The common mode voltage of MOSI is the average of the two MOSI wires, (pMOSI + nMOSI)/2, measured at the receiver -- relative to the GND pin of that receiver. People who design line drivers try to make them pull the "p" line up just as much and at the same time as the "n" line goes down, and vice versa, so the average voltage (measured at the driver) is constant -- in this example, the average at the driver is a constant 1.5 V. (Alas, they are never completely successful). If there were no noise, then the common mode voltage would also be the same constant value -- but alas, it is not. Whenever data is transmitted with differential signaling, the difference between the noise-free common mode voltage and the actual common mode voltage is entirely caused by noise. That difference is called common-mode noise. There are 3 main causes of common-mode noise: Many differential pairs are driven in ways that don't switch the "+" and "-" wires at exactly the same time, or by exactly the same voltage, or perhaps small amounts of noise on the line driver's power rail leaks onto only the "+" wire and not the "-" wire, causing some common-mode noise. (A ferrite choke on the "driver" end of the cable is commonly used to reduce common-mode noise from this source). Other wires in the cable bundle can leak more energy into one wire of the pair than the other -- typically through capacitive coupling. (Twisting each pair a different number of twists per length is commonly used to reduce common-mode noise from this source). Outside interference -- often through inductive coupling. how can common-mode noise be problematic? 
People try to design line receivers to reject common-mode noise. (Alas, they are never completely successful). But even in a system that uses differential signaling with such line receivers, common-mode noise can still be problematic: Long communication wires act as antennas. If the line driver sends too much common-mode noise down the wires, it causes radio-frequency interference with other devices, and causes the system to fail FCC testing or CE testing or both, for electromagnetic compatibility (EMC). Some of the common-mode noise leaks through the line receiver -- the common-mode rejection ratio is not infinite. This is a big problem with analog signals; usually not a problem with digital ones and zeros. Most integrated circuits don't work right when any pin is forced too high or too low -- voltage lower than 0.6 V below the GND pin and higher than 0.6 V above the power pin usually causes problems. Since common-mode noise can easily push the "+" or the "-" signal, or both, outside that range, line receiver circuits must either connect the wires to special integrated circuits (such as "Extended Common-Mode RS-485 Transceivers") that can handle such excursions; or connect the wires to some non-integrated circuit component that protects the ICs from such excursions -- such as the opto-isolators used in MIDI or the transformers used in Ethernet.
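The arithmetic behind the definitions above is simple enough to state as code — a sketch of decomposing the two measured wire voltages into their differential and common-mode parts:

```c
/* Decompose a differential pair (both wires measured relative to the
   receiver's ground) into the signal and the common-mode voltage. */
typedef struct {
    double diff; /* vp - vn: carries the data */
    double cm;   /* (vp + vn) / 2: its deviation from the
                    noise-free value is the common-mode noise */
} pair_decomp;

pair_decomp decompose(double vp, double vn)
{
    pair_decomp d;
    d.diff = vp - vn;
    d.cm   = (vp + vn) / 2.0;
    return d;
}
```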
{ "source": [ "https://electronics.stackexchange.com/questions/79752", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17808/" ] }
80,217
Often, if older standards become obsolete, it's because they are superseded by newer technologies. In the past networking was done using coax, instead of the twisted pair used today. Why did they use the more expensive coax? Twisted-pair technology clearly existed back then, so technological advances don't seem to be the reason.
Coax was used for its controlled impedance, its bandwidth and its self-shielding properties. Sure, twisted-pair wiring has existed for a very long time, mostly used to carry audio frequencies in telephone wiring. That isn't where the technical advancement was required. In order to compensate for twisted-pair's lossiness and impedance issues, major technological improvements in the electronics used to interface to it (such as high-speed adaptive equalizers) were required in order to make it more cost-effective than coax.
{ "source": [ "https://electronics.stackexchange.com/questions/80217", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/28009/" ] }
80,527
How can I reverse engineer a simple through-hole board? Trying to do it by eye gets confusing because it's easy to lose track of component orientation and location while flipping the board over. Maybe there's a computer assisted technique that would make things easier?
Take the best pictures of both sides of the board that you can. You can use a scanner to scan the bottom side of the board, but not all scanners will be able to focus on the top layer because of the height of the components. For this guide I used the 3MP camera of an iPhone 3GS without any special lighting or anything; you will almost certainly have a picture taken with better equipment and conditions. Import them into a blank canvas in your favorite editing program. I'm using Adobe Fireworks , but Photoshop or almost any image editing software will do. The images must be on separate layers. Use the Polygon Lasso tool to crop the board from the rest of the picture. Do the same for the other side. Use CMD - X to cut and then CMD - V to paste. It will lift the selection from the background. And then just delete the background. Do the same for the other side. Use the Rotation CW/CCW and Flip Horizontal/Vertical to adjust the pictures to the correct position. You will want the bottom of the board to be mirrored, so it matches the component side. Decrease the opacity of the top layer down to around 50%~75% so we can see through it. The two images will not be exactly the same size or angle, so we will use the Distortion tool to resize and straighten the corners of the top side so it matches the bottom side. Alignment is critical, so take your time, use the zoom/magnifier and check if everything is aligned. Look for the holes in the board; they are the easiest way to check if the board is misaligned. Blending There are many methods that can be used to blend images. Not all will work in all cases, but I will go through a few methods that may work for the majority of people. The adjustments are subjective and will depend on your board color, illumination, exposure, etc... there are many variables, so play around and find the values that work best for you. 1. Screen Blend Drag the bottom side's layer above the component side's layer (copper side's layer on top of component side's layer). Darken the copper layer using the Brightness/Contrast filter and set brightness to -50. Select the copper layer, set the blend mode to Screen / Interpolation or Average and set it to 80. 2. Luminosity Blend Drag the bottom side's layer above the component side's layer (copper side's layer on top of component side's layer). Increase the contrast of the copper layer using the Levels filter, dragging the pins towards the hill of the histogram. Select the copper layer, set the blend mode to Luminosity and set it to 50. 3. Brush + Threshold Drag the bottom side's layer above the component side's layer (copper side's layer on top of component side's layer). Select the copper layer and use the Brush to draw lines connecting the solder pads/holes; you can also use the path/line tool to draw straight lines instead. Use a solid color that is not used by the solder mask. In this case the solder mask is green/yellowish, so I used blue. Apply an Invert filter. Use the Levels filter or Threshold filter to extract just the solid color. Drag the left pin all the way to the right. Apply the Hue/Saturation filter and choose the track color of your preference by rotating the Hue. Select the copper layer, set the blend mode to Additive and set the opacity to around 70 to adjust the intensity of the tracks. Now we are ready to write down the values, and then transfer them to a CAD software package.
{ "source": [ "https://electronics.stackexchange.com/questions/80527", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/13480/" ] }
81,252
One particularly irritating variety of bug in a microprocessor-controlled system is for the microprocessor to unexpectedly reset. An important tool for debugging this kind of problem is a list of possible causes. What could cause a microcontroller to unexpectedly reset?
On PIC and dsPIC chips, I have observed the following causes of unexpected reset. Hardware: Reset pin driven low or floating. Check the obvious stuff first! ESD coupling into the reset pin. I've seen this happen when completely unrelated equipment gets turned on on the same desk. Make sure there's enough capacitance on the reset pin, possibly as much as 1 uF. ESD coupling into other pins of the processor. Scope probes in particular can act as antennae, couple noise into the chip and cause odd resets. I've heard reports of "invalid opcode" reset codes. Bad solder joint/intermittent bridge. Could be losing or shorting a power rail, either on the processor or somewhere else on the board. Power rail glitch/noise. Could be caused by any number of external problems, including a damaged regulator or a dip in the upstream supply. Make sure the power rails feeding the processor are stable. May require more cap somewhere, perhaps decoupling cap directly on the processor. Some microcontrollers have a Vcap pin, which must not be connected to VDD and must have its own capacitor to common. Failure to connect this pin properly may have unpredictable results. Driving an analog input negative past a certain limit causes a reset that reports in RCON like a brownout. The same may be true of digital inputs. Very high dV/dt in a nearby power converter can cause a brownout reset. (See this question .) I have seen this in two cases, and in one I was able to track it to capacitive coupling. An IGBT was switching 100-200 amps, and at turn-off some feedback circuits were seeing a few microseconds of noise, going from 2V to over 8V on a 3.3V processor. Increasing the filter cap on that feedback rail made the resets stop. One could imagine that adding a dV/dt filter across the transistor might have had a similar effect. Software: Watchdog timer. Make sure the watchdog timer is cleared often enough, especially in branches of your code that may take a long time to execute, like EEPROM writes. Test for this by disabling the watchdog to see if the problem goes away. Divide-by-zero. If you're performing any divide operation in your code, make sure the divisor can never be equal to zero. Add a bounds check before the division. Don't forget that this also applies to modulo operations . Stack overflow. Too many nested function calls can cause the system to run out of dynamic memory for the stack, which can lead to crashes at unusual points in code execution. Stack underflow. If you are programming in assembler, you can accidentally execute more RETURNs than you executed CALLs. Non-existent interrupt routine. If an interrupt is enabled, but no interrupt routine is defined, the processor may reset. Non-existent trap routine. Similar to an interrupt routine, but different enough I'm listing it separately. I've seen two separate projects using dsPIC 30F4013 which reset randomly, and the cause was tracked to a trap that was called but undefined. Of course, now you have the question of why a trap is called in the first place, which could be any number of things, including silicon error. But defining all trap handlers should probably be a good early step in diagnosing unexplained resets. Function pointer failure. If a function pointer does not point to a valid location, dereferencing the pointer and calling the function pointed to can cause a reset. One amusing cause of this was when I was initializing a structure, with successive values of NULL (for a function pointer) and -1 (for an int). 
The comma got typoed, so the function pointer actually got initialized to NULL-1. So don't assume that just because it's a CONST it must contain a valid value! Invalid/negative array index. Make sure you perform bounds checking on all array indices, both upper and lower bounds, if applicable. Creating a data array in program memory that's larger than the largest section of program memory. This may not even throw a compilation error. Casting the address of a struct to a pointer to another type, dereferencing that pointer, and using the dereferenced pointer as the LVALUE in a statement can cause a crash. See this question . Presumably, this also applies to other undefined behaviors. On some dsPICs, the RCON register stores bits indicating cause of reset. This can be very helpful when debugging.
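Several of the software causes above lend themselves to simple defensive patterns. A hedged sketch in C — clear_watchdog() is a hypothetical stand-in for the device-specific action (e.g. the CLRWDT instruction on PIC/dsPIC):

```c
#define N_SAMPLES 16
static int samples[N_SAMPLES];

extern void clear_watchdog(void); /* hypothetical; device-specific */

/* Divide-by-zero guard before any division or modulo. */
int safe_average(long sum, int count)
{
    if (count == 0)
        return 0;
    return (int)(sum / count);
}

/* Bounds-check array indices, both lower AND upper. */
int read_sample(int i)
{
    if (i < 0 || i >= N_SAMPLES)
        return 0;
    return samples[i];
}

/* Feed the watchdog inside long-running loops (e.g. EEPROM writes). */
void slow_task(void)
{
    for (int i = 0; i < 1000; i++) {
        /* ... long-running work ... */
        clear_watchdog();
    }
}
```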
{ "source": [ "https://electronics.stackexchange.com/questions/81252", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/7523/" ] }
81,429
Why do we need a resistor in a Zener diode circuit, like in the diagram below? I understand it is to limit the current but how so, and why do we need it for a Zener diode? Does selecting different values of resistors affect the circuit performance? So when we are selecting a Zener diode we look in the specifications at different reverse currents that can flow through them. But if those can be changed through resistors, can we select any Zener diode at a voltage, without looking at the maximum current?
Rather than going to 'no resistor', consider what happens if we just use resistors of different (lower) values and look at the pattern. As we reduce the resistor value, the current through the Zener will rise. Even if the voltage source is not perfect, the power dissipated by the Zener will eventually cause it to fail due to overheating.
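To make the sizing concrete, a worked example with assumed values (not taken from the diagram): with \$V_{in} = 12V\$, \$V_z = 5.1V\$, a load of up to 10mA, and a minimum Zener current of 5mA to stay in regulation, $$ R = \frac{V_{in} - V_z}{I_{z,min} + I_{load,max}} = \frac{12 - 5.1}{0.015} = 460\,\Omega $$ The worst case for the Zener is the load disconnected, when it carries all 15mA: \$P_z = 5.1 \times 0.015 \approx 77\,\mathrm{mW}\$. This also answers the second part of the question: the resistor and the worst-case current together set how much power the Zener must be rated for, so you cannot pick a Zener on voltage alone.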
{ "source": [ "https://electronics.stackexchange.com/questions/81429", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/27975/" ] }
81,935
I'm trying to use an Arduino to enable/disable a 12V solenoid. I used an H-bridge and got that working fine. Then, I decided to simplify things and get a single mosfet instead of a multi-channel H-bridge and have gotten myself very confused. I'm trying to understand the proper way to use a P-channel (or N-channel) mosfet in this setting, and came across this sample circuit on google: Why is there another transistor involved (the 2N3904), and why is there a diode across the load? I understand that a P-channel is activated when \$V_{gate}\$ is brought high (above \$V_{source}\$ + \$V_{drain}\$ ), hence the pull-up, but why the extra transistor? Shouldn't the MCU (in this case the PIC) be doing the same thing? Also - in the scenario when all I'm doing is turning a load on or off (like my solenoid), is there a reason to use an N-channel vs a P-channel?
Compare the actions of a P and N channel MOSFET in your circuit. (I've left the junction transistor in to aid comparison.) The PIC output does not like being connected to 12V so the transistor acts as a buffer or level switch. Any output from the PIC greater than 0.6V (ish) will turn the transistor ON. P CHANNEL MOSFET . (Load connected between Drain and Ground) When the PIC output is LOW, the transistor is OFF and the gate of the P MOSFET is HIGH (12V). This means the P MOSFET is OFF. When the output of the PIC is HIGH, the transistor is turned ON and pulls the gate of the MOSFET LOW. This turns the MOSFET ON and current will flow through the load. N CHANNEL MOSFET . (Load connected between Drain and +12V) When the PIC output is LOW, the transistor is OFF and the gate of the N MOSFET is HIGH (12V). This means the N MOSFET is ON and current will flow through the load. When the output of the PIC is HIGH, the transistor is turned ON and pulls the gate of the MOSFET LOW. This turns the MOSFET OFF. The 'improved' MOSFET circuit . We could eliminate the transistor by using a digital N MOSFET type - it only needs the 0-5V signal from the PIC output to operate and isolates the PIC output pin from the 12V supply. When the PIC output is HIGH the MOSFET is turned ON, when it is LOW the MOSFET is turned OFF. This is exactly the same as the original P MOSFET circuit. The series resistor has been made smaller to aid the turn ON, turn OFF times by charging or discharging the gate capacitance more quickly. The choice of device is basically down to your design needs although in this case the digital type N MOSFET wins hands down for simplicity.
{ "source": [ "https://electronics.stackexchange.com/questions/81935", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9983/" ] }
82,787
While I (most of the time) know from experience what components are generally considered active or passive, I have yet to come across a satisfying definition. Why do we divide electrical components into those two main categories at all? Some examples: From http://www.electricaltechnology.org/2013/06/the-main-difference-between-active-and.html Active: Those devices or components which produce energy in the form of Voltage or Current are called as Active Components Passive: Those devices or components which store or maintain Energy in the form of Voltage or Current are known as Passive Components How does a Diode "produce" energy? From http://www.differencebetween.com/difference-between-active-and-vs-passive-components/ What is the difference between active and passive components? 1. Active devices inject power to the circuit, whereas passive devices are incapable of supplying any energy 2. Active devices are capable of providing power gain, and passive devices are incapable of providing power gain. 3. Active devices can control the current (energy) flow within the circuit, whereas passive devices cannot control it. What exactly means "can control current flow"? Isn't a (passive) capacitor able to control or at least influence current flow as well? Some people argue, that it depends on the context in which the component is used to be able to consider it active or passive. This doesn't make things easier. Especially for diodes there are so many conflicting/different arguments: "In most cases (rectifier, Zener, etc.) a diode is, no doubt, a PASSIVE device. Only in some special cases like with a tunnel diode, when its negative resistance region is used, it can be considered as an ACTIVE device." "it is an active device.since its impedence is positive,or v-i chara lies in 1&2 quadrants." "Yes it is an active device since it requires an external power source, to operate it in forward or reverse bias." "Diode is an active device, since it can be used as an waveform generator (half wave rectifier, for ex)." "If the i-v characterisitics of the diode are in region I and III, then it is a passive device (always dissipating power). I think most diodes fall into this category." I am pretty sure that there is no "one rule" and you always have to ask several questions about a component that must be satisfied to classify it. But what are those criteria exactly?
There is a clear definition: Passive elements have no function of gain, or control over voltage or current: their controlling function is linear -> V/I = R in the case of a resistor. There are exactly four kinds of passive elements: Resistors, Capacitors, Inductors and Memristors. All other components are active. Source http://de.wikipedia.org/wiki/Elektrisches_Bauelement Active elements have a function of gain or control, meaning the relationship between the controlling parameters is nonlinear. Diodes control current, transistors amplify current, etc. The reason for the distinction is mathematical: You can use certain mathematical approaches to solve the equations of a device that contains only passive elements, while the same approaches would not work with active elements. If you have active elements, you may have to first approximate a passive network at the working conditions before you calculate. This does not mean you cannot build advanced devices out of passive elements. Analog filters are often made from passive elements and can be quite complicated.
{ "source": [ "https://electronics.stackexchange.com/questions/82787", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/16051/" ] }
83,426
The STM32F1's reference manual describes "regular" and "injected" ADC channels but is not clear on the difference. What is the difference between the two types and when might you use one or the other?
You can configure the ADC to read a sequence of channels in a loop. Those channels are converted regularly - hence the name. In injected mode, a conversion is triggered by an external event or by software. An injected conversion has higher priority in comparison to a "regular" conversion and thus interrupts the regular conversions. The different ADC modes are explained in application note AN3116.
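As a rough illustration (register names from the STM32F1 reference manual RM0008, CMSIS style; clocking, calibration and sample-time setup are omitted, so treat this as a sketch rather than drop-in code) — one software-triggered injected conversion of channel 5:

```c
#include <stdint.h>
#include "stm32f1xx.h"  /* CMSIS device header providing ADC1 and masks */

uint16_t read_injected_ch5(void)
{
    /* On the F1, software start of the injected group requires the
       JSWSTART "external event" selection plus JEXTTRIG enabled. */
    ADC1->CR2 |= ADC_CR2_JEXTSEL | ADC_CR2_JEXTTRIG;

    ADC1->JSQR = (0u << 20)    /* JL = 0: one injected conversion      */
               | (5u << 15);   /* short sequences fill from slot JSQ4  */

    ADC1->CR2 |= ADC_CR2_JSWSTART;        /* trigger the injected group */
    while (!(ADC1->SR & ADC_SR_JEOC)) { } /* wait for injected EOC      */
    ADC1->SR &= ~ADC_SR_JEOC;             /* clear the flag             */
    return (uint16_t)ADC1->JDR1;          /* injected data register 1   */
}
```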
{ "source": [ "https://electronics.stackexchange.com/questions/83426", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1363/" ] }
84,287
While learning Fourier transforms, I have this little doubt about wavelength. Why is it that the wavelength for a sine wave, or any other wave, is measured with a unit of distance? If it travels w.r.t. time, why is it not measured as a unit of time? I think, in the below drawing, the green line is the wavelength. So, why measure it with a unit of distance?
The plot you provide gives the amplitude versus time, so there is no length (other than a "length" of time, i.e., the period) to speak of. Without further context, what you have plotted there is a simple sinusoidal function of time, not a wave. A wave is a function of both time and space. For example: $$f(x,t) = \cos(\frac{2\pi}{\lambda}x - \frac{2\pi}{T} t) $$ where the wavelength \$\lambda \$ (measured in units of length) and period T (measured in units of time) are explicit. Often, this is written as: $$f(x,t) = \cos(kx - \omega t) $$ where k , the wavenumber, is: $$ k = \frac{2\pi}{\lambda} $$ and \$\omega\$, the angular frequency, is: $$\omega = \frac{2\pi}{T}$$ The wave propagates with phase velocity $$v_p = \dfrac{\lambda}{T} = \dfrac{\omega}{k} $$ Then we can write: $$f(x,t) = \cos\frac{2\pi}{T}(\frac{x}{v_p} - t) = \cos\omega(\frac{x}{v_p} - t)$$ or $$f(x,t) = \cos\frac{2\pi}{\lambda}(x - tv_p) = \cos k(x - tv_p)$$
{ "source": [ "https://electronics.stackexchange.com/questions/84287", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/29001/" ] }
85,026
The cut-off frequency of a RC filter is obtained from the well known equation, $$1\over(2 \pi RC)$$ This is one equation with two variables. For example, R=100, C=10 has the same result as R=10, C=100. Based on what should I prefer one over the other?
It's a compromise. With R at 1000 ohm and C at 100nF (cut-off frequency = 1.59kHz), the driving voltage at the input may be required to drive signals with frequencies well above 1.59kHz into what is getting close to a 1000 ohm load. Consider what the impedances are at 1.59kHz - R of course is 1000 ohm and C's impedance also has a magnitude of 1000 ohms whereas, at 10 kHz, C's impedance has a magnitude of only about 159 ohms. In other words, at 10kHz, the signal feeding into the RC low pass filter "sees" an impedance of about 1000 ohms. This is due to the following formula: - Z = \$\sqrt{R^2 + X_C^2}\$ = \$\sqrt{1,000,000 + 25,330} = 1013\space \Omega\$ If the signal feeding the RC network has an output resistance of 100 ohms then this adds an error to the "R" part of the equation and distorts the "true" spectral shape of the filter. On the other hand.... The benefit of having a low R and a high C is that the output impedance is affected less by the circuit its output connects to. In the example above, even at DC the output impedance of the network is 1000 ohms. If R was (say) 10k ohms and C was 10nF, the output impedance at DC is 10k ohms and may be affected by some loads. So, you have to consider what your driving impedance is and what your RC network may have to "drive" into. There are many examples where the output will connect to an op-amp which will usually have a DC input resistance in the Gohm range but, it may have an input capacitance of 10pF. This input capacitance adds to the output capacitance by a small amount and, in the example above, would make the 100nF capacitor into 100.01nF - hardly a big deal of course but if you are designing a filter that has a cut-off at 50kHz, it's starting to become a potential source of error. Cascading RC low pass filters (or any filter types) is also a serious issue. Say you want to passively connect two RC low pass filters - if you picked both resistors to be 1000 ohm and both capacitors to be 100nF, you are not going to get the same filter response as you would if they were connected via a high impedance buffer amp. A partial solution is to make the first network low impedance and the 2nd network high impedance. To give you an idea, make the first RC network from 1,000 ohms and 100nF and the connecting network from 10,000 ohms and 10nF - there will still be a little bit of interaction but it is far less than when both are the same impedance.
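Once R is chosen from these trade-offs, C follows from the cut-off formula in the question — a small helper sketch:

```c
#include <math.h>

/* C = 1 / (2*pi*R*fc): pick R from the impedance trade-off above,
   then solve the cut-off equation for C. */
double required_capacitance(double r_ohms, double fc_hz)
{
    return 1.0 / (2.0 * M_PI * r_ohms * fc_hz);
}
/* e.g. required_capacitance(1000.0, 1592.0) ~= 100e-9, i.e. 100 nF */
```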
{ "source": [ "https://electronics.stackexchange.com/questions/85026", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/28144/" ] }
85,291
I recently bought a Kill-a-Watt meter to measure power consumption of my devices. This was just a fun project for me. I put this on my Mac-Mini to see power consumption during various usage states - idle, MATLAB benchmark, starting an application, screen-saver etc. and discovered that the power factor of my Mac-Mini was 0.2 when on standby and .80+ when up and running. However, while shutting down and before login the power factor was .45. I also measured this for my: Raspberry pi with a 5v 3A power supply (PF=0.68) Dell Laptop 90W charger (PF=.98) 2006 Compaq Charger (PF=.55) CFL Table Lamps (PF=.52) Tube CFL Lamps (PF=.55) Power PC Late 2005 (PF=) A 40W Audio Amplifier (PF=.66) Vacuum cleaner (PF=.97) Samsung phone charger (PF=.58). From what I learned in my classes, 0.9+ is a good power factor and utility companies fine industries if they have power factors less than this. But most household equipment seems to have terrible power factors. Yet I have never heard of utility companies fining residential users. Why? Do they correct the power factors at the transformers at our residence? Or is residential usage too meager to bother about?
The power factor of residential premises is already pretty good. From a 2002 report on power factor correction, we get the following snippet about power factor correction in the tenant spaces (offices, apartments) of a commercial building, vs. the utility parts (elevator lifts, HVAC): If we were to take an example of a typical commercial building, the main switchboard is split into two separate sections; a house services section and a tenant section... [snip] The house section normally houses the circuit breakers for the central air conditioning plant, lifts, house lighting and power. As will be highlighted in Section 7, motors account for a decrease in power quality and thus a reduction in power factor. In this particular instance it would be a valid exercise to consider the benefits of power factor on this section of the installation. In most instances power factor correction is installed providing immediate cost savings to the base building owner. As the tenant power is on a separate bus, they also have the opportunity to consider power factor correction. In most instances the tenant supply usually consists of general lighting and power with some supplementary air conditioning. The power factor for these installations is generally greater than 0.90 and as such there is no significant benefit in installation PFC units. In addition, these tenants are usually metered at a kWh rate that does not consider the power factor of the installation for billing purposes. The bulk of electricity in houses is used to either heat things up (space heaters, ovens, cooktops, water heaters) or cool things down (air conditioners, refrigerators.) These either have intrinsically good power factor (heating elements are resistive, i.e. p.f. 1.00) or they come with power factor correction in-built ( air conditioners. ) The things you measured are mostly electronic devices, so they have poor power factor, but they also don't draw much power compared to the heating/cooling devices listed above. -- Contrast this with industrial sites, where a large part of the load is AC induction motors with rated power factors between 0.80 to 0.90 (and less than that if they're less than fully loaded.) There can be 10 MW worth of induction motors in a decent size plant - I know of ore crushing and grinding mills which are driven by a 10 MW induction motor each . It is much more cost effective to target such induction motor installations before targeting consumers. In response to Lord Loh.'s comments: Consumers (and small businesses) generally have no incentive to improve power factor. In Australia, at least, billing is by kilowatt-hours (real power) and the power factor is not considered in the bill. However, Ergon Energy (the distribution authority in Queensland, Australia) is trying to drive power factor correction in small businesses. They are doing this by offering incentive payments for businesses who want to participate. The reason for pushing PFC to small businesses is not to increase efficiency, in the sense of saving a few dollars on the power bill, but rather to mitigate the exorbitant price of power at peak demand. To wit : The aim of the Queensland Government funded project is to use incentive payments to reduce peak demand by a total 4.7 MVA, with subsequent customer savings and carbon emissions reduction. Because of the way the electricity market works, at peak period the marginal price of electricity (the price for Ergon to buy " one additional kilowatt ") can be in the range of $1,000/kWh. 
So by shaving 4.7 MVA off the peak demand, they are actually saving thousands of dollars ($10,000? $100,000? $1,000,000?) per day . With savings like that, offering businesses an incentive to voluntarily install PFC is a no-brainer. There is also the nice effect of decreasing the MVA loading on infrastructure like transmission lines and transformers, so that Ergon can get the most capacity out of those assets before they need to be upgraded. Simplistically, deferring a $1M project by one year allows you to earn 5% interest on that $1M, so this is another significant saving.
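For reference, the quantity a meter like the Kill-a-Watt reports is simply real power divided by apparent power — a definition sketch (this "true" power factor also captures the harmonic distortion of electronic loads, not just phase shift):

```c
/* True power factor: PF = P / S = real power / (Vrms * Irms). */
double power_factor(double real_power_w, double vrms, double irms)
{
    double apparent_va = vrms * irms;
    return (apparent_va > 0.0) ? (real_power_w / apparent_va) : 0.0;
}
```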
{ "source": [ "https://electronics.stackexchange.com/questions/85291", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4561/" ] }
85,292
I am making some new through-hole parts in Eagle for components that are not already in the various libraries. I've realized that the drill size needs to be a little larger than the lead diameter, but I'm not sure by how much. With some research , I found the following information: "It depends" based on whether the component is being hand or machine soldered add 6 mil to the lead diameter 7 to 15 mil (diametric gap) for 63/37 solder 5 to 10 mil (diametric gap) for lead-free/RoHS solder Is there a rule of thumb or guide to support this information? Someone referred to the Industry Standard for Printed Board Design (IPC-2221), but the IPC apparently only provides the table of contents of the document unless you pay $100US . I'm planning on soldering the components by hand using 63/37 solder.
You need the pin or wire to be able to fit through the hole, but otherwise tighter is better. First, you look at the specs from your board house. They will give you the tolerance of final finished hole diameters from what you specify. In some cases, they will round to the nearest drill size, with a resulting diameter range for each such drill. In other words, it is best to stick to a set of discrete hole sizes. Check with your board house, but .020, .025, .029, .035, .040, .046, .052, .061, .067, .079, .093, .110, .125 inches is otherwise a good list to stick to. If your board house guarantees finished hole diameter is ±3 mil, for example, from one of these standard drill sizes, then the first would be .017-.023, the second .022-.028, etc. Note that these ranges overlap a little for common tolerance values. Now look at the datasheet for your part and see what the maximum lead diameter can be. If it's a round lead, it will tell you this directly. If it is a rectangular lead, you have to do the math to find the maximum possible diagonal. Either way, you end up with the minimum diameter hole the lead will fit into. Now look through your list of hole sizes and compare the minimum guaranteed size for each of them to the maximum diameter of the lead. Specify the smallest drill size where the minimum diameter hole is larger than the maximum diameter lead. If both come out to the same value, use the next higher drill size.
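The selection rule in the last paragraph is mechanical enough to write down — a sketch using the standard drill list and the ±3 mil example tolerance from above:

```c
#include <stddef.h>

static const double drills_in[] = { .020, .025, .029, .035, .040, .046,
                                    .052, .061, .067, .079, .093, .110, .125 };

/* Smallest standard drill whose guaranteed-minimum finished hole
   still clears the fattest possible lead. Returns 0 if none fit. */
double pick_drill(double max_lead_dia_in, double tol_in)
{
    for (size_t i = 0; i < sizeof drills_in / sizeof drills_in[0]; i++)
        if (drills_in[i] - tol_in > max_lead_dia_in)
            return drills_in[i];
    return 0.0;
}
/* e.g. pick_drill(0.025, 0.003) returns 0.029,
   since 0.029 - 0.003 = 0.026 > 0.025 while 0.025 - 0.003 is not. */
```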
{ "source": [ "https://electronics.stackexchange.com/questions/85292", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2028/" ] }
85,556
I'm going through the process of starting a bunch of new design work in Altium at the moment at my company, and am trying to decide whether it's worth going through the effort of setting up the Altium Vault infrastructure, vs. just using traditional version control. Here are the pros and cons as I see them: Pros: Very good design release and versioning process. Easy way to create validated, trusted, versioned components. Allows for the concept of "items" and design reuse of validated blocks larger than the component level. Makes it easy to share design data with vendors and verify that they're looking at the correct version of design data. Cons: Makes it very difficult for the individual designer to quickly make changes to schematic symbols and footprints for components, without going through the vault release process. The process to release components and "items" into the vaults is very complex and time-consuming. In my brief research, it seems like it would take a full-time Altium librarian to keep the wheels greased on a vault and keep the release process of components and designs flowing smoothly. Do any of you out there have any thoughts and experience on the issue? Are there any other small companies or design teams (5-10 engineers working with Altium across design, manufacturing, procurement, etc.) who have found Vaults worth implementing?
I had to decide something similar years ago. At that time, Altium were selling their Vault solution which can have several different configurations: The Vault is in the Cloud: No internet access = no file access. If you don't continue with Altium and stop paying the yearly subscription you don't have access to your data anymore. NOT ACCEPTABLE The Vault is on a server in your company. The content is encrypted. In order to access the data, you have to identify yourself to the Altium web services which unlock the access to your own server. Drawback: If you don't continue with Altium and stop paying the yearly subscription you don't have access to your own server data anymore. No internet access = no data access! NOT ACCEPTABLE The Vault and the identification server are on a server in your company: you are 100% independent. That's good. This option was advertised, but for years Altium said that it was not available yet, or under test, but "soon" you would be able to have it. Our conclusion was: If you want to be free, to be the only master of your data access, either you use the Vault on a custom server and you control everything, or you stick with SVN and forget all their stuff such as "unified design", "release management", etc... If things have changed in between, feel free to update my answer with the latest conditions from Altium. EDIT: Things have changed in between! None of the new options require internet access to use your Vault. The data is never encrypted, thus it is always possible to recover the data. The license for the Vault is now perpetual and it keeps working even if you stop the "subscription program". The authentication is not based on an Altium web server anymore. The data is always stored inside your company and under your full control. The release process is not fixed anymore and supports a lot of customization. SVN or any version control system is used for the version control of your day to day work. The vault is only there to store your components and the released work. Now there are two options: The personal Vault solution: only one user, data is stored locally (in a local Vault), reduced functionality. But no additional licences are required. The vault server: several users, data is stored in the Vault server, all the functionalities. In a word: It seems that they have heard the response of the market and fixed the major issues.
{ "source": [ "https://electronics.stackexchange.com/questions/85556", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/26610/" ] }
85,633
On many boards I've seen, there are little copper dots used for the purpose of "Copper Thieving". They're small round copper dots connected to nothing and arranged in an array. Supposedly they're for balancing the copper on the boards to improve manufacturability, but no explanation I've heard has convinced me that they're needed or useful. What are they for and do they actually work? Below is an example with squares.
Copper dots (or grid/solid fill) are used mainly to balance the thermal properties of the board, to minimize twist and warp as the board goes through the thermal cycling associated with reflow and improving yield. A secondary purpose for them is to reduce the amount of copper that needs to be etched away from the board, balancing the etching rates across the board and helping to make the etching solution last longer. If the PCB designer did not explicitly "pour" copper fill into the open areas of the board's outer layers, the fabrication house will often add the small disconnected dots, because these will have the least effect on the electrical properties of the board.
{ "source": [ "https://electronics.stackexchange.com/questions/85633", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/16539/" ] }
86,489
I have become a bit confused about these topics. They've all started looking the same to me. They seem to have the same properties, such as linearity, shifting and scaling, associated with them. I can't seem to keep them separate and identify the purpose of each transform. Also, which one of these is used for frequency analysis? I couldn't find (with Google) a complete answer that addresses this specific issue. I wish to see them compared on the same page so that I can have some clarity.
The Laplace and Fourier transforms are continuous (integral) transforms of continuous functions. The Laplace transform maps a function \$f(t)\$ to a function \$F(s)\$ of the complex variable s , where \$s = \sigma + j\omega\$. Since the derivative \$\dot f(t) = \frac{df(t)}{dt} \$ maps to \$sF(s)\$, the Laplace transform of a linear differential equation is an algebraic equation. Thus, the Laplace transform is useful for, among other things, solving linear differential equations. If we set the real part of the complex variable s to zero, \$ \sigma = 0\$, the result is the Fourier transform \$F(j\omega)\$ which is essentially the frequency domain representation of \$f(t)\$ (note that this is true only if for that value of \$ \sigma\$ the formula to obtain the Laplace transform of \$f(t)\$ exists, i.e., it does not go to infinity). The Z transform is essentially a discrete version of the Laplace transform and, thus, can be useful in solving difference equations, the discrete version of differential equations. The Z transform maps a sequence \$f[n]\$ to a continuous function \$F(z)\$ of the complex variable \$z = re^{j\Omega}\$. If we set the magnitude of z to unity, \$r = 1\$, the result is the Discrete Time Fourier Transform (DTFT) \$ F(j\Omega)\$ which is essentially the frequency domain representation of \$f[n]\$.
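A compact way to tie the three together: sampling \$f(t)\$ every \$T\$ seconds and taking the Z transform corresponds to the Laplace transform under the substitution $$ z = e^{sT} = e^{\sigma T}e^{j\omega T} $$ so the imaginary axis of the s-plane (\$\sigma = 0\$, the Fourier transform) maps onto the unit circle of the z-plane (\$r = 1\$, the DTFT), with \$\Omega = \omega T\$. Frequency analysis therefore uses the Fourier transform (continuous time) or the DTFT/DFT (discrete time); Laplace and Z are the more general tools for system analysis and for solving differential and difference equations.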
{ "source": [ "https://electronics.stackexchange.com/questions/86489", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/29771/" ] }
86,717
LEDs are known to have a very low, unnoticeable power-cycling latency, but how fast are they when measured? (nanoseconds?) In other words, how long does it take for an LED which is entirely off to get to its optimum brightness, and how long does it take to go from full brightness to off? I assume that the current applied makes a difference? I ask this since modern LED-backlit monitors use PWM to achieve different brightness levels, and even in backlights which flicker at thousands of Hertz , LEDs seem to respond almost instantly (unlike CFLs, which are rather slow in power cycling).
To address the question, first a distinction needs to be made between phosphor LEDs (#1) (e.g. white LEDs, possibly some green LEDs) and direct emission LEDs (e.g. most visible color LEDs, IR and UV LEDs). Direct emission LEDs typically have a turn-on time in single-digit nanoseconds, longer for bigger LEDs. Turn-off times for these are in the tens of nanoseconds, a bit slower than turn-on. IR LEDs typically show the fastest transition times, for reasons given ahead. Special purpose LEDs are available, whose junction and bond-wire geometries are designed specifically to permit 800 picosecond to 2 nanosecond pulses. For even shorter pulses, special purpose laser diodes, in many ways operationally similar to LEDs, work all the way down to 50 picosecond pulses. As pointed out by @ConnorWolf in comments, there also exists a family of LED products with specialized optical beam shaping that boast pulse widths of 500 to 1000 picoseconds. Phosphor type LEDs have turn-on and turn-off times in the tens to hundreds of nanoseconds, appreciably slower than direct emission LEDs. The dominant factors for rapid LED switching are not just the LED's inherent emission transition times: Inductance of the traces causes longer rise and fall times. Longer traces = slower transitions. Junction capacitance of the LED itself is a factor (#2). For instance, these 5mm through-hole LEDs have a junction capacitance of 50 pF nominal. Smaller junctions, e.g. 0602 SMD LEDs, have correspondingly lower junction capacitance, and are in any case more likely to be used for screen backlights. Parasitic capacitance (traces and support circuitry) plays an important role in increasing the RC time constant and thus slowing transitions. Typical LED driving topologies, e.g. low-side MOSFET switching, do not actively pull the voltage across the LED down when turning off, hence turn-off times are typically slower than turn-on. As a result of the inductive and capacitive factors above, the higher the forward voltage of the LED, the longer the rise and fall times, due to the power source having to drive current harder to overcome these factors. Thus IR LEDs, with typically the lowest forward voltages, transition fastest. Thus, in practice the limiting time constants for an implemented design can be in the hundreds of nanoseconds. This is largely due to external factors, i.e. the driving circuit. Contrast this with the LED junction's much shorter transition times. To get an indication of the dominance of the driving circuit design as opposed to the LEDs themselves, see this recent US government RFI (April 2013), seeking circuit designs that can guarantee LED switching time in the 20 nanosecond range. Notes: #1: A phosphor type LED has an underlying light emitting junction, typically in the far blue or ultraviolet range, which then excites a phosphor coating. The result is a combination of multiple emitted wavelengths, hence a broader spectrum of wavelengths than a direct emission LED, this being perceived as approximately white (for white LEDs). This secondary phosphor emission switches on or off far slower than the junction transition. Also, at turn-off, most phosphors have a long tail that skews the turn-off time further. #2: The junction geometry affects junction capacitance significantly. Hence, similar steps are taken for manufacturing LEDs specifically designed for high speed signaling in the MHz range, as are used for high frequency switching diode design.
The capacitance is affected by depletion layer thickness as well as junction area. Material choices (GaAsP v/s GaP etc) also affect carrier mobility at the junction, thus changing "switching time".
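To put rough numbers on the RC limiting effect described above (the 50 pF figure is taken from the 5mm LED mentioned in the answer; the series resistance is my own assumed value): with a junction capacitance \$C_j \approx 50\,\text{pF}\$ and a typical series resistor of \$330\,\Omega\$, the time constant is \$\tau = RC_j \approx 330\,\Omega \times 50\,\text{pF} \approx 16.5\,\text{ns}\$, giving a 10%-90% rise time of roughly \$t_r \approx 2.2\tau \approx 36\,\text{ns}\$. In other words, the external drive circuit, not the junction's inherent single-digit-nanosecond emission time, can easily dominate the observed switching speed.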
{ "source": [ "https://electronics.stackexchange.com/questions/86717", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/30973/" ] }
86,922
I am trying to find a good rough guesstimate of how much of the maximum wattage usage of a graphics card is turned into hot air. Say a graphics card is rated at using X watts at maximum usage. How much of that is released as heat into the air via the cooling setup, and how much is not (If not where does it go? To ground?) Yes I know there are a lot of variables here, I am just looking for a guesstimate from someone who would understand better than me. I'm specifically using this GPU as the example (Radeon 6990 375 watts max)
My numbers might not be exactly accurate, but I would say that about 99.99% of the energy that enters the GPU, and even the CPU, is converted into heat. The other 0.01% is the actual signal out of the GPU to your display. The job of the GPU is to take in a lot of data and process it, requiring a lot of calculations. These calculations consume energy, producing heat and, eventually, a result. Now it is important to note that while this says it is a 375W card, it will not be drawing 375W the entire time it is in operation. Just like your CPU, your GPU will only do as much as you need it to, and may step down to well below 100W. Simply browsing around your Windows desktop, the card is doing next to nothing, and will draw next to nothing; but launch Crysis, and the clock frequencies will max out and the card will start drawing close to its maximum rating.
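Conservation of energy makes this estimate straightforward (the signal-power figure below is a rough assumption of mine): \$P_{heat} = P_{in} - P_{signal}\$. With \$P_{in} = 375\,\text{W}\$ and a display link carrying on the order of tens of milliwatts, \$P_{heat} \approx 375\,\text{W} - 0.04\,\text{W} \approx 375\,\text{W}\$, i.e. essentially the entire electrical input ends up as heat to be removed by the cooler.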
{ "source": [ "https://electronics.stackexchange.com/questions/86922", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/24778/" ] }
87,103
My question is: in the end, to change the speed, we are controlling the input voltage of a DC motor in both the PWM (pulse-width modulation) and variable-resistance cases. Is the only reason for choosing PWM to obtain better precision, or to avoid consuming extra power? If that is the only reason, it seems odd to use PWM equipment for simple demonstrations.
Power efficiency The inductance of the motor will cause the current to average out. At the same time, the transistors in PWM mode have very low impedance and therefore a low voltage drop and low power dissipation. In the case of a series resistor, a lot of power is dissipated in the series resistor. Speed control behavior With PWM the motor will 'see' a very low power supply impedance, even though the power supply is constantly switching between high and low voltages. The result is that the motor has a much higher torque. With a series resistance the motor will experience a very weak power supply and it will be easy to stall the rotor. Control circuit For control electronics (e.g. a microcontroller) it is very easy to switch transistors on/off. Outputting an analog voltage or controlling a series resistor requires much more expensive circuitry and in turn will cause more power dissipation.
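To see the efficiency difference numerically (all values here are assumed for illustration): run a 12V motor at half speed, drawing 1A. With series-resistor control, the resistor drops 6V at 1A and dissipates \$P = 6\,\text{V} \times 1\,\text{A} = 6\,\text{W}\$. With PWM at 50% duty through a MOSFET of \$R_{DS(on)} = 50\,\text{m}\Omega\$, the conduction loss is only \$P \approx 0.5 \times (1\,\text{A})^2 \times 0.05\,\Omega = 25\,\text{mW}\$ (ignoring switching losses) - a difference of more than two orders of magnitude.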
{ "source": [ "https://electronics.stackexchange.com/questions/87103", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/16307/" ] }
87,156
Every time I want to program my ATtiny85 using AVR Studio 6 it has the MyProject.elf file pre-selected in the "Flash" drop-down box in the Device Programming dialog. Obviously I want it to program the MyProject.hex file, but I manually have to select this every time using the drop-down list. Does anyone know a trick to get my selection to stick?
{ "source": [ "https://electronics.stackexchange.com/questions/87156", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/29920/" ] }
87,937
Apple power supply consists of a very thick, inflexible AC cable (wall outlet to converter) and a very thin, flexible DC cable (converter to computer): Why? The currents through the cables should be comparable, right? EDIT: the label on the converter says: input: 110-240V ~ 1.5A 50-60Hz output: 16.5V = 3.65A max EDIT2: cf. ThinkPad power adaptor (typical cables, similar to hp/dell &c), which has a thicker (than Apple) DC part and thinner (than Apple) AC part and is rated input: 100-240V ~1.5A 50/60Hz output: 20V =3.25A The characteristics seem to be similar - why are the cables so dissimilar in the ratio DC cable thickness / AC cable thickness? EDIT3: cf. AC Adapter For System76 Pangolin (which has 3 wires - including earth - in the AC part) It is rated similarly to the above and has a thicker DC part and thinner AC part than the Apple cable. EDIT4: Looks like Lenovo/ThinkPad cables are under-engineered, which explains the cable thickness discrepancy observed!
The size of the cables isn't due to the size of the copper conductor inside them - that's a fairly small part of the cable. Most of the bulk comes from the electrical insulation. Electrical cable needs to be insulated so it doesn't short circuit. The higher the voltage, the thicker the insulation required. Your thick mains power cord is insulated to withstand mains voltage. In your country, that's 110 VAC; in my country it's 230 VAC. On top of that, the insulation must withstand transient voltage spikes ("surges") - AS1660.3 specifies that a multi-core flexible cable must withstand a 3,000V AC hi-pot test for five minutes, so the insulation must be thick enough to withstand 3,000V RMS, or 4,200V peak. The thin DC cable, on the other hand, only has to withstand 12 VDC. There is no real chance of voltage spikes on this line because the design of the power supply won't allow them. There is minimal electrocution risk from 12 VDC. Therefore this cable doesn't need much insulation and it can be quite thin. To emphasise the relationship between voltage and insulation thickness, you can get cables like this: The copper conductor is quite small relative to the overall diameter of the cable. Note the thickness of the insulation (the white material). This short off-cut of cable had no markings, but it is rated for at least 132,000 VAC and the insulation is thick to match.
{ "source": [ "https://electronics.stackexchange.com/questions/87937", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/27268/" ] }
90,971
I have a very simple design that I'm now working on the PCB layout for. And at the moment I'm thinking about the issue of decoupling capacitors. The board is very simple and only consists of this: 1 x ATtiny85 3 x resistors 1 x 32.768 kHz crystal 2 x 22 pF caps for crystal 3 x LED The board is powered by 2xAAA batteries. The MCU is clocked at 32.768 kHz by the crystal. So as you can imagine it's just a real-time clock with some additional logging functions. Now, the question is this: Do I need decoupling capacitors for this circuit? If so, do I place them: 1. Between the Vcc and GND pins of the ATtiny, close to the ATtiny 2. Between the Vcc and GND traces, close to the battery 3. BOTH of those, i.e. use two capacitors, one close to the MCU and one close to the battery ...or can I simply ignore decoupling caps for a circuit as simple as this? And do you have any advice on which capacitance to use for the decoupling caps? Also, if I do need decoupling caps, it would be great if someone could explain the advantage of them. I.e. do they help improve the stability of the real-time clock? Do batteries normally have voltage dips in certain circumstances?
Yes, you need decoupling caps. Between the Vcc and GND pins of the ATtiny, close to the ATtiny: make it approximately 100nF. And it doesn't hurt to have one close to any other "high current" switching components, like near the LED. Don't decouple the LED itself; decouple the LED together with its series transistor. Say another 100 - 220nF. simulate this circuit – Schematic created using CircuitLab For example, when PWM'ing a load (LED), you introduce fast variations in the current drawn. All wiring (from battery to the load) has a resistance and inductance that will be significant for high frequency switching like PWM. If you don't decouple this current near the load, then the voltage on the power supply rail may vary so much that it affects the microcontroller (it may malfunction/reset) or your circuit may start to interfere with other circuits through radio waves. Between the Vcc and GND traces, close to the battery: not so much near the battery, but it is good practice to decouple the battery with an electrolytic capacitor, say 1000uF per ampere of average current drawn. So with a microcontroller and an LED, say 50mA, you place a 47uF electrolytic cap near the circuit. A battery increases its internal impedance as it ages, and you want to counter that. Notice that the 100nF capacitor near the microcontroller cannot be replaced by the larger electrolytic cap mentioned under #2. The reason for this is that the smaller cap is much better at fast transients, such as occur in a microprocessor. In general, keep the traces/wires between capacitor and load as short as possible. When it comes to decoupling, remember that the power supply of your circuit is the most important thing and it is shared amongst all subcircuits. Decoupling capacitors are very cheap; it is just not worth the troubleshooting effort (for usually intermittent problems) to leave them out.
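A quick way to size a local decoupling capacitor is to bound the voltage droop during a load step (the numbers below are my own illustrative assumptions): if a 20mA LED turns on and must be supplied locally for about \$\Delta t = 1\,\mu\text{s}\$ before the battery and wiring catch up, and you allow a droop of \$\Delta V = 0.2\,\text{V}\$, then \$C \ge \frac{\Delta I \cdot \Delta t}{\Delta V} = \frac{20\,\text{mA} \times 1\,\mu\text{s}}{0.2\,\text{V}} = 100\,\text{nF}\$ - which is exactly the order of magnitude recommended above.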
{ "source": [ "https://electronics.stackexchange.com/questions/90971", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/29920/" ] }
91,614
My syllabus shows that I have to study about transistor amplifiers - including small signal analysis. What exactly does small signal analysis mean? I googled it but I couldn't find an exact answer.
Strictly speaking, transistors are very non-linear devices. Bipolar transistors don't amplify at all until the base/emitter voltage rises high enough for the base/emitter junction to be forward biased, and you don't get significant current through a MOSFET until the gate/source voltage approaches the threshold voltage. However, we can use clever circuits to bias the transistors, which means that fairly large d.c. voltages and currents are applied. The bias conditions hold the transistor at an operating bias point such that the behavior of the transistor is fairly linear over a small range of voltages or currents surrounding the bias point. Large-signal analysis pertains to setting up the bias conditions and deals with the non-linear behavior of the transistor. Small-signal analysis assumes that the transistor is correctly biased and concentrates on the linear behavior for small signals, ignoring the messy non-linear stuff.
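A worked example of the linearization (the bias values are assumed, not from the answer): a bipolar transistor biased at a quiescent collector current \$I_C = 1\,\text{mA}\$ has a small-signal transconductance \$g_m = \frac{I_C}{V_T} \approx \frac{1\,\text{mA}}{25\,\text{mV}} = 40\,\text{mS}\$ at room temperature. The exponential large-signal law \$I_C = I_S e^{V_{BE}/V_T}\$ is replaced by its tangent at the bias point, \$i_c \approx g_m v_{be}\$, which is a good approximation as long as \$|v_{be}| \ll V_T\$.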
{ "source": [ "https://electronics.stackexchange.com/questions/91614", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/33005/" ] }
91,624
I'm trying to make a simple state diagram to understand a concept in class. There is one input and one output \$ \left(X \ \text{and}\ Y\ \text{lets say} \right)\$. The output is \$1\$ if an input is false after exactly two true inputs. For example, \$Y=1\$ if the last three inputs were \$110\$. In all other cases, the output should be \$0\$. I'm having trouble deriving what the states themselves should be (as in, the bubbles in the diagram). Once I figure that out, I can easily apply the input/output conditions. I tried setting the states to represent the current bit (\$1\$ or \$0\$, so two states), but that didn't work.
{ "source": [ "https://electronics.stackexchange.com/questions/91624", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3619/" ] }
91,688
I was reading this page http://www.asic-world.com/verilog/verilog_one_day3.html when I came across the following: We normally have to reset flip-flops, thus every time the clock makes the transition from 0 to 1 (posedge), we check if reset is asserted (synchronous reset), then we go on with normal logic. If we look closely we see that in the case of combinational logic we had "=" for assignment, and for the sequential block we had the "<=" operator. Well, "=" is blocking assignment and "<=" is nonblocking assignment. "=" executes code sequentially inside a begin / end, whereas nonblocking "<=" executes in parallel. I was fairly sure that nonblocking assignments were sequential while blocking assignments were parallel. After all, you can make blocking assignments with assign statements outside of always blocks, and those all run in parallel. Is this a mistake, or is the behavior different inside an always block? And, if the behavior IS different inside an always block, can nonblocking assignments be made outside an always block?
I was fairly sure that nonblocking assignments were sequential while blocking assignments were parallel. Blocking assignment executes "in series" because a blocking assignment blocks execution of the next statement until it completes. Therefore the results of the next statement may depend on the first one being completed. Non-blocking assignment executes in parallel because it describes assignments that all occur at the same time. The result of a statement on the 2nd line will not depend on the results of the statement on the 1st line. Instead, the 2nd line will execute as if the 1st line had not happened yet.
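A minimal sketch (the module and signal names are my own, not from the linked tutorial) showing the practical consequence of the difference:

module swap_demo (input wire clk, input wire rst);
    reg [7:0] a, b;

    // Nonblocking: both right-hand sides are sampled before either
    // register updates, so a and b genuinely swap on every clock edge.
    always @(posedge clk) begin
        if (rst) begin
            a <= 8'd1;
            b <= 8'd2;
        end else begin
            a <= b;
            b <= a;   // still sees the OLD value of a
        end
    end
endmodule

If the two assignments in the else branch are changed to blocking form (a = b; b = a;), the first statement completes before the second begins, so both registers end up holding the old value of b and the swap is lost.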
{ "source": [ "https://electronics.stackexchange.com/questions/91688", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8961/" ] }
93,061
Reference Second post on EdaBoard.com Time response of a system is the time evolution of the variables. In circuits, this would be the waveforms of voltage and current versus time. Natural response is the system's response to initial conditions with all external forces set to zero. In circuits, this would be the response of the circuit with initial conditions (initial currents on inductors and initial voltage on capacitors, for example) with all the independent voltage sources set to zero volts (short circuit) and current sources set to zero amps (open circuit). The natural response of the circuit will be dictated by the time constants of the circuit and, in general, the roots of the characteristic equation (poles). Forced response is the system's response to an external stimulus with zero initial conditions. In circuits, this would just be the response of the circuit to an external voltage or current source forcing function... continue reading Questions How can there even be a natural response? Something has to be inputted to create an output? The way I see it is like turning off the main water line and then turning on your faucet and expecting water to come out. How can v(t) (from the link above) be solved for if we don't know dv/dt in order to find the natural response? If you can please expand on the 2 concepts (natural response and forced response) by explaining their differences in layman's terms, it would be lovely. @Felipe_Ribas Can you please confirm this and answer some of the questions? (you can just edit this directly if you want) Given an equation 10dy/dt + 24y = 48, meaning rate of change of output + 24 * output = 48. The initial conditions are y(0)=5 and dy/dt=0. That would mean that the input is 48/(24*5). Is that a correct assumption? The solution to that is 0.4, which is the constant input?
Think about a simple mechanical system like an elastic bar, or a block attached to a spring against gravity, in the real world. Whenever you give the system a pulse (to the block or to the bar), it will begin an oscillation and soon it will stop moving. There are ways you can analyze a system like this. The two most common are: Complete solution = homogeneous solution + particular solution Complete response = Natural response (zero input) + forced response (zero state) As the system is the same, both should result in the same final equation representing the same behavior. But you can separate them to better understand what each part means physically (especially the second method). In the first method, you think more from the point of view of an LTI system or a mathematical equation (differential equation) where you can find its homogeneous solution and then its particular solution. The homogeneous solution can be viewed as a transient response of your system to that input (plus its initial conditions) and the particular solution can be viewed as the permanent state of your system after/with that input. The second method is more intuitive: natural response means the system's response to its initial conditions. And forced response is the system's response to the given input but with no initial conditions. Thinking in terms of the bar or block example I gave, you can imagine that at some point you pushed the bar with your hands and you are holding it there. This can be your initial state. If you just let it go, it will oscillate and then stop. This is the natural response of your system to that condition. Also, you can let it go but still keep giving some extra energy to the system by hitting it repeatedly. The system will have its natural response as before but will also show some extra behavior due to your extra hits. When you find your system's complete response by the second method, you can see clearly what is the system's natural behavior due to those initial conditions and what is the system's response if it had only the input (with no initial conditions). Both together will represent all of the system's behavior. And note that the Zero State response (forced response) may also consist of a "natural" portion and a "particular" portion. That is because even with no initial conditions, if you give an input to the system, it will have a transient response + permanent state response. Example: imagine that your equation represents the following circuit: Your output y(t) is the circuit current. And imagine your source is a DC source of +48v. This way, summing the element voltages around this closed path, you get: \$\epsilon=V_L+V_R\$ We can rewrite the inductor voltage and resistor voltage in terms of current: \$\epsilon=L\frac{di}{dt} + Ri\$ If we have a power source of +48VDC and L = 10H and R = 24Ohms, then: \$48=10\frac{di}{dt}+24i\$ which is exactly the equation you used. So, clearly your input to the system (RL circuit) is your power supply of +48v only. So your input = 48. The initial conditions you have are y(0) = 5 and y'(0) = 0. Physically this represents that at the t=0 moment, the current in the circuit is 5A but it is not varying. You may think that something happened previously in the circuit which left a current of 5A in the inductor. So at that given moment (the initial moment) it still has those 5A (y(0)=5) but the current is not increasing or decreasing (y'(0) = 0).
Solving it: we first assume the natural response in the format \$Ae^{st}\$, and then we find the system behavior due to its initial condition, just as if we had no power supply (\$\epsilon=0\$), which is the Zero-Input response: \$10sAe^{st} + 24Ae^{st} = 0\$ \$Ae^{st}(10s + 24)=0\$ \$s=-2.4\$ So, \$i_{ZI}(t)=Ae^{-2.4t}\$ Since we know that i(0) = 5: \$i(0)=5=Ae^{-2.4(0)}\$ \$A=5\$ \$i_{ZI}(t)=5e^{-2.4t}\$ Note that until now everything is consistent. This last equation represents the system response with no input. If I put t=0, I find i=5, which corresponds to the initial condition. And if I put \$t=+\infty\$ I will find i=0, which also makes sense if I do not have any source. Now we may find the particular solution to the equation, which will represent the permanent state due to the power supply's presence (input): we now assume that \$i(t)=c\$, where \$c\$ is a constant value which represents the system output in the permanent state, since the input is also a constant. For each system, the output format depends on the input format: if the input is a sinusoidal signal, the output will be as well. In this case we have only constant values, which makes things easier. So, \$\frac{di}{dt}=0\$ then, \$48 = 10\cdot0 + 24c\$ (using the differential equation) \$c=2\$ \$i(\infty)=2\$ which also makes sense because we have a DC power supply. So after the transient response of turning the DC power supply ON, the inductor will behave as a wire and we will have a resistive circuit with R=24Ohms. Then we should have 2A of current, since the power supply provides 48V across it. But note that if I just add both results to find the complete response, we will have: \$i(t) = 2 + 5e^{-2.4t}\$ Now I have messed things up in the transient state, because if I put \$t=0\$ we will no longer find \$i=5\$ as before. And we have to find \$i=5\$ when \$t=0\$ because it is a given initial condition. This is because the Zero-State response has a natural term which is not there yet and which has the same format as we found before. Adding it there: \$i(t) = 2 + 5e^{-2.4t} + Be^{st}\$ The time constant is the same, so only B is left: \$i(t) = 2 + 5e^{-2.4t} + Be^{-2.4t}\$ And we know that: \$i(t) = 2 + 5 + B = 5\$ (t=0) So, \$B=-2\$ Then, your complete solution is: \$i(t) = 2 + 5e^{-2.4t} - 2e^{-2.4t}\$ You may think of this last term we found as a correction term of the forced response to match the initial conditions. Another way to find it is by imagining the same system but now with no initial conditions. Then, solving all the way through again, we would have: \$i_{ZS}(t) = 2 + Ae^{-2.4t}\$ But as we are now not considering the initial conditions (i(0)=0), then: \$i_{ZS}(t) = 2 + Ae^{-2.4t} = 0\$ And when t=0: \$A=-2\$ so the forced (Zero-State) response of your system is: \$i_{ZS}(t) = 2 - 2e^{-2.4t}\$ It is a bit confusing, but now you can view things from different perspectives. -Homogeneous/Particular solutions: \$i(t) = i_p(t)+i_n(t) = 2 + 3e^{-2.4t}\$ The first term (2) is the particular solution and represents the permanent state. The rest of the right side is the transient response, also called the homogeneous solution of the equation. Some books call this Natural response and Forced response as well, since the first part is the forced part (due to the power supply) and the second part is the transient or natural part (the system's characteristic). This is the fastest way to find the complete response, I think, because you only have to find the permanent state and a natural response once. But it may not be clear what is representing what.
-Zero input / zero state: \$i(t) = i_{ZS}(t)+i_{ZI}(t) = 2 - 2e^{-2.4t} + 5e^{-2.4t}\$ Note that it is the same equation, but with the second term split in two. Now, the first two terms (\$2 - 2e^{-2.4t}\$) represent the Zero-State response. In other words, what would happen to the system if there were no initial current and you turned ON the +48V power source. The second part (\$5e^{-2.4t}\$) represents the Zero-Input response. It shows you what would happen to the system if no input were given (power source remaining at 0v). It is only an exponential term which goes to zero, since it has no input. Some people also call this the Natural/Forced response format. The natural part would be the Zero-Input response and the forced part would be the Zero-State response, which, by the way, is composed of a natural term and a particular term. Again, they all will give you the same result, representing the whole behavior of the situation, including the power source and initial conditions. Just note that in some cases it might be useful to use the second method. One good example is when you are using convolutions and you may find the impulse response of your system from the Zero-State response. So breaking out those terms might help you to see things clearly and to choose the appropriate term to convolve with.
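As a sanity check of the complete solution (my own verification, plugging it back into the original equation): \$i(t) = 2 + 3e^{-2.4t}\$ gives \$\frac{di}{dt} = -7.2e^{-2.4t}\$, so \$10\frac{di}{dt} + 24i = -72e^{-2.4t} + 48 + 72e^{-2.4t} = 48\$, satisfying the differential equation; and \$i(0) = 2 + 3 = 5\$ matches the initial condition, while \$i(\infty) = 2\$ matches the permanent state found above.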
{ "source": [ "https://electronics.stackexchange.com/questions/93061", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/19374/" ] }
93,067
Problem description is as follows: Design an amplifier with maximum voltage gain without output saturation. The following diagram was given: I've tried doing a T-model for the first part that goes up to C2, and I got the following: Vo=gm Vgs (RE1) Vin=gm Vgs (1/gm) Resulting in the following gain: Av = vo / vi = gm*RE1 I have a feeling this result doesn't seem right. For the sake of honesty, to say that I'm not sure what direction to take or how to attack this problem would be an extremely mild way to put it. Another issue (?) I'm having is that I'm looking on the manufacturer's datasheet for the MOSFET I'm using, and there is no value for the transconductance parameter kn*(W/L), which I need when calculating the drain current in DC analysis, and 'gm' in small-signal analysis.
{ "source": [ "https://electronics.stackexchange.com/questions/93067", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }
93,452
From seeing a few schematics where the flyback or snubber diode has been placed across the transistor C-E terminals (Right Configuration), instead of what I typically see, with the flyback placed across the coil terminals (Left Configuration): Which of these is "correct"? Or does each have a separate purpose? As a note, the diodes are normally listed as external 1N400x type diodes (on TIP120 Darlingtons), not the internal body diode of the BJT or MOSFET. Final note: I have seen a few schematics that have both diodes, one across the coil and another across the CE terminals. I assume that one is just redundant without really affecting the circuit in that case; is that a wrong assumption? simulate this circuit – Schematic created using CircuitLab The answer to When/why would you use a Zener diode as a flywheel diode (on the coil of a relay)? touches on this slightly, by showing a regular diode in the above left configuration, while showing a Zener diode in the right configuration. It doesn't say that the opposite isn't true (or why). So as a second part, can a Zener work in the left configuration, and a regular diode in the right configuration? If so, how does it change how it operates?
Consider the operation of the circuit. When the transistor is on, current is flowing in the coil from top to bottom as the circuit is drawn. We now switch the transistor off. The current in the coil still wants to flow. For the circuit on the left, this current can now flow back to Vcc via the diode; the voltage across the coil has reversed direction and is limited by the diode, so the current can decay to zero safely. For the circuit on the right, the diode does not help. The current flowing in the coil will force the voltage on the collector to rise to the point where the transistor (or possibly the diode) breaks down and starts to conduct. At this point the current can start to decay in the coil, but the energy dumped into the broken-down transistor (or, less likely, the diode) will be excessive and may well result in the transistor's death. Note that a zener diode here will work, because you allow the voltage on the coil to reverse so the current can decay to zero while limiting the voltage across the transistor to a safe value. It should be noted that allowing the voltage across the coil to reverse to a higher voltage means the current can decay more quickly, which is why you sometimes see a zener in the right hand circuit, or more than one diode in series in the left hand one.
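To quantify the decay-speed point (the component values below are my own assumptions): ignoring coil resistance, the inductor current ramps down at \$\frac{di}{dt} = -\frac{V_{clamp}}{L}\$, so the decay time is roughly \$t \approx \frac{L I_0}{V_{clamp}}\$. For a 100mH relay coil carrying 50mA, a plain diode clamp (\$V_{clamp} \approx 0.7\,\text{V}\$) gives \$t \approx 7\,\text{ms}\$, while a 24V zener in series with a diode (\$V_{clamp} \approx 24.7\,\text{V}\$) gives \$t \approx 0.2\,\text{ms}\$ - the same stored energy, dissipated far faster, at the cost of a higher voltage seen by the switching transistor.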
{ "source": [ "https://electronics.stackexchange.com/questions/93452", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17178/" ] }
93,713
I've just started studying computer engineering, and I'm having some doubts regarding the behavior of the XOR gate. I've been projecting circuits with Logisim, whose XORs behave differently from what I've learnt. To me, it should behave as a parity gate, giving a high output whenever the inputs receives an odd combination. It doesn't, though, for more than two inputs. How should it behave? I also read in a book that XOR gates are not produced with more than two inputs. Is that correct? Why?
There are different points of view regarding how an exclusive-OR gate with more than two inputs should behave. Most often such an XOR gate behaves like a cascade of 2-input gates and performs an odd-parity function. However, some people interpret the meaning of exclusive-OR more literally and say that the output should be a 1 if and only if exactly one of the inputs is a 1. I do seem to recall that Logisim uses the latter interpretation, and somewhere in my rusty memory I have seen it in an ASIC cell library. One of the international standard symbols for an XOR gate is a rectangle labelled with =1, which seems to be more consistent with the "1 and only 1" definition. EDIT: The definition of exclusive-OR as "1 and only 1" is uncommon but it can be found. For example, IEEE-Std91a-1991 gives the symbol for the exclusive-OR on p. 62 with the note: "The output stands at its 1-state if one and only one of the two inputs stands at its 1-state." For more than 2 inputs the standard recommends using the "odd parity" symbol instead. Web sites that discuss this confusing situation include XOR: The Interesting Gate and the gate demos at TAMS. A google search will also turn up sites that claim that, strictly speaking, there is no such thing as an XOR gate with more than two inputs.
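It is easy to verify by hand (my own observation) that for three inputs the two interpretations agree on every input combination except all-ones: the odd-parity function \$Y = A \oplus B \oplus C\$ is 1 for inputs 001, 010, 100 and 111, while the "one and only one" function \$Y = A\bar{B}\bar{C} + \bar{A}B\bar{C} + \bar{A}\bar{B}C\$ is 1 only for 001, 010 and 100. The input 111 is the sole disagreement, which is exactly why a cascade of 2-input gates and Logisim's literal interpretation can produce different results.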
{ "source": [ "https://electronics.stackexchange.com/questions/93713", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/34156/" ] }
93,725
I simulated and built the following circuit. What I got was a rectangular wave generator with a duty cycle significantly lower than 50%. Then I made R3 = R4 and got a 50% duty cycle. I had thought that the RC constant was defined only by C1 and R1. It appears I was wrong. How can I define the RC constant for this circuit? Can anybody shed some light on the problem?
{ "source": [ "https://electronics.stackexchange.com/questions/93725", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/34159/" ] }
93,892
I'm implementing a CC-CV algorithm for charging a li-ion battery. I'm confused about what the maximum allowed charging voltage during the CC (constant current) phase is. All application notes and datasheets I've found state that charging in CC mode continues until the cell voltage reaches 4.2V per cell. In order to maintain constant current, the charging voltage has to be increased as the cell voltage rises. So, when the cell voltage is close to 4.2V, the charging voltage must be higher, e.g. 4.5V, and this should not cause any damage to the cell. Is my understanding correct? I'm asking because the power control module in the battery pack I'm trying to charge seems to cut off the circuit when the charging voltage is above 4.5V. Edit: Some clarification after Russell's comment. The control algorithm I've implemented is basically taken from Atmel's app note AVR458: Charging Lithium-Ion Batteries with ATAVRBC100. A similar algorithm is described in app note AVR450: Battery Charger for SLA, NiCd, NiMH and Li-Ion Batteries. Both are simple buck regulators with PWM controlled by an MCU. Now let's say that we measure two voltages: Vcell -> this is the voltage measured at the battery connection when PWM is OFF Vchg -> this is the voltage measured at the battery connection when PWM is ON To be clear, I've checked the source code implementations of both. During the CC charge phase the algorithm periodically checks whether Vcell > Vmax=4.2V. If not, then Ichg is regulated so that it is ~1C. If so, then CV mode starts. I've asked the question because I see that during charging Vchg is higher than 4.2V, e.g. Vcell is 3.9V and Vchg is 4.3V. Edit2: I went to check the source code of both app notes again. In fact, the above is true for the source code of app note AVR450. The implementation in app note AVR458 changes to CV when Vchg >= Vmax (4.2V). Since this is consistent with all the information I've found, and with Russell's answer, I think that the algorithm in AVR450 is incorrect.
So, when the cell voltage is close to 4.2V the charging voltage must be higher, e.g. 4.5V, and this should not cause any damage to the cell. Is my understanding correct? No. Your understanding is incorrect and your charger is suspect. And/or your description is not quite complete and unambiguous. For information on battery matters for most battery chemistries, a good starting point is often the excellent site at Battery University. NB: What I have written below is based both on experience and on input from a wide range of sources, including Battery University. Assume for the following discussion a manufacturer's spec of: Maximum current = CCmax (usually 1C for LiIon but may be other for specific cells). Assume CCmax is 1C for the cell in question for convenience. The actual spec will be as per datasheet and is temperature dependent and also depends on how many charge/discharge cycles you wish to achieve before the battery turns to mush and/or is reduced to say 70% of original capacity. Maximum voltage of Vmax - usually 4.2V or less. Say 4.2V for now. As for current, the maximum voltage applied will affect cell longevity (and capacity on a given charge). Charging at a terminal voltage much above 4.2V will shorten your cell life, may lead to metallic lithium plating out, and can lead to the exciting and equipment-eating "vent with flame" battery meltdown phenomenon. Minimum current of Icv_min when charging at Vmax. This is the minimum that current should be allowed to fall to when charging in CV mode. When in CV mode, charging is terminated when current drops to this level. Icv_min is typically set at somewhere between 25% of Icc (early charge termination) and say 10% of Icc (maybe sometimes even 5% of Icc). The lower Icv_min is set, the longer current trickles into the battery at Vmax in CV mode. Setting a low value of Icv_min adds slightly to the energy that can be stored in the battery on a given cycle AND utterly tears the battery apart inside and shortens its life. These two important points apply: The maximum voltage AT the battery (1 cell) under maximum constant current CCmax is Vmax = 4.2V in this case. BUT the maximum voltage AT the battery (1 cell) under ANY current is also Vmax. If the battery will not accept Imax when Vmax is applied then CC mode is no longer appropriate. Charging should be CV (or terminated if Icharge at Vmax is <= Icv_min - see below). An important point here is where you measure what you call "the charging voltage". This is properly measured at the cell electrodes, as close to the cell internals as possible. In practice anywhere on the (usually) weld-attached tabs should be OK, as at the maximum allowed current the voltage drop across the tabs should be minimal. As long as the voltage at the actual cell is <= Vmax, the voltage at other points in the charger may be > Vmax if the charger design requires it. Consider: Apply a "true" constant current source to a discharged LiIon cell. There will be lead resistance external to the cell, so the voltage elsewhere in the system may be higher than at the battery terminals. Ignore that for now - comment on this at the end. For a discharged LiIon battery the terminal voltage will be somewhere around 3V and will slowly rise as CC is applied. After about 40 to 50 minutes of charging a LiIon cell at 1C (= CCmax in this case) from fully discharged, the TERMINAL voltage will reach 4.2V.
This is where you stop applying CC and apply a CV of Vmax (= 4.2V in this case) at whatever current it takes to keep the voltage at 4.2V (up to a maximum of CCmax). The following paragraph may sound a little complex but it is important. It does make sense - read and understand it if you care about the answer to the question that you asked. It is a fallacy to think that you must apply a higher voltage at the cell to get it to accept CCmax when Vcell is at Vmax. This IS true if the battery is fully charged or is charged above the point in the cycle where Vcell first reaches Vmax when charging at CCmax. BUT that is because you are then trying to do something which is outside the proper charging "envelope". IF a LiIon cell will not accept CCmax when Vmax is applied, it should be charged at no more than Vmax until Ibattery falls to Icv_min. If you apply Vmax and Ibattery is below Icv_min, then the battery is fully charged and you should remove Vcharge. Leaving a battery connected indefinitely to a voltage source of Vmax when Icharge is less than Icv_min will damage the battery and reduce or greatly reduce its cycle life. Charging voltage is removed when Icharge falls below Icv_min to prevent potentially irreversible electrochemical reactions and to prevent lithium metal "plating out". If Vmax is set at 4.15V then charge capacity is reduced noticeably but cycle life is extended. If Vmax is set at 4.1V, charge capacity is significantly reduced and cycle life is significantly extended. The loss of capacity per cycle that occurs when Vmax is reduced leads to an overall INCREASE in total lifetime capacity, as the extension in life cycles rises faster than the per-cycle capacity falls. If you care mainly about highest capacity per charge, set Vmax as high as allowed and accept low cycle life. If you can tolerate say 80% to 90% of the maximum possible capacity per cycle, set Vmax lower and get more overall energy storage before replacement. The graph below from the Battery University article How to Prolong Lithium-based Batteries shows what happens when Vmax is increased above 4.2V. (At the end are 3 tables from the same Battery University page which show the effects on cycle life of varying various parameters: depth of discharge, temperature, Vmax.) Internal voltage versus terminal voltage: There will be internal resistance in the cell, so the "real" potential in the cell proper during charging at CC will be less than at the terminals. At CV the internal voltage will approach the external voltage as Icharge "tapers off". IF you want to play 'fast and loose' with all manufacturers' specs and all advice given, you can assume that you can 'allow' for this resistance and guesstimate a true internal voltage which is lower than the terminal voltage. May the force be with you and with your battery, and may it live long and prosper - but it probably won't. Three excellent tables from Battery University showing how cycle life varies with various parameters.
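Putting the answer's rules into a single worked profile (the cell capacity and thresholds below are assumed example values, not from the answer): for a 2000mAh cell with CCmax = 1C = 2A, Vmax = 4.2V and Icv_min = 0.1C = 200mA, the charger sources a constant 2A while Vcell < 4.2V (CC phase), then holds the cell terminals at exactly 4.2V while the current tapers (CV phase), and finally removes the charging voltage once the current has fallen to 200mA or below. At no point does the voltage at the cell itself exceed 4.2V.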
{ "source": [ "https://electronics.stackexchange.com/questions/93892", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/34250/" ] }
94,354
Even with predominantly digital circuits, I am using inductors much more often than I used to, generally because of all the buck or boost converters (a recent board I was involved in has 12 different voltage rails -- six of them needed just by the TFT LCD). I've never seen a standard digital multimeter (DMM) with an inductance range. So I ended up buying a separate meter that does LC measurements. However a lot of DMMs have a capacitance scale. Since capacitors and inductors can be thought of as mirror images of each other with voltage and current flipped, why don't DMMs include an inductance scale also? What's so difficult about measuring inductance that it is left off of DMMs and relegated to specialty meters? Since inductance meters are usually LC meters (even LCR), do they measure capacitance in a different way than DMMs? Are they more accurate than the capacitance scale of a DMM?
The only reason DMMs can't measure inductances is that it is more difficult to measure inductance than resistance or capacitance: this task requires special circuitry, which is not cheap. Since there are relatively few occasions when inductance measurements are required, standard DMMs do not have this functionality, which allows for lower cost. Simple DMMs can measure capacitance by just charging the capacitor with a constant current and measuring the rate of voltage build-up. This simple technique provides surprisingly good accuracy and wide dynamic range, therefore it can be implemented in almost any DMM without significant cost penalties. There are other techniques as well. Theoretically, one could measure inductance by applying a constant voltage across an inductor and measuring the current build-up; however, in practice this technique is much more complicated to implement, and the accuracy is not as good as for capacitors, due to the following reasons: Inductors may have relatively high parasitic resistance and capacitance Core losses (in cored inductors) EMI (incl. stray inductance and capacitance) Frequency-dependent effects in inductors More There are a few techniques for measuring inductances (some of them are described here ). LCRs are special meters designed for inductance measurements, containing the required circuitry. These are costly tools. Since the hardware for measuring inductance may also be used for accurate measurement of R and C, LCRs also employ this circuitry in order to improve the accuracy of capacitance and resistance measurements (for example: AC resistance, AC capacitance, ESR etc.). I believe that the difference between measuring inductance and capacitance with an LCR is just a matter of different firmware algorithms, though that is just a guess. Therefore, the general answer to your question is "yes, LCRs are usually more accurate in R and C measurements than DMMs, and they can measure a wider range of measurable quantities". However, this is just a rule of thumb - there are many superb DMMs and lousy LCRs out there... Read the specs.
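The constant-current method mentioned above reduces to one line (the values here are my own illustrative numbers): from \$I = C\frac{dV}{dt}\$, the meter computes \$C = \frac{I}{dV/dt}\$, so sourcing \$1\,\mu\text{A}\$ into an unknown capacitor and measuring a slope of 1 V/s implies \$C = 1\,\mu\text{F}\$. The inductive dual, \$V = L\frac{di}{dt}\$, looks just as simple on paper, but the parasitics listed above (series resistance, core losses, EMI pickup) corrupt the current ramp badly enough that a simple implementation gives poor accuracy.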
{ "source": [ "https://electronics.stackexchange.com/questions/94354", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1322/" ] }
94,382
I'm going to start developing a USB 1.1 device using a PIC microcontroller. I'm going to keep one of the USB ports of my PC connected to a breadboard during this process. I don't want to destroy my PC's USB port by a short circuit, or by connecting \$\pm\$Data lines to each other or to a power line accidentally. How can I protect the USB ports? Does a standard USB port have built-in short circuit protection? Should I connect diodes, resistors, or fuses on/through/across some pins?
This is to expand on Leon's suggestion to use a hub. USB hubs are not all created equal. Unofficially, there are several "grades":

- Cheap hubs. These are cost optimized to the point where they don't adhere to the USB spec any more. Often, the +5V lines of the downstream ports are wired directly to the computer. No protection switches. Maybe a polyfuse, if lucky. Edit: Here's a thread where the O.P. is complaining that an improperly designed USB hub is back-feeding his PC.
- Decent hubs. The downstream +5V is connected through a switch with over-current protection. ESD protection is usually present.
- Industrial hubs. There's usually respectable overvoltage protection in the form of TVS diodes and resettable fuses.
- Isolated hubs. There's actual galvanic isolation between the upstream port and the downstream ports. Isolation rating tends to be 2kV to 5kV. Isolated hubs are used when a really high voltage can come from a downstream port (e.g. mains AC, defibrillator, back-EMF from a large motor). Isolated hubs are also used for breaking ground loops in vanilla conditions.

What to use depends on the type of threat you're expecting. If you're concerned with shorts between power and data lines, you could use a decent hub. In the worst case, the hub controller will get sacrificed, but it will save the port on the laptop. If you're concerned that a voltage higher than +5V can get to the PC, you can fortify the hub with overvoltage protection consisting of a TVS and a polyfuse. However, I'm still talking about relatively low voltages on the order of +24V. If you're concerned with really high voltages, consider an isolated hub and gas discharge tubes. Consider using a computer which you can afford to lose.
{ "source": [ "https://electronics.stackexchange.com/questions/94382", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5542/" ] }
94,659
Since I've started electronics I'm using this kind of board for permanent projects: But sometimes it's a bit annoying, especially when I need a line going from the top to bottom of the board. I've seen this kind of board: My question is, how can I cut the strips? By cutting the strip I don't mean cutting the board itself, just the copper strip. I've tried with a precision knife but I'm not sure about the method, the blade gets damaged really quickly and it's really hard to cut the copper.
There are specific tools that are designed to cut holes in this material, which is either called "stripboard" or "veroboard". These tools are basically a drill bit in a moulded handle made of plastic or wood and look something like this: (photo from here ) Because it is basically a drill bit you could use any high speed steel drill bit. There are some good instruction at Instructables that show how to cut neat holes. However if you plan on using stripboard often then it is worth buying a tool with a handle, they are quite inexpensive.
{ "source": [ "https://electronics.stackexchange.com/questions/94659", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/27489/" ] }
95,073
I have written plenty of bare metal code for PIC and x86 processors. Can someone tell me how and when I would need an operating system? Conversely, what applications or situations can be handled equally well with or without an operating system?
My rule of thumb is that you should consider an operating system if the product requires one or more of the following: a TCP/IP stack (or other complex networking stack), a complex GUI (perhaps one with GUI objects such as windows and events), or a file system.

If you've done some bare metal coding then you're probably familiar with the super-loop program architecture. If the product's firmware requirements are simple enough to be implemented with a super-loop that is maintainable (and hopefully somewhat extensible) then you probably don't need an operating system.

As the software requirements increase, the super-loop gets more complex. When the software requirements are so many that the super-loop becomes too complex or cannot fulfill the real-time requirements of the system, then it is time to consider another architecture. An RTOS architecture allows you to divide the software requirements into tasks. If done properly, this simplifies the implementation of each task. And with task prioritization an RTOS can make it easier to fulfill real-time requirements. An RTOS is not a panacea, however. An RTOS increases the overall system complexity and opens you up to new types of bugs (such as deadlocks). As an alternative to the RTOS you might consider an event-based state machine architecture (such as QP).

If your product has networking, a complex GUI, and a file system then you might be at the point where you should consider full-featured operating systems such as VxWorks, Windows, or Linux. Full-featured operating systems will include drivers for the low-level details and allow you to focus on your application.
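For readers who haven't met it, here is a minimal sketch of the super-loop architecture mentioned above, in plain C with hypothetical task names; real firmware would substitute its own handlers and timing source. The key property is that every task returns quickly so the loop keeps cycling.

    #include <stdbool.h>

    /* Hypothetical task functions - each must return quickly so the */
    /* loop keeps cycling (no blocking waits inside any of them).    */
    void poll_buttons(void);
    void update_display(void);
    void service_uart(void);

    int main(void)
    {
        /* hardware_init(); */
        for (;;) {              /* the "super loop" */
            poll_buttons();
            update_display();
            service_uart();
        }
        return 0;               /* never reached */
    }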
{ "source": [ "https://electronics.stackexchange.com/questions/95073", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/20711/" ] }
95,140
I'm hooking up a small DC motor to an Arduino using an NPN transistor, following diagrams I found online. The circuit works, and I'm successfully able to make the motor run. Now, I'm seeking to understand why it works the way it does. In particular, I'd like to understand:

- Why are the diode and capacitor hooked up in parallel to the motor? What role do they serve here?
- Why is a resistor needed between the transistor and the digital PWM pin on the Arduino? Would it be safe to run the circuit without it?
The diode is to provide a safe path for the inductive kickback of the motor. If you try to switch off the current in an inductor suddenly, it will make whatever voltage is necessary to keep the current flowing in the short term. Put another way, the current thru an inductor can never change instantaneously. There will always be some finite slope. The motor is partially an inductor. If the transistor shuts off quickly, then the current that must still flow thru the inductor for a little while will flow thru the diode and cause no harm. Without the diode, the voltage across the motor would get as large as necessary to keep the current flowing, which would probably require frying the transistor.

A small capacitor across the motor will reduce the speed of the possibly fast voltage transitions, which causes less radiation and limits the dV/dt the transistor is subjected to. 100 nF is excessive for this, and will prevent efficient operation at all but low PWM frequencies. I'd use 100 pF or so, perhaps up to 1 nF.

The resistor is to limit the current the digital output must source and the transistor base must handle. The transistor B-E looks like a diode to the external circuit. The voltage will therefore be limited to 750 mV or so. Holding a digital output at 750 mV when it is trying to drive to 5 V or 3.3 V is out of spec. It could damage the digital output. Or, if the digital output can source a lot of current, then it could damage the transistor.

1 kΩ is again a questionable value. Even with a 5 V digital output, that will put only 4.3 mA or so thru the base: the voltage drop at the B-E junction ("diode") is 0.7 V, leaving 4.3 V at the resistor. You don't show specs for the transistor, so let's figure it has a minimum guaranteed gain of 50. That means you can only count on the transistor supporting 4.3 mA x 50 = 215 mA of motor current. That sounds low, especially for startup, unless this is a very small motor. I would look at what the digital output can safely source and adjust R1 to draw most of that.

Another issue is that the 1N4004 diode is inappropriate here, especially since you will be turning the motor on and off rapidly, as implied by "PWM". This diode is a power rectifier intended for normal power line frequencies like 50-60 Hz. It has very slow recovery. Use a Schottky diode instead. Any generic 1 A 30 V Schottky diode will do fine and be better than a 1N4004.

I can see how this circuit can appear to work, but it clearly wasn't designed by someone that really knew what they were doing. In general, if you see an Arduino in a circuit you find on the 'net someplace, especially a simple one, assume it was posted because the author considers it a great accomplishment. Those that know what they are doing and draw out a circuit like this in a minute don't consider it worth writing up a web page on. That leaves those that took two weeks to get the motor to spin without the transistor blowing up and they're not really sure what everything does to write these web pages.
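The base-resistor arithmetic above is easy to put in a small program. This sketch (plain C; the 20 mA pin-current budget is a typical figure assumed here, not a quoted spec for any particular part) reproduces the 4.3 mA / 215 mA numbers and shows what a resistor sized for the pin's capability would support instead.

    #include <stdio.h>

    int main(void)
    {
        double vout = 5.0;    /* logic-high output voltage           */
        double vbe  = 0.7;    /* B-E junction drop                   */
        double hfe  = 50.0;   /* assumed minimum guaranteed gain     */

        /* As wired: 1 kOhm base resistor                            */
        double ib1 = (vout - vbe) / 1000.0;
        printf("1k base resistor: Ib = %.1f mA, Ic(max) = %.0f mA\n",
               ib1 * 1e3, ib1 * hfe * 1e3);     /* 4.3 mA, 215 mA    */

        /* Sized for an assumed 20 mA pin-current budget instead     */
        double ib2 = 20e-3;
        double r   = (vout - vbe) / ib2;
        printf("For Ib = 20 mA: R = %.0f Ohm, Ic(max) = %.0f mA\n",
               r, ib2 * hfe * 1e3);             /* 215 Ohm, 1000 mA  */
        return 0;
    }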
{ "source": [ "https://electronics.stackexchange.com/questions/95140", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/33382/" ] }
95,217
Sometimes when I order PCBs from a board house, I omit the bottom silkscreen for budgetary reasons. When I place surface-mount chips on the bottom of the board, I then end up with a footprint that doesn't indicate the chip orientation. This is annoying because it means that I need to verify the component placement and orientation during assembly, and this allows for errors when placing the parts. How can I clearly indicate pin 1 with the remaining layers in a way that will be clear but not significantly impact the PCB size or cause issues when soldering? I'm assuming that I always have access to a solder mask layer and a copper layer.
Have a differently shaped solder mask opening on pin 1. For surface-mount processors, you could have the pin 1 pad be noticeably longer than the others.
{ "source": [ "https://electronics.stackexchange.com/questions/95217", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/638/" ] }
95,575
I have an Android phone into which I have plugged earphones. At the top of the phone, I get the headphone symbol which indicates that the earphones are connected (in other words, the circuit at the 3.5 mm jack is closed). Then I cut the two earpieces (transducers) off it, and still the headphone symbol shows. When I later cut the cable below where it branches out, even then it shows circuit completion. So my question is this: how does the phone detect circuit completion at the 3.5 mm jack and thus trigger all sound and music to be directed through the 3.5 mm jack?
Headphone jacks have extra contacts inside, which act as switches. In the drawing below, pins 4 and 5 are intended for sensing that the plug was inserted. They are not intended for the audio signal. When the plug is not present, the switches formed by 2 & 4 and 3 & 5 are closed. When the plug is inserted, these switches are open: the plug flexes 2 and 3 slightly, and they break contact with 4 and 5. You could insert a 3.5mm plastic rod [a dummy] into the jack, which will open the contacts, and the phone might think that earphones are plugged in. Source: datasheet for a typical stereo jack.
{ "source": [ "https://electronics.stackexchange.com/questions/95575", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/34709/" ] }
95,874
In my (extremely rudimentary) understanding, the amount of current flowing in a circuit is determined by a) its resistance, and b) the voltage of the power source (voltage from beginning to end), which forces the charge to flow through. Why then, do people talk about a device "drawing" extra current when e.g. a motor encounters a heavy force? If anything, I would expect this to increase the resistance in the circuit, and thus decrease the current that flows through. What say does a load in the circuit have in how much charge is forced through? How can it draw more out? Alternately: where is my understanding of these interactions flawed? :)
Think of it as "drawing" extra breath whilst jogging as opposed to walking. A circuit under normal conditions will appear as a certain impedance. For instance, a DC motor operating without a mechanical load will spin at a rate determined by the number of its windings, contacts, permanent magnets etc. A spinning motor also acts as a generator, producing a back-EMF that opposes the supply voltage: the faster it spins, the larger the back-EMF and the smaller the net voltage driving current through the winding resistance. As a load is applied to the shaft, the rotor decelerates, the back-EMF falls, and the effective impedance the source sees drops. As a result, the current increases, so it "draws more breath", so to speak.
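To put hedged numbers on that (illustrative values, not measurements from any real motor): for a brushed DC motor the steady-state current is roughly \$I = (V - k_e\omega)/R\$, where \$k_e\omega\$ is the back-EMF and \$R\$ is the winding resistance. With \$V = 12\mathrm{V}\$, \$R = 2\Omega\$, and 10V of back-EMF at no-load speed, \$I = (12-10)/2 = 1\mathrm{A}\$. Load the shaft so the speed (and hence the back-EMF) halves, and \$I = (12-5)/2 = 3.5\mathrm{A}\$: the motor now "draws" 3.5 times the current from the same supply voltage.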
{ "source": [ "https://electronics.stackexchange.com/questions/95874", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/35073/" ] }
96,574
I found many discussions on bypass capacitors and their purpose. Usually, they come as a pair of 0.1uF and 10uF. Why does it have to be a pair? Does anyone have a good reference to a paper or an article, or could provide a good explanation? I wish to get a little theory on why TWO and the purpose of EACH.
http://www.ti.com/lit/an/scba007a/scba007a.pdf

You'll see the big capacitor referred to as a "bank" or "bulk" capacitor. The smaller ones are of course also "bypass" capacitors. The basic idea is that, in the real world, the parasitics of a capacitor aren't ideal. Your "bank" capacitor will help with transient power draw (changes in real current demand) but, due to real world issues, if RF noise (EMI) gets on the line, the smaller bypass capacitor will let that noise short to ground before it gets to your IC. Additionally, both of these capacitors will be helping to suppress switching transients as well as improving intercircuit isolation.

Even though the physics is the same, the terminology is altered to suit the function. The "bank" capacitors "provide" a little extra charge (like a charge bank). The "bypass" ones allow the noise to bypass your IC without harming the signal. "Smoothing" capacitors reduce power supply ripple. "Decoupling" capacitors isolate two parts of a circuit.

So, in practice, you put a bank cap next to a bypass cap and there's your 10uF and 0.1uF. But two is just arbitrary. You have some RF on your board? Might need a 1nF cap, too.

A simple example of real-world impedance can be seen in this picture. An ideal cap would just be a large downward slope forever. However, smaller caps are better at higher frequencies in the real world. So, you stack TWO (or THREE, or HOWEVER MANY) next to each other to get the lowest total impedance.

I have, however, read dissenting opinions on this, saying that the self-resonance between the two actually creates a HIGH impedance at certain frequencies and should be avoided, but that's for another question.
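To see why the smaller capacitor takes over at high frequency, you can model each part as a series R-L-C (the ESR and ESL values below are plausible round numbers for illustration, not data for any specific part) and evaluate \$|Z| = \sqrt{ESR^2 + (2\pi f\,ESL - 1/(2\pi f C))^2}\$. A quick C sketch:

    #include <stdio.h>
    #include <math.h>

    static const double PI = 3.141592653589793;

    /* |Z| of a capacitor modelled as series ESR + ESL + C */
    static double z_mag(double f, double c, double esl, double esr)
    {
        double x = 2.0 * PI * f * esl - 1.0 / (2.0 * PI * f * c);
        return sqrt(esr * esr + x * x);
    }

    int main(void)
    {
        /* Assumed parasitics, round numbers only:                  */
        /* 10uF bulk cap: ESL 5nH, ESR 0.1 Ohm                      */
        /* 0.1uF ceramic: ESL 1nH, ESR 0.02 Ohm                     */
        double freqs[] = { 1e3, 100e3, 10e6, 100e6 };
        for (int i = 0; i < 4; i++) {
            double f = freqs[i];
            printf("%9.0f Hz: 10uF -> %8.3f Ohm, 0.1uF -> %8.3f Ohm\n",
                   f, z_mag(f, 10e-6, 5e-9, 0.10),
                      z_mag(f, 0.1e-6, 1e-9, 0.02));
        }
        return 0;
    }

With these assumed values, the 10uF part wins at 1 kHz (about 16 Ohm vs 1.6 kOhm) while the 0.1uF part wins at 100 MHz (about 0.6 Ohm vs 3.1 Ohm), which is exactly the "stacking" argument above.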
{ "source": [ "https://electronics.stackexchange.com/questions/96574", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/35526/" ] }
96,805
I was looking for the cheapest possible option to get an Arduino plus wireless comms for a dimmable light, and came across this eBay item when searching for an Arduino Nano clone. It has no USB port, so how can it be programmed? Edit: I have discovered that there is a newer device called the "Arduino Pro Micro" which is similar to the Pro Mini and Nano but has a built-in USB port. The best thing is you can buy a Pro Micro for under 4 euros! Excellent for a dimmable LED light...
It's similar to an Arduino but with the USB-to-UART converter chip removed to make it cheaper. In order to program it you have to use an external converter and connect it to the Rx/Tx pins. Please note that these boards don't use a crystal as a clock source but a 16MHz resonator, which has a higher tolerance (0.5%).

You'll need to get an external USB-to-serial board (or cable), like the one shown below.

Note that there are two "versions" of USB-to-serial boards. One version outputs the Tx pin to the Tx header and the Rx pin to the Rx header, and the other version outputs the Tx pin to the Rx header and the Rx pin to the Tx header.

If your board outputs the Tx pin to the Rx header and the Rx pin to the Tx header (the signals are already crossed), then you should connect Rx of the USB board to Rx of the Arduino, and Tx of the USB board to Tx of the Arduino (as shown below).

If your board outputs the Tx pin to the Tx header and the Rx pin to the Rx header, then you should connect Rx of the USB board to Tx of the Arduino, and Tx of the USB board to Rx of the Arduino (cross connect, as shown below).
{ "source": [ "https://electronics.stackexchange.com/questions/96805", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/16641/" ] }
96,996
When building a circuit to power an LED, we use Ohms law to calculate the required resistance, then to calculate the required wattage of the resistor. Suppose this formula dictates a 1/8 W resistor, and I instead use a 1W or a 100W resistor of the correct resistance. What will happen?
In your example, you've paid too much money and the part is huge (especially the 100W part). If the LED current is an AC signal (PWM or whatever), the 100W resistor may have a lot of inductance, which will change the way the circuit behaves.

On the other hand, the ratings of resistors are conditional on a bunch of things, such as mounting, surrounding PCB patterns (especially for SMT resistors) and, of course, ambient temperature. The number in the part description is just sort of a rough guide; you really have to drill down using datasheets, part series manuals and other manufacturer's data to get the details. You should also consider odd conditions such as extremes of input voltage, what happens if an LED shorts out, and so on.

It is also not a good idea to get too close to the maximum ratings if you want long life. Using a 1/4W part for 1/8W actual dissipation is not a bad idea. Using a 1W part might make sense if it's a high-reliability design or you have a bunch of them on hand. Using a 100W part is silly.
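For completeness, here is the wattage arithmetic the question alludes to, as a tiny C program (the LED figures are typical assumed values, not from any datasheet): with a 5V supply, a 2V LED drop and 20 mA, the resistor dissipates well under 1/8 W, so a 1/4 W part already gives comfortable margin.

    #include <stdio.h>

    int main(void)
    {
        double vsupply = 5.0;    /* supply voltage (assumed)       */
        double vled    = 2.0;    /* LED forward drop (assumed)     */
        double i       = 0.020;  /* desired LED current, 20 mA     */

        double r = (vsupply - vled) / i;   /* Ohm's law: 150 Ohm   */
        double p = (vsupply - vled) * i;   /* resistor dissipation */

        printf("R = %.0f Ohm, P = %.0f mW\n",
               r, p * 1e3);               /* 150 Ohm, 60 mW        */
        return 0;
    }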
{ "source": [ "https://electronics.stackexchange.com/questions/96996", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/33454/" ] }
97,241
I started writing firmware for my product and I'm a rookie here. I went through many articles about not using global variables or functions. Is there any limit to using global variables in an 8-bit system, or is it a complete no-no? How should I use global variables in my system, or should I completely avoid them? I would like to get valuable advice from you on this topic to make my firmware more compact.
You can use global variables successfully, as long as you keep in mind @Phil's guidelines. However, here are some nice ways to avoid their issues without making the compiled code less compact.

Use local static variables for persistent state that you only want to access inside one function.

    #include <stdint.h>

    void skipper()
    {
        static uint8_t skip_initial_cycles = 5;

        if (skip_initial_cycles > 0) {
            skip_initial_cycles -= 1;
            return;
        }
        /* ... */
    }

Use a struct to keep related variables together, to make it clearer where they should be used and where not.

    struct machine_state {
        uint8_t level;
        uint8_t error_code;
    } machine_state;

    struct led_state {
        uint8_t red;
        uint8_t green;
        uint8_t blue;
    } led_state;

    void machine_change_state()
    {
        machine_state.level += 1;
        /* ... */
        /* We can easily remember not to use led_state in this function. */
    }

    void machine_set_io()
    {
        switch (machine_state.level) {
        case 1:
            PIN_MACHINE_IO_A = 1;
            /* ... */
        }
    }

Use global static variables to make the variables visible only within the current C file. This prevents accidental access by code in other files due to naming conflicts.

    /* time_machine.c */
    static uint8_t current_time;
    /* ... */

    /* delay.c */
    static uint8_t current_time;  /* A completely separate variable for this C file only. */
    /* ... */

As a final note, if you are modifying a global variable within an interrupt routine and reading it elsewhere:

- Mark the variable volatile.
- Make sure it is atomic for the CPU (i.e. 8-bit for an 8-bit CPU), OR
- Use a locking mechanism to protect access to the variable.
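Expanding on that final note, here is a minimal sketch of the locking idea for a multi-byte value on an 8-bit MCU. The interrupt-masking macros are hypothetical placeholders (on AVR they would be cli()/sei(); other parts have their own); the point is only that reading a 16-bit counter must not be torn by the ISR.

    #include <stdint.h>

    /* Placeholder interrupt-mask macros - substitute your MCU's own. */
    #define DISABLE_INTERRUPTS()  /* e.g. cli() on AVR */
    #define ENABLE_INTERRUPTS()   /* e.g. sei() on AVR */

    static volatile uint16_t tick_count;  /* written by the timer ISR */

    void timer_isr(void)
    {
        tick_count++;             /* 16-bit update on an 8-bit CPU:  */
    }                             /* two byte writes, not atomic     */

    uint16_t get_ticks(void)
    {
        uint16_t copy;
        DISABLE_INTERRUPTS();     /* brief critical section so the   */
        copy = tick_count;        /* two-byte read can't be torn     */
        ENABLE_INTERRUPTS();
        return copy;
    }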
{ "source": [ "https://electronics.stackexchange.com/questions/97241", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/31060/" ] }
97,738
From my own experience, burning microcontrollers is quite easy. Put the 5V at ground, GND at V CC and in an instant your chip is burned. What exactly goes on internally that causes it to stop functioning entirely? For instance, if I were magically able to open a chip and rearrange all its semiconductor connections and fix it, where exactly would I need to look, and what would I need to do? If this is chip-specific, please choose any that could answer my question or give me an idea at least.
Most commercial IC circuits are isolated from the substrate material by a reverse-biased P-N junction (including CMOS parts). The substrate is usually tied to the voltage expected to be most negative. If it isn't, then that junction becomes forward biased and can conduct a great deal of current, melting metal or heating the junction to the point where it no longer acts as a diode. That happens typically at a voltage of about 0.6V, but the IC makers usually play it safe by telling you not to go lower than -0.3V. (Referring to the diagram below; the substrate connection is not shown, but it would be tied to pin 5.)

Most CMOS parts have another twist: if part of the chip has a normal Vdd and another part sees a big injected negative current, it will trigger a big parasitic SCR that is a side effect of the structure. The device's power supply then draws a large current, which causes overheating, melting etc. if the current is not externally limited. That is called latch-up.
{ "source": [ "https://electronics.stackexchange.com/questions/97738", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/23906/" ] }
97,889
How do I interface 3.3 V Input/Output to 5 V Output/Input? I need it primarily for an Arduino Due but any general purpose bidirectional circuit (or IC) would work. Some people advised me to use the SN74AHC125 and CD4050 ICs, but I don't understand how they work or how to interface with them.
A very simple bidirectional level translator can be made with a single N-channel MOSFET.

The MOSFET used should be a model with a low Vgs threshold, so that it can have a relatively low Rds-ON (ON resistance) at the intended input voltage level (3.3V in this case). The BSS138 is one such example: it has a Vgs-th of 1.5V max and is specified to have a low drain-source resistance with Vgs voltages as low as 2.5V (maybe slightly lower too). The shown example uses 3.3V <-> 5V translation, but it can also work with 2.5V <-> 3.3V or 2.5V <-> 5V, even between 2.5V <-> 12V. The range is only limited by the characteristics of the MOSFET used.

The shown circuit is based on an application note from NXP: AN97055 Bi-directional level shifter for I2C-bus and other systems. A newer, shorter version: AN10441 Level shifting techniques in I2C-bus design.

When L1 is high (3v3) or floating, R1 keeps the MOSFET off, so R2 pulls the drain side high (to 5V). When L1 is pulled low, the MOSFET conducts and the drain becomes low. When a low level (0) is applied to H1, that voltage is transferred through the substrate diode to the source side (L1).

Please note that the resistor values can affect the speed (image source).

Alternative transistor solution

Relevant articles you may find useful:

- Don't pay for level translators in systems using multiple power-supply voltages - EDN
- 3V Tips 'n Tricks - Microchip
{ "source": [ "https://electronics.stackexchange.com/questions/97889", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/36210/" ] }
97,909
I am describing a system in VHDL. This system already contains a processor, a DDR SDRAM controller and a VGA controller. The VGA controller reads pixels from SDRAM (already validated and proven in FPGA). Although VGA and SDRAM are already communicating with each other, I still need to implement the connection between the processor and SDRAM.

In the end, what I intend to have is a processor that draws into the framebuffer stored in SDRAM. Then a page flip occurs and the VGA controller starts to fetch the new picture that was drawn by the processor. To instruct the VGA controller to fetch from the new location, I would like to inform it (using memory-mapped I/O) of the address of the new image. A simple strategy that I thought of was to put a mux and verify whether the address falls in the VGA controller's register range or in the cache's address range.

Also, would I need to care about different clock domains? If yes, what possible problems should I care about? For example, some time in the past I saw x86 code that writes (using the outb instruction) and whose next instruction was an inb from the same and/or a related location. In this case, would I need to modify the processor logic to stall on such operations? If yes, how many cases would I have to implement? How many interfaces should I care about?

Also, at bootup, how is the cache used if all entries are invalidated? I believe there is a ROM image with the startup code. Could there also be a temporary local RAM for writes made by code stored in ROM (sw instructions)?

To summarize: I need information on how to implement memory hierarchy circuitry (caches, memory-mapped I/O, TLBs, virtual memory etc.) and how all of these communicate with each other. I know how to implement caches and TLBs, for instance, but I am not sure how to connect them together. I could just use something that works (like the mux idea), but I want to follow designs that are established in industry. What I've already studied: How to run MIPS; Computer Architecture (Patterson); the MIPS manuals; ARM's manuals; Intel's manuals. But none explains this in detail. If there are many ways to implement it, just show me one that you know of, please, even if it is source code or a block diagram. Again, I don't need an explanation of how it works internally; I just need to know the interfaces between the modules. Thank you all.
{ "source": [ "https://electronics.stackexchange.com/questions/97909", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/25957/" ] }
97,922
During a recent kitchen renovation, the electrician installed LED under-counter "pucks" and drove them with a 120VAC-to-12V, 40 kHz transformer labelled "low voltage halogen supply." I'm fully aware that LEDs are happy operating at 40 kHz (one transformer blew & I have to replace it), but is there any reason to change the driving frequency? Obviously, if I were to change, I'd stay above 100Hz or so to avoid any visual flicker, and there may be a complete dearth of commercial devices at other output frequencies. FWIW, the 40kHz drivers cause two minor problems: AM radios nearby are not happy, and the fancy circuitry in my exhaust hood's lighting system tends to flicker (even though normally turned off) when the LEDs are on. I'm guessing a low-frequency LED driver might mitigate these side effects.
{ "source": [ "https://electronics.stackexchange.com/questions/97922", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/35826/" ] }
98,114
I've read that there are four types of passive elements: resistors, capacitors, inductors and memristors. The memristor was predicted 30 years before it was produced. But why couldn't you invent another type of passive element? Is there a proof? The definition I'm using of a passive element is something with no gain, no control, and linear behavior.
There are four physical quantities of interest for electronics: voltage, flux, charge, and current. If you have four things and want to pick two, order not mattering, there are 4C2 = 6 ways to do that. Two of the physical quantities are defined in terms of the other two. (Current is change in charge over time. Voltage is change in flux over time.) That leaves four possible relationships: resistance, inductance, capacitance, and memristance.

If you want another fundamental component, you need another physical quantity to relate to these four. And while there are many physical quantities one might measure, none seem so tightly coupled as these. I'd suppose this is because electricity and magnetism are two aspects of the same force.

I'd further suppose that since electromagnetism is now understood to be part of the electroweak force, one might be able to posit some relationships between the weak nuclear interaction and our four elements of voltage, current, charge, and flux. I haven't the first clue how this would be physically manifested, especially given the relative weakness of the weak nuclear force at anything short of intranuclear distances. Perhaps in the presence of strong magnetic or electrical fields affecting the rates of radioactive decay? Or in precipitating or preventing nuclear fusion?

I'd yet further suppose (I'm on a roll) that the field strengths required would be phenomenal, which is why they're not practical for everyday engineering. But that's a lot of supposition. I am a mere engineer, and unqualified to speculate on such things.
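Spelling out the counting argument above (this enumeration is an added illustration, not part of the original answer): the \$\binom{4}{2} = 6\$ pairings of \$\{v, \varphi, q, i\}\$ split into two definitions and four component laws,

$$ i = \frac{dq}{dt}, \qquad v = \frac{d\varphi}{dt} \qquad \text{(definitions)} $$

$$ dv = R\,di, \qquad dq = C\,dv, \qquad d\varphi = L\,di, \qquad d\varphi = M\,dq \qquad \text{(resistor, capacitor, inductor, memristor)} $$

Each of the four non-definitional pairings is claimed by exactly one passive element, leaving no pairing free for a fifth.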
{ "source": [ "https://electronics.stackexchange.com/questions/98114", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/24361/" ] }
98,277
For the past few years I have been running my Fluke 79III without a fuse in its F44.100A 1kV fuse holder. This has meant that I've been unable to use the 40mA circuit. When I blew the old fuse, I discovered that it had been previously 'repaired' with another fuse soldered to the original Buss DMM-44/100 fuse. Then I went to look for replacement fuses and was aghast to find them selling for £10 each, so I wasn't surprised that the previous owner had 'repaired' the fuse rather than replacing it. What I now wonder is what the consequences of once more 'repairing' this fuse might be. I don't play with three phase, and I'm unlikely to play with more than about 260VAC, so could I safely use a 250VAC, 500mA fast-acting fuse in its place? Watch Big Clive's "Things you should know about fuses (including a 15kV one)" video if Spehro Pefhany's answer hasn't already convinced you not to try this.
It's the same as with disabling or bypassing any safety device: I believe you could be completely safe, but what if someone else picks it up and uses it? The danger is of arc flash, of course. Perhaps you could mark it ("Do not use on mains") and cover the model/CAT/IEC markings that would lead one to believe that it's safe to use on 600VAC. Not sure if that is legally necessary or sufficient in the UK, but it might reduce the possibility of injury.

Here's what's left of a multimeter that was involved in an accident that killed two people.

"Evaluation of the meter circuit showed that it used a small glass 8AG fuse rated 0.5A at 250V for circuit protection on some functions. According to Underwriters Laboratories, the interrupting capacity of this style of fuse is only 35A at 250V. It has no specified interrupting rating above 250V. An estimate of the fault current through the meter shows that it could have been from several hundred to as much as 1,000A at 277V."

The whole story is here. Note that the circuit was not even an industrial circuit, and "only" 277VAC phase-to-ground, but 480V phase-to-phase. The available fault current was not small.

I once tested some 5A/250VAC rated ordinary 5x20mm fuses on a light industrial 240V circuit (a 50A circuit). Almost every time they arced from end cap to end cap, and the glass tube literally exploded. Molten metal was found to have solidified in a layer on the glass shards, so there was a cloud of it after the tube ruptured. A plastic housing would have contained the shards, but anyone foolish enough to be closely observing without a face shield or safety glasses could have been injured or blinded.

Interrupting (current) capacity is an important factor, and it's not marked on fuses generally.

Wow, there's a huge price range on that fuse - I see everything from $5 to $36.
{ "source": [ "https://electronics.stackexchange.com/questions/98277", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3774/" ] }
98,556
Maybe this is a stupid question, but here goes: I just got a piezo buzzer and on the top there is a sticker that says "Remove After Washing". My question is why would I ever want to wash an electronic part? I have no idea. Is there some manufacturing step where this makes sense?
The industrial PCB assembly process usually leaves residues — mostly soldering flux — on the circuit board. One step in the process is to wash the board (by dipping or spraying) with a solvent to remove those residues for long-term reliability and for the sake of appearance. Some devices (such as sound or pressure transducers) have openings for their functioning, and their performance would be adversely affected if the solvent or the residues got washed into the opening and lodged there. Therefore, such devices often have a sticker that covers the opening(s) that should not be removed until after the washing. Removing the stickers adds an extra step to the process, so for really high-volume manufacturing, it is often worthwhile to select parts that are declared "washable" to begin with.
{ "source": [ "https://electronics.stackexchange.com/questions/98556", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/35276/" ] }
98,779
Consider the following circuit... simulate this circuit – Schematic created using CircuitLab Now suppose the resistor has infinite resistance. Then obviously the current through the resistor will be zero. Now if we apply Ohm's law to this situation then the voltage drop across the resistor will be zero (since the current through the resistor is zero). So it means that the points A and B are at the same potential. But that's not possible since a resistor with infinite resistance will drop all the voltage across it. Isn't it? So is Ohm's law violating itself?
You are confused about what the concept of infinity means. Infinity isn't a number that can ever actually measure a quantity of something, like resistance, because it's not a real number. As Wikipedia aptly puts it:

"In mathematics, "infinity" is often treated as if it were a number (i.e., it counts or measures things: "an infinite number of terms") but it is not the same sort of number as the real numbers."

When we talk about an "infinite" resistance, what we are really considering is this: as the resistor gets arbitrarily large, what does something (current, voltage, etc) approach? For example, we can say that as the resistance gets arbitrarily large, current gets arbitrarily small. That is, it approaches zero:

$$ \lim_{R\to\infty} \frac{15\mathrm V}{R} = 0\mathrm{A} $$

That's not the same as saying the current is zero. We can't ever increase R all the way to infinity, so we can't ever decrease current to zero. We can just get arbitrarily close. That means you can't now do this:

$$ \require{cancel} \cancel{0\mathrm A \cdot \infty \Omega = ?}$$

This is a bit of a mathematical contradiction by most definitions of infinity, anyhow. Most numbers, when multiplied by an arbitrarily large number, approach infinity. But anything multiplied by zero is zero. So when you multiply zero by an arbitrarily large number, what do you get? I haven't a clue. Read more about it on Mathematics.SE: Why is Infinity multiplied by Zero not an easy Zero answer?

You could ask: as the current becomes arbitrarily small, what does the resistance approach?

$$ \lim_{I\searrow 0} \frac{15\mathrm V}{I} = \infty \Omega $$

However, if you look closely, you will notice that if \$I = 0\$, then you are dividing by zero, which is your hint that you are approaching something that can't happen. This is why we must ask this question as a one-sided limit.

Leaving the realm of mathematics, and returning to the realm of electrical engineering: what do you really get if you remove the resistor from that circuit, and leave it open? What you have now is more like the circuit below (schematic created using CircuitLab).

C1 represents the (extremely small) capacitance between the two wires that aren't connected. Really, it was there all along but wasn't significant until the resistance went away. See Why aren't wires capacitors? (answer: they are) and everything has some capacitance to everything else.
{ "source": [ "https://electronics.stackexchange.com/questions/98779", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/34847/" ] }