12,533
This is probably the dumbest question ever, but I am an electronics nublet. I understand what capacitors do, and I've been reading beginner Electronics books and such, but I don't quite understand when to use them? Sometimes in these books they just seem kind of thrown in. I understand they are meant to smooth out current, but I still am not sure I understand when to use them. Like I said, this is probably a nublet question to the max. But most information I find is more about what they are rather than when to use them. Edit: For clarity, I mean in SMALL electronic applications. Think simple circuits and such.
When I first started out in electronics I struggled with the same question. The problem is that capacitors are used in a vast number of different ways. However, as you're just starting out in electronics you probably only need to know about a few of these. The most widely used and basic of these are:

Power supply smoothing
This is the easiest and most widely used application of a capacitor. If you stick a big beefy electrolytic capacitor (the bigger the better) across the output of a rectifier, it will fill in the gaps created by rectifying an AC waveform, producing relatively smooth DC. It works by repeatedly charging during the peaks and discharging during the gaps. However, the more load you put on it, the quicker the capacitor will drain and the more ripple you'll get.

Timing
If you supply power to a capacitor through a resistor, it will take time to charge. If you connect a resistive load to a charged capacitor, it will take time to discharge. The key thing to understand about timing circuits is that capacitors appear to be a short circuit while they are charging, but as soon as they are fully charged they appear to be an open circuit.

Filtering
If you apply DC to a capacitor, it will charge and then block any further current from flowing. However, if you apply AC, current will flow. How much current flows depends on the frequency of the AC and the value of the capacitor.
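The timing behaviour described above follows the standard RC charging curve. A minimal numerical sketch, using assumed example values (10 kΩ, 100 µF) that are not from the answer itself:

```python
import math

def capacitor_voltage(v_supply, r_ohms, c_farads, t_seconds):
    """Voltage across a charging capacitor after t seconds (RC charging curve)."""
    tau = r_ohms * c_farads          # time constant in seconds
    return v_supply * (1 - math.exp(-t_seconds / tau))

# Example: a 10 kohm resistor charging a 100 uF capacitor from a 5 V supply.
# The time constant is 10e3 * 100e-6 = 1 second.
print(capacitor_voltage(5.0, 10e3, 100e-6, 1.0))   # about 3.16 V (63% of 5 V)
print(capacitor_voltage(5.0, 10e3, 100e-6, 5.0))   # nearly 5 V after 5 time constants
```

This is why an RC pair makes a usable timer: pick R and C so that the voltage crosses some threshold after the delay you want.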
{ "source": [ "https://electronics.stackexchange.com/questions/12533", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }
12,600
I see recommendations of 2-4 mils for solder mask expansion. But why is it necessary?
If the solder mask expansion were 0, in theory -- assuming everything aligned perfectly -- the board would work fine. In practice, things never align perfectly. The actual hole punched in the solder mask may be slightly smaller than what you specified ("shrinkage"), and that hole is almost always placed in a slightly different location than what you specified ("movement"). If your solder mask expansion is too small, these misalignments cause the solder mask to partially or completely overlap SMT pads and through-hole pads. If the solder mask covers most or all of a pad, the SMT part will be completely disconnected from that pad, and the board will immediately fail the end-of-line go/no-go test. Many people specifically design the pads of a footprint to comply with IPC's fillet recommendations. If the solder mask even partially covers such a pad, the fillet of solder will be smaller than a person looking only at the copper might expect. If the fillet is too small, the (SMT or through-hole) part will not be mechanically attached as well. After a few thousand cycles of vibration, the solder may eventually crack, and the part will be completely disconnected from that pad or hole. Then your customer will notice the problem. (This is much worse than failing the end-of-line go/no-go test.) Daniel Grillo gives an excellent explanation of what happens if the solder mask expansion is too big.
{ "source": [ "https://electronics.stackexchange.com/questions/12600", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1278/" ] }
12,774
I have a huge LED matrix which will draw 3.1 A of current. However, I only have a 15 V 1 A PSU. Would it be possible to use a transistor to amplify the current after I've lowered the voltage? Could you recommend such a transistor?
15 V * 1 A is 15 watts. 12 V * 3.1 A is over 37 watts. You cannot create power out of nothing.
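The arithmetic behind this answer, made explicit as a quick sketch (using the 3.1 A figure from the question):

```python
def power_w(volts, amps):
    """Electrical power in watts."""
    return volts * amps

available = power_w(15, 1)      # the PSU can deliver at most 15 W
required = power_w(12, 3.1)     # the LED matrix needs about 37 W

# A transistor only controls current flow; it cannot add power the supply lacks.
print(required > available)     # True: this PSU cannot run the matrix
```

A transistor (or even an ideal DC-DC converter) can trade voltage for current, but never exceed the input power; here even a lossless converter could supply only 15 W / 12 V = 1.25 A.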
{ "source": [ "https://electronics.stackexchange.com/questions/12774", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2824/" ] }
12,801
I need a 12 V 2.56 A regulator. I happen to have two 2 A 12 V linear regulators I bought and did nothing with. I was wondering: could I wire them together and get 12 V 4 A, or would I have to divide my circuit between the two regulators? A schematic of how I would wire them would be much appreciated.
It all depends on the regulator, and on how "balanced" the two should be. By "balanced" I mean: should each regulator supply exactly 50% of the current, or can it be more like 20%/80%?

The normal method is to run the output of each regulator through a diode and then connect the outputs of the diodes together. This is called a "wired OR", and the diodes are referred to as "OR-ing diodes". The diodes are there mainly to isolate one regulator from the other. While simple, this method doesn't always work well. First, you have the voltage drop of the diodes to contend with. Second, the regulators are not well balanced; in practice the split is in the 80%/20% range. As the load increases the balancing gets better, but it's never perfect. Because of this balancing issue, the maximum load is not quite double what one regulator can do.

The better method is called "active load balancing". It still uses the OR-ing diodes, but this time a small circuit dynamically adjusts the output voltage of the regulators to keep the load balanced. In this case the regulators can be kept very close to 50% of the total load each, and thus the maximum current can be double what one regulator can achieve by itself. This kind of circuit is tricky, but not beyond an experienced hobbyist.

A third method requires regulators with an isolated output. In this case, you set up each regulator to provide all the current but half the voltage (6 V @ 2.56 A in your case), then wire the regulators in series to get the proper voltage. Done this way, the regulators will balance fairly close to 50%/50% without any tricky balancing circuit. Of course, this isn't appropriate for every situation.

The fourth method is simple: just wire the two regulators in parallel. Without knowing a great deal about the regulators and their behavior in this situation I wouldn't even attempt this, since if it doesn't work it will probably fail catastrophically. Some regulators, however, are designed with this in mind and document it in their datasheets. If yours is one of these, go for it. The danger here is that the combination of the two regulators could cause a control loop instability, making the output very noisy and forcing one regulator to take most of the load.

Personally, I'd either buy a single regulator rated for the current you require, or partition your circuit so that one regulator powers half and the other regulator powers the other half.
{ "source": [ "https://electronics.stackexchange.com/questions/12801", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2824/" ] }
12,851
For three-phase electricity the waves are offset by 120 degrees (\$2\pi/3\$ rad). Why aren't the phases closer together? Is it because it would affect the frequency of the phases? How was this 120 degrees chosen?
When there's 120° between phases, the sum of the three voltages at any instant is zero. This means that with a balanced load no current flows in the return line (neutral). Also, if each phase is 230V with respect to neutral (star operation), then there will be 230V \$\times\$ \$\sqrt{3}\$ \$\approx\$ 400V between any two phases (triangle or delta operation), and those line-to-line voltages are also equally spaced, i.e. at 120° angles. (images from http://www.electrician2.com/electa1/electa3htm.htm )
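Both claims above (the zero sum with a balanced load, and the \$\sqrt{3}\$ line-to-line factor) can be checked numerically. The 50 Hz frequency and the sample instant below are arbitrary choices for illustration:

```python
import math

def phase_v(v_rms, t, f=50.0, offset_deg=0.0):
    """Instantaneous voltage of a sine phase with the given RMS value."""
    return v_rms * math.sqrt(2) * math.sin(2 * math.pi * f * t + math.radians(offset_deg))

t = 0.0137                          # arbitrary instant in seconds
va = phase_v(230, t, offset_deg=0)
vb = phase_v(230, t, offset_deg=-120)
vc = phase_v(230, t, offset_deg=-240)

print(abs(va + vb + vc) < 1e-9)     # True: the three phases sum to zero
print(round(230 * math.sqrt(3)))    # 398, the nominal "400 V" between phases
```

The sum is zero at every instant, not just this one, which is exactly why a balanced load needs no neutral current.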
{ "source": [ "https://electronics.stackexchange.com/questions/12851", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1623/" ] }
12,865
For blue LEDs with a forward voltage of 3.3 V and supply voltage of 3.3 V, is a series resistor still needed to limit current? Ohm's Law in this case says 0 Ω, but is this correct in practice? Perhaps just a small value like 1 or 10 Ω just to be safe?
No, it's not correct, if only because neither the LED nor the power supply is exactly 3.3V. The power supply may be 3.28V and the LED voltage 3.32V, and then the simple calculation for the series resistor no longer holds. The model of an LED is not just a constant voltage drop, but rather a constant voltage in series with a resistor, the internal resistance. Since I don't have the data for your LED, let's look at this characteristic for another LED, the Kingbright KP-2012EC: For currents higher than 10mA the curve is straight, and the slope is the inverse of the internal resistance. At 20mA the forward voltage is 2V; at 10mA it is 1.95V. Then the internal resistance is \$R_{INT} = \dfrac{V_1 - V_2}{I_1 - I_2} = \dfrac{2V - 1.95V}{20mA - 10mA} = 5\Omega\$. The intrinsic voltage is \$V_{INT} = V_1 - I_1 \times R_{INT} = 2V - 20mA \times 5\Omega = 1.9V.\$ Suppose we have a power supply of 2V; then the problem looks a bit like the original, where we had 3.3V for both supply and LED. If we connect the LED through a 0\$\Omega\$ resistor (both voltages are equal, after all!) we get an LED current of 20mA. If the power supply voltage rises to 2.05V, just a 50mV increase, then the LED current becomes \$ I_{LED} = \dfrac{2.05V - 1.9V}{5\Omega} = 30mA.\$ So a small change in voltage results in a large change in current. This shows in the steepness of the graph, caused by the low internal resistance. That's why you need an external resistance which is much higher, so that the current is better under control. Of course, a voltage drop of 10mV over, say, 100\$\Omega\$ gives only 100\$\mu\$A, which will be hardly visible, so a sufficiently large voltage difference is also required. You always need a reasonably large voltage drop across the resistor to get a more or less constant LED current.
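The answer's numbers can be reproduced with a few lines. The LED model (1.9 V intrinsic voltage, 5 Ω internal resistance) is taken from the answer; the 5 V supply and 150 Ω resistor in the last two lines are assumed examples to show the stabilising effect:

```python
V_INT = 1.9   # intrinsic LED voltage from the answer (V)
R_INT = 5.0   # internal LED resistance from the answer (ohms)

def led_current(v_supply, r_series=0.0):
    """LED current (amps) for a given supply and external series resistor."""
    return (v_supply - V_INT) / (R_INT + r_series)

print(round(led_current(2.00) * 1000, 1))       # 20.0 mA with no resistor
print(round(led_current(2.05) * 1000, 1))       # 30.0 mA: +50 mV gives +50% current
print(round(led_current(5.00, 150) * 1000, 1))  # 20.0 mA with a 150 ohm resistor
print(round(led_current(5.05, 150) * 1000, 1))  # 20.3 mA: the same +50 mV barely matters
```

With the external resistor dominating the total resistance, supply-voltage variation has only a small effect on the current, which is the whole point of the answer.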
{ "source": [ "https://electronics.stackexchange.com/questions/12865", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2654/" ] }
12,869
We are currently using the Wiz108SR device and need to provide this as a consumer ready component with a power supply. We would rather not build and package the powersupply but instead purchase something that is already done and packaged. Does anybody know of a similar product that we can buy, rather than this DIY component?
{ "source": [ "https://electronics.stackexchange.com/questions/12869", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1960/" ] }
13,063
I understand that in "saturation mode", a BJT functions as a simple switch. I've used this before driving LEDs, but I'm not sure I understand clearly how I got the transistor into that state. Does a BJT become saturated by raising Vbe above a certain threshold? I doubt this, because BJTs, as I understand them, are current-controlled, not voltage-controlled. Does a BJT become saturated by allowing Ib to go over a certain threshold? If so, does this threshold depend on the "load" that is connected to the collector? Is a transistor saturated simply because Ib is high enough that the beta of the transistor is no longer the limiting factor in Ic?
A transistor goes into saturation when both the base-emitter and base-collector junctions are forward biased, basically. So if the collector voltage drops below the base voltage while the emitter voltage is also below the base voltage, the transistor is in saturation. Consider this common-emitter amplifier circuit. If the collector current is high enough, the voltage drop across the collector resistor will be big enough to pull the collector voltage below the base voltage. Note, though, that the collector voltage can't go too low, because the base-collector junction then acts like a forward-biased diode! You will have a voltage drop across the base-collector junction, but it will not be the usual 0.7V; it will be more like 0.4V. How do you bring it out of saturation? You could reduce the base drive to the transistor (either reduce the voltage \$V_{be}\$ or reduce the current \$I_b\$), which reduces the collector current, which in turn decreases the voltage drop across the collector resistor. This raises the voltage at the collector and acts to bring the transistor out of saturation. In the "extreme" case, this is what is done when you switch the transistor off. The base drive is removed completely: \$V_{be}\$ is zero and so is \$I_b\$. Therefore \$I_c\$ is zero too, and the collector resistor acts like a pull-up, bringing the collector voltage up to \$V_{CC}\$. A follow-up comment on your statement: Does a BJT become saturated by raising Vbe above a certain threshold? I doubt this, because BJTs, as I understand them, are current-controlled, not voltage-controlled. There are a number of different ways to describe transistor operation. One is to describe the relationship between the currents in the different terminals: $$I_c = \beta I_b$$ $$I_c = \alpha I_e$$ $$I_e = I_b + I_c$$ Looking at it this way, you could say that the collector current is controlled by the base current.

Another way of looking at it is to describe the relationship between base-emitter voltage and collector current, which is $$I_c = I_s e^{\frac{V_{be}}{V_T}}$$ Looking at it this way, the collector current is controlled by the base voltage. This is definitely confusing; it confused me for a long time. The truth is that you cannot really separate the base-emitter voltage from the base current, because they are interrelated. So both views are correct. When trying to understand a particular circuit or transistor configuration, I find it is usually best to pick whichever model makes it easiest to analyze. Edit: Does a BJT become saturated by allowing Ib to go over a certain threshold? If so, does this threshold depend on the "load" that is connected to the collector? Is a transistor saturated simply because Ib is high enough that the beta of the transistor is no longer the limiting factor in Ic? The bold part is basically exactly right. But the \$I_b\$ threshold is not intrinsic to a particular transistor; it depends not only on the transistor itself but on the configuration: \$V_{CC}\$, \$R_C\$, \$R_E\$, etc.
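The "threshold depends on the configuration" point can be made concrete with a quick calculation. Everything in this sketch (Vcc, Rc, beta, Vce(sat)) is an assumed example value, not taken from the answer:

```python
VCC = 5.0        # supply voltage (assumed)
RC = 1000.0      # collector resistor (assumed)
BETA = 100.0     # current gain (assumed)
VCE_SAT = 0.2    # typical saturation voltage (assumed)

# In saturation the collector resistor, not beta, limits the collector current:
ic_sat = (VCC - VCE_SAT) / RC        # 4.8 mA
# The base current at which beta stops being the limiting factor:
ib_threshold = ic_sat / BETA         # 48 uA

print(round(ic_sat * 1000, 2))       # 4.8  (mA)
print(round(ib_threshold * 1e6, 1))  # 48.0 (uA)
```

Change Rc or Vcc and the threshold moves, which is why there is no single saturating base current for a given transistor; switch designs typically drive the base several times harder than this minimum to guarantee saturation.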
{ "source": [ "https://electronics.stackexchange.com/questions/13063", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3240/" ] }
13,091
Quick question: is using a capacitor rated for high voltage (let's say 35 V) in a system that, let's say, supplies 5 V (like for LEDs or what have you) dangerous? Since it can store up to 35 V, will it like somehow store a bunch and then release it at once, damaging the system, or it is OK to use a higher-rated capacitor than the voltage being supplied?
While not a perfect analogy, think of the capacitor's voltage rating as similar to the litre capacity of a tank: it will hold "35 V", but you needn't fill it completely. And as @JustJeff said, you'd be wise to ensure the container can hold more than necessary to prevent spills (and in an electrolytic capacitor's case, the electrolyte can expand and quite literally "spill" out). Note that a better analogy for capacity would actually be the farad, since that is the measure of a capacitor's charge capacity; don't confuse it with voltage, which is the potential to do work.
{ "source": [ "https://electronics.stackexchange.com/questions/13091", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }
13,205
What's the use of the teardrop shapes around some PCB pads?
There are primarily two reasons to use teardrops: They avoid a pocket (where the trace meets the pad) that could collect acid from the PCB etching process and cause problems later. They reduce mechanical and thermal stress, resulting in fewer hairline cracks in the trace. That being said, on professionally made PCBs teardrops are rarely needed. It's almost more of an aesthetic thing than a solution to a real problem. I've done many boards with and without teardrops and have yet to notice a difference. In my opinion, they are more trouble than they are worth.
{ "source": [ "https://electronics.stackexchange.com/questions/13205", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3920/" ] }
13,472
Can we make something like chip reader, which can understand chip design and generate blueprint of it?
ChipWorks has an excellent blog about doing exactly this, with lots of great pictures. FlyLogic also has an excellent blog. The short answer is that it is absolutely possible. IC dies are basically really small circuit boards. You can reverse engineer them pretty easily; it just takes a different tool set. I want to particularly call attention to some posts FlyLogic did on reverse-engineering ICs (how topical!). (Image from the FlyLogic website.)
{ "source": [ "https://electronics.stackexchange.com/questions/13472", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3982/" ] }
13,746
Please be kind, I am an electronics nub. This is in reference to getting an LED to emit photons. From what I read (Getting Started in Electronics - Forrest Mims III and Make: Electronics) electrons flow from the more negative side to the more positive side. In an example experiment (involving a primary dry cell, a SPDT switch, a resistor and an LED) it states that the resistor MUST be connected to the anode of the LED. In my mind, if the electrons flow from negative to positive, wouldn't the electron flow run through the LED before the resistor; thereby making the resistor pointless?
The resistor can be on either side of the LED, but it must be present. When two or more components are in series, the current is the same through all of them, so it doesn't matter which order they are in. I think the way to read "the resistor must be connected to the anode" is as "the resistor cannot be omitted from the circuit."
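Whichever side it sits on, the resistor's value is chosen the same way: it drops the excess supply voltage at the desired LED current. A sketch with assumed example values (9 V battery, red LED with a 2.0 V forward drop, 15 mA target), none of which come from the original experiment:

```python
def series_resistor(v_supply, v_led, i_led):
    """Series resistor needed to drop the excess voltage at the target current."""
    return (v_supply - v_led) / i_led

r = series_resistor(9.0, 2.0, 0.015)
print(round(r))   # 467 ohms; the nearest standard value, 470, would be fine
```

Because the same current flows through every series element, this calculation is identical whether the resistor is on the anode or cathode side.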
{ "source": [ "https://electronics.stackexchange.com/questions/13746", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4070/" ] }
13,873
Once a chip overheats it can start malfunctioning - for example many programs may start failing once some or all parts in a computer overheat. What exactly happens that makes chips malfunction when they overheat?
To expand on the other answers:

Higher leakage currents: these cause additional heating and can easily lead to thermal runaway.

Lower signal-to-noise ratio as thermal noise increases: this raises the bit error rate, so program data and commands can be misread, causing seemingly "random" operation.

Dopants become more mobile with heat: in a badly overheated chip the transistors can cease being transistors. This is irreversible.

Uneven heating can break down the crystalline structure of the silicon: you can see a similar effect by putting glass through thermal shock, where it shatters. A bit extreme, but it illustrates the point. This is also irreversible.

ROM memories that depend on a charged, isolated plate can lose their contents as temperature increases: with enough thermal energy, electrons can escape the charged conductor, corrupting program memory. This regularly happens to me during soldering, when someone overheats an IC that is already programmed.

Loss of transistor control: with enough thermal energy, electrons can jump the bandgap on their own. A semiconductor is a material with a bandgap small enough to be easily bridged with dopants, but large enough that the intended operating temperature does not turn it into a conductor (which happens when the thermal energy of the material approaches the gap). This is an oversimplification and the basis of another post, but I wanted to add it in my own words.

There are more reasons, but these are an important few.
{ "source": [ "https://electronics.stackexchange.com/questions/13873", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3552/" ] }
13,912
What is the reverse recovery time in a diode?
If a diode is conducting in the forward direction and is suddenly switched to a reverse-biased condition, it will continue to conduct in the reverse direction for a short time while the stored charge carriers are swept out of the junction. The current through the diode during this small recovery time can be fairly large in the reverse direction. Once the carriers have been flushed and the diode is acting as a normal blocking device in the reverse condition, the current drops to leakage levels. This is just a generic description of reverse recovery time. It can affect quite a few things, depending on context, as mentioned in the comments.
{ "source": [ "https://electronics.stackexchange.com/questions/13912", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3916/" ] }
14,146
My first Arduino project: I have made a headlight sensor that will activate an outside light when a car's headlights hit it. I know I could go buy one, but where is the fun in that? Now I would like to make a "real" one to mount outside my house so I can reuse my Arduino. I really have no idea where or how to start. I have had a look but can't find any info on this. I am sure someone here will be able to suggest something.
I assume from your question you want the essential parts of an Arduino along with your circuit in a permanent form. Here are my steps: build the entire circuit on a breadboard or two, so you know it all works; then transfer it to a permanent form of circuit, testing as you go. Take it a step at a time, and it'll work out nicely.

1. Build the entire circuit on a breadboard or two. Then you know it all works, and that you have all of the components ready to make your permanent circuit. This will involve making and programming part of your Arduino on the breadboard. I don't think you'll need the USB part, so it can be all through-hole components, which are relatively straightforward to begin with. There are instructions which show how to make an Arduino without USB, and it can be programmed using your existing Arduino; you don't need a programmer. For example: http://arduino.cc/en/Tutorial/ArduinoToBreadboard I'd make a sketch of the circuit schematic to ensure I understood what I had on the breadboard, and to guide the next step.

2. I'd recommend you transfer the whole thing to veroboard (stripboard) rather than make a PCB. Veroboard will be much quicker and cheaper; you could easily spend 10x longer learning to use Eagle well enough to get a PCB made than it would take to design and make the entire circuit on stripboard. You can do a design on squared paper, but there are some CAD programs to help if you google for veroboard CAD. I have never used them; I use paper and a soft pencil, or a vector drawing package. A friend used PowerPoint because that's what he had to hand. There are examples online of Arduinos built on veroboard, showing the designs people produced. Try to get a reasonable layout for your design before making it; this is where the soft pencil and eraser come in :-) Typically the first couple of attempts are too big or too small. Make it easy for yourself and get plenty of squared or graph paper :-) You can follow a veroboard Arduino design and test that it works, then focus on your extra circuit. Use a socket for the microcontroller rather than soldering it in directly. Most of the other parts are a few dollars total, so I'd get a few of each part for spares and practice. (To put it in context, some electronics companies charge more for delivery than those parts will cost, so getting several sets of parts makes sense, especially if you intend to make some more things.) Total cost for the Arduino part should be under $10. Good luck, and I hope you enjoy it.
{ "source": [ "https://electronics.stackexchange.com/questions/14146", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4017/" ] }
14,149
We have been working with servo's in some our field equipment, however require a slightly different application. We will be deploying some underwater cameras for an extended period (3 months) and to prevent fouling of the camera face will need to make a wiper unit. Is there a product, or does anyone have a schematic which will allow for timed control of the servos? We are suggesting every hour for the servo to complete a return cycle or thereabouts. I'm sure this would be very straight forward for the right person, however i'm a bit lost. I can make up a PCB if shown what needs to go where, but don't really understand the ins and outs of how it all works. Thanks for your assistance. Mark
{ "source": [ "https://electronics.stackexchange.com/questions/14149", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4206/" ] }
14,250
I found a circuit where in the classical Graetz rectifier capacitors were added in parallel to each diode. It looked something like this: After the rectifier itself, there was the usual huge capacitor and regulator and so on. So why the smaller capacitors near the diodes?
Power supply transformers have leakage inductance and parasitic capacitance, and when the diodes in a bridge rectifier switch off, these "non-ideal" elements form a resonant circuit that can oscillate at high frequency. This high-frequency ringing can then couple into the rest of the circuitry. Snubber circuits are used to mitigate this problem. Using capacitors alone doesn't damp the ringing completely, but it does drop it to a lower frequency where the coupling effect is smaller. An RC circuit across each diode can damp the ringing almost completely. You can read more in the following excellent paper: http://www.hagtech.com/pdf/snubber.pdf
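The frequency-lowering effect of the capacitors can be estimated from the resonance formula \$f = 1/(2\pi\sqrt{LC})\$. The leakage inductance and capacitances below are assumed illustrative values, not measurements of any particular transformer:

```python
import math

def ringing_freq(l_henries, c_farads):
    """Resonant frequency of the leakage-inductance / capacitance tank."""
    return 1.0 / (2 * math.pi * math.sqrt(l_henries * c_farads))

L_LEAK = 100e-6        # assumed transformer leakage inductance (100 uH)
C_PARASITIC = 100e-12  # assumed diode/winding capacitance (100 pF)

f_bare = ringing_freq(L_LEAK, C_PARASITIC)
f_snub = ringing_freq(L_LEAK, C_PARASITIC + 10e-9)  # add 10 nF across the diode

print(round(f_bare / 1e3))  # 1592 kHz: fast ringing that couples easily
print(round(f_snub / 1e3))  # 158 kHz: the same tank, at a less troublesome frequency
```

The capacitor lowers the resonant frequency by roughly the square root of the capacitance ratio; adding the series resistor of a proper RC snubber is what actually dissipates the ringing energy.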
{ "source": [ "https://electronics.stackexchange.com/questions/14250", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1240/" ] }
14,404
I have seen a few different ways of adding DC bias to an audio signal. I have simulated them and they all give me similar results, but I can't figure out why to choose A over B or C. My audio source is line-level audio, -2V to +2V AC, passed through a 220uF coupling cap and then a low-pass filter (RC, 2-pole). The signal will be read by an ADC. The first way is using a voltage divider: Simple Biasing Circuit. This is pretty self-explanatory and I understand how it works. I have also seen this same design using a diode, but could not find an example. Next example: How to read an audio signal using ATMega328? - picture is from endolith's answer. Another one that I have seen is: I don't quite understand this FET-BJT preamp circuit. That schematic is for a pre-amp, and there are 2 versions, both of which add a bias. My question is: what is the best practice for adding bias to an audio signal? What are some of the other ways to add a DC bias to the signal? Edit/Update: Looking at the answers, the second one looks like it will work best for my application, using something like this. Are there any other improvements I can make, other than a stable Vref and power rails?
Don't use the first circuit. Any noise or spikes on the power supply will be mixed with your signal. Because the bias point is connected directly to the signal, you can't filter out power supply noise without also filtering out the signal. Do use the second circuit. It produces a mid-point voltage that is tightly coupled to ground, so the DC component is half the supply, but the AC component (noise and spikes) is filtered out by the capacitor. That's not a complete circuit, though; you still need to connect it to your signal. This is what you're trying to do: The output is the same as the input, just shifted upward by 2.5 V. The resistor on the input ensures that the input side of the capacitor is already at 0 VDC bias when an external circuit is connected, to prevent "pop" sounds (if the voltage suddenly jumped from 2.5 V to 0 V). The resistor on the output side of the AC coupling cap biases that side to the DC bias voltage. If your circuit already has a clean, low impedance DC bias voltage source, connect to that. Otherwise, you can use circuit #2 to generate the bias, like this: (The simulation takes a loong time to reach the DC bias value, though. Hit the "Find DC operating point" menu entry to settle it.) The DC bias voltage is produced by a voltage divider and capacitor to filter out power supply noise. Note that if you use the same Vbias point for multiple signals, they can crosstalk through this point. A larger bias cap reduces crosstalk. A larger coupling capacitor improves low frequency response. But make them too large and they'll take a long time to charge when you flip the power switch. The 3rd diagram is not a biasing circuit; it's a microphone preamplifier.
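As a quick sanity check on component sizing: the coupling cap and the bias resistor on its output side form a high-pass filter, so you want the corner frequency well below the audio band. A small Python sketch (the 10 kOhm bias resistor here is an assumed example value, not taken from the schematics above):

```python
import math

def highpass_corner(r_bias, c_couple):
    """-3 dB corner of the AC-coupling high-pass formed by the
    coupling capacitor and the bias resistor it drives."""
    return 1 / (2 * math.pi * r_bias * c_couple)

# Example values: the 220 uF cap from the question into a
# hypothetical 10 k bias resistor.
fc = highpass_corner(10e3, 220e-6)   # well below 20 Hz, so the full audio band passes
```

With those numbers the corner lands around 0.07 Hz, which is why a 220 uF coupling cap is comfortably oversized for line-level audio.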
{ "source": [ "https://electronics.stackexchange.com/questions/14404", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2060/" ] }
14,660
I have an almost dead laptop battery and read in this forum that "The problem with most of the broken batterys is that they are exhaustive discharged. the trick to solve this is to give them a high voltage electricity source like a laptop charger (20V). I did that to all the 3 lion batteries I found inside the macbook battery but just for a few seconds to reactivate them and now they are working fine" I am curious whether this is theoretically possible if nothing else.
This is a known effect of NiCad/NiMH batteries. Doing it to any other battery risks setting something on fire. Basically, nickel-metal batteries, when over-discharged, can grow little metal whiskers or "dendrites" between the internal plates, shorting the cell out. Applying a high voltage to the cell causes enough current to flow that the dendrite fuses and melts, and therefore the cell is no longer internally shorted and can hold a charge. (This is what the guy in avra's response was doing to the batteries.) However, letting a cell go completely flat (0 V) is quite bad for it, so the battery will never completely recover. It may still hold some charge afterwards, though. Note that what the guy in the quote is doing is not applying a high voltage to the cells, but applying a high charge current to them. The laptop charger he describes is likely only good for a few amps, and its output voltage is probably dropping massively when it is connected. The only thing I can think of is that some laptop batteries have a built-in protection system. If the battery is discharged fully, and then left to sit and self-discharge for a while, the protection system may not even get enough power to turn itself on, and therefore the laptop will not realize a battery is even present. When he manually charges the batteries a small amount with his external adapter, by taking the battery apart, it may be just enough to activate the battery protection circuit, so it can charge normally afterwards. Applying a significant over-voltage to ANY cell chemistry for any period of time can be dangerous. On lithium cells, you will get metallic lithium plating out of the electrolyte when the cell voltage is above 4.3 V. Metallic lithium can catch fire when exposed to (the moisture in) the air. In lead-acid batteries, you will begin to electrolyze the electrolyte, causing the battery to vent hydrogen and oxygen. This is EXTREMELY EXPLOSIVE.
{ "source": [ "https://electronics.stackexchange.com/questions/14660", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4397/" ] }
14,743
Obviously the panel needs fiducials, and I also place three of them on each PCB. Some IC footprints, however, also show local fiducials. I've seen them, for instance, on certain TQFP footprints. When are they required?
Fiducials are used by the pick and place machine to provide better accuracy when placing components on the PCB. There is a camera that recognizes the fiducials and uses them as registration points to calibrate where the machine thinks it is on the PCB. There are two types of fiducials: global and local. Normally a PCB will have 3 global fiducials per side (top & bottom), usually in the corners of the PCB. This is so it can recognize the board's overall orientation and position. Local fiducials are located near some of the critical parts. Usually there are two fiducials for each part, in opposite corners. If you have several critical parts that are close together, then a fiducial can be shared by two or more parts, reducing the number of fiducials required and the PCB space taken up by them. Where you need local fiducials really depends on the pick and place machine that will be used, and the placement accuracy required by the component. Chips with a finer pin pitch will need fiducials more. It's interesting to note that TQFPs need fiducials more than most BGAs. Most TQFPs have a pin pitch of around 0.5 mm, while most BGAs are 0.8 to 1.27 mm. BGAs also have a cool ability to somewhat self-align due to the surface tension of the melted solder. But I need to stress that this is very component and machine dependent, so check with your assembly shop. Also machine dependent is the construction of the fiducial itself: things like how big the pad is, and how much the soldermask is pulled back. Usually the fiducial is round, but sometimes square or bow-tie shaped. Another thing is that some assembly shops will request fiducials just to feel good about things, but don't really need them. My second-to-last PCB had lots of fine-pitch BGAs, QFNs, and TQFPs and had no fiducials on it, but there were no issues with parts placement. My current board is nowhere near as difficult but they are requesting fiducials. Go figure.
I'll humor them and put the fiducials on it.
{ "source": [ "https://electronics.stackexchange.com/questions/14743", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2064/" ] }
14,964
Laptops are often shipped with switching power supplies that, among other things, bear the phrase "for use with information equipment only" on their body. Why can't I use the same power supply to power a motor or a set of LEDs or a lamp, given its output voltage and wattage are suitable? I mean, it outputs say 36 volts of direct current and can supply say 50 watts - okay, that would do for my motor/LEDs/lamp, so why can't I use that supply? What's so special about those power supplies that they bear the "IT equipment only" mark?
It is part of the safety regulations. When deciding which specifications apply for testing, the product application is taken into account. So if a laptop manufacturer provides an external PSU, they will have it tested to the relevant specs, which include the product category. There is no guarantee it will be suitable for other applications. For example, if you drive a motor which has accessible metal parts, and the laptop PSU doesn't give you an earth connection back to the socket, there won't naturally be one for you unless you make a separate connection to earth.
{ "source": [ "https://electronics.stackexchange.com/questions/14964", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3552/" ] }
15,056
I am building a high-speed (10-20 ns on BC847-class transistors) digital "buffer"/"inverter" out of BJTs. Schematic is attached. While I can prevent saturation of the low-side BJT by adding a Schottky diode, that's not going to work for the high side. Any hints other than decreasing the resistance of the base resistor?
Anti-saturation diodes are connected in parallel to the C-B-diode of the transistor that is to be kept from saturation. You are doing this correctly at the npn (anode at base and cathode at collector), and it should be done exactly the same way at the pnp, just that the diode is the other way round in this transistor: cathode at base, anode at collector. I am not really sure how you chose your base resistors. I assume you have a supply voltage of 5 V and a rectangular base drive signal (0 V, 5 V). I would suggest you use identical values for both base resistors. With 5 k\$\Omega\$, it is likely that the high value of the base resistor does more harm than an anti-sat-diode would do good. Something in the range of 200...500 \$\Omega\$ for each resistor seems better to me. If you want to push the speed even further, you can try paralleling the base resistors with small (approx. 22 pF) capacitors. The trick about finding the right value for the capacitor would be to make it somewhat equal to the effective capacitance at the base, thus forming a 1:1 voltage divider for the high frequency part of the rising or falling voltage edge. Edit #1: Here is the schematic I used to check with LT Spice. The input signal (rectangular, 0 V and 5 V) is fed into three similar BJT inverters, each using a complementary BC847 and BC857 pair. The one on the left has no special tricks to speed it up, the one in the middle uses Schottky diodes for anti-saturation and the one on the right also features a high-speed bypass along each base resistor (22 pF). The output of each stage has an identical load of 20 pF, which is a typical value for some trace capacitance and a subsequent input. The traces show the input signal (yellow), the slow response of the circuit on the left (blue), the response with anti-saturation diodes (red) and the response of the circuit that also uses capacitors (green). You can clearly see how the propagation delay gets less and less. 
The cursors are set at 50 % of the input signal and at 50 % of the fastest circuit's output and indicate a very small difference of 3 ns only. If I find the time, I might also hack the circuit and add real scope pictures. Careful layout will definitely be necessary to achieve sub-10 ns delay times in reality. Edit #2: The breadboard works nicely and shows a delay of < 10 ns on my 150 MHz scope. Pictures will follow later this week. Had to use my good probes, because the cheapo ones showed not much more than ringing... Edit #3: Ok, here's the breadboard: A 1 MHz square wave with 5 V (pkpk) enters the board from the left through the BNC connector and gets terminated into 50 \$\Omega\$ (two paralleled 100 \$\Omega\$ resistors, upper one hidden by probe). Base resistors are 470 \$\Omega\$, capacitors are 30 pF, Schottky diodes are BAT85, transistors are BC548/BC558. The supply is bypassed with 100 nF (ceramic) and a small electrolytic capacitor (10 \$\mu\$F). The first screenshot shows the input and output waveforms at 100 ns/div and with 2 V/div for both traces. (Scope is a Tektronix 454A.) The second and third screenshot show the transitions from low to high and from high to low at the input with 2 ns/div (20 ns time base with additional 10 x horizontal magnification). The traces are now centered vertically on the screen for an easier display of the propagation delay with 1 V/div. The symmetry is very good and shows a difference of < 4 ns between input and output. I would argue that we can actually trust the simulated results. The rise and fall times are very likely faster in reality and just limited by the scope's rise time, but I can think of no reason why the delay between the two signals should not be displayed correctly. There is one thing to pay attention to: With every low-to-high and high-to-low transition, the two transistors tend to cross-conduct very briefly. At higher frequencies of the input signal (approx. 
> 2 MHz), the inverter circuit starts to take a lot of current and does weird things...
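For anyone wanting to reproduce this, the rough arithmetic behind the base-resistor and speed-up-cap choices can be sketched in Python (the capacitance figure is an assumed ballpark for a BC847-class input, not a measured value):

```python
# Rough numbers for a BC847-class stage (illustrative, not measured).
r_base = 470.0      # base resistor, ohms
c_base = 22e-12     # assumed effective input capacitance at the base, farads

# Without a speed-up cap, the base voltage settles with this time constant:
tau = r_base * c_base            # ~10 ns, i.e. comparable to the target delay

# A parallel cap roughly equal to c_base makes a 1:1 capacitive divider,
# so the fast edge couples straight to the base instead of waiting on tau.
c_speedup = c_base
divider_ratio = c_speedup / (c_speedup + c_base)   # 0.5 at high frequency
```

This is why the answer pairs a few hundred ohms of base resistance with a capacitor in the low tens of pF: the bare RC time constant would otherwise eat the whole sub-10 ns delay budget.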
{ "source": [ "https://electronics.stackexchange.com/questions/15056", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2062/" ] }
15,063
Many years ago most electronic devices had internal power supplies only - there was a mains voltage cable running into the unit, where mains AC would be converted and distributed for consumption. That was typical for shavers, TV sets, monitors, printers, and other stuff. Now I see more and more devices that have external power supplies. Either it's a box with two prongs that is plugged right into the outlet or it's a separate box with a mains cable running into it. Either way it has some 12 V to 36 V DC output cable that is then plugged into the device. I could see the following reasons for such a design: easier to suit different voltages and outlets - one single model of device can be equipped with an adapter suitable for the market it targets; less wire with mains voltage - less metal and insulation; less wire with direct mains connection - lower risk of electric shock. What are the actual reasons for making power supplies external?
Yes, I mostly agree with Martin. I've been in early design meetings where we wanted to provide a direct line cord, but that eventually got shot down due to the hassle and expense of getting regulatory approval. We know that consumers don't like wall warts, but unfortunately the compliance issues in getting a product to market force this tradeoff. It's actually not a legal requirement. There are surprisingly few of those, at least here in the US. However, in reality you can't have a consumer product that uses wall power in some form without UL or equivalent certification. You can follow all the best design rules and know your product is at least as safe as others with approval, but nobody wants to gamble on the liability of not having their butt covered by UL. Major retailers, for example, wouldn't touch it without formal approval. If your product sells in the millions, sooner or later someone is going to do something stupid and get zapped. It may even be deliberate fraud just to try to extract a settlement, but that matters little. It helps tremendously in the legal process to say that your product followed "accepted safety practices" and was certified to that effect by UL or equivalent. If you use an external approved power supply so that only low voltage goes to your unit, you are pretty much off the hook safety-wise. The external power supply provides the isolation, and as long as voltages in your unit are 48 V or less and limited to a particular current (I forget the limit), you're basically fine. For a moderate product drawing tens of watts or more, it's usually worth it to put the line cord on directly. Plenty of manufacturers make pre-certified power bricks you can embed into the product. You still will want certification for the whole product, but that's a lot easier and cheaper if you are using a power brick that has already been certified.
In that case they usually just look for overall insulation and spacing, that the proper fuse is before the power brick, the mechanics of how the power enters the unit, etc. If the product is intended for international distribution (and more are these days), you put a standard line cord socket on the product, then provide localized line cords. Power bricks that work over the worldwide range of roughly 90-240 VAC 50/60 Hz are pretty common these days. Above a few tens of watts, most will have power factor correction too.
{ "source": [ "https://electronics.stackexchange.com/questions/15063", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3552/" ] }
15,135
I guess I've been somewhat ignorant when it comes to the finer details of PCB layout. Lately I've read a couple of books that try their best to lead me on the straight and narrow. Here are a couple of examples from a recent board of mine, where I have highlighted three of the decoupling caps. The MCU is in an LQFP100 package and the caps are 100 nF in 0402 packages. The vias connect to the ground and power planes. The top cap (C19) is placed according to best practices (as I understand them). The other two are not. I haven't noticed any problems. But then again the board has never been outside the lab. I guess my question is: how big a deal is this? As long as the tracks are short, does it matter? The Vref pins (reference voltage for the ADC) also have a 100 nF cap across them. Vref+ comes from an onboard TL431 shunt regulator. Vref- goes to ground. Do they require special treatment like shielding or a local ground? EDIT Thanks for the great suggestions! My approach has always been to rely on an unbroken ground plane. A ground plane will have the lowest possible impedance, but this approach may be too simplistic for higher frequency signals. I've made a quick stab at adding local ground and local power under the MCU (the part is an NXP LPC1768 running at 100 MHz). The yellow bits are the decoupling caps. I'll look into paralleling caps. The local ground and power are connected to the GND layer and the 3V3 layer where indicated. The local ground and power are made with polygons (pour). It's going to be a major rerouting job to minimize the length of the "tracks". This technique will limit how many signal tracks can be routed under and across the package. Is this an acceptable approach?
Proper bypassing and grounding are unfortunately subjects that seem to be poorly taught and poorly understood. They are actually two separate issues. You are asking about the bypassing, but have also implicitly gotten into grounding. For most signal problems, and this case is no exception, it helps to consider them both in the time domain and the frequency domain. Theoretically you can analyse in either and convert mathematically to the other, but they each give different insights to the human brain. Decoupling provides a nearby reservoir of energy to smooth out the voltage from very short term changes in current draw. The lines back to the power supply have some inductance, and the power supply takes a little time to respond to a voltage drop before it produces more current. On a single board it can catch up usually within a few microseconds (μs) or tens of μs. However, digital chips can change their current draw a large amount in only a few nanoseconds (ns). The decoupling cap has to be close to the digital chip's power and ground leads to do its job, else the inductance in those leads gets in the way of it delivering the extra current quickly before the main power feed can catch up. That was the time domain view. In the frequency domain, digital chips are AC current sources between their power and ground pins. At DC, power comes from the main power supply and all is fine, so we're going to ignore DC. This current source generates a wide range of frequencies. Some of the frequencies are so high that the little inductance in the relatively long leads to the main power supply starts becoming a significant impedance. That means those high frequencies will cause local voltage fluctuations unless they are dealt with. The bypass cap is the low impedance shunt for those high frequencies. Again, the leads to the bypass cap must be short, else their inductance will be too high and get in the way of the capacitor shorting out the high frequency current generated by the chip.
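To put rough numbers on this frequency-domain view, here is a short Python sketch of a real bypass capacitor's impedance, modeled as the capacitance in series with its mounting inductance (the 2 nH ESL figure is an assumed typical value for a tightly placed 0402; ESR is ignored for simplicity):

```python
import math

def cap_impedance(f, c, esl):
    """Magnitude of a real capacitor's impedance: the capacitance in
    series with its lead/trace inductance (ESR ignored for simplicity)."""
    w = 2 * math.pi * f
    return abs(w * esl - 1 / (w * c))

c = 100e-9    # 100 nF bypass cap
esl = 2e-9    # ~2 nH assumed: a short 0402 mounting; long traces add several nH more

# Self-resonant frequency: above it the "capacitor" behaves as an inductor.
f_srf = 1 / (2 * math.pi * math.sqrt(esl * c))   # ~11 MHz for these numbers

z_100meg = cap_impedance(100e6, c, esl)  # at 100 MHz, dominated by the inductance
```

Every extra nanohenry of trace lowers the self-resonant frequency and raises the impedance floor, which is the quantitative reason the cap has to sit right at the pins.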
In this view, all your layouts look fine. The cap is close to the power and ground pins in each case. However I don't like any of them for a different reason, and that reason is grounding. Good grounding is harder to explain than bypassing. It would take a whole book to really get into this issue, so I'm only going to mention pieces. The first job of grounding is to supply a universal voltage reference, which we usually consider 0 V since everything else is considered relative to the ground net. However, think what happens as you run current thru the ground net. Its resistance isn't zero, so that causes a small voltage difference between different points of the ground. The DC resistance of a copper plane on a PCB is usually low enough so that this is not too much of an issue for most circuits. A purely digital circuit has hundreds of mV noise margins at least, so a few tens or hundreds of μV ground offset isn't a big deal. In some analog circuits it is, but that's not the issue I'm trying to get at here. Think what happens as the frequency of the current running across the ground plane gets higher and higher. At some point the whole ground plane is only 1/2 wavelength across. Now you don't have a ground plane anymore but a patch antenna. Now remember that a microcontroller is a broad band current source with high frequency components. If you run its immediate ground current across the ground plane for even a little bit, you have a center-fed patch antenna. The solution I usually use, and for which I have quantitative proof it works well, is to keep the local high frequency currents off the ground plane. You want to make a local net of the microcontroller power and ground connections, bypass them locally, then have only one connection from each net to the main system power and ground nets. The high frequency currents generated by the microcontroller go out the power pins, thru the bypass caps, and back into the ground pins.
There can be lots of nasty high frequency current running around that loop, but if that loop has only a single connection to the board power and ground nets, then those currents will largely stay off them. So to bring this back to your layout, what I don't like is that each bypass cap seems to have a separate via to power and ground. If these are the main power and ground planes of the board, then that's bad. If you have enough layers and the vias are really going to local power and ground planes, then that's OK as long as those local planes are connected to the main planes at only one point. It doesn't take local planes to do this. I routinely use the local power and ground nets technique even on 2 layer boards. I manually connect all the ground pins and all the power pins, then the bypass caps, then the crystal circuit before routing anything else. These local nets can be a star or whatever right under the microcontroller and still allow other signals to be routed around them as required. However, once again, these local nets must have exactly one connection to the main board power and ground nets. If you have a board level ground plane, then there will be one via some place to connect the local ground net to the ground plane. I usually go a little further if I can. I put 100 nF or 1 μF ceramic bypass caps as close to the power and ground pins as possible, then route the two local nets (power and ground) to a feed point and put a larger (10 μF usually) cap across them and make the single connections to the board ground and power nets right at the other side of the cap. This secondary cap provides another shunt to the high frequency currents that escaped being shunted by the individual bypass caps. From the point of view of the rest of the board, the power/ground feed to the microcontroller is nicely behaved without lots of nasty high frequencies.
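To get a feel for the patch antenna point made earlier, here is a quick Python estimate of the frequency at which a board dimension reaches a half wavelength (the 0.5 velocity factor for waves slowed by FR-4 is a crude assumption):

```python
# When does a ground plane dimension reach a half wavelength?
# Assumes waves travel at roughly half of c in/around FR-4 (illustrative).
c0 = 3e8            # free-space speed of light, m/s
vf = 0.5            # crude velocity factor assumption for FR-4

def half_wave_freq(length_m):
    """Frequency at which length_m is half a wavelength on the board."""
    return (c0 * vf) / (2 * length_m)

f = half_wave_freq(0.10)   # a 100 mm board dimension: about 750 MHz
```

A 100 MHz microcontroller clock has strong harmonics well past 750 MHz, which is why even a modest board can start radiating if the local loop currents are allowed onto the plane.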
So now to finally address your question of whether the layout you have matters compared to what you think best practices are. I think you have bypassed the power/ground pins of the chip well enough. That means it should operate fine. However, if each has a separate via to the main ground plane then you might have EMI problems later. Your circuit will run fine, but you might not be able to legally sell it. Keep in mind that RF transmission and reception are reciprocal. A circuit that can emit RF from its signals is likewise susceptible to having those signals pick up external RF and have that be noise on top of the signal, so it's not just all someone else's problem. Your device may work fine until a nearby compressor is started up, for example. This is not just a theoretical scenario. I've seen cases exactly like that, and I expect many others here have too. Here's an anecdote that shows how this stuff can make a real difference. A company was making little gizmos that cost them $120 to produce. I was hired to update the design and get production cost below $100 if possible. The previous engineer didn't really understand RF emissions and grounding. He had a microprocessor that was emitting lots of RF crap. His solution to pass FCC testing was to enclose the whole mess in a can. He made a 6 layer board with the bottom layer ground, then had a custom piece of sheet metal soldered over the nasty section at production time. He thought that just by enclosing everything in metal it wouldn't radiate. That's wrong, but somewhat of an aside I'm not going to get into now. The can did reduce emissions so that they just squeaked by FCC testing with 1/2 dB to spare (that's not a lot). My design used only 4 layers, a single board-wide ground plane, no power planes, but local ground planes for a few of the choice ICs with single point connections for these local ground planes and the local power nets as I described.
To make a long story shorter, this beat the FCC limit by 15 dB (that's a lot). A side advantage was that this device was also in part a radio receiver, and the much quieter circuitry fed less noise into the radio and effectively doubled its range (that's a lot too). The final production cost was $87. The other engineer never worked for that company again. So, proper bypassing, grounding, visualizing and dealing with the high frequency loop currents really matters. In this case it contributed to making the product better and cheaper at the same time, and the engineer that didn't get it lost his job. No, this really is a true story.
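For reference, since dB figures can feel abstract: emission limits are measured on field strength, a voltage-like quantity, so a margin in dB converts to a plain ratio with 10^(dB/20). A quick check of the two margins in the story:

```python
def db_to_field_ratio(db):
    """Convert a dB margin on a field-strength (voltage-like) quantity
    to a plain ratio: ratio = 10**(dB/20)."""
    return 10 ** (db / 20)

squeak = db_to_field_ratio(0.5)   # ~1.06x under the limit: barely passing
solid = db_to_field_ratio(15)     # ~5.6x under the limit: real margin
```

So the original design passed with about 6% of headroom, while the reworked one emitted less than a fifth of the allowed field strength.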
{ "source": [ "https://electronics.stackexchange.com/questions/15135", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3876/" ] }
15,184
I know about SparkFun.com, but their product line is limited to general-appeal items for the average hobbyist. I appreciate them for that. But when I want a specialized component, I'm not sure whether vendors just don't sell it or I'm looking in the wrong spots. I find sites for chip manufacturers that make components I'd like to buy, but they never seem to publish their prices (why??), and some seem to have minimum quantities or minimum purchase totals. I'm intimidated to ask for a quote on a single item. Where does the hobbyist source high-end parts? Should we expect a full product catalog with prices? Any tricks to the ordering process?
Most chip companies don't publish prices because they are useless to most of their customers. And by "customers", I mean people that buy chips in volume. For example, TI publishes prices at 1,000 units per year. If I buy more than that, I can get a better price-- and most manufacturers will buy more than 1k/year. But on top of the normal volume discount, I can negotiate a better price. Let's say that I want 1k/year of a part; I can usually get 5k/year pricing. I could also make a case for a "package deal", where if I need 1k/year each of 5 different chips then I could request 5k/year pricing on everything. In the end, the price on the web site is almost never the price that I actually pay. Further, the price I pay is almost never the price that you will pay. Maybe you got a better price, or maybe I did. Either way, TI doesn't want us to know each other's price because then we could use that as leverage to get a lower price. The other thing is that these chip companies don't make any money selling small quantities. Their overhead is quite large, and they need customers buying large quantities at a time. The point is, they have no motivation to sell direct to the small customer. That's why there are places like Digikey. Digikey will buy large lots, divide them up, and sell them 1 or 10 at a time. Some chip companies know that selling small qty is a losing proposition, and they are actually better off giving them away-- if in "good will credit" if not actual money. That's what Maxim, National, TI, and Microchip do. TI actually contracted with Digikey for their sample program. If you ask for samples from TI, Digikey will be the one to ship it to you. So, when buying specialized components in small qty you'll frequently be out of luck. The chip manufacturers won't sell it to you, and probably won't sample it either. The wholesale distributors won't talk with you for similar reasons. And Digikey and places like that won't help either.
One thing to look for is each manufacturer will list, somewhere, places that sell their parts. Sometimes it's a link off of the individual chip's page, or sometimes it is somewhere else. But check out that list. Most of those suppliers will be wholesalers who won't talk to you, but they might mention places like Digikey or Mouser. Failing that, you could email their main sales people and just be up-front about what you're doing and how much you will be buying and ask them where you should go to buy it.
{ "source": [ "https://electronics.stackexchange.com/questions/15184", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4204/" ] }
15,455
I always use the internal oscillator that PICs have, as I have never found the need to run anything at a higher frequency than 8 MHz (which is the fastest the PICs I use tend to be able to go). Are there any reasons, beyond going above 8 MHz, that mean I should use an external oscillator? It just seems like one more thing to go wrong to me, but I'd be interested to hear what others do.
As others have said, accurate frequency and frequency stability are reasons to use an external ceramic resonator or crystal. A resonator is several times more accurate than the internal R-C oscillator and good enough for UART communication. A crystal is much more accurate, and necessary if you are doing some other types of communication like CAN, USB, or ethernet. Another reason for an external crystal is choice of frequency. Crystals come in a wide range of frequencies whereas the internal oscillator is usually one frequency with maybe a choice of 4x PLL enabled. Some newer 24 bit core PICs have both a multiplier and divider in the clock chain so you can hit a wide choice of frequencies from the single internal oscillator frequency. There are of course various applications that inherently require accurate frequency or timing other than communications. Time is the property in electronics that we can measure most accurately and cheaply, so sometimes the problem is transformed into one of measuring time or producing pulses with accurate timing. Then there are applications which require some long term synchronization with other blocks. A 1% oscillator would be off by over 14 minutes per day if used as the basis for a real time clock. Accurate long term time may also be needed without having to know real time. For example, suppose you want a bunch of low power devices to wake up once every hour to exchange data for a few seconds and then go back to sleep. A 50 ppm crystal (very easy to get) will be off no more than 180 ms in an hour. A 1% R-C oscillator could be off by 36 seconds though. That would add significant on-time and therefore power requirements to the devices that only needed to communicate for a couple of seconds every hour.
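The timing figures in this answer follow directly from multiplying the fractional tolerance by the interval; a quick Python check reproduces them:

```python
def drift_seconds(tolerance, interval_s):
    """Worst-case timing error of an oscillator with the given fractional
    tolerance over the given interval."""
    return tolerance * interval_s

day = 24 * 3600
hour = 3600

rc_day = drift_seconds(0.01, day)        # 864 s: over 14 minutes/day at 1%
xtal_hour = drift_seconds(50e-6, hour)   # 0.18 s: the 180 ms/hour figure
rc_hour = drift_seconds(0.01, hour)      # 36 s per hour at 1%
```

Multiply the 1% hourly error by the receiver's active current and you can see how quickly a cheap oscillator eats a coin cell in a wake-on-schedule radio design.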
{ "source": [ "https://electronics.stackexchange.com/questions/15455", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4096/" ] }
15,535
I'm no electrical engineer (just mechanical) but I would like to apply some of my hobbyist experience to my job and implement various automated systems in an industrial (manufacturing) environment. Traditionally, automation in the industrial setting consists of either engineered systems or PLCs. Engineered systems are too expensive and PLCs lack flexibility (and they can get pretty expensive as well). I would like to replace traditional PLCs with more flexible, powerful, and cheaper Arduinos, but I'm concerned about the Arduino's reliability. PLCs evolved in the industrial setting and are thus very rugged and reliable, but how does the Arduino platform compare? Assuming that proper measures are taken to protect the Arduino from mechanical and electrical damage, how reliable is the platform? Would you trust it to replace a traditional PLC that controls, say, a machine's safety interlock system to prevent people from getting too close to a machine in operation? Edit: What about non-safety-critical systems? For example, introducing intelligence into, say, a fixture, which a PLC would not be capable of?
PLC manufacturers would like you to believe that their software is more reliable and more thoroughly tested. My impression is that the core OS components of PLCs are usually quite well designed and tested, but the drivers for external hardware (motion systems and the like) are often libraries hacked together by applications engineers and then passed around the company. The hardware in PLCs is often antiquated - a lot of them are running old, hot Geode processors. When you buy a PLC from Allen-Bradley, B&R, Siemens, or any of the other big players, you're mostly paying for support when things go wrong. Their hardware is made with the same manufacturing processes as Arduinos, and there's nothing magical about the real-time operating systems running on PLCs that makes them bug-free. But, I think that support is often worth paying for. If it's a machine that costs the company $1M every day it's not operating, I'd be damn sure that when something went wrong, there was a team of professionals who could help fix it, not just me and Google. For the specific case of light curtains or other safety interlocks, I would want to make sure that the manufacturer had a hefty insurance policy in place, rather than a statement that tries to disclaim all merchantability for any particular purpose. Even so, if I were designing (for example) a bit of simple pneumatic actuation for some fixture, and I was willing to shoulder the support burden of fixing the machine when it broke (or if I wasn't able to get the resources allocated to pay for the PLC), and safety wasn't an issue, I'd happily use an Arduino. I'd probably prototype the system with an Arduino and then rewrite the code in pure C once it was working, so that my code was the only code on the microcontroller.
{ "source": [ "https://electronics.stackexchange.com/questions/15535", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1319/" ] }
15,539
I know that when the frequency is 0, the voltage will be pure DC. But in DSP and Digital Communication, I have seen mentions of negative frequencies, which I don't quite understand. For example, a \$ -f_{0}\$ to \$ f_{0}\$ frequency range. How can frequency be negative?
The derivation of \$\cos(\omega t) = \frac{1}{2} \left(e^{j\omega t} + e^{-j\omega t}\right)\$ is all very nice and such (thanks, Mark), but it's not very intuitive. A sine can be represented in the complex plane as a rotating vector: You can see how the vector consists of a real and an imaginary part. But what you see when you watch the signal on your scope is a real signal, so how can you get rid of the imaginary part, such that the vector stays on the x-axis, increasing and decreasing? The solution is to add a mirror image of the rotating vector, rotating clockwise instead of counterclockwise. The imaginary parts have the same magnitude, but opposite signs, so when you add both vectors the imaginary parts cancel each other, leaving a purely real signal. If counterclockwise rotation stands for positive frequency, clockwise rotation has to stand for negative frequency.
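The cancellation described above can be demonstrated numerically. A sketch using NumPy (the test frequency `f0` is an arbitrary choice):

```python
import numpy as np

f0 = 5.0                                   # Hz, arbitrary test frequency
t = np.linspace(0.0, 1.0, 1000)

# counterclockwise (positive frequency) and clockwise (negative frequency) vectors
pos = np.exp(1j * 2 * np.pi * f0 * t)
neg = np.exp(-1j * 2 * np.pi * f0 * t)

# add the mirror-image vectors: imaginary parts cancel, leaving a real cosine
s = 0.5 * (pos + neg)
assert np.allclose(s.imag, 0.0)
assert np.allclose(s.real, np.cos(2 * np.pi * f0 * t))
```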
{ "source": [ "https://electronics.stackexchange.com/questions/15539", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4277/" ] }
15,700
In my project I want to use some ceramic and electrolytic capacitors. I need the capacitors to be rated at least 10 V, but what will happen if I use much higher rated capacitors (just to make sure that if something goes bad they don't explode!)?
In general, the voltage rating of a capacitor is the maximum it can take and still stay within specs. Unpolarized caps, like ceramics, can take any voltage +- the voltage spec value. Polarized caps, like electrolytics and tantalum, can take any voltage from 0 to the voltage spec value. That said, different things happen to different cap types as their voltage gets near the maximum. With electrolytics, the lifetime goes down. In theory, with a reputable manufacturer, the rated lifetime is at max voltage and temperature unless stated otherwise. You could therefore say the lifetime goes up if you operate the cap below its rated max voltage. The two major stressors of electrolytic caps are voltage and temperature. Large currents can also hurt them, but this is due to heating, so it is really a temperature issue. Ceramics have different properties. Voltage doesn't affect the lifetime of SMD multilayer caps much, assuming of course you don't exceed specs. Some ceramics however do not linearly store charge as a function of applied E field. They hold less additional charge for the same voltage increment at high voltage than at low voltage. This means the apparent capacitance goes down with voltage. The cheap ceramics, particularly those with "Y" in their names and a few others, exhibit this effect more strongly than others. If you are just bypassing a digital chip, this doesn't matter much. If however the cap is used in an analog filter, then this probably matters and you generally want to stick to ceramics with "X" in their name and look over the datasheet carefully. There are issues with too low a voltage too, especially with electrolytics. They work on a thin oxide layer on the aluminum. This can get degraded when there is no charge across it. So to finally give you a concrete answer, if you are going to use electrolytic caps, try to aim at running them around 3/4 or 2/3 of their rated voltage. It's OK to have occasional spikes up to the maximum, but don't ever exceed it.
It's OK for them to be off too, but it's better that they're not completely discharged for years on end.
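The 2/3 to 3/4 rule of thumb above can be turned into a quick selection helper. A sketch — the rating list and the `pick_rating` function are hypothetical, illustrating only the derating idea:

```python
# Common standard electrolytic voltage ratings (assumed list for illustration)
STANDARD_RATINGS = [6.3, 10, 16, 25, 35, 50, 63, 100]

def pick_rating(working_v, derate=0.75):
    """Smallest standard rating whose derated value still covers the
    working voltage, per the ~3/4 rule of thumb."""
    for r in STANDARD_RATINGS:
        if working_v <= r * derate:
            return r
    raise ValueError("no standard rating high enough")

print(pick_rating(5.0))    # -> 10  (5 V is 50% of 10 V)
print(pick_rating(12.0))   # -> 16  (12 V is 75% of 16 V)
```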
{ "source": [ "https://electronics.stackexchange.com/questions/15700", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3757/" ] }
15,851
I'm a student and I'm working on a low-power communication project. I am trying to design a PCB using the TI CC2540 sample design . There is a MC-306 (32.768 kHz, 12.5 pF, and 20/50 ppm). I don't know what the 20/50 ppm rating is. For me, the size is very important, so I decided to replace it with the FX135A , but its ppm is -20/+20. Will it cause a problem if I use this one instead? What is the ppm rating of a crystal oscillator?
Like Olin said, ppm stands for parts per million , and it indicates how much your crystal's frequency may deviate from the nominal value. The MC-306 exists in a 20 ppm and a 50 ppm version. For the 20 ppm version this means that the frequency will be between 32.7673 kHz (32.768 - 20 ppm, or x 0.999980) and 32.7687 kHz (32.768 + 20 ppm, or x 1.000020). These numbers may give you a comfortable feeling, but remember that a month is 2.6 million seconds, so if you want to use a 20 ppm crystal to build an alarm clock, it may have an error of 1 minute per month. Crystals are available in different precisions; +/-20 ppm is more or less standard, for 10 ppm you'll pay more. Also, this is basic precision. The frequency may deviate further depending on environmental factors, mainly temperature.
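The bounds quoted above follow directly from the definition of ppm. A quick sketch of the arithmetic (the helper name is made up):

```python
def ppm_bounds(nominal_hz, ppm):
    """Frequency range for an oscillator with +/- ppm tolerance."""
    delta = nominal_hz * ppm * 1e-6
    return nominal_hz - delta, nominal_hz + delta

lo, hi = ppm_bounds(32768.0, 20)
# lo ~ 32767.345 Hz, hi ~ 32768.655 Hz: the 32.7673 / 32.7687 kHz above

# worst-case alarm-clock error over a 30-day month
month_s = 30 * 24 * 3600          # ~2.6 million seconds
error_s = month_s * 20e-6         # ~52 s: roughly a minute per month
```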
{ "source": [ "https://electronics.stackexchange.com/questions/15851", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4705/" ] }
15,999
I've seen a number of different configurations for instrumentation amplifiers, including 2 opamp versions. This is also one. But it's just a differential amplifier preceded by input buffers. When do you call it an instrumentation amplifier, in other words, what's so special about it that it deserves a separate name?
"An instrumentation amplifier is a precision differential voltage gain device [...]." One of the important words here is "gain". An OpAmp has infinite gain (in theory) and only gets a defined gain by adding circuitry around it. Usually, when using one OpAmp only, at least one of the inputs loses its extremely high input impedance because external resistors are necessary. If you need two (differential) inputs with both a very high input impedance and a defined gain, you can use the two-OpAmp-InAmp you are talking about or the three-OpAmp-InAmp configuration your picture shows. There are also ready-made IC InAmps by such companies as Linear Technology or Analog Devices. The three-OpAmp-InAmp circuit in the picture of your question shows that two OpAmps are used as buffers, where they still have a high impedance at their otherwise unconnected non-inverting input pins ("+"). By feeding their outputs into another OpAmp, the upper non-inverting input ("+") becomes an inverting input ("-") because it is connected to the 3rd OpAmp's inverting ("-") input. The lower non-inverting input ("+") remains non-inverting due to its connection with the 3rd OpAmp. Common three-OpAmp-InAmps use a slightly different configuration compared to your picture to set the gain with one resistor only (the external gain resistor in the case of completely integrated InAmps). Please refer to the links I've provided for more details. With the three-OpAmp-InAmp, you get both a very high input impedance at two differential inputs (while you would get only one input with such a high input impedance with a regular OpAmp buffer) and you get a very good rejection of common-mode signals (that is achievable with one OpAmp, too, but at the cost of lowering the input impedance with the resistors you have to use to turn the OpAmp into a difference amplifier). The two-OpAmp-InAmp circuit needs fewer parts, but at the cost of a not-so-good common-mode rejection ratio (CMRR). 
Here is a link to a very good book about InAmps by Analog's Charles Kitchin and Lew Counts where you can find a more in-depth look onto all these issues.
{ "source": [ "https://electronics.stackexchange.com/questions/15999", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3920/" ] }
16,003
Right, so I have ripped the 1.7" (ish) TFT module out of an HP PhotoSmart C4780. Despite HP's best efforts I can plainly see that it is made by AUO. No model number on it though. Now, I have never had anything to do with TFT screens before now, but I want to have a play. The screen has a single mylar strip connector, and I can see which of the connections break off to the back-light, so that's not a problem. From what I can see the rest of it doesn't need anything 'special' - the board it normally plugs into is powered by a PIC16F727, so it must be possible for me to interface it to some other PIC instead (I have a selection to hand). But how do I get started? I don't even know what connections to expect there to be on the mylar strip, and how to work out what should be where. I have found some AUO data sheets, but none for a 1.7", only 1.5" and 2" and upwards, though I have identified one possible 1.7" model - A017CN01, but I can't find a data sheet for it anywhere. Any pointers would be most appreciated. Update: I think I may have just found a data sheet for this device - I'm still none the wiser though... I see it has a serial data (SPI-like) interface to it - ideal for PIC usage. Nothing in the data sheet about how to use it. Would this interface be likely to be common to all AUO devices?
{ "source": [ "https://electronics.stackexchange.com/questions/16003", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4245/" ] }
16,030
I need to make a few components in Eagle, which are all in standard packages, and I can see them in ref-packages.lbr. The problem is that when I am creating my library I don't see how I can reuse a package/symbol from another library, so that I only need to give new names to the pins and that's it. The package list is just empty and I can only draw one from scratch. Unfortunately, all tutorials I've found on the internet do everything from scratch...
You can just copy a package from another library into your own library and edit it. Are you asking how to copy between libraries? Open your own library (the one you want to copy to). Then bring the Control Panel window to the front (leaving your library open in the background). In the list to the left, expand Libraries and expand the name of the library you want to copy from. Locate the package or footprint you want to copy, right click it, and select Copy to Library. Bring the window with your own library to the front, locate the newly copied package and edit it.
{ "source": [ "https://electronics.stackexchange.com/questions/16030", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2062/" ] }
16,581
As I understand it, inrush current is the current when the contact closes. Resistance is not minimum yet, and still inrush current can be several times the nominal current, like 80A on a 10A relay. How come the inrush current doesn't weld the contacts? Edit: Case in point: this relay can take 800A (!) inrush for 200\$\mu\$s
Relays aren't perfect switches, and will have a certain contact resistance, which may be several tens of milliohms. In power applications this has to be taken into account. A relay with a contact resistance of 10m\$\Omega\$ carrying 16A will dissipate 2.5W in the contact! It has been suggested that contacts tend to weld more on opening than on closing. I don't think that's correct. Firstly, in most relays release time is much faster than operate time. Secondly, yes there is often a nasty arc when opening, but that arc is actually a sign that contact anode and cathode are in fact separated, and then they can't weld anymore. That doesn't mean that arcs are harmless. They're a powerful HF transmitter and cause much EMI. And they burn the contact's coating. In AC switching they will extinguish on the zero-crossing, after maximum 1/100 or 1/120 second (doesn't count for very high voltage switching), but in DC this may take longer. That's why DC ratings for a relay will be significantly lower than AC ratings. So contacts tend to weld upon closing, and you rightly mention that the contact resistance isn't minimum yet during the inrush, so it looks strange that exactly then a higher current is allowed. It's all to do with time. Closing a contact usually takes several ms, but most of that time is used to build the magnetic field in the coil, and the travel of the contact's anode also takes some time. The actual time between first contact and final closure is very short. Add to that that the current isn't 80A yet at the first contact; current can't go from 0 to 80A in a nanosecond. So while the current builds up, the resistance decreases. All in a very short time, so that the total dissipated energy in general won't be too high. For situations where this isn't good enough there are relays with a separate faster tungsten contact to improve closing performance. (In Dutch it's called "voorloopcontact", I don't know the name in English.) 
Edit (in reference to comments to different answers) There was some discussion whether the relay would weld upon opening. Steve quoted Siemens: "[welding occurs upon] opening and immediate reclosing of contacts". If you want to weld the contact this is definitely the way to do it. Opening will probably draw an arc, and reclosing it during this arc means that there's current during the high resistance phase at the first contact. Contact + high energy = high risk of welding. Even if the arc already extinguished the air in the gap may still be ionized, meaning that it may break down during reclosing before contact is made, and that there's already current when closing. So basically the same situation. Further reading: Tyco application note: Relay Contact Life
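The contact-dissipation figure at the start of the answer is just \$I^2 R\$. A sketch of the arithmetic (the helper name is made up):

```python
def contact_dissipation(current_a, contact_r_ohm):
    """Power dissipated in a closed relay contact: I^2 * R."""
    return current_a ** 2 * contact_r_ohm

# the 16 A through 10 milliohm example from the answer
p = contact_dissipation(16.0, 0.010)   # ~2.56 W, i.e. the ~2.5 W quoted
print(p)
```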
{ "source": [ "https://electronics.stackexchange.com/questions/16581", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3920/" ] }
16,636
It seems that reading a ceramic capacitor value from its markings is harder than decoding an Enigma machine. I wonder if experienced users here have a trick to quickly figure out these values. Some examples: I know that 103M is 0.01µF, but how does one figure this out? Another example: 104Z/LK ...this one I can't get at all. All I know is that Z is for asymmetric-tolerance capacitors with tolerance between +80% and -20% ... Am I right? If not, it would be nice to correct me and tell me where these Z ceramic capacitors are mostly used.
The numbers work like a resistor. The first two numbers are just numbers. The third number is the number of 0's after. It's in picofarads. So:

103 is 1 0 000 or 10,000pF or 10nF
104 is 1 0 0000 or 100,000pF or 100nF

The next letter is the tolerance:

B +/- 0.10pF
C +/- 0.25pF
D +/- 0.5pF
E +/- 0.5%
F +/- 1%
G +/- 2%
H +/- 3%
J +/- 5%
K +/- 10%
M +/- 20%
N +/- 30%
P +100%, -0%
Z +80%, -20%

Anything after that is usually manufacturer specific.
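The decoding rule above fits in a few lines of code. A sketch (the `decode` helper is hypothetical and covers only part of the tolerance table):

```python
# subset of the tolerance letters listed above
TOLERANCE = {'B': '+/-0.10 pF', 'C': '+/-0.25 pF', 'D': '+/-0.5 pF',
             'F': '+/-1%', 'G': '+/-2%', 'J': '+/-5%', 'K': '+/-10%',
             'M': '+/-20%', 'Z': '+80%/-20%'}

def decode(code):
    """Decode a 3-digit ceramic capacitor marking (optionally followed
    by a tolerance letter) into picofarads plus tolerance text."""
    digits, letter = code[:3], code[3:4]
    pf = int(digits[:2]) * 10 ** int(digits[2])   # two digits, then zeros
    return pf, TOLERANCE.get(letter, '')

print(decode('103M'))   # (10000, '+/-20%')   -> 10 nF, i.e. 0.01 uF
print(decode('104Z'))   # (100000, '+80%/-20%') -> 100 nF
```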
{ "source": [ "https://electronics.stackexchange.com/questions/16636", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3757/" ] }
16,767
VHDL and Verilog are some of the HDLs used today. What are the advantages and disadvantages of using Verilog or VHDL over the other?
I can't tell you which to learn, but here are some contrasting points (from a very VHDL-centric user, but I've tried to be as fair as possible!), which may help you make a choice based on your own preferences in terms of development style. And keep in mind the famous quote which goes along the lines of "I prefer whichever of the two I'm not currently using" (sorry, I can't recall who actually wrote this - possibly Janick Bergeron?)

VHDL:
- strongly-typed
- more verbose
- very deterministic
- non-C-like syntax (and mindset)
- lots of compilation errors to start with, but then mostly works how you expect. This can lead to a very steep-feeling learning curve (along with the unfamiliar syntax)

Verilog:
- weakly-typed
- more concise
- only deterministic if you follow some rules carefully
- more C-like syntax (and mindset)
- errors are found later in simulation - the learning curve to "feeling like getting something done" is shallower, but goes on longer (if that's the right metaphor?)

Also in Verilog's favour is that high-end verification is leaning more and more to SystemVerilog which is a huge extension to Verilog. But the high-end tools can also combine VHDL synthesis code with SystemVerilog verification code. For another approach entirely: MyHDL - you get all the power of Python as a verification language with a set of synthesis extensions from which you can generate either VHDL or Verilog. Or Cocotb - all the power of Python as a verification language, with your synthesisable code still written in whichever HDL you decided to learn (i.e. VHDL or Verilog). SystemC is also a good option for an HDL. SystemC supports both system-level and Register Transfer Level (RTL) design. You need only a C++ compiler to simulate it. High-Level Synthesis tools will then convert SystemC code to Verilog or VHDL for logic synthesis.
{ "source": [ "https://electronics.stackexchange.com/questions/16767", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2064/" ] }
16,823
In a comment to this answer Kortuk asks what the ARM advantage is. I first added some arguments in my answer, but I think the question is interesting enough to be a question in itself, so that more answers are possible.
Performance is one advantage. Being a 32-bit processor it outperforms (almost) all 8-bit controllers DMIPS-wise. The core also has gone through several generations, read: optimizations. These optimizations not only show in performance numbers, but in power consumption as well. The most recent core has doubled its DMIPS/mW ratio compared to the previous generation (see also this answer). ARM is available from a great many manufacturers, more than any other microcontroller, and each has a number of versions to choose from, with different combinations of on-chip peripherals and memory, and packages. Case in point: NXP offers no less than 35 controllers with on-chip Ethernet. ARMs are inexpensive; ARM was probably the first 32-bit controller to break the USD 1 barrier. This combination of performance, wide offering and low cost makes it such that you simply can't ignore ARM: In 2005 about 98 percent of all mobile phones use at least one ARM-designed core on their motherboards, according to research from the analyst firm the Linley Group. (source) The mobile phone market also has another effect. Mobile phones are very space constrained and demand small packages. NXP's LPC1102 comes in a WLP-16 package just 5mm\$^2\$, a scale previously only used by low pin-count 8-bit microcontrollers.
{ "source": [ "https://electronics.stackexchange.com/questions/16823", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2064/" ] }
16,828
I am starting out programming microcontrollers and I was reading some documentation and textbooks. I am a little confused as to what the difference is between a microcontroller and a system on chip. Some documentation uses these two terms interchangeably. However, most textbooks point out that using the two terms interchangeably is NOT correct, thus there must be some notable difference... Thanks!
A microcontroller is a processor that has its program and data memory built in. These chips are intended for small embedded control applications, so leaving the pins for I/O and not requiring an external memory bus is very useful. Some microcontrollers have as little as 6 pins, and can do useful things. Contrast that to a general purpose computing processor intended for a PC. Those things have 100s of pins in an array and require extensive external circuitry. As for system on a chip, that is a less well defined term. Cypress calls some of their parts PSoC (Programmable System on Chip). These are basically a microcontroller with a small FPGA on the same chip. Instead of having built-in peripherals, you can make whatever you want within the available resources of the FPGA. In general, I think a system on a chip is a microcontroller with some supposedly system-level logic integrated with it. Of course the further you try to go into the system, the less likely any one set of extra hardware is going to be useful, so some kind of configurability is very useful. However, for now "system on chip" is more of a marketing term than anything real.
{ "source": [ "https://electronics.stackexchange.com/questions/16828", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4711/" ] }
16,868
As someone starting with electronics I keep hearing, "Check the datasheet!" Why are datasheets so important and what information can I expect to find in them? It seems like a large amount of information would come from experience and know-how, not reading a long dry document the chip manufacturer publishes. Also, is it really worth reading the entire thing? Edit: When is it OK to work at Absolute Maximum Ratings (AMR)?
Simply put: the datasheet is your complete encyclopedia on a part. A good datasheet will tell you everything you need to know about it. Use this information. Most design errors are due to (deliberately or not) overlooking certain specifications in the datasheet. The most obvious thing the datasheet will give you is the part's pinout, so that you know how to connect it. For a 144 pin controller it's obvious you can't do without that, but you may need it for a simple diode as well. For relatively simple components the datasheet will mostly consist of numbers, either in tables or in graphs. One of the first tables in most datasheets will be Absolute Maximum Ratings. These are often interpreted wrongly. Not only do they mean that operating the part above the given values will damage the part, but you're also not supposed to apply these ratings in continuous operation. Absolute Maximum Ratings should only be met in exceptional situations, and never exceeded. Next you'll have voltage and current ratings, like power supply range and consumption, and voltages and currents on specific pins. This will often be minimum and maximum values. Importance: calculation of the power budget, and ascertaining that you can connect part A to part B, in terms of matching voltage and required current. In particular for digital ICs threshold values are given, voltage levels where a logical zero toggles to a logical one or vice versa. Note that values are often given as minimum/typical or typical/maximum pairs, and under the given conditions. Always work with the extremes! The following gives \$R_{DS(ON)}\$ for a BSS138 MOSFET: The first line says 0.7\$\Omega\$ typical, 3.5\$\Omega\$ maximum. This should be read as "Most parts will have the lower value, but don't be surprised when you see the higher on certain parts." If you design for the typical value, the ones that are closer to the maximum value may not work properly in your application!
In this case you may underestimate dissipated power, so that the FET will overheat and fail long before the product's expected lifetime. And the manufacturer almost never gives a probability curve telling you how many parts will actually have this higher value. That means you might well have 20% non-working products! Again, always work with the maximum value. In the spirit of "a picture's worth a thousand words" most datasheets will have a number of graphs, relating two parameters against each other. You'll often see the same types of graphs again and again for certain components, and that helps comparing them. For a (MOS)FET for instance one of the most important graphs is \$I_D\$ vs \$V_{DS}\$, and once you get acquainted with FETs you'll recognize this specific graph immediately. (Many design engineers will first look at the pictures in a datasheet because that's the fastest way to find specific information in a sometimes long document. That's why we graduated with honors in kindergarten!) Many datasheets will also have one or more schematics, first of all a typical application. This should get you started when you're using the part for the first time. National's analog datasheets have a great reputation of providing many application examples. The datasheets for Linear Technology switching regulators, on the other hand, also have much application information, but more in prose, explaining theory of operation and calculations for example. I'm just naming a few, but you'll learn that every manufacturer has his own datasheet style, and his own focus. At the end of the datasheet you can find mechanical drawings of the part's package, and sometimes also recommended PCB footprints. The latter is often published in a document on the package, however, because it is common for all devices using that package. The above lemmas are more or less common for most datasheets. But you can't compare the datasheet for a resistor to one of a microcontroller, of course.
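The worst-case-versus-typical point about the BSS138's \$R_{DS(ON)}\$ is easy to quantify. A sketch (the 200 mA load current is an arbitrary assumption, not from the datasheet):

```python
def fet_dissipation(i_drain_a, rds_on_ohm):
    """Conduction loss in a FET operated in its ohmic region: I^2 * R."""
    return i_drain_a ** 2 * rds_on_ohm

i_load = 0.2                           # 200 mA, hypothetical load
p_typ = fet_dissipation(i_load, 0.7)   # ~0.028 W using the typical 0.7 ohm
p_max = fet_dissipation(i_load, 3.5)   # ~0.14 W using the datasheet maximum
# designing for the typical value underestimates dissipation 5x
```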
Especially the latter needs some getting used to. First of all, they're long! 100 pages and more are no exception. There's nothing you can do about it, they simply can perform so many functions, and everything has to be described in detail. In a microcontroller datasheet you'll see more prose than in other datasheets, because most functions can't be described just in numbers. Microcontrollers and other digital IC datasheets also often have timing diagrams, again a picture which can tell a lot which would be difficult to explain in words. Again, they're eye-catching, so you'll find them easily. Typical of microcontrollers is that they're part of a family, meaning there are related parts with other on-chip peripherals. To avoid having a great deal of the information identical between devices, and thus having even longer documents, most manufacturers choose to extract the common information from the datasheet and publish it in what's often called a family user manual. You'll have to check especially the numbers when reading a datasheet. I've seen several designs fail (and I made errors myself) because the designer missed or misinterpreted some value in the datasheet. Use the information. Before the 'Net and PCs there were databooks with collections of datasheets. Today you can find any datasheet on the manufacturer's website. If you can't find the datasheet, don't use the part! Especially longer datasheets have an advantage in being available electronically (PDF). You can search the datasheet for certain keywords, and long datasheets like for microcontrollers have a structured table of contents with bookmarks. Again, use them!
{ "source": [ "https://electronics.stackexchange.com/questions/16868", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2064/" ] }
16,935
I have a Bluetooth remote control device that uses 2 AA batteries. Due to positive feedback and responses, I'm thinking to sell it to the mass market. Thus I need FCC certification. After reading many articles including all the FCC related questions in this site, I'm still confused. There are many wireless products in the US market that don't have FCC / CE certification. Many imported products sold by Amazon don't have the certification. With such a high cost ($10,000 - $20,000), I don't think many small companies can afford to get this certification. What are the consequences if you sell the product without FCC certification? Before I apply for FCC certification, what are the things that I need to watch out for, especially on my PCB. Do I have to care about the box (enclosure)? I mean, my product uses 2 batteries, I don't think it will cause a fire hazard or short circuit. Or if I have to watch out for fire hazards, what component should I add? The Bluetooth module is already certified by FCC and CE, so I will be exempted for the Bluetooth test. But I guess they still need test the product as a whole. What if you failed for the first test, do you have to redo all over again and pay another $10,000? Additional info: Q: I’m a retailer, why should I care about FCC regulations? A: It is illegal to import, sell, or operate covered equipment that has not undergone the required equipment authorization procedure. Illegal merchandise can be subject to forfeiture, and you may be subject to fine. Imported merchandise that does not have FCC may be held at customs. Also lack of FCC compliance means the merchandise has never been evaluated for electronic compatibility. This is a sign of bad quality. What other safety or chemical regulatory requirements might not have been evaluated? FCC enforcement action is often levied against retailers and end users, especially where the manufacturer is located outside US jurisdiction. Quote source: FCC FAQ
I'm no lawyer, but have been thru the FCC testing process a few times. For an ordinary device that doesn't deliberately transmit (called an "unintentional radiator" by the FCC), there is no legal requirement for certification. There are legal requirements for what it is allowed to emit, but it is up to you how to make sure your device works within the rules. You can simply sell an unintentionally radiating device without testing. However, if someone files a complaint and the device is found to exceed the legal radiation limits, you're in deep doodoo. If you had the device tested by an accredited test lab and they determined it was within the limits, your legal case will be much better. The FCC still has the right to force you to withdraw the product and even confiscate every unit out there, but if you can show you followed accepted testing practices then there will be much less of an issue of punitive actions. Intentional radiators are a different story. You do have to have FCC certification to legally sell one in the United States. When the device is certified, you get a certification ID, and that ID generally has to be indicated somewhere on the outside of the device where others can see it. In the case of a Bluetooth module, most likely the module vendor has gotten the certification for the module. If not, I wouldn't go near it. Even if so, though, you are still on the hook for the product as a whole. The module will also be certified with some restrictions, like a specific list of antennas that it is certified with. If you attach a different antenna, for example, the module is no longer certified and you're on your own. If you're trying to sell an intentionally radiating product, you'd better talk to an expert early in the process. You can wing it a bit with unintentional radiators, but you really don't want to play games with intentional radiators, even if you're using a certified module that does all the intentional radiating.
It might be a good idea to talk to a testing house. They generally will know all the rules. Just keep in mind they sell testing services, and their answers may be a bit biased towards you needing a lot of testing.
{ "source": [ "https://electronics.stackexchange.com/questions/16935", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5034/" ] }
17,008
I have the following power supply configuration: AC MAINS -> UPS -> 24V POWER SUPPLY -> 5V VOLTAGE REGULATOR -> PCB (microcontroller). What's the best solution to detect the power outage on the mains with the microcontroller? I also need to detect the zero-crossing so that I can control the speed of an AC motor.
Since you also need the zero-crossing you'll get the power outage detection virtually for free. Best is to use an optocoupler to detect zero-crossings. Put the mains voltage via high-resistance resistors to the input of the optocoupler. Vishay's SFH6206 has two LEDs in anti-parallel, so it works over the full cycle of the mains voltage. If the input voltage is high enough the output transistor is switched on, and the collector is at a low level. Around the zero crossing, however, the input voltage is too low to activate the output transistor and its collector will be pulled high. So you get a positive pulse at every zero crossing. The pulse width depends on the LEDs' current. Never mind if it's more than 10% duty cycle (1ms at 50Hz). It will be symmetrical about the actual zero-crossing, so the exact point is in the middle of the pulse. To detect power outages you (re)start a timer on every zero-crossing, with a timeout at 2.5 half cycles. Best practice is to let the pulse generate an interrupt. As long as the power is present the timer will be restarted every half cycle and never time out. Upon a power outage, however, it will time out after a bit longer than a cycle, and you can take the appropriate action. (The timeout value is longer than 2 half cycles, so that a spike on 1 zero-crossing causing a missed pulse won't give you a false warning.) If you create a software timer it won't cost you anything, but you can also use a retriggerable monostable multivibrator (MMV), for instance with an LM555. Note: depending on your mains voltage and the resistor type you may need to place two resistors in series for the optocoupler, because the high voltage may cause a single resistor to break down. For 230V AC I've used three 1206 resistors in series for this. Q & A time! (from comments, this is extra, in case you want more) Q: And the input LEDs of the optocoupler will work at 230V? The datasheet states that the forward voltage is 1.65V.
A: Like for a common diode the voltage over a LED is more or less constant, no matter what your supply voltage is. The mandatory series resistor will take the voltage difference between power supply and LED voltage. The answers to this question explain how to calculate the resistor's value. Extreme example: a 10 000V power supply for a 2V LED. Voltage over the resistor: 10 000V - 2V = 9 998V. You want 20mA? Then the resistor is \$\frac{9 998V}{20mA}\$ = 499.9k\$\Omega\$. That's 500k, that's even reasonable. Yet, you can't use an ordinary resistor here. Why not? Firstly, a common 1/4W PTH resistor is rated at 250V, and will definitely break down at 10 000V, so you'll have to use 40 resistors in series to distribute the high voltage. Secondly, and worse, the power that the resistor would have to dissipate is \$P = V \times I = 9 998V \times 20mA = 199.96W\$, a lot more than the rated 1/4W. So to cope with the power we'll even need 800 resistors. OK, 10kV is extreme, but the example shows that you can use any voltage for a LED, so 230V is also possible. It's just a matter of using enough and the right type of resistors. Q: How does the reverse voltage affect the lifetime of the LEDs? A: The second, anti-parallel LED takes care of that by ensuring that the reverse voltage over the other LED can't become higher than its own forward voltage. And that's a good thing, because a reverse voltage of 325V\$_P\$ would kill any LED (most likely explode), just like any signal diode, by the way. The best way to protect it is a diode in anti-parallel. Q: Won't the resistors dissipate a lot of heat? A: Well, let's see. If we assume 1mA through the resistors and ignore the LED voltage, we have \$P = V \times I = 230V_{RMS} \times 1mA = 230mW\$, so even a 1206 can handle that. And remember, we're using more than 1 resistor, so we're safe if we can work with 1mA (The SFH6206 has a high CTR \$-\$ Current Transfer Ratio).
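The retriggerable-timeout logic described in the answer is easy to prototype before committing it to firmware. Below is a minimal Python sketch of it, assuming 50 Hz mains; the function name `detect_outage` and the timestamp-list interface are my own invention, not part of the answer:

```python
# Sketch of the outage-detection scheme: restart a timeout of 2.5 half-cycles
# on every zero-crossing pulse, and flag an outage when the timeout expires.
# All timestamps are in seconds; mains is assumed to be 50 Hz.

HALF_CYCLE = 1 / (2 * 50)        # 10 ms at 50 Hz
TIMEOUT = 2.5 * HALF_CYCLE       # > 2 half cycles, so one missed pulse is tolerated

def detect_outage(pulse_times, t_end):
    """Return the time an outage is flagged, or None if power stayed on.

    pulse_times: sorted timestamps of zero-crossing pulses.
    t_end: end of the observation window.
    """
    deadline = TIMEOUT               # timer started at t = 0
    for t in pulse_times:
        if t > deadline:
            return deadline          # timer expired before this pulse arrived
        deadline = t + TIMEOUT       # retrigger the timer on every zero-crossing
    return deadline if deadline <= t_end else None
```

With healthy mains the pulses arrive every 10 ms and the timer never expires; a single missed pulse (e.g. masked by a spike) is tolerated because the timeout exceeds two half cycles, exactly as the answer recommends.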
{ "source": [ "https://electronics.stackexchange.com/questions/17008", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4906/" ] }
17,101
I would like to know what people usually do when trying to solder wires together, and hold small things in place while you fasten them together, etc. I can picture one or more vices attached to a swivel such that I can hold something in place at any orientation, perhaps with an array of alligator clips attached to stiff wires to hold things in place. Does anything like this exist? And if not, what do people usually do to hold two things together while busying your hands with wielding a tool? (considering us poor humans have but two hands.)
There are a few zillion devices available that are just as you describe. This is commonly known as a 3rd hand tool (source: micromark.com). They say: Triple Grip Third Hand Keeps Your Hands Free. Used by jewelers, model builders, miniaturists, electronic hobbyists. Keeps your hands free for soldering, gluing, positioning, while work is held firmly at any angle required. Has 3 alligator-type spring clamps mounted in ball joints for flexibility. Nickel plated steel with heavy iron base for no-tip stability. In my experience, such tools are less useful than they may appear. They have their place, but experience usually leads to more normal tools being as easy or easier to use. "Quite a few" more examples can be seen here - click images to see the related web page. When soldering wires that are not mechanically connected or stabilised I may lay one on the work surface and place something suitably weighty on it to keep it in position. As I hold the soldering iron in my right hand (usually), having the weighted-down wire entering from the right usually is best, and I hold the other wire in my left hand or also lay it on the work surface - see below. This still ideally requires 3 arms: for iron, 2nd wire and solder. Often I tin the 1st wire and then add copious extra solder in a mini-blob. The 2nd wire is then "offered up" and the excess solder used to complete the joint. Where necessary I will also lay the 2nd wire on the work surface, weight it as well and slide it against the first wire. Now only two arms are needed! :-). This is easier to do than to describe. In many cases, making some sort of mechanical join between two wires before soldering is advisable. Using solder alone is frowned on in some circles. Solder is not known for its superb mechanical properties but will in fact usually make an adequately mechanically strong join. Be aware that a join that relies on mechanical strength WILL break if subject to enough vibration cycles.
Unlike ferrous metals, which have a lower stress limit below which they are not susceptible to vibration fatigue, non-ferrous metals are subject to creep failure regardless of vibration strength. Enough vibration cycles will break a solder join, no matter how low the amplitude - it just takes more cycles as the magnitude decreases. Related only: when inserting multistrand wires in a screw-down connector, never tin the whole exposed wire end - if done, the solder will creep with time and the connector will release the wire. At most, the extreme wire bundle tip may be tinned to keep the strands together. Note that this is a legal requirement for mains wiring in any administration that has its act together.
{ "source": [ "https://electronics.stackexchange.com/questions/17101", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4348/" ] }
17,116
What's the proper schematic for driving this MOSFET from a microcontroller pin through this or this optocoupler? The MOSFET will drive a motor @ 24V, 6A.
The suggested MOSFET is not well suited to this application. There is a severe risk that the result will be a smoking ruin :-(. Principally, that FET is only very very marginally suited to the task. It could be made to work if it was all you had but there are much much much more suitable FETs available, probably at little or no extra cost. The main issues are that the FET has a very bad (= high) on resistance, which leads to high power dissipation and a reduced level of drive to the motor. The latter is not too significant but is unnecessary. Consider - the data sheet says that the on resistance (Rdson - specified at top right on page 1) = \$0.18 \Omega\$. Power dissipation = \$ I^2 \times R\$ so at 6A the power loss will be \$(6A)^2 \times 0.18 \Omega =~ 6.5W\$. That is easily handled in a TO220 package with an adequate heatsink (somewhat better than a flag type preferably) but this much dissipation is totally unnecessary as much lower Rdson FETs are available. Voltage drop will be \$V = I \times R = 6A \times 0.18 \Omega =~ 1.1V\$. That's \$\frac{1.1V}{24V} \approx 4.6\%\$ of the supply voltage. That's not vast but unnecessarily takes voltage that could be being applied to the motor. That MOSFET is in stock at Digikey for $1.41 in 1's. BUT For 94 cents in 1's also in stock at Digikey you can have the ultra magnificent IPP096N03L MOSFET. This is only 30V rated, but has \$I_{max} = 35A\$, \$R_{DS(on)}\$ of \$10 m \Omega\$ (!!!) and a maximum threshold voltage (turn-on voltage) of 2.2 volts. This is an utterly superb FET both for the money and in absolute terms. At 6A you get \$P_{diss} = I^2 \times R = (6A)^2 \times 0.010 \Omega = 360 mW\$ dissipation. It will feel warm to the touch when run without a heatsink. IPP096N03L data sheet If you want a bit more voltage headroom you can get the 97 cents in stock 55V, 25A, \$25 m \Omega\$ IPB25N06S3-2 - although gate threshold is getting marginal for 5V operation.
Using Digikey's parameter selection system let's spec the "ideal" FET for this and similar applications: 100V, 50A, logic gate (low turn-on voltage), \$R_{ds(on)} < 50 m\Omega\$. Slightly dearer at $1.55 in 1's in stock at Digikey BUT 100V, 46A, \$24 m\Omega\$ \$R_{ds(on)}\$ typical, 2V \$V_{th}\$ ... the utterly superb BUK95/9629-100B - where do they get these part numbers from? :-) Even with only 3V gate drive, at 6A \$R_{ds(on)}\$ will be about \$35 m\Omega\$ or about 1.25 Watt dissipation. At 5V gate drive \$R_{ds(on)} =~ 25 m\Omega\$, giving about 900 mW dissipation. A TO220 package would be too hot to touch in free air with 1 to 1.25 Watt dissipation - say about 60 to 80 C rise. Acceptable but hotter than needed. Any sort of flat heat sink would bring it down to just "nice and warm". This circuit from here is almost exactly what you want and saves me drawing one :-). Replace BUZ71A with the MOSFET of your choice as above. Input: Either: X3 is the input from the microcontroller. This is driven high for on and low for off. "PWM5V" is grounded. Or: X3 is connected to Vcc. PWM5V is driven by the microcontroller pin - low = on, high = off. As shown \$R1 = 270 \Omega\$. Current is \$ I = \frac{Vcc-1.4}{R1}\$, or the resistor is \$ R = \frac{Vcc-1.4}{I} \$. For Vcc = 5V and \$270 \Omega\$, I here =~ 13 mA. If you wanted say 10 mA then \$R = \frac{5V-1.4V}{10mA} = 360 \Omega\$ - say 330R. Output: R3 pulls the FET gate to ground when off. By itself 1k to 10k would be OK - the value affects turn-off time but that's not too important for static drive. BUT we will use it here to make a voltage divider to reduce the FET gate voltage when on. So, make R3 the same value as R2 - see next paragraph. R2 is shown going to +24 Vdc but this is too high for the FET maximum gate rating. Taking it to +12 Vdc would be good and +5 Vdc would be OK if the logic gate FETs mentioned are used.
BUT here I will use 24 Vdc and use R2 + R3 to divide the supply voltage by 2 to limit Vgate to a safe value for the FET. R2 sets the FET gate capacitance charge current. Setting R2 = 2k2 gives ~10 mA drive. Set R3 = R2 as above. Also, add a 15V zener across R3, cathode to FET gate, anode to ground. This provides gate protection against over-voltage transients. The motor connects as shown. D1 MUST be included - this provides protection against the back-emf spike which occurs when the motor is turned off. Without this the system will die almost instantly. The BY229 diode shown is OK but is overkill. Any diode rated at 2A or greater will do. An RL204 is just one of a vast range of diodes that would suit. A high speed diode here may help slightly but is not essential. Switching speed: As shown the circuit is suitable for on/off control or slow PWM. Anything up to about 10 kHz should work OK. For faster PWM a properly designed driver is required.
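The \$I^2R\$ comparisons above are simple enough to script. Here is a small Python helper (the function name `fet_losses` is mine, not from the answer; Rds(on) values are the ones quoted from the datasheets) that reproduces the dissipation and voltage-drop figures for the two candidate FETs:

```python
# Compute conduction loss and on-state voltage drop for a MOSFET switching
# a DC motor: P = I^2 * Rds(on), Vdrop = I * Rds(on).
# Defaults match the answer: 6 A motor current, 24 V supply.

def fet_losses(r_ds_on, current=6.0, supply=24.0):
    """Return (dissipation_W, voltage_drop_V, drop_as_fraction_of_supply)."""
    p = current ** 2 * r_ds_on
    v = current * r_ds_on
    return p, v, v / supply

# Original part, Rds(on) = 0.18 ohm: ~6.5 W and ~1.1 V lost in the FET.
p_bad, v_bad, frac_bad = fet_losses(0.18)
# IPP096N03L, Rds(on) = 10 mohm: ~0.36 W -- no heatsink drama.
p_good, v_good, frac_good = fet_losses(0.010)
```

Running the same calculation for any other candidate part makes the "lower Rds(on) wins" argument immediate: dissipation scales linearly with on-resistance at a fixed current.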
{ "source": [ "https://electronics.stackexchange.com/questions/17116", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4906/" ] }
17,132
Say I have a 1kHz sine, so no higher harmonics, then I need to sample it at least at 2kHz to be able to reconstruct it. But if I sample at 2kHz, but all my samples are on the zero-crossing, then my sampled signal doesn't show a sine at all, rather the ECG of a deceased patient. How can that be explained? This can be expanded to higher sampling frequencies too. If I sample a more complex waveform at 10kHz, I should at least get the first 5 harmonics, but if the waveform is such that the samples are each time zero, then again we get nothing. This isn't far-fetched, it's perfectly possible for a rectangle wave with a duty cycle < 10%. So why is it that the Nyquist-Shannon criterion seems to be invalid here?
You actually need just over 2 kHz sampling rate to sample 1 kHz sine waves properly. It's $$ f_N < f_S / 2 $$ not $$ f_N \le f_S / 2 $$ P.S. If you took your signal into complex space, where a sinusoid is of the form $$v(t) = Ae^{j(2 \pi f t - \theta)} = A(\cos(2 \pi f t - \theta) + j \sin(2 \pi f t - \theta))$$ where t is time, A is amplitude, f is frequency, and θ is phase offset, $$ f_N = f_S / 2 $$ is the point where the frequency "folds over", i.e. you cannot distinguish f from -f . Further increases in frequency will appear, after sampling, to have the sampling frequency subtracted from them, in the case of a pure sinusoid. Non-Sinusoids For the case of a square wave at 1 kHz with a duty cycle less than or equal to 10% which is sampled at 10 kHz, you are misunderstanding the input. First you would need to decompose your waveform into a Fourier series to figure out what the amplitudes of the component harmonics are. You will probably be surprised that the harmonics for this signal are quite large past 5 kHz! (The rule of thumb of third harmonic being 1/3 as strong as the fundamental, and 5th being 1/5 of fundamental, only applies to 50% duty cycle square waves .) The rule of thumb for a communications signal is that your complex bandwidth is the same as the inverse of the time of your smallest pulse, so in this case you're looking at a 10 kHz bandwidth minimum (-5 kHz to 5 kHz) for a 10% duty cycle with the fundamental at 1 kHz (i.e. 10 kbps). So what will ruin you is that these strong higher-order harmonics will fold over and interfere (constructively or destructively) with your in-band harmonics, so it's perfectly expected that you might not get a good sampling because so much information is outside the Nyquist band.
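The zero-crossing edge case from the question is easy to reproduce numerically. A short Python sketch (function name and sample counts are my own choices) shows that sampling a 1 kHz sine at exactly \$f_S = 2f\$, starting on a zero-crossing, yields all zeros, while any rate strictly above 2 kHz preserves the signal:

```python
# Demonstrate why f < fs/2 must hold strictly: at f == fs/2 the samples can
# all land on zero-crossings and the sine vanishes entirely.
import math

def sample_sine(f, fs, n):
    """Return n samples of sin(2*pi*f*t) taken at rate fs, starting at t=0."""
    return [math.sin(2 * math.pi * f * k / fs) for k in range(n)]

critical = sample_sine(1000, 2000, 10)     # f == fs/2: every sample ~0
oversampled = sample_sine(1000, 2100, 10)  # f < fs/2: the signal survives
```

The `critical` list is zero to within floating-point noise (the "ECG of a deceased patient" from the question), whereas `oversampled` contains clearly non-zero values that a reconstruction filter could turn back into the sine.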
{ "source": [ "https://electronics.stackexchange.com/questions/17132", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3920/" ] }
17,179
I'm trying to work out what value resistor to use in a LED circuit. The equation I'd use to do this is: $$ R = \frac{V_{cc} - V_f}{I_f} $$ Seems logical, and makes complete sense. The answers to the question How do I calculate the resistor value for a simple LED circuit? confirm this too. I have the following LEDs: \$ V_f = 3.3V \$ \$ I_{f_{typ}} = 20mA \$ Using a 5V power supply: \$ V_{cc} = 5V \$ Plugging these into the above equation gives: $$ \begin{eqnarray} R & = & \frac{V_{cc} - V_f}{I_f} \\ & = & \frac{5V - 3.3V}{20mA} \\ & = & 85\Omega \end{eqnarray} $$ All good so far. However, if I use the calculator at http://led.linear1.org/1led.wiz , that gives me 100Ω . If I use the ElectroDroid app on my phone, that gives me 85Ω . So, I assume that the linear1 calculator is using a different method of calculating this resistor value; is there some better way of doing this?
Your calculation is correct. linear1 rounds up to the next E12 value, which happens to be 100\$\Omega\$. The nearest E12 value would have been 82\$\Omega\$, and that would still be safe, because, even if the current will be higher, the difference will be small, within the 10% tolerance of the E12 series. edit Purists may say I'm cutting corners here. Russell has a long answer about iterating the solution, and others whine (hey, no offense!) about rounding up being more safe. My answer is meant to be pragmatic; no professional design engineer can afford to spend 15 minutes to calculate the resistor for a classical color LED. If you stay well below the maximum allowed current you'll have enough headroom to allow some rounding, and the rounded value won't be noticeable in brightness. For most LEDs perceived brightness doesn't increase much above a value of typically 20mA, anyway.
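The two rounding strategies in this answer can be made concrete with a rough Python sketch (the helper names `led_resistor`, `round_up_e12` and `nearest_e12` are hypothetical; the E12 decade values are the standard series):

```python
# LED series-resistor calculation plus the two E12 rounding strategies:
# "round up" (what linear1 does) vs "nearest" (82 ohm, still within 10%).

E12 = [10, 12, 15, 18, 22, 27, 33, 39, 47, 56, 68, 82]

def e12_values(max_decade=6):
    """E12 preferred values across several decades, e.g. 10..8.2M ohm."""
    return [m * 10 ** d for d in range(max_decade) for m in E12]

def led_resistor(vcc, vf, i_f):
    """Ideal series resistor: R = (Vcc - Vf) / If."""
    return (vcc - vf) / i_f

def round_up_e12(r):
    return min(v for v in e12_values() if v >= r)

def nearest_e12(r):
    return min(e12_values(), key=lambda v: abs(v - r))

r_ideal = led_resistor(5.0, 3.3, 0.020)   # 85 ohm, as in the question
```

For the question's numbers, `round_up_e12` gives 100 Ω (the linear1 answer) and `nearest_e12` gives 82 Ω, matching the two results being compared.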
{ "source": [ "https://electronics.stackexchange.com/questions/17179", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3028/" ] }
17,234
I'd like to ask the experts out there: what is the best embedded Linux distro for:
- Flash memory ~ 700kB
- RAM ~ 256kB
- Processor: high-end ARM Cortex-M3 (something from the STM32 family, for eg)
Required modules:
- Kernel core
- Basic driver set: USB/Networking (for WiFi - no AP, just client, no security)/SPI/UART/I2C
Is this at all possible or am I dreaming? The idea is to use a $5 high-end Cortex-M3 and not use any external memories, so that I can enjoy the ready drivers for SDIO/WiFi etc. I updated the question with clarification on WiFi: WiFi in the sense that it is a simple, run-of-the-mill client. Nothing fancy, perhaps WEP if I can fit it. Another update: how about uClinux?
I'd say you're dreaming. The main problem will be the limited RAM. In 2004, Eric Biederman managed to get a kernel booting with 2.5MB of RAM, with a lot of functionality removed. However, that was on x86, and you're talking about ARM. So I tried to build the smallest possible ARM kernel, for the 'versatile' platform (one of the simplest). I turned off all configurable options, including the ones that you're looking for (USB, WiFi, SPI, I2C), to see how small it would get. Now, I'm just referring to the kernel here, and this does not include any userspace components. The good news: it will fit in your flash. The resulting zImage is 383204 bytes. The bad news: with 256kB of RAM, it won't be able to boot:

    $ size obj/vmlinux
       text    data     bss     dec     hex filename
     734580   51360   14944  800884   c3874 obj/vmlinux

The .text segment is bigger than your available RAM, so the kernel can't decompress, let alone allocate memory to boot, let alone run anything useful. One workaround would be to use the execute-in-place support (CONFIG_XIP), if your system supports that (ie, it can fetch instructions directly from Flash). However, that means your kernel needs to fit uncompressed in flash, and 734kB > 700kB. Also, the .data and .bss sections total 66kB, leaving about 190kB for everything else (ie, all dynamically-allocated data structures in the kernel). That's just the kernel. Without the drivers you need, or any userspace. So, yes, you're going to need a bit more RAM.
{ "source": [ "https://electronics.stackexchange.com/questions/17234", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4861/" ] }
17,382
I've seen lots of schematics use \$V_{CC}\$ and \$V_{DD}\$ interchangeably. I know \$V_{CC}\$ and \$V_{DD}\$ are for positive voltage, and \$V_{SS}\$ and \$V_{EE}\$ are for ground, but what is the difference between each of the two? Do the \$C\$, \$D\$, \$S\$, and \$E\$ stand for something? For extra credit: Why \$V_{DD}\$ and not simply \$V_D\$?
Back in the Pleistocene (1960s or earlier), logic was implemented with bipolar transistors. Even more specifically, they were NPN because, for reasons I'm not going to get into, NPN were faster. Back then it made sense to someone that the positive supply voltage would be called Vcc, where the "c" stands for collector. Sometimes (but less commonly) the negative supply was called Vee, where "e" stands for emitter. When FET logic came about, the same kind of naming was used, but now the positive supply was Vdd (drain) and the negative Vss (source). With CMOS this makes no sense, but it persists anyway. Note that the "C" in CMOS stands for "complementary". That means both N and P channel devices are used in about equal numbers. A CMOS inverter is just a P channel and an N channel MOSFET in its simplest form. With roughly equal numbers of N and P channel devices, drains aren't more likely to be positive than sources, and vice versa. However, the Vdd and Vss names have stuck for historical reasons. Technically Vcc/Vee is for bipolar and Vdd/Vss for FETs, but in practise today Vcc and Vdd mean the same, and Vee and Vss mean the same.
{ "source": [ "https://electronics.stackexchange.com/questions/17382", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4940/" ] }
17,496
How is using a 1:1 transformer safer than using the mains straight off? Is it because you can limit the current coming from the transformer whereas straight from the mains its not current limited? I fail to see how its "safer" when playing with electricity is dangerous. Could someone please explain why it is considered safer to be isolated by a transformer.
Without a transformer the live wire is live relative to ground. If you are at "ground" potential then touching the live wire makes you part of the return path. (This image is taken from an excellent discussion here.) With a transformer the output voltage is not referenced to ground - see diagram (a) below. There is no "return path", so you could (stupidly) safely touch the "live" conductor and ground and not receive a shock. From The Electricians Guide. I say "stupidly" as, while this arrangement is safer, it is not safe unconditionally. This is because, if there is leakage or a hard connection from the other side of the transformer to ground, then there may still be a return path - as shown in (b) above. In the diagram the return path is shown as either capacitive or direct. If the coupling is capacitive then you may feel a "tickle" or somewhat mild "bite" from the live conductor. If the other conductor is grounded then you are back to the original transformerless situation. (Capacitive coupling may occur when an appliance body is connected to a conductor but there is no direct connection from body to ground. The body-to-ground proximity forms a capacitor.) SO a transformer makes things safer by providing isolation relative to ground. Murphy / circumstance will work to defeat this isolation. This is why, ideally, an isolating transformer should be used to protect only one item of equipment at a time. With one item, a fault in the equipment will probably not produce a dangerous situation. The transformer has done its job. BUT with N items of equipment - if one has a fault from neutral to case or is wired wrongly, this may defeat the transformer such that a second faulty device may then present a hazard to the user. In figure (b) above, the first faulty device provides the link at bottom and the second provides the link at top. Similarly:
{ "source": [ "https://electronics.stackexchange.com/questions/17496", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1623/" ] }
17,710
A colleague has (apparently unsuccessfully :-)) tried to convince me of the risk of tombstoning in the following situation: He claims pad 1 for R55 and R59 will lose heat faster during soldering because they have two traces leaving, while pad 2 only has one, and that this would cause tombstoning. Frankly, I've never noticed anything like it, even these 0402 resistors lay perfectly flat on the PCB. Am I too careless? (The traces are 0.2mm wide) edit I could also have shown 0402 parts with one pad connected via 4 spokes to a copper pour, which should be even worse, but again no problems whatsoever.
Summary: Tombstoning prospects increase with decreasing component size due to decreased energy of retention from surface tension forces compared to the self-centering forces which will cause tombstoning if a mechanical imbalance occurs. 0402 appears to be about the point where you really start to care (although with due lack of care some manage it at 0603 :-)) and with 0201 it really matters. If you are "real enough" to be using 0201 you probably have other even more important matters to care about! There are far more factors involved than just cooling rate, and it's very very much a case of "YMMV", but you'd be wise to pay some attention to your friend - while it's not necessarily liable to be an overwhelming issue, there are so many factors that if it happened to occur frequently in your case you would not be totally surprised. __________________________________________________ Tombstoning is contributed to by much more than just raw cooling rates - factors include pad sizes, pad shapes, solder used, solderability, surface roughness, paste type and brand, reflow profile ... and more. Even oxygen levels and use of an inert atmosphere can make some difference. It's easy to pay slightly more attention to the issue when you get down to 0402 and below and thereby reduce the chances of having a bad day subsequently. It doesn't hurt to be aware of the issues even with 0603 components 'just in case'. Here is a superb paper on the issue, TOMBSTONING OF 0402 AND 0201 COMPONENTS: "A STUDY EXAMINING THE EFFECTS OF VARIOUS PROCESS AND DESIGN PARAMETERS ON ULTRA-SMALL PASSIVE DEVICES", which probably tells you more than you ever wanted to know. Results are based on a very large sample of tests (48 test combinations and 50,000 samples!). Interesting reading.
Here's a less useful but still interesting paper which concentrates on solder and paste composition and claims to solve the world's problems with appropriate formulations: Prevention of tombstone problems for small chip devices*. Their idea of "small" is 0603 and 0402. *Link replaced by web search citing paper abstracts + paywalled paper PLUS many relevant links. This paper, Tombstone troubleshooting, emphasises that there is no single cause or solution but also very clearly identifies differential cooling as a significant issue, with an example image that is uncomfortably close to your one for practical purposes (the figure labels the two pads "Pad 2" and "Pad 1"). They say: if Pad 1 is connected to a wide trace, ground plane, or other heat sinking element. Pad 2 is connected to a thinner trace or less massive circuitry element. Pad 2 will often be hotter than Pad 1 and reflow before Pad 1. This temperature difference result in a reflow timing difference. When Pad 2 wets first, the wetting force from Pad 2 may be enough to overcome the force from Pad 1 resulting in a tombstoned component. This July 2011 white paper agrees with your friend: The Low Mass Solution to 0402 Tombstoning. In summary it says: Variations in the components [thermal characteristics] must be accounted for in the pad geometry, or else tombstoning may occur. He also recommends treating each pad as a group, and ensuring the copper density of each pad is equal (or very close), meaning both pads achieve the same temperature and liquidus at the same time. Also, Reno says, both pads should achieve solder flow to exposed copper at the same time, and be equal in solder volume necessary to control capillary action. More of the same: SMT net discussion - more of the same. Horrendous :-) LINK BROKEN :-( : http://www.circuitsassembly.com/cms/news/11404-suntron-recommends-solutions-to-0402-tombstoning- Glossary: MD = metal defined. Relates to PCB pad. Whole of available metal area is pad with no solder mask limiting pad extent.
SMD = solder mask defined. Metal extends beyond pad area defined by solder mask. YMMV = your mileage may vary = caveat emptor = what you experience may be different to what I experience etc. USA origin. It's a wry comment, almost a cynical joke based on US auto advertising claims. viz In our test Model XXX auto got 56 mpg on an urban cycle driving test - YMMV. ie While WE got 56 mpg, you may not. And more
{ "source": [ "https://electronics.stackexchange.com/questions/17710", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2064/" ] }
17,826
How does a resistor "resist" current/potential? I know it's an elementary question, but I'm sure others are wondering too.
Just as it happens I'm reading this application note by Vishay titled " Basics of Linear Fixed Resistors ", which explains the construction of PTH and SMT fixed resistors. Most resistors have a resistive layer on the surface of a non-conductive carrier, either carbon or metal-based or thick film. The resistance value is obtained by laser-cutting lines in the film.
{ "source": [ "https://electronics.stackexchange.com/questions/17826", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4940/" ] }
18,102
Related question: Ceramic capacitors: how to read 3-digit markings? I have some ceramic capacitors with a 2-digit marking. How to read them? Do the colored markings at the top mean anything? Image description: Brown ceramic capacitors with 10 written and a black mark at the top; Brown ceramic capacitors with 47 written; Yellow ceramic capacitors with 1n0 written and a green mark at the top
The brown capacitors have values in picoFarads, eg 47 = 47 picoFarad = 47 pF = 0.000 000 000 047 Farad! 10 = 10 pF. For the yellow and green capacitors with markings of the form anb, here n = nanoFarad = nF. 1n0 = 1.0 nF 2n2 = 2.2 nF 6n8 = 6.8 nF Note that the use of xNx here is (probably) unique to capacitors in the nF range - I do not recall ever seeing eg xPx or xUx markings. However page 70 of this superb Vishay ceramic single layer capacitors document suggests you might expect to meet any of eg p68 = 0.68 pF n15 = 0.15 nF = 150 pF 5p0 = 5 pF etc. The green dot is quite likely to be a voltage rating, but alas I don't know what system it uses. There are several different colour/voltage systems. Typically this sort of capacitor is 50 Volt rated but this is not certain. More usual nnX 3 digit markings: Most capacitor numerical markings are 3 digit and express the value in pF (pico Farad = 10^-12 Farad) with the last digit being a power of 10 multiplier. So 223 = 22,000 pF = 22 nF = 0.022 uF = 0.000 000 022 F 106 = 10 000 000 pF = 10 uF 100 = 10 pF and NOT 100 pF etc. Part of a larger tutorial series on capacitors. Deals in colour codes. Does not answer exact question but is useful. This does NOT answer the original question but is useful
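Not part of the original answer, but the decoding rules described above can be captured in a small Python sketch. The function name and the simplified rule set are my own; real-world markings also carry tolerance letters and other variants that this deliberately ignores:

```python
def decode_marking(code: str) -> float:
    """Decode a ceramic capacitor marking into Farads (simplified rules only)."""
    code = code.strip().lower()
    # 'anb' style: the letter gives the units AND acts as the decimal point,
    # e.g. 1n0 = 1.0 nF, 4p7 = 4.7 pF, n15 = 0.15 nF
    for letter, scale in (("p", 1e-12), ("n", 1e-9), ("u", 1e-6)):
        if letter in code:
            return float(code.replace(letter, ".", 1)) * scale
    # Plain 2-digit markings are just picofarads: 47 = 47 pF
    if len(code) == 2:
        return int(code) * 1e-12
    # 3-digit markings: first two digits times a power-of-ten multiplier, in pF
    return int(code[:2]) * 10 ** int(code[2]) * 1e-12

print(decode_marking("47"))   # 47 pF
print(decode_marking("1n0"))  # 1.0 nF
print(decode_marking("223"))  # 22 nF
print(decode_marking("100"))  # 10 pF, and NOT 100 pF
```

Note how the last case reproduces the trap called out above: "100" decodes as 10 pF because the trailing digit is a multiplier, not part of the value.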
{ "source": [ "https://electronics.stackexchange.com/questions/18102", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3411/" ] }
18,107
How is the maximum current that a certain circuit can provide found experimentally? Say, that of a power supply circuit? Can it be done without destroying the circuit? I'm looking at designing some circuits on my own and learning through experimentation. Current limits are clearly stated for ICs in their datasheet, but when it comes to discrete components, it's not so obvious. I can look at the maximum current provided by each component, but it would also help to have that confirmed through experimentation to make sure I actually understand what I'm doing.
{ "source": [ "https://electronics.stackexchange.com/questions/18107", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2836/" ] }
18,111
A friend told me that the noise of any switching PSU can be attenuated if I put a linear regulator before the output. Is that true? For example, if I want to power a ±12 V op-amp for an amplifier, I can use a switching-mode power supply (SMPS), say, with a noisy 15 V output, and then from the SMPS output feed an LM7812 and an LM7912. Will the output from the LM7812 and LM7912 now have very very low noise compared to their inputs? If this is true, this is amazing, as there is no need to use a transformer anymore. Is it really correct that a heavy PSU using a transformer for Class A and B amplifiers is no longer needed?
Yes, it is true that adding a linear regulator after an SMPS (switch mode power supply) will reduce noise, but care is still needed. Results can be very good, but the result may not be as good as if a mains powered transformer plus linear regulator had been used. Consider a common LM7805 5V regulator from Fairchild. This has a "ripple rejection" specification of 62 dB minimum. "Ripple" is input noise, usually related to the twice-mains-frequency variations from the rectified and smoothed mains input. This is a reduction in noise of 10^(dB_noise_rejection/20) = 10^3.1 ~= 1250:1. That is, if there was 1 Volt of "ripple" at the input, this would be reduced to about 1 mV at the output. However, this is specified as being at 120 Hz = twice USA mains frequency, and no specification or graph is given for noise reduction at higher frequencies. The functionally identical LM340 5V regulator from NatSemi has a slightly better specification (68 dB minimum, 80 dB typical = 2500:1 to 10,000:1) at 120 Hz. But NatSemi kindly also provide a graph of typical performance at higher frequencies (bottom left corner of page 8). It can be seen that for 5V output, ripple rejection is down to 48 dB at 100 kHz (= 250:1). It can also be seen that it is falling about linearly at about 12 dB per decade (60 dB at 10 kHz, 48 dB at 100 kHz). Extrapolating this to 1 MHz gives 36 dB noise rejection at 1 MHz (~= 60:1 noise reduction). There is no guarantee that this extension to 1 MHz is realistic, but the real result will not be better than this and should (probably) not be much worse. As most (but not all) SMPS supplies operate in the 100 kHz to 1 MHz range, one can guestimate that noise rejection will be in the order of 50:1 to 250:1 in the 100-1000 kHz range for fundamental noise frequencies. However, an SMPS will have output noise at other than its fundamental switching frequency, often much higher.
Very thin fast-rising spikes, which may occur on switching edges due to leakage inductance in transformers and similar, will be less attenuated than lower frequency noise. If you were using an SMPS by itself you would usually expect to provide some form of output filtering, and using passive LC filters with a linear "post regulator" will add to its performance. You can get linear regulators with both better and worse ripple rejection than the LM340 - and the above shows you that two functionally identical ICs can have somewhat different specifications. Noise elimination from an SMPS will be greatly helped by good design. The subject is too complex to do more than mention it here, but there is much good material on this subject on the internet (and in past stack exchange replies). Factors include proper use of ground planes, separation, minimising area in current loops, not breaking current return paths, identifying high current flow paths and keeping them short and away from noise sensitive parts of the circuit (and much more). So - yes, a linear regulator can help reduce SMPS output noise, and it may be good enough to allow you to power audio amplifiers directly this way (and many designs do just that), but a linear regulator is not a "magic bullet" in this application and good design is still vital.
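As an illustrative aside (not from the original answer), the dB-to-ratio arithmetic used throughout the answer above is easy to check with a couple of one-line Python helpers (helper names are my own):

```python
def rejection_ratio(db: float) -> float:
    """Convert a ripple rejection figure in dB to a voltage attenuation ratio."""
    return 10 ** (db / 20)

def output_ripple(vin_ripple_v: float, db: float) -> float:
    """Input ripple attenuated by the regulator's ripple rejection, in Volts."""
    return vin_ripple_v / rejection_ratio(db)

# Figures quoted in the answer above:
print(round(rejection_ratio(62)))   # LM7805 minimum at 120 Hz: about 1250:1
print(round(rejection_ratio(48)))   # LM340 at 100 kHz: about 250:1
print(round(rejection_ratio(36)))   # extrapolated to 1 MHz: about 60:1
print(output_ripple(1.0, 62))       # 1 V of input ripple -> under 1 mV out
```

The divide-by-20 (rather than 10) is because ripple rejection is a voltage ratio, not a power ratio.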
{ "source": [ "https://electronics.stackexchange.com/questions/18111", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5296/" ] }
18,212
Most piezos I've seen have just two connections, but this type has three. What's the third wire for?
They're called self drive types, and they're meant to be used as part of the oscillator: The piezo effect works both ways: if you apply a voltage the piezo stretches, but also if it stretches it creates a voltage. This principle is used to create a feedback signal which drives the oscillator. The advantage of the self drive is that it will automagically work at its resonance frequency, where it produces the loudest sound. In 2-wire circuits the oscillator's frequency is independent of the piezo's resonance frequency, and it's the designer who has to make sure that they're close. For the piezo of your picture: "G" = black "M" = red "F" = blue (I guess M, F and G stand for Main, Feedback and Ground, resp. CMIIW)
{ "source": [ "https://electronics.stackexchange.com/questions/18212", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3920/" ] }
18,301
I am confused with this! How does a capacitor block DC? I have seen many circuits using capacitors powered by a DC supply. So, if capacitor blocks DC, why should it be used in such circuits? Also, the voltage rating is mentioned as a DC value on the capacitor. What does it signify?
I think it would help to understand how a capacitor blocks DC (direct current) while allowing AC (alternating current). Let's start with the simplest source of DC, a battery: When this battery is being used to power something, electrons are drawn into the + side of the battery, and pushed out the - side. Let's attach some wires to the battery: There still isn't a complete circuit here (the wires don't go anywhere), so there is no current flow. But that doesn't mean that there wasn't any current flow. You see, the atoms in the copper wire metal are made up of the nuclei of the copper atoms, surrounded by their electrons. It can be helpful to think of the copper wire as positive copper ions, with electrons floating around: Note: I use the symbol e - to represent an electron. In a metal it is very easy to push the electrons around. In our case we have a battery attached. It is able to actually suck some electrons out of the wire: The wire attached to the positive side of the battery has electrons sucked out of it. Those electrons are then pushed out the negative side of the battery into the wire attached to the negative side. It's important to note that the battery can't remove all the electrons. The electrons are generally attracted to the positive ions they leave behind; so it's hard to remove all the electrons. In the end our red wire will have a slight positive charge (cause it's missing electrons), and the black wire will have a slight negative charge (cause it has extra electrons). So when you first connect the battery to these wires, only a little bit of current will flow. The battery isn't able to move very many electrons, so the current flows very briefly, and then stops. If you disconnected the battery, flipped it around, and reconnected it: electrons in the black wire would be sucked into the battery and pushed into the red wire. Once again there would only be a tiny amount of current flow, and then it would stop.
The problem with just using two wires is that we don't have very many electrons to push around. What we need is a large store of electrons to play with - a large hunk of metal. That's what a capacitor is: a large chunk of metal attached to the ends of each wire. With this large chunk of metal, there are a lot more electrons we can easily push around. Now the "positive" side can have a lot more electrons sucked out of it, and the "negative" side can have a lot more electrons pushed into it: So if you apply an alternating current source to a capacitor, some of that current will be allowed to flow, but after a while it will run out of electrons to push around, and the flow will stop. This is fortunate for the AC source, since it then reverses, and current is allowed to flow once more. But why is a capacitor rated in DC volts? A capacitor isn't just two hunks of metal. Another design feature of the capacitor is that it uses two hunks of metal very close to each other (imagine a layer of wax paper sandwiched between two sheets of tin foil). The reason they use "tin foil" separated by "waxed paper" is because they want the negative electrons to be very close to the positive "holes" they left behind. This causes the electrons to be attracted to the positive "holes": Because the electrons are negative, and the "holes" are positive, the electrons are attracted to the holes. This causes the electrons to actually stay there. You can now remove the battery and the capacitor will actually hold that charge. This is why a capacitor can store a charge; electrons being attracted to the holes they left behind. But that waxed paper isn't a perfect insulator; it's going to allow some leakage. But the real problem comes if you have too many electrons piled up.
The electric field between the two "plates" of the capacitor can actually get so intense that it causes a breakdown of the waxed paper, permanently damaging the capacitor: In reality a capacitor isn't made of tin foil and waxed paper (anymore); they use better materials. But there is still a point, a "voltage", where the insulator between the two parallel plates breaks down, destroying the device. This is the capacitor's rated maximum DC voltage.
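As a hedged illustration (not part of the original answer) of the "current flows briefly, then stops" behaviour described above, here is a small Python sketch of a capacitor charging through a resistor from a DC source. The 9 V / 1 kΩ / 10 µF values are invented for the example:

```python
import math

def cap_voltage(t, v_source, r, c):
    """Voltage across a capacitor charging through a resistor from a DC source."""
    return v_source * (1 - math.exp(-t / (r * c)))

def cap_current(t, v_source, r, c):
    """Charging current: large at first, decaying to zero as the plates fill up."""
    return (v_source - cap_voltage(t, v_source, r, c)) / r

# 9 V battery, 1 kohm series resistance, 10 uF capacitor (illustrative values)
R, C, V = 1_000, 10e-6, 9.0
tau = R * C                           # time constant: 10 ms
print(cap_current(0, V, R, C))        # 9 mA flows at the instant of connection
print(cap_current(5 * tau, V, R, C))  # after 5 time constants: all but stopped
```

This is the quantitative version of the story above: the DC source moves charge only until the plates are "full", after which the current is effectively zero.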
{ "source": [ "https://electronics.stackexchange.com/questions/18301", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5336/" ] }
18,384
I'm planning on using an LM1117 to regulate 5 V to 3.3 V. Looking at (any of the several) data sheets for the LM1117, they recommend 10 µF tantalum capacitors between input and ground and between output and ground. I understand the need for the capacitors, but it's not clear to me why these should specifically be tantalum. I have a bunch of electrolytic 10 µF capacitors sitting around here, whereas if it does for some reason need to be tantalum, I'll have to order those. Why are they so specific about using tantalum capacitors?
Tantalum capacitors are completely unnecessary in this application. The only reason for choosing tantalum might be lifetime, and this can be designed for with aluminium wet electrolytic caps. It is assumed from here on that lifetime has been properly designed for and is not an issue. Using a tantalum capacitor as the input capacitor invites capacitor death at any time if the input power rail can have voltage spikes on it from any source. A spike more than a small fraction above a tantalum capacitor's rated value risks its total destruction in a high energy circuit, such as this one is. The input capacitor is a typical reservoir capacitor; its value is relatively non critical. Tantalum serves no technical purpose here. If ultra low impedance is desired then use of a smaller parallel ceramic is indicated. The output capacitor is NOT a filter capacitor in any traditional sense. Its principal role is to provide loop stability for the regulator. (An eg 10 ohm resistor could be placed in series with the capacitor without impeding its functionality. No normal filter cap would tolerate this without impaired functionality). The characteristics of aluminum wet electrolytic capacitors of the correct capacitance and voltage rating are well suited to the output capacitor's role. There is no reason to not use them there. This 7 cent capacitor pricing / general data / datasheet would be an acceptable choice in many applications. (Longer lifetime applications may indicate a 2000 hour / 105°C part). The LM1117 datasheet provides clear guidance on the essential and desirable characteristics of the input and output capacitors. Any capacitor which meets these specifications is suitable. Tantalum is an OK choice but is not the best choice. There are various factors and cost is one. Tantalum offers OK cost per capability at capacitances from about 10 uF up. The output capacitor is "safe" against spikes in most cases.
The input capacitor is at risk from "bad behavior" from other parts of the system. Spikes above rated value will produce a (literally) flaming melt down. (Smoke, flame, noise, bad smell and explosion all optional - I have seen one tantalum cap do all of these in turn :-)) Input capacitor: The input capacitor is not overly critical when the regulator is fed from an already well decoupled system bus. Under the diagram on the front page they note "Required if the regulator is located far from the power supply filter" - to which you could add "or another well decoupled portion of the supply". ie capacitors used for decoupling in general may make another one here redundant. The output capacitor is more crucial. Output capacitor: Many modern low drop out high performance regulators are unconditionally unstable as supplied. To provide loop stability they require an output capacitor which has both capacitance and ESR in selected ranges. Meeting these conditions is essential for stability under all load conditions. Output capacitance required for stability: Stability requires the output load capacitor to be >= 10 uF when the Cadj pin does not have an added capacitor to ground, and >= 20 uF when Cadj has an added bypass capacitor. Higher capacitances are also stable. This requirement could be met by an aluminum wet electrolytic cap or a ceramic cap. As wet electrolytics are generally wide tolerance (up to +100%/-50% if not otherwise specified), a 47 uF aluminum wet electrolytic would provide adequate capacitance here even when Cadj was bypassed. BUT it may or may not meet the ESR spec. Output capacitor ESR required for stability: ESR is a "Goldilocks requirement" :-) - not too much and not too little. Required ESR is stated as 0.3 ohm <= ESR <= 22 ohm. This is an extremely wide and unusual requirement. Even quite modest ripple currents in this capacitor would induce far larger than acceptable voltage variations.
It's clear that they do not expect high ripple currents and that the capacitor's role is more related to loop stability than to noise control per se. Note that "old school" regulators such as eg LM340 / LM7805 often specified no output capacitor, or perhaps a 0.1 uF. For example the LM340 datasheet here says "Although no output capacitor is needed for stability, it does help transient response. (If needed, use 0.1 µF, ceramic disc)". A tantalum capacitor is not required to meet this specification. A wet aluminum capacitor will meet this spec with ease. Here are some typical maximum ESRs for new aluminum wet electrolytic capacitors. The first group are capacitors that might be used in practice in this application at the low end of the capacitance range. The 10 uF, 10V is about half the allowed ESR - perhaps a bit close for comfort across lifetime. The second group are what would be used with Cadj bypassed, and could be used anyway - ESRs are far away from limits in both directions. The third group are capacitors chosen to approach the lower limit (and they will get higher resistance = better with age). The 100 uF 63V pushes the lower limit - but there would be no need to use a 63V part here, and it will get higher (= better) with age. 10 uF, 10V - 10 ohm; 10 uF, 25V - 5.3 ohm; 47 uF, 10V - 2.2 ohm; 47 uF, 16V - 1.6 ohm; 47 uF, 25V - 1.2 ohm; 470 uF, 10V - 0.24 ohm; 220 uF, 25V - 0.23 ohm; 100 uF, 63V - 0.3 ohm. They say in the LM1117 datasheet: 1.3 Output Capacitor The output capacitor is critical in maintaining regulator stability, and must meet the required conditions for both minimum amount of capacitance and ESR (Equivalent Series Resistance). The minimum output capacitance required by the LM1117 is 10µF, if a tantalum capacitor is used. Any increase of the output capacitance will merely improve the loop stability and transient response. The ESR of the output capacitor should range between 0.3Ω - 22Ω.
In the case of the adjustable regulator, when the CADJ is used, a larger output capacitance (22µF tantalum) is required. ESR is crucial. ADDED - notes: SBC asked: I've read this so many times - "maintain regulator stability". What would be an example of an unstable regulator? Would the output oscillate with high ripple or be undefined or what exactly would happen? Regulator instability, in my experience (and as you'd expect), results in the regulator oscillating, with a large level and often high frequency signal at the output and a DC voltage measured with a non-RMS meter that appears to be stable DC at an incorrect value. The following is comment on what you may see in typical circumstances - actual results vary widely but this is a guide. Look at the output with an oscilloscope and you may see an eg 100 kHz semi sine wave of 100's of mV to some Volts of amplitude on a nominal 5VDC output. Depending on feedback parameters you might get low frequency oscillation, slow enough to see as variations on a "DC" meter, and you might get more like MHz signals. I'd expect: (a) very slow changes to be more liable to be high amplitude (as it suggests that the system is chasing its tail in such a way that it is almost in regulation and that corrective feedback is not bringing it rapidly into line), and (b) MHz level oscillation to be more liable to be lower than usual amplitude, as it suggests that slew rate of the gain path is a major factor in response speed. BUT anything can happen. Also, how exactly does the ESR come into play here? A naive passerby like myself would expect lower series resistance to be better. The intuitive and the logical do not always match. A regulator is essentially a feedback controlled power amplifier. If the feedback is negative overall, the system is stable and the output is DC. If the net loop feedback is positive, you get oscillation. The overall feedback is described by a transfer function involving the components involved.
You can look at stability from the point of view of eg Nyquist stability criteria or (related) no poles in the right half plane and all poles inside the unit circle or ... agh! It's adequate to say that the feedback from output to input must not reinforce oscillation, and that a resistance that is too large or too small may lead to an overall reinforcement when considered as part of the overall system. Simple, useful. Only slightly more complex - good. Useful - stack exchange. Useful. Lots of related pictures. And one final note, did you refer to the ripple voltage on the cap being large (even for small currents) as an inherent issue due to the small size? (i.e. Vc = integral of current over capacitance?) They say "... 0.3 ohm <= ESR <= 22 ohm ...". If you had an ESR of 10 Ohms say, then every mA of ripple current will cause 10 mV of voltage variation across the capacitor. 10 mA of ripple current = 100 mV of voltage variation, and you'd be very unhappy with your regulator. The active regulator can work to reduce this ripple, but it is nice to not have your filter capacitor adding to the problem you want it to fix.
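To make the closing arithmetic concrete, here is a small Python sketch (not from the original answer; helper names are my own) of the ripple voltage developed across a capacitor's ESR, plus a check against the 0.3 - 22 ohm stability window quoted from the LM1117 datasheet:

```python
def esr_ripple_mv(esr_ohms: float, ripple_ma: float) -> float:
    """Voltage variation across the capacitor's ESR (V = I x R), in millivolts."""
    return esr_ohms * ripple_ma

def lm1117_esr_ok(esr_ohms: float) -> bool:
    """Check an ESR against the 0.3 - 22 ohm stability window quoted above."""
    return 0.3 <= esr_ohms <= 22

print(esr_ripple_mv(10, 1))     # 10 mV per mA at 10 ohm ESR, as stated above
print(esr_ripple_mv(10, 10))    # 100 mV at 10 mA - 'very unhappy'
print(lm1117_esr_ok(5.3))       # a 10 uF / 25 V aluminum part: inside the window
print(lm1117_esr_ok(0.05))      # a very low ESR ceramic: below the window
```

The last case is the counter-intuitive one discussed above: too *little* ESR can make this regulator unstable, not just too much.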
{ "source": [ "https://electronics.stackexchange.com/questions/18384", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5411/" ] }
18,478
I'm using a 5 V / 2 A voltage regulator (L78S05) without a heatsink. I'm testing the circuit with a microcontroller (PIC18FXXXX), a few LEDs and a 1 mA piezo buzzer. The input voltage is approx. 24 VDC. After running for a minute, the voltage regulator starts to overheat, meaning it burns my finger if I keep it there for more than a second. Within a few minutes it starts to smell like it's burnt. Is this a normal behavior for this regulator? What could cause it to heat that much? Other components used in this circuit: L1: BNX002-01 EMI filter R2: Varistor F1: Fuse 0154004.DR
Summary: YOU NEED A HEATSINK NOW !!!!! :-) [and having a series resistor as well wouldn't hurt :-)] Well asked question: Your question is asked well - much better than usual. The circuit diagram and references are appreciated. This makes it much easier to give a good answer first time. Hopefully this is one ... :-) It makes sense (alas): The behavior is entirely expected. You are thermally overloading the regulator. You need to add a heat sink if you want to use it in this manner. You would benefit greatly from a proper understanding of what is happening. Power = Volts x Current. For a linear regulator, Power total = Power in load + Power in regulator. Regulator V drop = V in - V load. Here V drop in regulator = 24 - 5 = 19V. Here Power in = 24V x I load, Power in load = 5V x I load, Power in regulator = (24V - 5V) x I load. For 100 mA of load current the regulator will dissipate V drop x I load = (24-5) x 0.1 A = 19 x 0.1 = 1.9 Watt. How hot?: Page 2 of the data sheet says that the thermal resistance from junction to ambient (= air) is 50 degrees C per Watt. This means that for every Watt you dissipate you get 50 degrees C rise. At 100 mA you would have about 2 Watts dissipation, or about 2 x 50 = 100°C rise. Water would boil happily on the IC. The hottest most people can hold onto long term is 55°C. Yours is hotter than that. You didn't mention it boiling water (wet finger sizzle test). Let's assume you have ~80°C case temperature. Let's assume 20°C air temperature (because it's easy - a few degrees either way makes little difference). T rise = T case - T ambient = 80°C - 20°C = 60°C. Dissipation = T rise / R th = 60/50 ~= 1.2 Watt. At 19V drop, 1.2 W = 1.2/19 A = 0.0632 A, or about 60 mA. ie if you are drawing about 50-60 mA you will get a case temperature in the 70°C - 80°C range. You need a heatsink. Fixing it: The data sheet page 2 says R th j-case = thermal resistance from junction to case is 5°C/W = 10% of junction to air.
If you use a say 10°C/W heatsink then total R th will be R th_jc + R th_c-amb (junction-to-case plus case-to-air) = 5 + 10 = 15°C/Watt. For 50 mA you will get 0.050A x 19V = 0.95W, or a rise of 15°C/Watt x 0.95 ~= 14°C. Even with say a 20°C rise and a 25°C ambient you will get 20 + 25 = 45°C heatsink temperature. The heatsink will be hot but you will be able to hold it without (too much) pain. Beating the heat: As above, heat dissipation in a linear regulator in this situation is 1.9 Watt per 100 mA, or 19 Watt at 1A. That's a lot of heat. At 1A, to keep temperature under the temperature of boiling water (100°C) when ambient temperature was 25°C, you'd need an overall thermal resistance of no more than (100°C - 25°C)/19 Watt = 3.9°C/W. As the junction to case R th_jc is already greater than 3.9 at 5°C/W, you cannot keep the junction under 100°C in these conditions. Junction to case alone at 19V and 1A will add 19V x 1A x 5°C/W = 95°C rise. While the IC is rated to allow temperatures as high as 150°C, this is not good for reliability and should be avoided if at all possible. Just as an exercise, to JUST get it under 150°C in the above case the external heatsink would need to be (150-95)°C/19W = 2.9°C/W. That's attainable but is a larger heatsink than you'd hope to use. An alternative is to reduce the energy dissipated and thus the temperature rise. The ways of reducing heat dissipation in the regulator are: (1) Use a switching regulator such as the NatSemi Simple Switcher series. A performance switching regulator with even only 70% efficiency will reduce the heat dissipation dramatically, as only about 2 Watt is dissipated in the regulator! ie Energy in = 7.1 Watts. Energy out = 70% = 5 Watts. Current at 5 Watts at 5V = 1A. Another option is a pre-made drop-in replacement for a 3 terminal regulator. The following image and link are from the part referred to in a comment by Jay Kominek. OKI-78SR 1.5A, 5V drop in switching regulator replacement for an LM7805. 7V - 36V in.
At 36 Volts in, 5V out, 1.5A, efficiency is 80%. As Pout = 5V x 1.5A = 7.5W at 80% efficiency, the power dissipated in the regulator is 20%/80% x 7.5W = 1.9 Watts. Very tolerable. No heatsink required, and it can provide 1.5A out at 85 degrees C. [[Errata: Just noticed the curve below is at 3.3V. The 5V part manages 85% at 1.5A so is better than the above.]] (2) Reduce the voltage (3) Reduce the current (4) Dissipate some energy external to the regulator. Option 1 is the best technically. If this is not acceptable and if 2 & 3 are fixed then option 4 is needed. The easiest and (probably) best external dissipation system is a resistor. A series power resistor which drops from 24V to a voltage that the regulator will accept at max current will do the job well. Note that you will want a filter capacitor at the input to the regulator due to the resistance making the supply high impedance. Say about 0.33uF, more won't hurt. A 1 uF ceramic should do. Even a larger cap such as a 10 uF to 100 uF aluminum electrolytic should be good. Assume Vin = 24 V. Vregulator in min = 8V (headroom / dropout. Check data sheet. Selected reg says 8V at <1A.) Iin = 1 A. Required drop at 1A = 24 - 8 = 16V. Say 15V to be "safe". R = V/I = 15/1 = 15 ohms. Power = I^2 x R = 1^2 x 15 = 15 Watts. A 20 Watt resistor would be marginal. A 25W+ resistor would be better. Here's a 25W 15R resistor priced at $3.30/1, in stock, lead free, with datasheet here. Note that this also needs a heat sink!!! You CAN buy free air rated resistors up to 100's of Watts. What you use is your choice but this would work well. Note that it is rated at 25 Watt commercial or 20 Watt military, so at 15W it is "doing well". Another option is a suitable length of properly rated resistance wire mounted appropriately. Odds are a resistor manufacturer already does this better than you do.
With this arrangement: Total power = 24W. Resistor power = 15 Watt. Load power = 5 Watt. Regulator power = 3 Watt. Regulator junction rise will be 5°C/W x 3 = 15°C above the case. You will need to provide a heatsink to keep regulator and heatsink happy, but that is now "just a matter of engineering". Heatsink examples: 21°C (or K) per Watt; 7.8°C/W; Digikey - many heatsink examples including this 5.3°C/W heatsink; 2.5°C/W; 0.48°C/W!!! (119 mm wide x 300 mm long x 65 mm tall = 1 foot long x 4.7" wide x 2.6" tall). Good article on heatsink selection. Forced convection heatsink thermal resistance. Reducing linear regulator dissipation with a series input resistor: As noted above, using a series resistor to drop voltage prior to a linear regulator can greatly reduce dissipation in the regulator. While cooling a regulator usually requires heatsinks, air-cooled resistors can be obtained cheaply that are able to dissipate 10 or more Watts without needing a heatsink. It is not usually a good idea to solve high input voltage problems in this manner, but it can have its place. In the example below, an LM317 5V output 1A supply is operated from 12V. Adding a cheap air cooled wire mounted series input resistor can more than halve the power dissipation in the LM317 under worst case conditions. The LM317 needs 2 to 2.5V headroom at lower currents, or say 2.75V under extreme load and temperature conditions. (See Fig 3 in the datasheet - copied below). LM317 headroom or dropout voltage. Rin has to be sized such that it does not drop excessive voltage when V_12V is at its minimum, Vdropout is worst case for the conditions, and the series diode drop and output voltage are allowed for. Voltage across the resistor must always be <= minimum Vin, less maximum V diode drop, less worst case dropout relevant to the situation, less output voltage. So Rin <= (V_12 - Vd - 2.75 - 5)/Imax.
For 12V minimum Vin, and say 0.8V diode drop and say 1 amp out, that's (12 - 0.8 - 2.75 - 5)/1 = 3.45/1 = 3R45 = say 3R3. Power in R = I^2 x R = 3.3W, so a 5W part would be marginally acceptable and 10W would be better. Dissipation in the LM317 falls from > 6 Watt to < 3 Watt. An excellent example of a suitable wire-lead mounted air-cooled resistor would be a member of this nicely specified Yageo family of wire-wound resistors, with members rated from 2W to 40W air cooled. A 10 Watt unit is in stock at Digikey at $US0.63/1. Resistor ambient temperature ratings and temperature rise: Nice to have are these two graphs from the datasheet above, which allow real world results to be estimated. The left hand graph shows that a 10 Watt resistor operated at 3.3W = 33% of its rated Wattage has an allowable ambient temperature of up to 150°C (actually about 180°C if you plot the operating point in the graph, but the manufacturer says 150°C max is allowed). The second graph shows that the temperature rise for a 10W resistor operated at 3.3W will be about 100°C above ambient. A 5W resistor from the same family would be operating at 66% of rating and have a temperature rise of 140°C above ambient. (A 40W part would have about a 75°C rise, but 2 x 10W gives under 50°C and 10 x 2W only about 25°C!) The decreasing temperature rise with an increasing number of resistors of the same combined Wattage rating is presumably related to "square-cube law" action, as there is less cooling surface area per volume as size increases. http://www.yageo.com/documents/recent/Leaded-R_SQP-NSP_2011.pdf ________________________________________ Added August 2015 - Case study: Somebody asked the reasonable question: Isn't a more likely explanation the relatively high capacitive load (220 µF)? E.g. causing the regulator to become unstable, oscillations causing a lot of heat dissipated in the regulator. In the datasheet, all of the circuits for normal operation only have a 100 nF capacitor on the output.
I answered in comments, but they MAY be deleted in due course and this is a worthwhile addition to the subject, so here are the comments edited into the answer. In some cases oscillation and instability of the regulator certainly is an issue but, in this case and many like it, the most likely reason is excess dissipation. The 78xxx family are very old and predate both the modern low dropout regulators and the series powered (LM317 style) ones. The 78xxx family are essentially unconditionally stable with respect to Cout. They in fact need none for proper operation, and the 0.1uF often shown is to provide a reservoir for extra surge or spike handling. In some of the related data sheets they actually say that Cout can be "increased without limit", but I do not see such a note here - but also (as I'd expect) there is no note suggesting instability at high Cout. In fig 33 on page 31 of the datasheet they show the use of a reverse diode to protect against "high capacitance loads" - i.e., capacitors with high enough energy to cause damage if discharged into the output - i.e., far more than 0.1 uF. Dissipation: At 24V in and 5V out the regulator dissipates 19 mW per mA. Rthja is 50°C/W for the TO220 package, so you'd get ABOUT 1°C rise per mA of current. So with say 1 Watt dissipation in 20°C ambient air the case would be at about 65°C (and could be more depending on how the case is oriented and located). 65°C is somewhat above the lower limit of "burn my finger" temperature. At 19 mW/mA it would take about 50 mA to dissipate 1 Watt. The actual load in the example given is unknown - he shows an indicator LED at about 8 or 9 mA (if red), plus the regulator internal current (under 10 mA), plus "PIC18FXXXX, a few LEDs ...". That total could reach or exceed 50 mA depending on the PIC circuit, or MAY be much less.
Overall, given the regulator family, differential voltage, actual cooling uncertainty, Tambient uncertainty, typical °C/W figure and more, it seems like sheer dissipation is a reasonable explanation for what he sees in this case - and for what many people using linear regulators will experience in similar cases. There is a chance that it's instability for less obvious reasons, and such should never be rejected without good reason, but I'd start on dissipation. In this case a series input resistor (say 5W rated with air cooling) would move much of the dissipation into a component better suited to deal with it. And/or a modest heatsink should work marvels.
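The two sizing calculations in this answer - the LM317 series input resistor and the free-air dissipation estimate for a TO-220 regulator - can be sketched together. A rough Python sketch; the function names are mine, while the numbers (2.75 V worst-case dropout, 50 °C/W junction-to-ambient) come from the text above:

```python
def lm317_input_resistor(v_in_min, v_diode, v_dropout, v_out, i_max):
    """Largest series input resistor that still leaves the regulator
    enough headroom at minimum supply voltage and full load."""
    return (v_in_min - v_diode - v_dropout - v_out) / i_max

def regulator_temps(v_in, v_out, i_load, rth_ja=50.0, t_ambient=20.0):
    """Linear regulator dissipation and rough junction temperature in free air."""
    p = (v_in - v_out) * i_load          # W dissipated in the pass element
    return p, t_ambient + rth_ja * p     # (W, deg C)

r_max = lm317_input_resistor(12, 0.8, 2.75, 5, 1.0)   # ~3.45 ohm -> use 3.3 ohm
p, tj = regulator_temps(24, 5, 0.050)                 # 78xx, 50 mA load
print(f"Rin max = {r_max:.2f} ohm; P = {p*1000:.0f} mW, Tj ~ {tj:.0f} C")
```

At 3.3 ohm the resistor dissipates about I²R = 3.3 W at 1 A, so a 5-10 W air-cooled part is indicated, as discussed above.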
{ "source": [ "https://electronics.stackexchange.com/questions/18478", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4906/" ] }
18,525
I want to protect the PCB-mounted components from vibrations. What kind of glue/substance should I use? Which components are most prone to failure due to vibrations?
My preference: Neutral cure silicone rubber as the long term anti-vibration and sealing agent. Expect 20+ year lifetimes. If initial mechanical location is required, hot-glue (hot melt glue) only to hold things in place while the silicone rubber sets. [Hot glue is a completely unsatisfactory long term solution!!!] Details: For long term survivability, anything that needs more mechanical strength than its solder connections provide should use 'proper' mechanical restraints such as brackets, mounting clamps etc. However, it is common in many consumer products for larger mechanical items to be "held in place" or mechanically supported by an adhesive. Properly done this is a legitimate manufacturing method, even though sometimes frowned on by "proper" designers. Asian solution: The most common method in Asian products is a white glue that sets to a hard ceramic white finish. It has good initial bonding capabilities and strength, is fast setting, and fails miserably within a period of months to a year or so. Western solution: The most common western method is hot-glue or hotmelt glue. It has good initial bonding capabilities, reasonable strength, sets within the time taken to cool (tens of seconds to minutes depending on amount and thermal conduction), and fails miserably within a period of months to a year or so. People still use it and say how good it is. A superb solution: An excellent solution is neutral cure silicone rubber. It has good bonding capabilities once set - these vary with materials, but primers or special variants are available for difficult materials. It has reasonable strength but low modulus (somewhat stretchy), so resists vibration and impact forces superbly. Setting time can be minutes to many hours - usually longer rather than shorter. Once set it will typically last for 20++ years. The long setting times are not an issue as long as the adhesive is not used to retain the objects in position but to add long term vibration resistance.
Most silicone rubbers will not flow during setting, so setting time is unimportant except that the surface will be tacky at first. "Skin over" occurs typically in 10 to 20 minutes. Setting is achieved by the action of atmospheric moisture, so low humidity causes longer setting times. Full setting rate depends on section thickness, with rates of about 3mm depth per 24 hours being typical. An excellent combination where mechanical retention or location using the adhesive is required is to use a "dab" of hotmelt glue to hold things in place initially and then use silicone rubber as the long term binding and antivibration agent. As well as their excellent low modulus and long life, SR's make excellent waterproofing and sealing agents. However they are not water vapor sealants - water vapor will pass through SR with time - liquid water won't. So they "breathe" - which may be good or bad depending on application. Note that the silicone rubber used MUST be neutral cure where electronics or corrosion sensitive components are involved. "Normal" SR is acid cure and releases acetic acid during setting (smells like vinegar). This is "not too bad" in many cases but can cause corrosion in sensitive applications. Neutral cure silicone rubbers are widely available, and cost little if any extra. Most are oxime or alcohol cure and release oximes and methyl alcohol. Oximes may cause corrosion of bare copper. Oxime cure is not suitable for polycarbonate - use alcohol cure. Somewhat dearer neutral cure SR's are usually straight alcohol cure and release methyl alcohol on setting. Sensible handling is good enough. Ventilate well if using kgs at a time :-). Dow Corning make a wide range of SRs that are available in most countries and which have extensive technical data available. [No association with Dow Corning apart from being a satisfied customer]. There are numerous other adhesives which may work for you.
Cyanoacrylate glues (super glue, elephant glue) set fast, risk marring surfaces in some applications, and will debond with time on some surfaces. Latex and contact glues are low modulus and can be useful. Solvents tend to be used, which may have adverse effects. Numerous others exist, but I've found silicone rubber to be the best overall. - Silicone Rubber versus Epoxy Resin: I didn't mention epoxy resin (ER), and Mike has recommended it in preference to silicone rubber (SR). Epoxy resins are certainly highly useful. They are available with pot lives (usable time before setting) of under 1 minute to hours, and times to full cure of an hour or so to a day or so. Elevated temperatures increase curing rate and reduce pot life. ER's are usually "2 pot" - two different components are mixed just prior to use. SR's are also available in 2 part formulations, usually with rapid setting times (minutes to 10's of minutes - can be longer by design). A factor which I cited as an advantage, Mike has cited as a disadvantage. Both of us are correct (and both wrong? :-) ) and it is worth understanding why. The factor is "modulus". Modulus (more properly "Elastic Modulus") is the ratio of stress to strain - the amount of deformation experienced per applied force. There are various measures of modulus, mainly relating to directional aspects of the forces and deformations. Some more details are in this Wikipedia article on elastic modulus. Silicone Rubbers are "low modulus" - while the actual values can vary widely by design, those normally encountered will be substantially "more elastic" than typical epoxy resins. Mike and I are agreed on this :-). How useful this is is a matter for discussion. An epoxy resin will generally act in the same way as a clamp or bracket or mechanical mounting system. It rigidly attaches the component to a fixture or PCB etc.
Typical silicone rubbers are by no means "jelly like" - they set to a stiff rubber - typically about as flexible as a firm block eraser - solid but not concrete-like. The hardness of a rubber formulation is not intuitively related to modulus*, but the two are related enough to be useful. An appropriate hardness scale for rubbers is the "Shore A" scale. Car tyres would be about Shore A 70 - a rather stiff rubber. A typical rubber band would be Shore A 25-30 - rather stretchy. Silicone Rubbers are available that set to either of these extremes, but most would be intermediate, with a tendency towards the stiff side in most cases. Now consider: if you have ever encountered a vibration damper, earthquake protector, shockproof mounting, auto engine mounting, ... - what mechanical characteristics do they have? What sort of materials would you expect them to be made of? If a mounting demands absolute rigidity, cannot tolerate any degree of flexure, and needs to be solidly mounted with no movement relative to the mounting whatsoever - then silicone rubber is probably not the mounting material of choice. Conversely, if a degree of mechanical isolation from vibration, the ability to withstand impacts and large external movements or forces by distributing the forces involved in space and time, the minimisation of vibration transmission by lossy transfer of vibrational energy, freedom from stress fractures and minimal mechanical degradation with age are valuable, then silicone rubbers will be serious contenders. M Alin asked about what sort of things most need to be glued. Typically these are: heavy objects that exert substantial forces under impact (can capacitors, loose battery boxes, ...); PCB's in slots; edge connectors; objects that may attain "whiplash" under external impact; items retained solely or mainly by the adhesive. Essentially, objects that are liable to move under impact or vibration and to sustain damage if they do.
Rubber shock absorbers - images are live links. Wikipedia - Shore durometer. Machinist materials - Shore hardness. Shore A to Shore D comparison (suspect!). (* A formula that approximately relates Shore A to Young's modulus is given below. Unlikely to be useful in this context.) Added 4 years on: Q: I don't understand your back-to-back contradicting statements "should use 'proper' mechanical restraints [...] Properly done this [adhesive] is a legitimate manufacturing method". Adhesives are fantastic when, as you mention, they are designed for the task at hand. If you want to lead off with "everything should use nuts and bolts", I'd point you to 3M, whose adhesives hold the surface of the world's tallest building on. A: I reread or skimmed what I'd written to see how it fits together. I agree that if you take some of the statements both in semi-isolation AND (boolean) at absolute face value you can see some contradiction. If you take both the original question and what I've written as a whole it seems reasonably OK as is. I could change a few words to remove a few partial contradictions, but I think that anyone looking for advice on the subject who reads it all will get an extremely good idea of what I am saying [[Whether what I am saying is valid is another question :-) ]]. I note that good quality SRs (silicone rubbers) will "last" 20+ years, and that they are useful in this role. And I am comfortable with the idea that a 'properly' designed, 'properly' applied sheet of material (such as glass) can be 'safely' held by adhesive for the design life of a building when all assumptions turn out to be valid. But, I also know of glass panes that have "just fallen off" buildings, and here (New Zealand) one man fell to his death when he leaned against an adhesive-retained full floor height window-pane - which then fell off :-(.
ie "Proper design" and "proper application" and "valid assumptions" are or should be within the capabilities of professional designers - and even they get it wrong sometimes. People "designing" fastening systems with lesser resources and capabilities can in general be much more sure of "getting it right" with mechanical fasteners than with adhesives. I have been involved with the use of adhesive fastening on maybe 500,000 products made in a number of (Chinese) factories over a number of years. The lessons and experiences were enlightening :-). Yes, adhesives have their place. Yes, I use them. And, yes, I am ALWAYS wary and aware of what may happen.
{ "source": [ "https://electronics.stackexchange.com/questions/18525", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4906/" ] }
18,552
Mini USB connectors were standardized as part of USB 2.0 in 2000. In 2007, the USB Implementers Forum standardized Micro USB connectors, deprecating Mini USB connectors four months later. Why? What are the advantages of Micro USB over Mini USB that made USB-IF rip out an existing standard and replace it with another one that's basically the same thing?
Added mid 2022: A lightly edited version of a comment by @LittleWhole In 2022 the world is moving towards the far more robust and convenient USB-C connector. While there are still issues with USB-C (including even mechanical incompatibilities), things are slowly being addressed (i.e. USB4 standard on the protocol side) and I have only ever encountered one USB-C cable that wouldn't plug into a USB-C receptacle in my life. Adoption of USB-C is definitely picking up the pace - not just in consumer electronics, but a motor controller for my school's robotics club has even adopted USB-C _____________________ A major flaw: A major factor in abandoning mini-USB is that it was fatally flawed mechanically. Most people who have used a mini-USB device which requires many insertions will have experienced poor reliability after a significant but not vast number of uses. The original mini-USB had an extremely poor insertion lifetime - about 1000 insertions total claimed. That's about once a day for 3 years. Or 3 times a day for one year. Or ... For some people that order of reliability may be acceptable and the problems may go unnoticed. For others it becomes a major issue. A photographer using a flash card reader may expend that lifetime in well under a year. The original mini-USB connector had sides which sloped as at present but they were reasonably straight. (Much the same as the sides on a micro-A connector). These are now so rare that I couldn't find an image using a web search. This image is diagrammatic only but shows the basic shape with sloped but straight sides. Efforts were made to address the low lifetime issues while maintaining backwards compatibility and the current "kinked sides" design was produced. Both plug and socket were changed but the sockets ("receptacle") will still accept the old straight sided plugs. This is the shape that we are all so used to that the old shape is largely forgotten. Unfortunately, this alteration "only sort of worked". 
Insertion lifetime was increased to about 5,000 cycles. This sounds high enough in theory but in practice the design was still walking wounded with respect to mechanical reliability. 5,000 cycles is a very poor rating in the connector industry. While most users will not achieve that many insertion cycles, the actual reliability in heavy use is poor. The micro-USB connector was designed with these past failings in mind and has a rated lifetime of about 10,000 insertion cycles. This despite its apparent frailty and what may appear to be a less robust design. [This still seems woefully low to me. Time will tell]. Latching Unlike mini USB, Micro USB has a passive latching mechanism which increases retention force but which allows removal without active user action (apart from pulling). [Latching seems liable to reduce the plug "working" in the receptacle and may increase reliability]. Size matters: The micro and mini USB connectors are of similar width. But the micro connector is much thinner (smaller vertical dimension). Some product designs were not able to accommodate the height of the mini receptacle and the new thinner receptacle will encourage and allow thinner products. A mini-USB socket would have been too tall for thin design. By way of example - a number of Motorola's "Razr" cellphones used micro-USB receptacles, thus allowing the designs to be thinner than would have been possible with a Mini-USB receptacle. Specific Razr models which use MICRO-USB include RAZR2 V8, RAZR2 V9, RAZR2 V9m, RAZR2 V9x, DROID RAZR, RAZR MAXX & RAZR VE20. Wikipedia on USB - see "durability". Connector manufacturer Molex's micro USB page They say: Micro-USB technology was developed by the USB Implementers Forum, Inc. (USB-IF), an independent nonprofit group that advances USB technology. Molex's Micro-USB connectors offer advantages of smaller size and increased durability compared with the Mini-USB. 
Micro-USB connectors allow manufacturers to push the limits of thinner and lighter mobile devices with sleeker designs and greater portability. Micro-USB replaces a majority of Mini-USB plugs and receptacles currently in use. The specification of the Micro-USB supports the current USB On-The-Go (OTG) supplement and provides total mobile interconnectivity by enabling portable devices to communicate directly with each other without the need for a host computer. ... Other key features of the product include high durability of over 10,000 insertion cycles, and a passive latching mechanism that provides higher extraction forces without sacrificing the USB's ease-of-use when synchronizing and charging portable devices. All change: Once all can change, all tend to. A significant driver to a common USB connector is the new USB charging standard which is being adopted by all cellphone makers. (Or all who wish to survive). The standard relates primarily to the electrical standards required to allow universal charging and chargers but a common mechanical connection system using the various micro-USB components is part of the standard. Whereas in the past it only really mattered that your 'whizzygig' could plug into its supplied power supply, it is now required that any whizzygig's power supply will fit any other device. A common plug and socket system is a necessary minimum for this to happen. While adapters can be used this is an undesirable approach. As USB charging becomes widely accepted not only for cellphones but for xxxpods, xxxpads, pda's and stuff in general, the drive for a common connector accelerates. The exception may be manufacturers whose names begin with A who consider themselves large enough and safe enough to actively pursue interconnect incompatibility in their products. Once a new standard is widely adopted and attains 'critical mass" the economics of scale tend to drive the market very rapidly to the new standard. 
It becomes increasingly less cost effective to manufacture, stock and handle parts which have a diminishing market share and which are incompatible with new facilities. I may add some more references to this if it appears there is interest - or ask Mr Gargoyle. Large list of cellphones that use micro-USB receptacle _______________________________ _______________________________ A few more images allowing comparisons of a range of aspects including thickness, area of panel, overall volume (all being important independently of the others to some, for various reasons) and retention means. Large Google image samples, each linked to a web page, and more. Useful discussion & brief history. Note: they say (and, as Bailey S also notes): "Why do Micro types offer better durability? Accomplished by moving the leaf-spring from the PCB receptacle to the plug, the most-stressed part is now on the cable side of the connection. The inexpensive cable bears most wear instead of the µUSB device." Maybe useful: USB CONNECTOR GUIDE — GUIDE TO USB CABLES. USB connections compared. What is Micro USB vs Mini USB.
{ "source": [ "https://electronics.stackexchange.com/questions/18552", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5473/" ] }
18,612
I bought a Lithium-ion battery for a camera (much cheaper than the brand replacement but not unreasonably cheap compared to AAA Li-Ion batteries with similar charge). I however have doubts that it has the capacity it claims on the package (in mAh). Is there a simple way for me to roughly verify the claim on the package which is more precise and less time consuming than running some benchmarks with the camera and a comparison battery? I.e. is there some device / method that, given a Lithium-ion battery which is supposedly fully charged, determines: Is it actually fully charged? What is its current charge in mAh?
Assessing full charge is the easy part. Method (a) A fully charged Lithium Ion single cell battery will have an open circuit voltage of about 4.2 Volt*. (4.1 to 4.2 OK. 4.0 not quite there. 4.3 - a bit high.) Some cameras use two cells - double the expected voltages. Laptops and other larger devices use 3 or more cells. The voltage should be a multiple of the above voltage (ie N x (4.1 to 4.2V)). [*There are variants that allow higher voltages. Unless you are CERTAIN that this includes your one, assume that it doesn't. Getting it wrong can be 'upsetting'.] Method (b) Use a good quality charger (eg one supplied by the camera manufacturer, or one of known quality) which has a charging light. Place the "charged" battery on the charger. Depending on how long since it was last charged, the charge light should either flash or perhaps remain on for a minute or two and then go off. Remove the battery from the charger. Wait 10 seconds. Place the battery back on the charger. The charge light should flash very briefly and go out. Assessing capacity is harder, but not hard. (a) You can get some indication, for nominally equal batteries, from the weight. A significant part of the weight in a LiIon battery is actively involved components, whether electrical or mechanical (separators, conductors, electrolyte and, of course, lithium). Two batteries of the same nominal capacity should have similar weights. I'd guesstimate that a 10% difference may be due to happenstance and construction, but beyond that I'd be suspicious. In larger & heavier batteries this test will work better than for very small batteries. For interest, for AA NimH cells this is an excellent indicator. Modern high capacity AA's which claim 2500 mAh+ capacity should be in the high twenty gram range - say 26 grams plus, with some just over 30 grams. Anything under 20 grams is a complete dud and anything 25 grams or below is suspect. (b) For any sort of accuracy you need to discharge the battery to an "end point" and measure capacity.
No other reasonably accessible method will give you a good answer. There are other methods, such as measuring the change in voltage over a given time under a given load and trying to assess where you are on the discharge curve. This is difficult to get right and needs experience and a degree of luck. Measuring discharge time is "easier". Best is a constant current load, which can be made very easily with eg an LM317 and one resistor, but I'll assume for now that you don't want to do that. Ask if interested. A discharge resistor that takes at least one hour to discharge the battery should be used. You could use a motor or lamp or camera or ... but a resistor has some advantages. R minimum ~= (Cells_in_battery x 4000) / mAh. Eg if you have a 1 cell battery (Voc =~ 4.2V) of 1500 mAh capacity then R = cells x 4000 / mAh = 1 x 4000/1500 = 2.666 ohm ~= 3 ohm or 3.3 ohm (std value). Use the next largest resistor than the value calculated. Up to several times larger is OK BUT it will take proportionally longer. Resistor power rating: Resistor power = V^2/R = (4.2 x number_of_cells)^2 / R. Eg for the above single cell and 3 ohm resistor, the worst-case (full charge) dissipation is 4.2^2 / 3 =~ 5.9 Watt, falling as the battery discharges. Use a 10 Watt or greater resistor; a 5 Watt part may survive, as the voltage is below maximum for most of the discharge, but allow margin. Method: I'll describe this briefly as I don't know your experience level. This may be easy to follow or hard. If hard, ask more questions. Attach temporary wires to the battery terminals. Two paper clips bent at the end and resting on the terminals (held with a weight or tape) work if the terminals are flat and accessible. Wires inserted into the connector work if the terminals are not openly accessible. Some batteries will not provide power until you give them secret handshakes, but most will. Battery with accessible terminals. Below: Harder to access terminals. Two dress making pins or two wires can work here BUT DO NOT SHORT THEM TOGETHER!!! IF YOU ARE NOT COMFORTABLE DOING THIS, DON'T DO IT. Monitor the battery voltage throughout - multimeter connected to the battery wires and set to an appropriate range.
Connect the resistor to the battery leads. Start a timer. Monitor voltage. Stop at 3.2V per cell. DO NOT DISCHARGE BELOW 3 VOLTS PER CELL. STOPPING AT 3.2V IS A "GOOD IDEA". A LiIon battery may be damaged badly by very deep discharge. Set a timer. DO NOT leave this running and walk away. Below: Typical lithium ion 1 cell 'battery' discharge curve. The best method is to do this with genuine and clone batteries and compare times. Method (c) Easiest :-). Use a camera. Set to video or timed photos. Note start and end frame times. Compare. Major advantages are "set and forget": no playing with battery connections, self timing. UPDATE - January 1st 2013 - Happy New Year. I've just been asked offlist by somebody about the LM317 circuit I mentioned for constant current discharge. Here is an example. I copied this from the very useful and relevant webpage on LED driving - here - and they in turn copied it from an LM317 data sheet. The offlist query said: You mentioned a way by using LM317 to determine battery capacity. I need to check a lithium ion battery with about 1700mAh capacity. What do you recommend to me to measure this kind of battery capacity in a reasonable time like 3-4 hours. A 1700 mAh battery would be discharged in 3 hours by 1700/3 =~ 570 mA and in 4 hours by 1700/4 ~= 425 mA. So using about 500 mA and seeing how long it takes will give a measure of battery capacity. The current of the load in the circuit above is Iout = Vref/R1, so R1 = Vref/Iout. For an LM317 Vref = 1.25V, so for 500 mA R1 = V/I = 1.25V / 0.5A = 2.5 Ohm. Power in R1 = I^2 x R = 0.5^2 x 2.5 or about 0.7 Watt. A 1 Watt resistor would probably survive this - a 2 Watt or 5 Watt would be better. The LM317 will dissipate V_LM317 x I = (Vbattery - Vref) x I = (4.2 - 1.25) x 0.5 =~ 1.5 Watt. So a heatsink or piece of aluminum or other thermally conductive material on the LM317 will be "a good idea".
I use 4.2 V for the battery voltage. It will drop as the battery discharges. Note that in many cases a 1700 mAh LiIon battery can be safely discharged at up to a 1C rate = 1700 mA in this case. Safer is C/2 = 850 mA. The actual max allowed rate should be set by the manufacturer. Use Imax = C/2 if no data is available. This will usually be safe but "caveat emptor" / "YMMV" ... . If using a higher rate the power dissipation in the resistor and LM317 will be higher and changes will be needed. Some LM317s will handle 1A max. Some will handle 1.5A. (Some smaller packages < 1A.) See the data sheet. The LM350 is a big brother version of the LM317 that works at several amps. The battery endpoint voltage should be the endpoint voltage that you will use in your system. As per my comments above, this MUST NOT BE below 3.0V, to prevent battery damage, and higher is safer. You need either to keep a close eye on this if stopping discharge manually OR set up an automatic cutoff system. How you do this and how you time the discharge period is up to you.
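Both discharge-test variants above - plain resistor and LM317 constant current - come down to a few lines of arithmetic. A Python sketch of the sums, using the rules of thumb from this answer (the 4000-factor resistor rule, ~4.2 V per full cell, Vref = 1.25 V); the helper names are mine:

```python
def discharge_resistor(cells, capacity_mah):
    """Resistor for a roughly one-hour discharge (rule of thumb from the text),
    plus its worst-case dissipation at full charge (plain Ohm's law)."""
    r = cells * 4000.0 / capacity_mah        # round UP to the next standard value
    p = (4.2 * cells) ** 2 / r               # P = V^2 / R at ~4.2 V per cell
    return r, p

def lm317_cc_discharge(capacity_mah, hours, cells=1, vref=1.25):
    """LM317 constant-current discharge: Iout = Vref / R1."""
    i = capacity_mah / 1000.0 / hours        # target discharge current, A
    r1 = vref / i                            # sets the current
    p_r1 = vref * i                          # power in R1
    p_lm317 = (4.2 * cells - vref) * i       # worst case; falls as battery sags
    return i, r1, p_r1, p_lm317

print(discharge_resistor(1, 1500))           # ~2.7 ohm
print(lm317_cc_discharge(1700, 3.4))         # 0.5 A, R1 = 2.5 ohm
```

Multiply the measured hours-to-endpoint by the set current to get capacity: e.g. 0.5 A held for 3.1 h before hitting 3.2 V/cell indicates roughly 1550 mAh.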
{ "source": [ "https://electronics.stackexchange.com/questions/18612", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5500/" ] }
18,874
I need to pass high current through some parts of my circuit. I used an online PCB track width calculator and found that the required track width is about 5mm and the minimum clearance is 1mm, which makes about 7mm total width for just one track. I need several of these high current carrying tracks on my PCB, which will consume more space than I can afford. I am thinking of soldering copper wires on the top side of the PCB, parallel to thin, merely symbolic tracks on the bottom side. But I would like to know if there is a more professional way of overcoming this problem.
High-current PCB bus bars are available from several suppliers, such as: http://www.espbus.com and are an ideal solution. A quick search for "PCB bus bars" will yield a number of suppliers.
{ "source": [ "https://electronics.stackexchange.com/questions/18874", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5542/" ] }
18,884
I think it is time I understood the working principle of MOSFET transistors... Suppose that I want to switch the voltage on a resistive load with a MOSFET transistor. Any control signal between -500V and +500V can be easily generated. The transistor models in the picture are not important; they can be any other appropriate model as well. Question #1 Which of the driving techniques are feasible? I mean, which of these four circuits would work with correctly applied control signals? Question #2 What is the range of control signal voltages (CS1, CS2, CS3, CS4) that loads and unloads the resistor? (I understand that the exact boundaries of the on and off states must be calculated individually. But I'm asking for approximate values to understand the working principle. Please give statements like "In circuit (2), the transistor turns on when CS2 is below 397V and turns off when above 397V.")
All the circuits are feasible when correctly driven, but 2 & 3 are far more common, far easier to drive well and far safer with respect to not doing things wrong. Rather than give you a set of voltage-based answers I'll give you some general rules which are much more useful once you understand them. First, some definitions. Define a voltage Vgsm as the maximum voltage by which the gate may safely be more positive than the source. Define -Vgsm as the most that Vg may be negative relative to the source. Define Vth as the voltage that the gate must be, relative to the source, to just turn the FET on. Vth is positive for N-channel FETs and negative for P-channel FETs. MOSFETs have a safe maximum Vgs or Vsg beyond which they may be destroyed. This is usually about the same in either direction and is mostly a result of construction and oxide layer thickness. A MOSFET will be "on" when Vg is between Vth and Vgsm: in the positive direction for N-channel FETs, in the negative direction for P-channel FETs. This makes sense of controlling the FETs in the above circuits. SO: Circuit 3: the MOSFET is safe for Vgs in the range +/- Vgsm, and on for Vgs > +Vth. Circuit 2: the MOSFET is safe for Vgs in the range +/- Vgsm, and on for -Vgs > -Vth (i.e. the gate is more negative than the source by at least the magnitude of Vth). Circuit 1: exactly the same as circuit 3, i.e. the voltages relative to the FET are identical. No surprise when you think about it. BUT Vg will now be ~= 400 V at all times. Circuit 4: exactly the same as circuit 2, i.e. the voltages relative to the FET are identical. Again, no surprise when you think about it. BUT Vg will now be ~= 400 V below the 400 V rail at all times. I.e. the difference between the circuits is the voltage of Vg with respect to ground for an N-channel FET, or with respect to the +400 V rail for a P-channel FET. The FET does not "know" the absolute voltage its gate is at - it only "cares" about voltages relative to the source. Related - this will arise along the way after the above discussion: MOSFETs are '2 quadrant' switches.
That is, for an N-channel switch where the polarity of gate and drain relative to the source in the "4 quadrants" can be + +, + -, - -, and - +, the MOSFET will turn on with Vds = +ve and Vgs +ve OR Vds negative and Vgs positive. Added early 2016: Q: You mentioned that the circuits 2 & 3 are very common, why is that? The switches can work in both quadrants, what makes one choose P channel over N channel, high side over low side? A: This is largely covered in the original answer if you go through it carefully. But ... ALL circuits operate only in the 1st quadrant when on: Your question about 2 quadrant operation indicates a misunderstanding of the above 4 circuits. I mentioned 2 quadrant operation at the end (above) BUT it is not relevant in normal operation. All 4 of the circuits above are operating in their 1st quadrant, ie Vgs polarity = Vds polarity at all times when turned on. 2nd quadrant operation is possible, ie Vgs polarity = -Vds polarity at all times when turned on, BUT this usually causes complications due to the inbuilt "body diode" in the FET - see "Body Diode" section at end. In circuits 2 & 3 the gate drive voltage always lies between the power supply rails, making it unnecessary to use "special" arrangements to derive the drive voltages. In circuit 1 the gate drive must be above the 400V rail to get enough Vgs to turn on the MOSFET. In circuit 4 the gate voltage must be below ground. To achieve such voltages, "bootstrap" circuits are often used, which usually use a diode-capacitor "pump" to give the extra voltage. A common arrangement is to use 4 x N channel in a bridge. The 2 low-side FETs have the usual gate drive - say 0/12 V - and the 2 high-side FETs need (here) say 412 V to supply +12 V of Vgs when the FET is turned on. This is not technically hard but is more to do, more to go wrong, and must be designed. The bootstrap supply is often driven by the PWM switching signals, so there is a lower frequency limit below which you no longer get reliable upper gate drive.
Turn off the AC and the bootstrap voltage starts to decay under leakage. Again, not hard, just nice to avoid. Using 4 x N channel is "nice" as all are matched, and Rds(on) is usually lower for the same $ than P channel. NOTE !!!: If packages are isolated-tab or use insulated mounting, all can go together on the same heatsink - BUT do take due CARE!!! In this case the lower 2 have switched 400V on the drains and sources are grounded, with gates at 0/12V say, while the upper 2 have permanent 400V on the drains, switched 400V on the sources and 400/412 V on the gates. Body diode: All FETs that are usually encountered* have an "intrinsic" or "parasitic" reverse-biased body diode between drain and source. In normal operation this does not affect intended operation. If the FET is operated in the 2nd quadrant (eg for N channel Vds = -ve, Vgs = +ve) [[pedantry: call that 3rd if you like :-) ]] then the body diode will conduct when the FET is turned off while Vds is -ve. There are situations where this is useful and desired, but they are not what is commonly found in eg 4-FET bridges. *The body diode is formed because the substrate that the device layers are formed on is conductive. Devices with an insulating substrate (such as Silicon on Sapphire) do not have this intrinsic body diode, but are usually very expensive and specialised.
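The gate-voltage rules above lend themselves to a small checker. This is purely an illustrative sketch: the function name and the sign conventions (Vth positive for N channel, negative for P channel, Vgsm as the datasheet absolute-maximum |Vgs|) are mine, not from any datasheet.

```python
def fet_state(vgs, vth, vgs_max, channel="N"):
    """Classify the state of a MOSFET from its gate-source voltage.

    vth is positive for N channel and negative for P channel;
    vgs_max is the datasheet absolute-maximum |Vgs|.
    """
    if abs(vgs) > vgs_max:
        return "unsafe"                  # gate oxide may be destroyed
    if channel == "N":
        return "on" if vgs > vth else "off"
    return "on" if vgs < vth else "off"  # P: gate more negative than Vth

print(fet_state(10, vth=4, vgs_max=20))                  # N channel driven on
print(fet_state(-10, vth=-4, vgs_max=20, channel="P"))   # P channel driven on
print(fet_state(30, vth=4, vgs_max=20))                  # beyond safe Vgs
```

Note that, exactly as the answer stresses, only Vgs (gate relative to source) appears here: the absolute gate voltage with respect to ground never enters the decision.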
{ "source": [ "https://electronics.stackexchange.com/questions/18884", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5542/" ] }
18,885
I have the following circuit hooked up on a breadboard. I vary the gate voltage using a potentiometer. Here is what confuses me: according to wikipedia, the MOSFET is in saturation when V(GS) > V(TH) and V(DS) > V(GS) - V(TH). If I slowly increase the gate voltage starting from 0, the MOSFET remains off. The LED starts conducting a small amount of current when the gate voltage is around 2.5V or so. The brightness stops increasing when the gate voltage reaches around 4V. There is no change in the brightness of the LED when the gate voltage is greater than 4V. Even if I increase the voltage rapidly from 4 to 12, the brightness of the LED remains unchanged. I also monitor the drain-to-source voltage while I'm increasing the gate voltage. The drain-to-source voltage drops from 12V to close to 0V when the gate voltage is 4V or so. This is easy to understand: since R1 and R(DS) form a voltage divider and R1 is much larger than R(DS), most of the voltage is dropped on R1. In my measurements, around 10V is being dropped on R1 and the rest on the red LED (2V). However, since V(DS) is now approximately 0, the condition V(DS) > V(GS) - V(TH) is not satisfied, so is the MOSFET not in saturation? If that is the case, how would one design a circuit in which the MOSFET is in saturation? Note that: R(DS) for the IRF840 is 0.8 Ohms. V(TH) is between 2V and 4V. Vcc is 12V. Here is the load line that I plotted for my circuit. Now, from what I've gained from the answers here, in order to operate the MOSFET as a switch the operating point should be towards the left of the load line. Am I correct in my understanding? And if one superimposes the MOSFET characteristic curves on the above graph, then the operating point would be in the so-called "linear/triode" region. In fact, the switch should reach that region as quickly as possible in order to work efficiently. Do I get it, or am I completely wrong?
First of all, "saturation" in MOSFETs means that a change in VDS will not produce a significant change in Id (drain current). You can think of a MOSFET in saturation as a current source: regardless of the voltage across VDS (within limits, of course), the current through the device will be (almost) constant. Now going back to the question: According to wikipedia, the MOSFET is in saturation when V(GS) > V(TH) and V(DS) > V(GS) - V(TH). That is correct. If I slowly increase the gate voltage starting from 0, the MOSFET remains off. The LED starts conducting a small amount of current when the gate voltage is around 2.5V or so. You increased the Vgs above the Vth of the NMOS, so the channel was formed and the device started to conduct. The brightness stops increasing when the gate voltage reaches around 4V. There is no change in the brightness of the LED when the gate voltage is greater than 4V. Even if I increase the voltage rapidly from 4 to 12, the brightness of the LED remains unchanged. You increased the Vgs, making the device conduct more current. At Vgs = 4V, the thing limiting the amount of current is no longer the transistor but the resistor you have in series with it. I also monitor the drain-to-source voltage while I'm increasing the gate voltage. The drain-to-source voltage drops from 12V to close to 0V when the gate voltage is 4V or so. This is easy to understand: since R1 and R(DS) form a voltage divider and R1 is much larger than R(DS), most of the voltage is dropped on R1. In my measurements, around 10V is being dropped on R1 and the rest on the red LED (2V). Everything looks in order here. However, since V(DS) is now approximately 0, the condition V(DS) > V(GS) - V(TH) is not satisfied, so is the MOSFET not in saturation? No, it is not. It is in the linear or triode region. It behaves as a resistor in that region; that is, increasing Vds will increase Id. If this is the case, how would one design a circuit in which the MOSFET is in saturation?
You already have. You just need to take care of the operating point (make sure that the conditions you mentioned are met). A) In the linear region you can observe the following: when increasing the SUPPLY voltage, the LED will get brighter, as the current through the resistor and transistor rises and thus more flows through the LED. B) In the saturation region something different happens: when increasing the SUPPLY voltage, the LED brightness will not change. The extra voltage that you apply at the SUPPLY will not translate into a bigger current. Instead it will appear across the MOSFET, so the DRAIN voltage will rise together with the supply voltage (so increasing the supply by 2V will mean the drain voltage increases by almost 2V).
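The region test from the answer (saturation when Vgs > Vth and Vds > Vgs - Vth, cutoff below Vth, triode otherwise) can be written out directly. The numbers below are the rough values measured in the question, used purely for illustration:

```python
def mosfet_region(vgs, vds, vth):
    """Operating region of an N-channel MOSFET per the textbook conditions."""
    if vgs <= vth:
        return "cutoff"
    if vds > vgs - vth:
        return "saturation"   # behaves like a current source
    return "triode"           # behaves like a resistor (the "switch" region)

# Rough numbers from the experiment: Vgs ~ 4 V, Vth ~ 2.5 V, Vds near 0 V.
print(mosfet_region(vgs=4.0, vds=0.1, vth=2.5))    # triode, not saturation
```

This matches the conclusion above: once Vds collapses toward 0 V, the condition Vds > Vgs - Vth fails, and the device is sitting in the triode region, which is exactly where an efficient switch should be.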
{ "source": [ "https://electronics.stackexchange.com/questions/18885", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3966/" ] }
19,077
I'd like to do some hobbyist soldering at home, and would like to make sure I don't poison those living with me (especially small children). Lead-free seems necessary - what other features should I look for in solder? Are the different types of solder roughly the same in terms of safety (breathing the fumes, vapor fallout, etc.)? Is there more I should do to keep the place clean besides having a filter fan and wiping down the work surface when finished?
What type of solder is safest for home (hobbyist) use? This advice is liable to be met with doubt and even derision by some - by all means do your own checks, but please at least think about what I write here: I have cited a number of references below which give guidelines for soldering. These are as applicable to lead-free solders as to lead-based solders. If you decide after reading the following not to trust lead-based solders, despite my advice, then the guidelines will still prove useful. It is widely known that the improper handling of metallic lead can cause health problems. However, it is widely understood, currently and historically, that use of tin-lead solder in normal actual soldering applications has essentially no negative health impact. Handling of the lead-based solder, as opposed to the actual soldering, needs to be done sensibly, but this is easily achieved with basic common-sense procedures. While some electrical workers do have mildly increased epidemiological incidences of some diseases, these appear to be related to electric field exposure - and even then the correlations are so small as to be generally statistically insignificant. Lead metal has a very low vapor pressure, and when it is exposed at room temperature essentially none is inhaled. At soldering temperatures vapor levels are still essentially zero. Tin-lead solder is essentially safe if used anything like sensibly. While some people express doubts about its use in any manner, these are not generally well founded in formal medical evidence or experience. While it IS possible to poison yourself with tin-lead solder, taking even very modest and sensible precautions renders the practice safe for the user and for others in their household. While you would not want to allow children to suck it, anything like reasonable precautions are going to result in its use not being an issue. A significant proportion of lead which is "ingested" (taken orally or eaten) will be absorbed by the body.
BUT you will acquire essentially no ingested lead from soldering if you don't eat it, don't suck solder and wash your hands after soldering. Smoking while soldering is liable to be even unwiser than usual. It is widely accepted that inhaled lead from soldering is not at a dangerous level. The majority of inhaled lead is absorbed by the body, BUT the vapor pressure of lead at soldering temperatures is so low that there is essentially no lead vapor in the air while soldering. Sticking a soldering iron up your nose (hot or cold) is liable to damage your health, but not due to the effects of lead. The vapor pressure of lead at 330 C (VERY hot for solder) / ~600 Kelvin is about 10^-8 mm of mercury. Lead = "Pb" crosses the x-axis at 600 K on the lower graph here. These are interesting and useful graphs of the vapor pressure with temperature for many elements. (By comparison, zinc has about 1,000,000 times as high a vapor pressure at the same temperature, and cadmium (which should definitely be avoided) 10,000,000 times as high.) Atmospheric pressure is ~760 mm of Hg, so lead vapor pressure at a VERY hot iron temperature is about 1 part in 10^11, or one part per 100 billion. The major problems with lead are caused either by its release into the environment, where it can be converted to more soluble forms and introduced into the food chain, or by its use in forms which are already soluble or which are liable to be ingested. So lead paint on toys or nursery furniture, lead paint on houses which gets turned into sanding dust or paint flakes, lead as an additive in petrol which gets disseminated in gaseous and soluble forms, or lead which ends up in landfills are all forms which cause real problems and which have led to bans on lead in many situations. Lead in solder is bad for the environment because of where it is liable to end up when it is disposed of. This general prohibition has led to a large degree of misunderstanding about its use "at the front end".
If you insist on regularly vaporising lead in close proximity to your person, by eg firing a handgun frequently, then you should take precautions re vapor inhalation. Otherwise, common sense is very likely to be good enough. Washing your hands after soldering is a wise precaution, but more likely to be useful for removal of trace solid lead particles. Use of a fume extractor & filter is wise - but I'd be far more worried about the resin or flux smoke than about lead vapor. Sean Breheney notes: "There IS a significant danger associated with inhaling the fumes of certain fluxes (including rosin) and therefore fume extraction or excellent ventilation is, in my opinion, essential for anyone doing soldering more often than, say, 1 hour per week. I generally have trained myself to inhale when the fumes are not being generated and exhale slowly while actually soldering - but that is only adequate for very small jobs and I try to remember to use a fume extractor for larger ones." (Added July 2021) Note that there are MANY web documents which state that lead solder is hazardous. Few or none try to explain why this is said to be the case. Soldering precautions sheet. They note: Potential exposure routes from soldering include ingestion of lead due to surface contamination. The digestive system is the primary means by which lead can be absorbed into the human body. Skin contact with lead is, in and of itself, harmless, but getting lead dust on your hands can result in it being ingested if you don't wash your hands before eating, smoking, etc. An often overlooked danger is the habit of chewing fingernails. The spaces under the fingernails are great collectors of dirt and dust. Almost everything that is handled or touched may be found under the fingernails. Ingesting even a small amount of lead is dangerous because it is a cumulative poison which is not excreted by normal bodily function. Lead soldering safety guidelines Standard advice Their comments on lead fumes are rubbish.
FWIW - the vapor pressure of lead is given by $$\log_{10}p(mm) = -\frac{10372}{T} - \log_{10}T + 11.35$$ Quoted from The Vapor Pressures of Metals; a New Experimental Method Wikipedia - Vapor pressure For more on soldering in general see Better soldering Lead spatter and inhalation & ingestion It's been suggested that the statement: "The majority of inhaled lead is absorbed by the body. BUT the vapor pressure of lead at soldering temperatures is so low that there is essentially no lead vapor in the air while soldering." is not relevant, as it's suggested that vapor pressure isn't important if the lead is being atomized into droplets that you can then inhale: "Look around the soldering iron and there's lead dust everywhere." In response: "Inhalation" there referred to lead rendered gaseous - usually by chemical combination. Eg the use of tetraethyl lead in petrol resulted in gaseous lead compounds, not directly from the TEL itself but - from the Wikipedia Tetraethyllead page: The Pb and PbO would quickly over-accumulate and destroy an engine. For this reason, the lead scavengers 1,2-dibromoethane and 1,2-dichloroethane are used in conjunction with TEL - these agents form volatile lead(II) bromide and lead(II) chloride, respectively, which are flushed from the engine and into the air. In engines this process occurs at far higher temperatures than exist in soldering, and there is no intentional process which produces volatile lead compounds. (The exceedingly unfortunate may discover a flux which contains substances like the above lead-scavenging halides, but by the very nature of flux this seems vanishingly unlikely in the real world.) Lead in metallic droplets at soldering temperatures does not come close to being vaporised at anything like significant partial pressures (see comments and references above), and if any enters the body it counts as "ingested", not inhaled. Basic precautions against ingestion are widely recommended, as mentioned above.
Washing of hands, not smoking while soldering and not licking lead have been noted as sensible. For lead "spatter" to qualify for direct ingestion it would need to ballistically enter the mouth or nose while soldering. It's conceivable that some may do this, but if any does, the quantity is very small. It's generally recognised, both historically and currently, that the actual soldering process is not what's hazardous. A significant number of webpages do state that lead from solder is vaporized by soldering and that dangerous quantities of lead can be inhaled. On EVERY such page I have looked at there are no references to anything like reputable sources, and in almost every such case there are no references at all. The general ROHS prohibitions and the undoubted dangers that lead poses in appropriate circumstances have led to a body of urban legend and spurious comments without any traceable foundations. And again ... It was suggested that: Anyone who's sneezed in a dusty room knows that it doesn't have to enter the nose or mouth "ballistically". Any time solder splatters or flux pops, it creates tiny droplets of lead that solidify to dust. Small enough particles of dust can be airborne, and small exposures over years accumulate in the body. "Lead dust can form when lead-based paint is dry scraped, dry sanded, or heated. Lead chips and dust can get on surfaces and objects that people touch. Settled lead dust can re-enter the air when people vacuum, sweep or walk through it." In response: A quality reference, or a few, indicating that airborne dust can be produced in significant quantity by soldering would go a long way to establishing the assertions. Finding negative evidence is, as ever, harder. There is no question about the dangers from lead-based paints, whether from airborne dust from sanding, children sucking lead-painted objects or surface dust produced - all these are extremely well documented.
Lead in a metallic alloy for soldering is an entirely different animal. I have many decades of personal soldering experience and a reasonable awareness of industry experience. Dusty rooms we all know about, but that has no link to whether solder does or doesn't produce lead dust. Soldering can produce small lead particles, but these appear to be metallic alloyed lead. "Lead" dust from paint is liable to contain lead oxide or occasionally other lead-based substances. Such dust may indeed be subject to aerial transmission if finely enough divided, but this provides no information about how metallic lead performs in dust production. I am unaware of discernible "lead dust" occurring from 'popping flux', and I'm unaware of any mechanism that would allow mechanically small lead droplets to achieve a low enough density to float in air in the normal sense. Brownian motion could loft metallic lead particles of a small enough size, but I've not seen any evidence (or found any references) suggesting that small enough particles are formed in measurable quantities. Interestingly - this answer had 2 downvotes - now it has one. Somebody changed their mind. Thanks. Somebody didn't. Maybe they'd like to tell me why? The aim is to be balanced and objective and as factual as possible. If it falls short please advise. ___________________________________________________________ Added 2020: SUCKING SOLDER? I remember biting solder when I was a kid, and for about 2 years I wouldn't wash my hands after soldering. Will the effects show up in the future?? I can only give you a layman's opinion. I'm not qualified to give medical advice. I'd GUESS it's probably OK, BUT I don't know. I suspect that the effects are limited due to the insolubility of lead - BUT lead poisoning from finely divided lead, such as in paint, is a significant poisoning path. You can be tested for lead in the blood very easily (it requires one drop of blood) and it's probably worth doing.
Internet diagnosis is, as I'm sure you know, a very poor substitute for proper medical advice. That said Here is Mayo Clinic's page on Lead poisoning symptoms & causes. And Here is their page on diagnosis and treatment. Mayo Clinic is one of the better sources for medical advice but, even then, it certainly does not replace proper medical advice.
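As a sanity check on the vapor-pressure argument, the expression quoted above can be evaluated numerically. Note that the final constant is taken as +11.35 here, since that sign reproduces the "about 10^-8 mm of mercury" figure cited for ~600 K; treat the exact constant as an assumption to be checked against the original paper.

```python
import math

def lead_vapor_pressure_mm(t_kelvin):
    """log10 p(mm Hg) = -10372/T - log10(T) + 11.35 (liquid lead)."""
    log10_p = -10372.0 / t_kelvin - math.log10(t_kelvin) + 11.35
    return 10.0 ** log10_p

p = lead_vapor_pressure_mm(600.0)   # roughly soldering-iron temperature
print(p)                            # a few parts in 10^9 mm Hg
print(p / 760.0)                    # as a fraction of one atmosphere
```

At 600 K this gives a pressure on the order of 10^-9 to 10^-8 mm Hg, i.e. roughly one part in 10^11 of an atmosphere, which is the "essentially zero lead vapor" point the answer is making.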
{ "source": [ "https://electronics.stackexchange.com/questions/19077", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4700/" ] }
19,102
At an exhibition on home appliances, I saw this guy hawking his 'energy saver device'. The guy claimed that merely plugging his instrument into the supply would reduce the consumption reported in an energy usage meter by a fourth. He mentioned something about the load seen by the meter being non-resistive ... Not sure if this is the right forum to post the question (feel free to vote for the question to be closed)... Anyhow I find myself wondering - Is this just a sales-pitch (no warranty/guarantee on the device) or is there some fire to the smoke?
SCAM WARNING. These "energy saver" devices usually are simply capacitors, and they don't save you any money. Typical customer energy (usually billed in kilowatt-hours, kWh) meters are not affected by adding a capacitor. The scam works like so: Many loads in your home are inductors (fridge motor, furnace fan) If you install a capacitor which has just the right value, the current in the power grid leading to your home is reduced. Con artists correctly claim that some energy somewhere is being saved. Con artists correctly claim that industry uses these capacitors to save money Con artist sneakily insinuates that this somehow saves YOU money As evidence, con artist supplies testimonials rather than basic lab test results. So why doesn't this save you money? It's because, while motors do draw extra unnecessary current, the energy meter on the side of your home is designed to ignore that extra current! Adding a capacitor doesn't change your electric bill. So energy is really being saved, right? Yes: it's energy which otherwise would heat all the power lines between the company generators and your home. The extra capacitor doesn't cause your motors to use less energy. Instead it relieves some load-current on the power grid. The electric company benefits from this ...but the homeowner doesn't! Why then do factories use these Power-Factor Correction capacitors? Ah, for most huge industrial customers, electric utility companies install a different type of a meter: one with two dials. One dial is used to bill the customer for real energy consumed, while the other is used to bill wasted or 'reactive' energy. These industrial meters do detect the excess current drawn by induction motors. The industrial customers are charged for the unnecessary heating of the power grid. If they install just the right value of capacitor, they can reduce their electric bills. And this brings up one last bit of info. 
To reduce the excess current in the power grid, the capacitor has to be just the right value! If you have no induction motors in your home, then a PFC capacitor is less than worthless. Adding a PFC capacitor will INCREASE the wasted reactive current, not reduce it. So basically that's part of the dishonesty: selling capacitors of an unknown value in order to cancel out the effects of an unknown number of induction motors ...which aren't being billed by the electric company in the first place. Finally, what about #6 above? The testimonials? I suspect that these are genuine. If you were to install a very expensive PFC capacitor in your home, you'd be bringing in the "stone soup effect." You'd become very aware of any wasted energy. You'd start "helping" the device: turning off lights, turning down the furnace and the air conditioning, perhaps buying better windows and installing improved insulation. The expensive and worthless "stone" has turned into "soup." But you'd save lots more money if you skipped the PFC capacitor scam and just started turning down the hot water heater in the first place.
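The billing argument can be made concrete with a little arithmetic: a residential kWh meter integrates real power, which a parallel capacitor does not change; only the apparent power, and hence the grid line current, drops. A sketch with made-up example numbers:

```python
def line_currents(real_power_w, volts, power_factor):
    """Grid line current before and after ideal power-factor correction."""
    apparent_va = real_power_w / power_factor   # what the grid must deliver
    uncorrected_a = apparent_va / volts
    corrected_a = real_power_w / volts          # PF brought to 1.0
    return uncorrected_a, corrected_a

# A hypothetical 1 kW of motor load at PF 0.7 on a 230 V supply:
before, after = line_currents(1000.0, 230.0, 0.7)
print(round(before, 2), "A ->", round(after, 2), "A")
# The grid current drops, but the kWh meter integrates the same
# 1000 W of real power either way, so the household bill is unchanged.
```

This is exactly why the utility (or an industrial customer billed for reactive energy) benefits from correction while a residential customer does not.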
{ "source": [ "https://electronics.stackexchange.com/questions/19102", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5156/" ] }
19,103
Related: Safe current/voltage limit for human contact? From what I've heard: 110 V (or 220 V; household voltage, pretty much) is dangerous (i.e. can kill you). I think there's consensus on this, no need to try :) 60 V (old telephone lines) is supposedly dangerous (never tried, only heard it once... probably won't try). From what I know first-hand: 9 V is not dangerous (I've put a 9 V battery on my tongue, nbd... actually it kinda hurt!) 1.5 V can indeed be quite shocking with enough current (fell for one of those "Do you want some gum?" tricks back in high school...), though those gadgets do not always use the 1.5 V directly at low current; some use a DC motor to vibrate and complete the trick. So I guess there are two parameters here, voltage and current... but are there rough numbers on how much of each (or in combination, which I guess would be power) would be considered hazardous? No, old telephone lines have always been 48 VDC, at least since the 1950s. If your skin is wet you can feel it slightly, for example on your forearm. The ring voltage is 90-110 VAC with a 2 s on / 4 s off cycle (USA). It will ring your bell but good, should you be touching the wires when someone calls. The ring voltage rides on top of the 48 VDC, so it is present on the same two conductors as the voice (DC) voltage. Luckily its 4 seconds off will give you a chance to get off the conductors with a scream (of pain).
How much voltage is dangerous is not really a static number, as it depends on your body resistance, time of exposure and source "stiffness" (i.e. how much current it can supply). You get figures like 60 V (or as low as 30 V) which are an attempt at an average figure above which "caution should be taken". However, depending on how "conductive" you are at any one time, 50 V might sometimes be quite safe and at other times it may kill you. DC or AC (and what frequency) seem to make a difference too, female or male, etc. - this table is very instructive: Figures as low as 20 mA across the heart are given as possibly capable of inducing fibrillation - here is another table from the same source that gives body resistance in different situations: You can see that as low as 20 V may be dangerous given the right conditions. Here is the reference the tables came from; I think it is quite accurate based on some experiments I have done myself measuring body resistances. The rest of the site seems generally very well informed and presented from the bits I have read, so I think this may be quite a trustworthy source.
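The tables boil down to Ohm's law, I = V / R_body, compared against the roughly 20 mA across-the-chest fibrillation figure. A quick illustration (the resistance values below are placeholders; real body resistance varies enormously with skin condition, contact area and current path):

```python
def body_current_ma(volts, body_ohms):
    """Ohm's-law estimate of current through the body, in milliamps."""
    return volts / body_ohms * 1000.0

FIBRILLATION_MA = 20.0   # rough across-the-chest danger figure from the tables

# Placeholder resistances: wet contact can be ~1 kohm, dry skin far higher.
for v, r in [(20.0, 1000.0), (50.0, 1000.0), (230.0, 100000.0)]:
    i = body_current_ma(v, r)
    flag = "DANGEROUS" if i >= FIBRILLATION_MA else "likely survivable"
    print(f"{v:>5} V across {r:>8} ohm -> {i:6.1f} mA ({flag})")
```

This is why the same 20 V can be harmless on dry skin yet reach the danger threshold with wet, low-resistance contact, exactly the point the answer makes.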
{ "source": [ "https://electronics.stackexchange.com/questions/19103", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4645/" ] }
19,233
So there are several types of transistors: BJT JFET MOSFET Combine all that with the various flavors of each (NPN, PNP, enhancement mode, depletion mode, HEXFET, etc) and you've got a wide array of parts, many of which are capable of accomplishing the same job. Which type is best suited for which application? Transistors are used as amplifiers, digital logic switches, variable resistors, power supply switches, path isolation, and the list goes on. How do I know which type is best suited for which application? I'm sure there are cases where one is more ideally suited than another. I admit that there is some amount of subjectivity/overlap here, but I'm certain that there is a general consensus about which category of applications each of the transistor types listed (and those I left off) is best suited for? For example, BJTs are often used for analog transistor amplifiers and MOSFETs are generally used for digital switching. PS - If this needs to be a Wiki, that's fine if someone would like to convert it for me
The main division is between BJTs and FETs, with the big difference being that the former are controlled with current and the latter with voltage. If you're building small quantities of something and aren't very familiar with the various choices and how you can use their characteristics to advantage, it's probably simplest to stick mostly with MOSFETs. They tend to be more expensive than equivalent BJTs, but are conceptually easier to work with for beginners. If you get "logic level" MOSFETs, then it becomes particularly simple to drive them. You can drive an N-channel low-side switch directly from a microcontroller pin. The IRLML2502 is a great little FET for this as long as you aren't exceeding 20V. Once you get familiar with simple FETs, it's worth getting used to how bipolars work too. Being different, they have their own advantages and disadvantages. Having to drive them with current may seem like a hassle, but it can be an advantage too. They basically look like a diode across the B-E junction, so this never goes very high in voltage. That means you can switch 100s of volts or more from low-voltage logic circuits. Since the B-E voltage is fixed to a first approximation, it allows for topologies like emitter followers. You can use a FET in source-follower configuration, but generally the characteristics aren't as good. Another important difference is in full-on switching behaviour. BJTs look like a fixed voltage source, usually 200mV or so at full saturation, to as high as a volt in high-current cases. MOSFETs look more like a low resistance. This allows lower voltage across the switch in most cases, which is one reason you see FETs in power-switching applications so much. However, at high currents the fixed voltage of a BJT is lower than the current times the Rds(on) of the FET. This is especially true when the transistor has to be able to handle high voltages. BJTs have generally better characteristics at high voltages, hence the existence of IGBTs.
An IGBT is really a FET used to turn on a BJT, which then does the heavy lifting. There are many many more things that could be said. I've listed only a few to get things started. The real answer would be a whole book, which I don't have time for.
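The saturation-voltage vs. on-resistance trade-off described above can be sketched numerically. The 0.2V and 50 mΩ figures below are illustrative, not from any particular datasheet:

```python
def bjt_loss_w(current_a, vce_sat=0.2):
    # saturated BJT: roughly a fixed voltage drop, so loss grows linearly
    return vce_sat * current_a

def fet_loss_w(current_a, rds_on=0.05):
    # MOSFET: roughly a resistance, so loss grows with the square of current
    return current_a ** 2 * rds_on

for i in (1.0, 4.0, 10.0):
    print(f"{i:>4} A  BJT {bjt_loss_w(i):.2f} W  FET {fet_loss_w(i):.2f} W")
```

With these example numbers the FET dissipates less below 4A and the BJT dissipates less above it, which is the crossover behaviour that makes BJTs (and IGBTs) attractive at high currents and voltages.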
{ "source": [ "https://electronics.stackexchange.com/questions/19233", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1571/" ] }
19,395
All the components I own have strange names, like a transistor called 2N2222 and a motor driver called L293D. When you see these kinds of things written down do you instantly know what they mean or do you have to google it every time? How much information is hidden in these codes or are they totally random?
The prefix often has a specific meaning, but the numbering following the prefix often doesn't. In general: 1N... = diodes 2N... = transistors A... (2 letters + 3 digits) = germanium transistor, e.g. AF117 B... (idem) = silicon transistor, e.g. BC847 For diodes like 1N400x the last digit is a kind of counter to indicate the diodes belong to the same series: 1N4001: 50V 1N4002: 100V 1N4003: 200V 1N4004: 400V 1N4005: 600V 1N4006: 800V 1N4007: 1000V The 1N4148 is a typical switching diode. For its SMT counterpart manufacturers use the same number (4148), but with a different prefix: Fairchild calls it an LL4148, Rectron an MM4148. On the other hand, the SMT version of the BC547 transistor is the BC847, so there they keep the prefix, but change the number. You try and find the logic in it. IC manufacturers often release new devices with their own prefix, like "LT" for Linear Technology, or "LM" for National Semiconductor, so sometimes it refers directly to the name, but often it doesn't. When other manufacturers make compatible parts, however, they often stick to the same part number, so the prefix doesn't always tell you who the manufacturer is. A MAX809, for instance, is made by (at least) Maxim, On Semiconductor and NXP. "TIP" originally meant "Texas Instruments Power" but you'll also find a TIP110 transistor with Fairchild. Like Matt says, sometimes the number following the prefix refers to the device's function. He mentions the MAX232 as an EIA232 driver, and guess what the MAX485 is. FTDI's FT232R is also an EIA232 bridge. But those are really exceptions. Sometimes the last digit refers to the number of opamps in a device, for instance: LF411 = single opamp LF412 = dual LF411 I once asked a question about other-than-manufacturer prefixes in IC type numbers, but there seems to be little systematic about it.
{ "source": [ "https://electronics.stackexchange.com/questions/19395", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5526/" ] }
19,561
I'm working on a PCB that has shielded RJ45 (ethernet), RS232, and USB connectors, and is powered by a 12V AC/DC brick power adapter (I do the 5V and 3.3V step down on board). The entire design is enclosed in a metal chassis. The shields of the I/O connectors are connected to a CHASSIS_GND plane on the periphery of the PCB and also make contact with the front panel of the metal chassis. The CHASSIS_GND is isolated from digital GND by a moat (void). Here's the question: Should the CHASSIS_GND be tied to the digital GND plane in any way? I've read countless app notes and layout guides, but it seems that everybody has differing (and sometimes seemingly contradictory) advice about how these two planes should be coupled together. So far I've seen: Tie them together at a single point with a 0 Ohm resistor near the power supply Tie them together with a single 0.01uF/2kV capacitor at near the power supply Tie them together with a 1M resistor and a 0.1uF capacitor in parallel Short them together with a 0 Ohm resistor and a 0.1uF capacitor in parallel Tie them together with multiple 0.01uF capacitors in parallel near the I/O Short them together directly via the mounting holes on the PCB Tie them together with capacitors between digital GND and the mounting holes Tie them together via multiple low inductance connections near the I/O connectors Leave them totally isolated (not connected together anywhere) I found this article by Henry Ott ( http://www.hottconsultants.com/questions/chassis_to_circuit_ground_connection.html ) which states: First I will tell you what you should not do, that is to make a single point connection between the circuit ground and the chassis ground at the power supply...circuit ground should be connected to the chassis with a low inductance connection in the I/O area of the board Anybody able to explain practically what a "low inductance connection" looks like on a board like this? 
It seems that there are many EMI and ESD reasons for shorting or decoupling these planes to/from each other, and they are sometimes at odds with each other. Does anybody have a good source of understanding how to tie these planes together?
This is a very complex issue, since it deals with EMI/RFI, ESD, and safety stuff. As you've noticed, there are many ways to handle chassis and digital grounds-- everybody has an opinion and everybody thinks that the other people are wrong. Just so you know, they are all wrong and I'm right. Honest! :) I've done it several ways, but the way that seems to work best for me is the same way that PC motherboards do it. Every mounting hole on the PCB connects signal gnd (a.k.a. digital ground) directly to the metal chassis through a screw and metal stand-off. For connectors with a shield, that shield is connected to the metal chassis through as short of a connection as possible. Ideally the connector shield would be touching the chassis; otherwise there would be a mounting screw on the PCB as close to the connector as possible. The idea here is that any noise or static discharge would stay on the shield/chassis and never make it inside the box or onto the PCB. Sometimes that's not possible, so if it does make it to the PCB you want to get it off of the PCB as quickly as possible. Let me make this clear: For a PCB with connectors, signal GND is connected to the metal case using mounting holes. Chassis GND is connected to the metal case using mounting holes. Chassis GND and Signal GND are NOT connected together on the PCB, but instead use the metal case for that connection. The metal chassis is then eventually connected to the GND pin on the 3-prong AC power connector, NOT the neutral pin. There are more safety issues when we're talking about 2-prong AC power connectors-- and you'll have to look those up as I'm not as well versed in those regulations/laws. Tie them together at a single point with a 0 Ohm resistor near the power supply Don't do that. Doing this would assure that any noise on the cable has to travel THROUGH your circuit to get to GND. This could disrupt your circuit.
The reason for the 0-Ohm resistor is because this doesn't always work and having the resistor there gives you an easy way to remove the connection or replace the resistor with a cap. Tie them together with a single 0.01uF/2kV capacitor at near the power supply Don't do that. This is a variation of the 0-ohm resistor thing. Same idea, but the thought is that the cap will allow AC signals to pass but not DC. Seems silly to me, as you want DC (or at least 60 Hz) signals to pass so that the circuit breaker will pop if there was a bad failure. Tie them together with a 1M resistor and a 0.1uF capacitor in parallel Don't do that. The problem with the previous "solution" is that the chassis is now floating, relative to GND, and could collect a charge enough to cause minor issues. The 1M ohm resistor is supposed to prevent that. Otherwise this is identical to the previous solution. Short them together with a 0 Ohm resistor and a 0.1uF capacitor in parallel Don't do that. If there is a 0 Ohm resistor, why bother with the cap? This is just a variation on the others, but with more things on the PCB to allow you to change things up until it works. Tie them together with multiple 0.01uF capacitors in parallel near the I/O Closer. Near the I/O is better than near the power connector, as noise wouldn't travel through the circuit. Multiple caps are used to reduce the impedance and to connect things where it counts. But this is not as good as what I do. Short them together directly via the mounting holes on the PCB As mentioned, I like this approach. Very low impedance, everywhere. Tie them together with capacitors between digital GND and the mounting holes Not as good as just shorting them together, since the impedance is higher and you're blocking DC. Tie them together via multiple low inductance connections near the I/O connectors Variations on the same thing. 
Might as well call the "multiple low inductance connections" things like "ground planes" and "mounting holes" Leave them totally isolated (not connected together anywhere) This is basically what is done when you don't have a metal chassis (like, an all plastic enclosure). This gets tricky and requires careful circuit design and PCB layout to do right, and still pass all EMI regulatory testing. It can be done, but as I said, it's tricky.
{ "source": [ "https://electronics.stackexchange.com/questions/19561", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5768/" ] }
19,759
With the following circuits as examples: how will the current I know how much to flow? Would some wave travel first through the circuit and then come back to say how much current should flow?
Not sure if this is what you're asking, but yes, when the battery is connected, an electric field wave travels from the battery down the wires to the load. Part of the electrical energy is absorbed by the load (depending on Ohm's law), and the rest is reflected off the load and travels back to the battery, some is absorbed by the battery (Ohm's law again) and some reflects off the battery, etc. Eventually the combination of all the bounces reaches the stable steady-state value that you would expect. We usually don't think of it this way, because in most circuits it happens too quickly to measure. For long transmission lines it is measurable and important, however. No, the current does not "know" what the load is until the wave reaches it. Until that time, it only knows the characteristic impedance or "surge impedance" of the wires themselves. It doesn't yet know if the other end is a short circuit or an open circuit or some impedance in between. Only when the reflected wave returns can it "know" what's at the other end. See Circuit Reflection Example and Transmission line effects in high-speed logic systems for examples of lattice diagrams and a graph of how the voltage changes in steps over time. See Termination of a Transmission Line for an animated simulation of different terminations that you can modify, and this for a light switch example. And in case you don't understand it, in your first circuit, the current is equal at every point in the circuit. A circuit is like a loop of pipework, all filled with water. If you cause the water to flow with a pump at one point, the water at every other point in the loop has to flow at the same rate. The electric field waves I'm talking about are analogous to pressure/sound waves traveling through the water in the pipe. 
When you move water at one point in the pipe, the water on the other end of the pipes doesn't change instantly; the disturbance has to propagate through the water at the speed of sound until it reaches the other end.
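The size of each bounce in those lattice diagrams is set by the reflection coefficient \$\rho = (Z_L - Z_0)/(Z_L + Z_0)\$. A minimal sketch (a 50 Ω line is chosen only as a common example):

```python
def reflection_coefficient(z_load, z0):
    # fraction of the incident wave that bounces back off the load
    return (z_load - z0) / (z_load + z0)

z0 = 50.0  # characteristic impedance of the line
print(reflection_coefficient(50.0, z0))   # matched load: nothing reflects
print(reflection_coefficient(0.0, z0))    # short circuit: full inverted reflection
print(reflection_coefficient(1e9, z0))    # near-open circuit: nearly full reflection
```

The matched case (\$\rho = 0\$) is why transmission lines are terminated in their characteristic impedance: the wave is fully absorbed on the first trip, so the steady state is reached immediately.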
{ "source": [ "https://electronics.stackexchange.com/questions/19759", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4026/" ] }
20,023
USB specifies 4 pins: 1. VBUS +5V 2. D- Data- 3. D+ Data+ 4. GND Ground Why is this not 3? Could the Data and Power not share a common ground? Am I correct in understanding that D- is the ground for D+ ?
No, D- is not ground. Data is sent over a differential line, which means that D- is a mirror image of D+, so both data lines carry the signal. The receiver subtracts D- from D+. If some noise signal is picked up by both wires, the subtraction will cancel it. So differential signalling helps suppress noise. So does the type of wiring, namely twisted pair. If the wires ran just parallel they would form a (narrow) loop which could pick up magnetic interference. But thanks to the twists the orientation of the wires with respect to the field changes continuously. An induced current will be cancelled by a current with the opposite sign half a twist further. Suppose you have a disturbance working vertically on the twisted wire. You could regard each half twist as a small loop picking up the disturbance. Then it's easy to see that the next tiny loop sees the opposite field (upside down, so to speak), so that cancels the first field. This happens for each pair of half twists. A similar balancing effect occurs for capacitance to ground. In a straight pair one conductor shows a higher capacitance to ground than the other, while in a twisted pair each wire will show the same capacitance. Edit: Cables with several twisted pairs, like cat5, have a different twist length for each pair to minimize crosstalk.
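The subtraction the receiver performs can be shown in a few lines. This is a toy model: the symbols and the ±0.5 noise amplitude are arbitrary, and the same noise is added to both wires to represent common-mode pickup:

```python
import random

symbols = [1, -1, 1, 1, -1]            # data to send
d_plus = list(symbols)
d_minus = [-s for s in symbols]        # mirror image on the other wire

# the same noise couples into both wires of the pair (common mode)
noise = [random.uniform(-0.5, 0.5) for _ in symbols]
received = [(p + n) - (m + n) for p, m, n in zip(d_plus, d_minus, noise)]
# subtracting D- from D+ cancels the noise and doubles the signal
print(received)
```

Whatever the noise values are, the received symbols come out as twice the transmitted ones, with the common-mode noise gone.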
{ "source": [ "https://electronics.stackexchange.com/questions/20023", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17697/" ] }
20,643
I have a Hormann Promatic Akku garage door actuator, since my garage doesn't have mains electricity. Generally speaking I'm very impressed with how well it works. Unfortunately, while it lasts around two months between charges, it gives almost no notice about when battery charge is getting low. I would like it to last longer between charges, so a while ago I bought some low power solar chargers. My concern is that there may be something I've missed in the naive way I'm intending to wire them up. Background: The Hormann battery unit (lower right in the picture) has a pair of XLR sockets, so you can plug the actuator power lead into either one. I presume that Hormann's own (ridiculously expensive and not available in the UK) solar charger plugs into the other socket. Internally, there are a couple of deep discharge 12V batteries wired in series. Attrib. Hormann. When fully charged, the battery module measures 26.3V and the actuator refuses to work when the voltage drops to 23.3V. The solar panels I have are just 1W, but have an integral diode. They were sold as solar 12V battery re-chargers, and the research I did at the time suggested that such a small wattage should count as trickle charging the batteries, so overcharging shouldn't be an issue. I note though that open circuit voltage under bright sunlight can reach 18V. On one of the panels the diode is blown, so any advice on replacing it would also be useful (I removed the old diode, but it was hot glued to the panel and removing it scratched off some of the ID; I'm guessing it's a 1N4007 though). How I intend to wire them up: So, naively, I was just going to wire up both panels in series, just as the batteries are wired up, through an XLR plug, and leave them connected permanently. I'm also considering whether it might be better to leave the diode off the second panel and just wire the two panels together with a single diode between them.
My concerns: Assuming a rare British summer's day, I know that just 1W @ 18V means a pretty low current to the battery, but might 36V from a pair of panels damage the actuator circuitry? I'm guessing that being connected to the battery will drop that voltage to just above the battery voltage, but it would be nice to know for sure. Also, when the actuator is actually drawing current, it is drawing quite a few watts for 20-30 seconds. Could the solar cells be in any way damaged? Looking at the diode datasheet, it looks like each of the diodes would be more than capable of coping with the currents and voltages we are talking about. I've reviewed the relevant solar-cell tagged and battery-charging questions, but none that I can find seem to address my concerns.
but might 36v from a pair of panels damage the actuator circuitry? So here's the deal. Lead-acid batteries look electrically like a voltage source/sink with a small series resistance, with the voltage level a function of state of charge. 2V/cell (there are 6 cells in series in a 12V battery) is nominal, and if I remember right, their open circuit voltage is something like 1.9V empty, 2.1V full. That covers 90% of their behavior. Considering that, the "1W@18V" spec of the solar panel isn't going to be able to "win" against the battery, and the solar panel's voltage will be pulled down to battery voltage, delivering probably 0.055A (=1W/18V) at whatever the battery voltage is. When a battery gets completely full, however, its series resistance goes up dramatically, and the voltage goes up, until there's enough voltage to start electrolysis of the fluid and you get H2 and O2 generation at the terminals and loss of the electrolyte. A lead-acid battery, depending on the type + manufacturer, has a certain recombination rate of H2 + O2 => electrolyte that it can handle; if you electrolyze at a higher current than that, it leads to permanent electrolyte loss (+hence capacity loss) So there is a safe current that can be delivered to a lead-acid battery continuously, where its own self discharge due to electrolysis balances the charging current. It depends on the manufacture + construction. I wouldn't feel worried about a C/10 or C/20 rate of charge (where C = the current needed to discharge a battery in 1 hour). Garage door batteries are probably > 1Ah capacity so you should be safe with 55mA charging current. HOWEVER -- I would probably put a (zener diode and resistor in series) in parallel with each battery, the zener diode being about 14V and resistor being maybe 10 ohms or so, so that it keeps the battery terminals from getting charged too far. 
Also: if you can, wire each solar panel to each battery (and keep the diodes), rather than the pair of panels in series wired to the batteries in series -- i.e. try to connect the center taps. By doing so, you'll charge each battery independently. Otherwise, what can ruin battery life is if the battery voltages diverge -- the one with the higher voltage will tend to get overcharged, while the other one will tend to get overdischarged and not completely charged.
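Putting the answer's numbers together in a quick sketch; the 2 Ah battery capacity here is an assumption (real garage-door packs vary, but the answer notes they are probably > 1 Ah):

```python
panel_power_w = 1.0
panel_voltage_v = 18.0   # panel rating; the battery will pull it down in practice
charge_current_a = panel_power_w / panel_voltage_v   # ~55 mA, as stated

battery_capacity_ah = 2.0                 # assumed capacity, for illustration
c10_limit_a = battery_capacity_ah / 10    # the conservative C/10 trickle threshold
print(charge_current_a, c10_limit_a, charge_current_a < c10_limit_a)
```

The panel's ~55 mA sits comfortably under the C/10 rate, which is why the 1W panel counts as a safe trickle charge for a battery of this size.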
{ "source": [ "https://electronics.stackexchange.com/questions/20643", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3774/" ] }
20,699
With a nine volt battery, touching the two terminals together (or using a faulty terminal) will cause a spark roughly where I would want it to be. How is this possible? Is it ionizing only a very small portion of air surrounding the wires when this happens and it is just more visible? I believe at an extremely small distance, ~300V is the breakdown point of air (often, for example according to Paschen's law) so I do not understand how the battery can do this.
As the contact is being broken, a connection is made through very small pieces of metal (microscopic features), which have enough current through them to vaporize, the ions of which then support a current through the air briefly. While lower voltages do not, in general, jump a gap that is present before the voltage is applied, interrupting an existing current flow with a gap often produces a low-voltage spark or arc. As the contacts are separated, a few small points of contact become the last to separate. The current becomes constricted to these small hot spots, causing them to become incandescent, so that they emit electrons (through thermionic emission). Even a small 9 V battery can spark noticeably by this mechanism in a darkened room. The ionized air and metal vapour (from the contacts) form plasma, which temporarily bridges the widening gap. Also, when a flowing current is interrupted, it will cause inductive kickback, where the collapsing magnetic field causes an increase in voltage, to try to maintain the existing current. The voltage can increase enough to cause dielectric breakdown of air and allow current to flow through it. Attempting to open an inductive circuit often forms an arc, since the inductance provides a high-voltage pulse whenever the current is interrupted Wikipedia: High voltage § Sparks in air I'm not sure if inductive kickback is strong enough with a 9 V battery to cause a spark by itself, but it would help current to flow after the plasma path has formed.
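The inductive-kickback voltage mentioned at the end follows \$v = L \cdot di/dt\$. A rough sketch with made-up but plausible wiring values shows why even a tiny inductance matters when current is interrupted quickly:

```python
def kickback_voltage_v(inductance_h, delta_current_a, delta_time_s):
    # v = L * di/dt: the collapsing field raises voltage to keep current flowing
    return inductance_h * delta_current_a / delta_time_s

# ~1 uH of lead inductance, 1 A interrupted in 10 ns
print(kickback_voltage_v(1e-6, 1.0, 10e-9))  # ~100 V
```

The faster the interruption (smaller \$dt\$), the higher the spike, which is why opening a contact can momentarily exceed the battery's 9 V by a large factor.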
{ "source": [ "https://electronics.stackexchange.com/questions/20699", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5849/" ] }
20,701
When combining battery cells in series, the voltages of the cells are added to get the voltage of the final circuit. Do the mAh add up, or stay the same? For example, suppose you have two 3.7V cells, each with 200 mAh capacity. When connected in series, will the resulting battery will be a 7.4V, 200mAh battery?
Summary

mAh stay the same when you connect cells in series - provided that the cells are all of the same mAh capacity.

Special and unusual case

If two cells are connected in series and they have differing mAh capacities the effective capacity is that of the lower mAh capacity cell. This is not normally done, but it can sometimes make sense to do so. mAh add when you connect cells in parallel (but there are technical issues which mean that doing this may not be straightforward.) The answer can be deduced by considering what mAh capacity means: mAh = product of mA × hours that a battery will provide. While there are (as ever) complications, this means that e.g. a 1500 mAh cell will provide 1500 mA for one hour or 500 mA for 3 hours or 850 mA for 2 hours or even 193.9 uA for one year (193.9 uA x 8765 hours = 1500 mA.hours). In practice the capacity of a cell varies with loading. A cell will generally produce its rated capacity if loaded at its C1 = 1 hour rate, e.g. 1500 mAh = 1500 mA for one hour. BUT a 1500 mAh cell loaded at say 5C (5 x 1500 = 7500 mA = 7.5A) will NOT do this for 1/5 hour = 12 minutes - and may not produce 7.5A at all even on short circuit. A load of say C/10 = 150 mA or C/100 = 15 mA may produce more than 1500 mAh overall BUT a load of say 150 uA = 10,000 x as long = 10,000 hours = about 14 months may produce less than 1500 mAh if the battery self discharges rapidly with time. BUT if a cell will produce say 2000 mA for 1 hour at 3.7V (a typical rating for LiIon 18650 cells) then two identical cells will do the same thing if tested independently. If instead of using 2 loads you connect the cells in series and draw the same current as before, the identical current flows through both cells. You can still here only draw 2000 mA for one hour BUT the available voltage has doubled.
If you use 2 x 3.7V, 2000 mAh cells in parallel to drive a 3.7V nominal load, one cell can provide 2000 mA for one hour or 200 mA for 10 hours etc AND the other cell can do the same. So the mAh ratings add. If one cell has more mAh than the other, the mAh TEND to add when connected in parallel. Say you have 1000 mAh and 2000 mAh cells in parallel, each rated at 3.7V nominal; as the smaller battery loses capacity it will tend to reduce in voltage faster, so the larger battery will provide more current and they will TEND to balance. YMMV and this is usually not good practice without specific design of what happens. In the special case I mentioned above, you may have a 12V 7Ah sealed lead acid "brick" battery beloved of the alarm industry. You may want to use an N Channel high side switch which needs a gate voltage of say 4V above the +12 rail. If you use a 9 Volt PP3 "transistor radio battery" and connect its negative terminal to +12 V then the PP3 positive terminal will be at 12+9 = 21 V initially. The N Channel MOSFET needs 12+4 = 16V so the PP3 + SLA combination followed by a regulator will operate it until the combined voltage falls to under 16V. This should never happen, as the PP3 "dead voltage" = 6V and the SLA should not be under say 11V, so minimum voltage available = 11+6 = 17 V. If you use this occasionally, and disconnect the battery when not in use, the PP3 will last a long time. If the PP3 is rated at say 150 mAh, and the FET high side circuit takes a steady 10 mA when on, then the PP3 will last for ≈ 150/10 = 15 hours. This may be acceptable or not depending on the application. BUT the SLA has a 7Ah = 7000 mAh capacity BUT the combination can only provide 150 mAh at >= 17 Volts. So the mAh effectively is that of the much smaller PP3. This is for the task which needs the combined voltage - the 12V output still has the full 7Ah capacity.
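The rules above can be captured in a short sketch. It assumes ideal cells and, for the parallel case, matched voltages; the function names are mine, not a standard API:

```python
def combine_series(cells):
    # cells: list of (volts, mAh); the same current flows through every cell
    volts = sum(v for v, _ in cells)
    mah = min(c for _, c in cells)   # the weakest cell limits the string
    return volts, mah

def combine_parallel(cells):
    # assumes all cells have the same nominal voltage
    volts = cells[0][0]
    mah = sum(c for _, c in cells)
    return volts, mah

print(combine_series([(3.7, 200), (3.7, 200)]))    # (7.4, 200): the question's case
print(combine_parallel([(3.7, 200), (3.7, 200)]))  # (3.7, 400)
print(combine_series([(12, 7000), (9, 150)]))      # (21, 150): the SLA + PP3 example
```

The last line reproduces the special case: the 150 mAh PP3 caps the capacity of the 21 V combination even though the SLA holds 7000 mAh.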
{ "source": [ "https://electronics.stackexchange.com/questions/20701", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17697/" ] }
21,131
For the sake of providing as much current as possible to a small circuit, regulated down from ~9V to 5V (or 5V->1.5V), I have looked at some possible options. What I was originally going to use (for regulating a solar cell or 9V battery) is what I assume is the standard choice, the LM7805 (5V) IC. I have read that it does use up a small but fair bit of current to do this, especially when only 50-100mA peak current is available. Would a Zener diode rated at ~5 volts be able to do this more efficiently, as it should keep the voltage at or very close to 5V, "regulating" it? Would a (MOS|J)FET/other transistor (if more efficient, ignoring the slightly weird use) or something of that sense be able to lower the voltage with a very simple energy conversion?
Linear regulators like the 7805 are inefficient, and more so when the input voltage is higher. It works as a variable resistor, which varies its value to keep the output voltage constant, here 5V. That means that the current consumed by your 5V circuit also flows through this variable resistor. If your circuit draws 1A then the power dissipation in the 7805 will be \$ P = \Delta V \cdot I = (9V - 5V) \cdot 1A = 4W \$ 4W in a single component is rather much; the 5W in your circuit will probably be distributed over several components. It means that the 7805 will need a heatsink, and that's most often a bad sign: too much power dissipation. This will be worse with higher input voltages, and the efficiency of the regulation can be calculated as \$ \eta = \dfrac{P_{OUT}}{P_{IN}} = \dfrac{V_{OUT} \cdot I_{OUT}}{V_{IN} \cdot I_{IN}} = \dfrac{V_{OUT}}{V_{IN}}\$ since \$I_{OUT} = I_{IN}\$. So in this case \$\eta = \dfrac{5V}{9V} = 0.56 \$ or 56%. With higher input voltages this efficiency will get even worse. The solution is a switching regulator , or switcher for short. There are different types of switcher depending on the \$V_{IN}/V_{OUT}\$ ratio. If \$V_{OUT}\$ is less than \$V_{IN}\$ you use a buck converter . While even an ideal linear regulator has a low efficiency, an ideal switcher has a 100% efficiency, and actual efficiency can be predicted from the properties of the components used. For instance there's a voltage drop over the diode, and resistance of the coil. A well designed switcher may have an efficiency as high as 95% , like for the given 5V/9V ratio. Different voltage ratios may result in somewhat lower efficiencies. Anyway, 95% efficient means that the power dissipated in the regulator is \$ P_{SWITCHER} = \left(\dfrac{1}{\eta} - 1\right) \cdot P_{OUT} = \left(\dfrac{1}{0.95} - 1\right) \cdot 5W = 0.26W \$ which is low enough not to need a heatsink.
As a matter of fact the switching regulator itself may be in a SOT23 package, with the other components, like coil and diode SMDs as well.
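The linear-regulator arithmetic above as a small sketch:

```python
def linear_dissipation_w(v_in, v_out, i_out_a):
    # the pass element drops the whole input-output difference at the load current
    return (v_in - v_out) * i_out_a

def linear_efficiency(v_in, v_out):
    # I_out = I_in for a linear regulator, so efficiency is just Vout/Vin
    return v_out / v_in

print(linear_dissipation_w(9, 5, 1.0))    # 4.0 W, as in the answer
print(round(linear_efficiency(9, 5), 2))  # 0.56
print(linear_dissipation_w(12, 5, 1.0))   # worse with higher Vin: 7.0 W
```

The last line shows why the answer warns about higher input voltages: dissipation in the regulator scales directly with the input-output difference.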
{ "source": [ "https://electronics.stackexchange.com/questions/21131", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5849/" ] }
21,576
I was thinking the other day. A transformer is just two coils, right? There is no polarisation or anything else fancy? The ratio of the number of windings on the input to the number of windings on the output determines the output voltage for a given input? So if I connect my 250V to 7.5V transformer to the mains in reverse, will I simply get 8.33kV? I assume so. More practically, what will happen? From memory, the breakdown voltage of air is approx 1kV/cm. Doesn't this mean there would be all sorts of streamers between the transformer output terminals? What about between the windings themselves? Surely the thin layer of insulation around the tightly wrapped transformer windings doesn't insulate that much? And say there are streamers between the 8kV outputs of the transformer. What then? Fire? Blown fuse?
As Mr Banana says - magic smoke happens. Because ... Energy is transferred in a transformer via a magnetic field. The field is produced by the amp-turns in the core (amps flowing x number of turns). Above a certain level the core cannot support any more amp-turns and the core "saturates". What was an inductor, with an impedance to AC far greater than its resistance, becomes mainly a resistor. You'd get lots and lots and lots of amps in the case that you mentioned - so much so that if the fuse didn't get there first the transformer would DEFINITELY be destroyed. The iron core in a transformer is usually operated on the part of its magnetic curve where it is beginning to saturate and get less efficient. This is to get as much use of the steel core as possible. They are run close enough to "the edge" that a transformer made to run on 60 Hz mains will get much warmer on 50 Hz mains at the same voltage, as the cycles are 60/50 = 20% longer and the current in the winding gets that much longer to increase and ... So a SLIGHT overvoltage may work OK - say about 20% max. But 230/7.5 ≈ 30+ times as much "will not work"! :-)
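For the ideal-transformer arithmetic the question asks about (before saturation ruins it in practice), a minimal sketch:

```python
def secondary_voltage(v_primary, n_primary, n_secondary):
    # ideal transformer: Vsec / Vpri = Nsec / Npri
    return v_primary * n_secondary / n_primary

# the 250V:7.5V transformer modelled as a 250:7.5 turns ratio
print(secondary_voltage(250, 250, 7.5))   # 7.5 V, normal orientation
print(secondary_voltage(250, 7.5, 250))   # ~8333 V if wired in reverse (ideal only)
```

So the question's 8.33 kV figure is right for an ideal transformer, but as the answer explains, the real core saturates long before that and the winding just cooks.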
{ "source": [ "https://electronics.stackexchange.com/questions/21576", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6348/" ] }
21,598
In AVR datasheets under the Electrical Characteristics section you will typically find a graph like this (this one is from the ATMega328): I've seen designs that seem to "work" but operate outside the shaded envelope. Specifically, I've seen 3.3V (Arduino) designs that run the clock from an external 16MHz crystal. Clearly, this is out of spec. What are the practical negative consequences of running outside this envelope?
How to make life more interesting 101: If you don't care that your results may sometimes be wrong, that your system may sometimes crash, that your life may be more interesting, that your Segway clone only occasionally does face-plants for no obvious reason, that ... then by all means run the part outside the manufacturer's spec. You get what you don't pay for. If you have a $10 head, buy a $10 helmet. It may often work. It may not work sometimes. It may not be obvious that it isn't working sometimes. A divide may usually work. A jump may usually arrive. A table may be looked up correctly. An ADC value may be correct. Or not
{ "source": [ "https://electronics.stackexchange.com/questions/21598", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/771/" ] }
21,686
What's the purpose of the two capacitors in parallel on each side of the regulator in this power supply circuit? I've seen similar setups in other similar circuits and can guess that it's related to one being polarized and one not, but I don't really understand what's going on there.
Summary: Big capacitors handle low frequency ripple and mains noise and major output load changes. Small capacitors handle noise and fast transients. That circuit uses "overkill" for that application but serves as an OK example. Here is a typical LM7805 datasheet. It can be seen on page 22 that having two capacitors at Vin and two at Vout is not necessarily a standard arrangement, and that the capacitor values in the supplied circuit are relatively large. Below is fig 22 from the datasheet. Your circuit: A large capacitor like the 2200 uF acts as a "reservoir" to store energy from the rough DC out of the bridge rectifier. The larger the capacitor the less ripple and the more constant the DC. When large current peaks are drawn the capacitor-supplied surge energy helps the regulator not sag in output. The white and black bars on the capacitor symbol show that it is a "polar" capacitor - it only works with + and - on the selected ends. Such capacitors are usually "electrolytic capacitors". These have good ability to filter out low frequency ripple and to respond to reasonably fast load changes. By itself it is not enough to do the whole job as it is not good at filtering higher frequency noise, because electrolytics tend to have large internal inductance plus large (relatively) internal series resistance (ESR). The small input capacitor (here shown as u1 = 0.1 uF) will be non-polarized and will usually nowadays be a multilayer ceramic capacitor with low ESR and low inductance, giving it excellent high frequency response and noise filtering capabilities. By itself it is not enough to do the whole job as it cannot store enough energy to deal with the energy needed to filter out ripple changes and large load transients. The same applies in general terms to the output capacitors. C4 = 10 uF helps to supply any gross load changes, thus taking some load off the regulator. It is not usually deemed necessary to have more than a very small capacitor here.
Some modern regulators need a largish capacitor here for stability reasons, but the LM78xx does not. Here the second output capacitor is 0.1 uF and it is there to deal with high frequency noise. Note that having a large capacitor on the output can cause problems. If the input was shorted so that power was removed, C4 would discharge back through the regulator. Depending on voltage and capacitor size this can cause damage. One method of dealing with this is to provide a (usually reverse-biased) diode from regulator output to regulator input. If the regulator input is shorted to ground, the output capacitor will discharge through the now forward-biased diode. Added: Nils noted: A very large reservoir capacitor may lead to increased noise. The on-time of the diodes would get shorter yet the same amount of power is transferred. This causes current spikes in the transformer which start to radiate out a noisy magnetic field. Bigger is not always better here. It's unlikely to cause problems in circuits that use the 78xx series regulators though; they just don't move enough power usually. Good point. Adding a small series resistor between transformer and 1st capacitor serves to "spread" the conduction angle, reduce the current peak, reduce noise and make life easier for the diodes. Working out the diode current can be somewhat mind-taxing, I seem to recall (having done it as an exercise long ago). Nowadays a simulation is easy enough that hand calculation is unusual.
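The reservoir-capacitor ripple mentioned above can be estimated with a common rule of thumb, dV ≈ I / (f_ripple × C), where f_ripple is twice the mains frequency for a full-wave bridge. This is a sketch only; the 1 A load and 50 Hz mains are assumed example values, not figures from the answer:

```python
def ripple_voltage(i_load, c_farads, f_mains=50, full_wave=True):
    """Rule-of-thumb peak-to-peak ripple on a reservoir capacitor.

    Assumes the cap supplies the full load current between charging
    peaks: dV = I / (f_ripple * C).
    """
    f_ripple = 2 * f_mains if full_wave else f_mains
    return i_load / (f_ripple * c_farads)

# 2200 uF reservoir, 1 A load, 50 Hz mains with a bridge rectifier
print(ripple_voltage(1.0, 2200e-6))   # ~4.5 V peak-to-peak
```

This makes the "the larger the capacitor, the less ripple" point concrete: doubling C halves the estimated ripple.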
{ "source": [ "https://electronics.stackexchange.com/questions/21686", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6335/" ] }
21,787
Naive perhaps, but why is high input impedance a good thing? Is high input impedance always a good thing?
It is a good thing for a voltage input, as if the input impedance is high compared to the source impedance then the voltage level will not drop too much due to the divider effect. For example, say we have a \$10V\$ signal with \$1k\Omega\$ impedance. We connect this to a \$1M\Omega\$ input, the input voltage will be \$ 10V\cdot\frac{1M\Omega}{1M\Omega+1k\Omega} = 9.99V \$. If we reduce the input impedance to \$10k\Omega\$, we get \$10V \cdot \frac{10k\Omega}{10k\Omega + 1k\Omega} = 9.09V\$ Reduce it to 1k and we get \$ 10V \cdot \frac{1k\Omega}{1k\Omega + 1k\Omega} = 5V\$ Hopefully you get the picture - generally an input impedance of at least 10 times the source impedance is a good idea to prevent significant loading. High input impedance is not always a good thing though, for example if you want to transfer as much power as possible then the source and load impedance should be equal. So in the above example the 1k input impedance would be the best choice. For a current input a low input impedance (ideally zero) is desired, for example in a transimpedance (current to voltage) amplifier.
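The divider arithmetic above can be reproduced with a short calculation (a sketch using the example's 10 V source and 1 kΩ source impedance):

```python
def loaded_voltage(v_source, r_source, r_input):
    """Voltage actually seen at the input after the source/input divider."""
    return v_source * r_input / (r_input + r_source)

# 10 V source with 1 kOhm source impedance, as in the example
for r_in in (1e6, 10e3, 1e3):
    print(f"R_in = {r_in:>9.0f} ohm -> {loaded_voltage(10, 1e3, r_in):.2f} V")
```

The three printed values match the worked figures in the answer: 9.99 V, 9.09 V and 5.00 V.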
{ "source": [ "https://electronics.stackexchange.com/questions/21787", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5156/" ] }
21,806
What would be the least expensive way to boost a DC voltage? The aim is to convert 1.2 V/1.5 V (from an AA/AAA cell) to 3.3 V to power a small 8-bit microprocessor, like Atmel ATtiny45 or ATtiny2313, and also (if possible) 6 V to power a buzzer. Also, what would be the maximum current one could draw safely from an alkaline battery, after boosting it to 3.3 V/6 V? Finally, how could I compute the duration for which the alkaline battery would last, given a certain consumption?
There's a technique called a charge pump with which you can make a voltage doubler, but that will only give you 3V from a 1.5V cell, and even less from the 1.2V cell. I'm still mentioning it because several microcontrollers these days will work with voltages down to 2V. A charge pump can only supply limited current, enough to power the microcontroller, but extra power devices like motors or relays are out. The voltage will also drop under load. So not ideal. The LM2660 is a switched capacitor charge pump. The better solution is a switching regulator. These exist in two major topologies: "buck" to go from higher to lower voltage, and "boost" to go from lower voltage to higher. So you want a boost regulator. Major manufacturers include Linear Technology (more expensive) and National Semiconductor (recently acquired by Texas Instruments). The LM2623 can operate on input voltages as low as 0.8V. About current and battery life. I'll assume you're working with 1.5V batteries. The ones here on my table are rated for 2300mAh, so let's use that value. Also let's say your microcontroller plus extras need 100mA at 3.3V. That's 330mW. If the switcher is 85% efficient, that means it draws 330mW/0.85 = 390mW from the battery. That's at 1.5V, so you'll draw 260mA from the battery. The battery is rated at 2300mAh, so your device can run for 2300mAh/260mA = 9 hours on one charge. If you plan to load the battery rather heavily, I would remain below 2300mA, which will drain it in 1 hour.
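The battery-life arithmetic above can be wrapped in a small sketch (the numbers are the answer's example assumptions, not measured values):

```python
def runtime_hours(capacity_mah, v_batt, p_load_w, efficiency):
    """Hours of runtime for a boost converter drawing from a battery."""
    p_in = p_load_w / efficiency          # power drawn from the battery
    i_batt_ma = 1000 * p_in / v_batt      # battery current in mA
    return capacity_mah / i_batt_ma

# 2300 mAh cell at 1.5 V, 100 mA at 3.3 V load (0.33 W), 85% efficient switcher
print(runtime_hours(2300, 1.5, 0.33, 0.85))   # ~8.9 hours, i.e. roughly 9
```

Note this ignores the fact that alkaline cell voltage sags as the cell drains, so real runtime will be somewhat shorter.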
{ "source": [ "https://electronics.stackexchange.com/questions/21806", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6406/" ] }
21,854
In the book Computer Networks , the author talks about the maximum data rate of a channel. He presents the Nyquist formula : C = 2H log\$_2\$ V (bits/sec) And gives an example for a telephone line : a noiseless 3-kHz channel cannot transmit binary (i.e., two-level) signals at a rate exceeding 6000 bps. He then explains the Shannon equation : C = H log\$_2\$ (1 + S/N) (bits/sec) And gives (again) an example for a telephone line : a channel of 3000-Hz bandwidth with a signal to thermal noise ratio of 30 dB (typical parameters of the analog part of the telephone system) can never transmit much more than 30,000 bps I don't understand why the Nyquist rate is much lower than the Shannon rate, since the Shannon rate takes noise into account. I'm guessing they don't represent the same data rate but the book doesn't explain it.
To understand this you first have to understand that transmitted symbols don't have to be purely binary, as in the example given for the Nyquist capacity. Let's say you have a signal that ranges between 0V and 1V. You could map 0V to [00], .33V to [01], .66V to [10] and 1V to [11]. So to account for this in Nyquist's formula you would change V from 2 discrete levels to 4 discrete levels, thus changing your capacity from 6000 to 12000. This could then be done for any number of discrete values. There is a problem with Nyquist's formula though. Since it doesn't account for noise, there is no way to know how many discrete values are possible. So Shannon came along and came up with a method to essentially place a theoretical maximum on the number of discrete levels that you can read error free. So in their example of being able to get 30,000 bps, you would have to have 32 discrete values that can be read to mean different symbols.
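Both formulas are easy to evaluate directly, which shows how the book's two example numbers line up (a sketch; function names are mine):

```python
import math

def nyquist_capacity(bandwidth_hz, levels):
    """C = 2 H log2(V): noiseless channel with V discrete signal levels."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr_db):
    """C = H log2(1 + S/N), with the signal-to-noise ratio given in dB."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr)

print(nyquist_capacity(3000, 2))    # 6000 bps: binary signalling
print(shannon_capacity(3000, 30))   # ~29900 bps: the noise-imposed ceiling
print(nyquist_capacity(3000, 32))   # 30000 bps: 32 levels reach that ceiling
```

So the two rates answer different questions: Nyquist gives the rate for a chosen number of levels, Shannon gives the ceiling that noise puts on how many levels you can use.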
{ "source": [ "https://electronics.stackexchange.com/questions/21854", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6436/" ] }
21,886
I am studying the 8085 microprocessor architecture and the terms edge triggered and level triggered are really confusing me. Can anyone explain them to me in layman's terms? While studying the interrupts of the 8085, named RST 7.5, RST 6.5, RST 5.5 and TRAP, I came across these terms and they confused me. Here I have attached a link to the document I was reading from, and I have mentioned the points that confuse me. In the document: RST 7.5 -> Edge triggered. RST 5.5 -> Level triggered. TRAP -> Edge triggered and level triggered. (Why does it make any difference?) the document link
I didn't really read your document, but I can understand why you are confused. It is a very simple concept really. Let me explain. Triggering: This means making a circuit active. Making a circuit active means allowing the circuit to take input and give output. For example, suppose we have a flip-flop. When the circuit is not triggered, even if you give it some input data, it will not change the data stored inside the flip-flop, nor will it change the output Q or Q'. Now there are basically two types of triggering. The triggering is given in the form of a clock pulse or gating signal. Depending upon the type of triggering mechanism used, the circuit will become active at specific states of the clock pulse. Level Triggering: In level triggering the circuit becomes active when the gating or clock pulse is at a particular level. This level is decided by the designer. We can have negative level triggering, in which the circuit is active when the clock signal is low, or positive level triggering, in which the circuit is active when the clock signal is high. Edge Triggering: In edge triggering the circuit becomes active at the negative or positive edge of the clock signal. For example, if the circuit is positive edge triggered, it will take input at exactly the time at which the clock signal goes from low to high. Similarly, input is taken at exactly the time at which the clock signal goes from high to low in negative edge triggering. But keep in mind that after the input is taken, it can be processed during all the time until the next input is taken. That is the general description of the triggering mechanisms, and those also apply to the 8085 interrupts.
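The behavioral difference can be shown with a toy simulation of a transparent (level-triggered) latch versus a positive-edge-triggered flip-flop (the waveforms below are made up for illustration):

```python
def level_triggered(clk, d):
    """Transparent latch: output follows D for as long as CLK is high."""
    q, out = 0, []
    for c, x in zip(clk, d):
        if c:               # latch is transparent while the clock is high
            q = x
        out.append(q)
    return out

def edge_triggered(clk, d):
    """D flip-flop: output updates only on a 0 -> 1 (rising) clock edge."""
    q, prev, out = 0, 0, []
    for c, x in zip(clk, d):
        if c and not prev:  # rising edge detected
            q = x
        prev = c
        out.append(q)
    return out

clk = [0, 1, 1, 0, 1, 1, 0]
d   = [1, 1, 0, 0, 0, 1, 1]
print(level_triggered(clk, d))  # [0, 1, 0, 0, 0, 1, 1]
print(edge_triggered(clk, d))   # [0, 1, 1, 1, 0, 0, 0]
```

Notice the level-triggered output tracks every change of D while the clock is high, whereas the edge-triggered output samples D only at the instant of the rising edge and then holds it.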
{ "source": [ "https://electronics.stackexchange.com/questions/21886", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6158/" ] }
21,928
There was some discussion on the question What are some reasons to connect capacitors in series? which I don't see as being conclusively resolved: "turns out that what might LOOK like two ordinary electrolytics are not, in fact, two ordinary electrolytics." "No, do not do this. It will act as a capacitor also, but once you pass a few volts it will blow out the insulator." 'Kind of like "you can't make a BJT from two diodes"' "it is a process that a tinkerer cannot do" So is a non-polar (NP) electrolytic cap electrically identical to two electrolytic caps in reverse series, or not? Does it not survive the same voltages? What happens to the reverse-biased cap when a large voltage is placed across the combination? Are there practical limitations other than physical size? Does it matter which polarity is on the outside? I don't see what the difference is, but a lot of people seem to think there is one. Summary: As posted in one of the comments, there's a sort of electrochemical diode going on: The film is permeable to free electrons but substantially impermeable to ions, provided the temperature of the cell is not high. When the metal underlying the film is at a negative potential, free electrons are available in this electrode and the current flows through the film of the cell. With the polarity reversed, the electrolyte is subjected to the negative potential, but as there are only ions and no free electrons in the electrolyte the current is blocked. — The Electrolytic Capacitor by Alexander M. Georgiev Normally a capacitor cannot be reverse-biased for long, or large currents will flow and "destroy the center layer of dielectric material via electrochemical reduction": An electrolytic can withstand a reverse bias for a short period, but will conduct significant current and not act as a very good capacitor.
— Wikipedia: Electrolytic capacitor However, when you have two back-to-back, the forward-biased capacitor prevents a prolonged DC current from flowing. This works for tantalums, too: For circuit positions when reverse voltage excursions are unavoidable, two similar capacitors in series connected "back to back" ... will create a non-polar capacitor function ... This works because almost all the circuit voltage is dropped across the forward biased capacitor, so that the reverse biased device sees only a negligible voltage. From Solid Tantalum Capacitors Frequently Asked Questions (FAQs): The oxide dielectric construction that is used in tantalum capacitors has a basic rectified property which blocks current flow in one direction and at the same time offers a low resistance path in the opposite direction.
Summary: Yes, "polarised" aluminum "wet electrolytic" capacitors can legitimately be connected "back-to-back" (i.e. in series with opposing polarities) to form a non-polar capacitor. C1 and C2 are always equal in capacitance and voltage rating. Ceffective = C1/2 = C2/2. Veffective = voltage rating of C1 & C2. See "Mechanism" at end for how this (probably) works. It is universally assumed that the two capacitors have identical capacitance when this is done. The resulting capacitor has half the capacitance of each individual capacitor. E.g. if two 10 uF capacitors are placed in series, the resulting capacitance will be 5 uF. I conclude that the resulting capacitor will have the same voltage rating as the individual capacitors. (I may be wrong.) I have seen this method used on many occasions over many years and, more importantly, have seen the method described in application notes from a number of capacitor manufacturers. See at end for one such reference. Understanding how the individual capacitors become correctly charged requires either faith in the capacitor manufacturers' statements ("act as if they had been bypassed by diodes") or additional complexity, BUT understanding how the arrangement works once initiated is easier. Imagine two back-to-back caps with Cl fully charged and Cr fully discharged. If a current is now passed through the series arrangement such that Cl then discharges to zero charge, then the reversed polarity of Cr will cause it to be charged to full voltage. Attempts to apply additional current and to further discharge Cl so it assumes incorrect polarity would lead to Cr being charged above its rated voltage. I.e. it could be attempted BUT would be outside spec for both devices. Given the above, the specific questions can be answered: What are some reasons to connect capacitors in series? Can create a bipolar cap from 2 x polar caps. OR can double rated voltage as long as care is taken to balance voltage distribution.
Paralleled resistors are sometimes used to help achieve balance. "turns out that what might LOOK like two ordinary electrolytics are not, in fact, two ordinary electrolytics." This can be done with ordinary electrolytics. "No, do not do this. It will act as a capacitor also, but once you pass a few volts it will blow out the insulator." Works OK if ratings are not exceeded. 'Kind of like "you can't make a BJT from two diodes"' The reason for the comparison is noted, but it is not a valid one. Each half capacitor is still subject to the same rules and demands as when standing alone. "it is a process that a tinkerer cannot do" A tinkerer can - it is entirely legitimate. So is a non-polar (NP) electrolytic cap electrically identical to two electrolytic caps in reverse series, or not? It could be, but the manufacturers usually make a manufacturing change so that there are two anode foils, BUT the result is the same. Does it not survive the same voltages? Voltage rating is that of a single cap. What happens to the reverse-biased cap when a large voltage is placed across the combination? Under normal operation there is NO reverse-biased cap. Each cap handles a full cycle of AC while effectively seeing half a cycle. See my explanation above. Are there practical limitations other than physical size? No obvious limitation that I can think of. Does it matter which polarity is on the outside? No. Draw a picture of what each cap sees in isolation, without reference to what is "outside" it. Now change their order in the circuit. What they see is identical. I don't see what the difference is, but a lot of people seem to think there is one. You are correct. Functionally, from a "black box" point of view, they are the same.
MANUFACTURER'S EXAMPLE: In the document Application Guide, Aluminum Electrolytic Capacitors by Cornell Dubilier, a competent and respected capacitor manufacturer, it says (on pages 2.183 & 2.184): If two, same-value, aluminum electrolytic capacitors are connected in series, back-to-back with the positive terminals or the negative terminals connected, the resulting single capacitor is a non-polar capacitor with half the capacitance. The two capacitors rectify the applied voltage and act as if they had been bypassed by diodes. When voltage is applied, the correct-polarity capacitor gets the full voltage. In non-polar aluminum electrolytic capacitors and motor-start aluminum electrolytic capacitors a second anode foil substitutes for the cathode foil to achieve a non-polar capacitor in a single case. Of relevance to understanding the overall action is this comment from page 2.183: While it may appear that the capacitance is between the two foils, actually the capacitance is between the anode foil and the electrolyte. The positive plate is the anode foil; the dielectric is the insulating aluminum oxide on the anode foil; the true negative plate is the conductive, liquid electrolyte, and the cathode foil merely connects to the electrolyte. This construction delivers colossal capacitance because etching the foils can increase surface area more than 100 times and the aluminum-oxide dielectric is less than a micrometer thick. Thus the resulting capacitor has very large plate area and the plates are awfully close together. ADDED: I intuitively feel, as Olin does, that it should be necessary to provide a means of maintaining correct polarity. In practice it seems that the capacitors do a good job of accommodating the startup "boundary condition". Cornell Dubilier's "acts like a diode" needs better understanding. MECHANISM: I think the following describes how the system works.
As I described above, once one capacitor is fully charged at one extreme of the AC waveform and the other fully discharged, the system will operate correctly, with charge being passed into the outside "plate" of one cap, across from the inside plate of that cap to the other cap, and "out the other end". I.e. a body of charge transfers back and forth between the two capacitors and allows net charge to flow to and from through the dual cap. No problem so far. A correctly biased capacitor has very low leakage. A reverse-biased capacitor has higher leakage, and possibly much higher. At startup one cap is reverse-biased on each half cycle and leakage current flows. The charge flow is such as to drive the capacitors towards the properly balanced condition. This is the "diode action" referred to - not formal rectification per se, but leakage under incorrect operating bias. After a number of cycles balance will be achieved. The "leakier" the cap is in the reverse direction, the quicker balance will be achieved. Any imperfections or inequalities will be compensated for by this self-adjusting mechanism. Very neat.
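The "half the capacitance" figure in the summary is just the ordinary series-capacitor formula; a quick sketch:

```python
def series_capacitance(*caps):
    """Capacitance of capacitors in series: 1 / (1/C1 + 1/C2 + ...)."""
    return 1 / sum(1 / c for c in caps)

# Two equal 10 uF caps back-to-back give half the capacitance of one
print(series_capacitance(10e-6, 10e-6) * 1e6)   # 5.0 (uF)
```

With unequal values the result is dominated by the smaller cap, which is one more reason the back-to-back trick is always done with matched capacitors.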
{ "source": [ "https://electronics.stackexchange.com/questions/21928", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/142/" ] }
21,998
I'm a beginner in hobby electronics and I am wondering why digital oscilloscopes are still so expensive. In times of cheap GHz CPUs, USB 3, ADSL modems, DVB-S receivers and Blu-ray players, all of which have remarkable clock frequencies/sampling rates, it makes me wonder why a digital oscilloscope capable of sampling signals with a bandwidth of 10MHz is still very expensive, while 100MHz is already high-end. How can this be explained? What differentiates the ADC of a digital oscilloscope from one in the devices mentioned above?
I'd firstly agree with other posters as to economies of scale . Consumer devices are produced in the millions, whereas such a market does not exist for digital oscilloscopes. Secondly, oscilloscopes are precision devices . They need to undergo rigorous quality control to ensure they live up to expected standards. This further increases costs. As for bandwidth: the Nyquist criterion states that the sampling rate should be at least twice the frequency you want to measure. But even at twice the rate, the result is terrible at best. Consider the following pictures: The graph captions tell the story. You need to exceed the specified bandwidth by a great amount in order to gain an accurate representation of a square wave input signal (high frequency harmonics). And greater bandwidth = greater cost. In the end it is the precision, bandwidth and limited production quantities that drive up prices.
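The square-wave point can be made numerically: a band-limited front end only passes the odd harmonics that fit within its bandwidth, and the reconstruction error shrinks only as many multiples of the fundamental are admitted. This is a sketch; the sample count and harmonic limits are arbitrary:

```python
import math

def square_partial(t, f0, max_harmonic):
    """Fourier synthesis of a unit square wave using odd harmonics up to max_harmonic."""
    return sum(4 / (math.pi * k) * math.sin(2 * math.pi * k * f0 * t)
               for k in range(1, max_harmonic + 1, 2))

def mse_vs_ideal(max_harmonic, f0=1.0, samples=1000):
    """Mean squared error between the band-limited synthesis and an ideal square wave."""
    err = 0.0
    for i in range(samples):
        t = (i + 0.5) / samples          # avoid sampling exactly on the transitions
        ideal = 1.0 if t % 1.0 < 0.5 else -1.0
        err += (square_partial(t, f0, max_harmonic) - ideal) ** 2
    return err / samples

# Error falls steadily as more harmonics fit inside the "bandwidth"
print(mse_vs_ideal(1), mse_vs_ideal(9), mse_vs_ideal(99))
```

In scope terms: reproducing a 10 MHz square wave decently needs the 30, 50, 70... MHz harmonics too, which is why useful bandwidth must far exceed the fundamental.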
{ "source": [ "https://electronics.stackexchange.com/questions/21998", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6491/" ] }
22,017
I'm not that big of a fan of the official Arduino IDE (in terms of visuals), so I've started looking for nicer alternatives. However, most of the projects I've found are in alpha/beta and are generally incomplete. I'm 100% new to circuit board programming and I've never used an Arduino before, but from what I gather the Arduino IDE is just a wrapper for an avr library which does the actual writing to the board. Are other "arduino-like device" IDEs a possible option? Again, I'm very new to this so user-friendly-ness would be nice.
Warning, a long-winded explanation is forthcoming. I'd like to clear up some misconceptions that I think you're having. The Arduino is really two things: a collection of C/C++ libraries compiled with avr-gcc, and a small bootloader firmware program which was previously programmed onto the chip at the factory. Yes, the Arduino IDE basically wraps avr-gcc - the AVR C compiler. Your projects, or "sketches", incorporate the mentioned Arduino libraries and are compiled with avr-gcc. However, none of this has anything to do with how anything gets written to the board. How these sketches are deployed is a bit different than usual. The Arduino IDE communicates with your Arduino over the usb-to-serial chip on the board and initializes a programming mode that the bootloader understands, then sends your new program to the chip, where the bootloader will place it in some known location and run it. There is no "avr library which does the actual writing" - it's just the Arduino IDE opening a serial port and talking with the bootloader - this is how your debug messages get printed to the IDE during runtime as well. Any alternative IDE will have to be able to do this same serial communication with the bootloader. Arduino is easy because of all the libraries they already provide you with, and one-touch program-and-run from the IDE. I honestly don't think it gets any easier or more user friendly. They've abstracted away all the details of the AVR microcontroller and the building/deploying process. The alternative would be something like avr-studio (which also uses avr-gcc for its compiler) and an ICSP programmer (which is an additional piece of hardware that you have to buy). You aren't provided with much other than some register definition header files and some useful macros. You also aren't provided with any bootloader on your AVR chip; it's just a blank slate.
Anything you want to do with the microcontroller, you'll have to go in depth and learn about its hardware peripherals and registers, and move bytes around in C. Want to print a debug message back to the PC? Write the UART routine for print() first and open a terminal on your computer. A step lower than this and you're writing code in a text editor and calling avr-gcc and avrdude (the command-line programming tool) from a Makefile or the command line. A step lower than that and you're writing assembly in a text editor and calling the avr-assembler and avrdude. I'm not sure where I'm going with this; I just think that the existing IDE and Arduino are absolutely genius and perfect for a beginner - their claim to fame is user friendliness. Maybe not the answer you're looking for: learn the workflow and make something cool with it.
{ "source": [ "https://electronics.stackexchange.com/questions/22017", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3619/" ] }
22,157
I have a simple application where a 6V, 2A DC power supply is driving 4 hobbyist-grade servos. In most cases this is adequate, but there are cases (when all servos are suddenly loaded) when I think the power draw will exceed 2A for a short period of time. It was suggested to me that I should use a capacitor between my power source and the servos in order to handle this kind of transient load. Unfortunately the suggester didn't know how this would actually be implemented. I tried the University of Google, but mostly came up with videos of giant capacitors being used to dramatically explode things. Could someone point me in the right direction, or give me a simple circuit example of how I would do this? Is it as simple as wiring a capacitor onto the positive lead? What calculations should I make to determine the appropriate capacitor size? For example, if I wanted to sustain a peak of 3A for 5 seconds.
Subset summary: I = excess current to be provided. T = time to provide this extra current. V = acceptable drop in voltage during this period. C = capacitance in Farads to meet this requirement. Then: C = I x T / V In theory, and close enough to be useful in real applications: one Farad will drop in voltage by one volt in one second with a 1 Ampere load. Scale as required. The results are not encouraging :-(. (1) Providing a capacitor to do everything. For overcurrent of I amperes, droop of V volts over time T seconds (or part thereof), the capacitor C required is (as above) C = I x T / V <- Cap for given V, I, T. I.e. more current requires more capacitance. More holdup time requires bigger capacitance. More acceptable voltage droop = less capacitance. Or, droop given C, I, T is, simply rearranging: Vdroop = (T x I) / C. Or the time a cap C will hold up, given C, I, V, is, simply rearranging: Time = T = V x C / I. So e.g. for a 1 amp overload for 1 second and 2 volt droop, C = I x T / V = 1 x 1 / 2 = 0.5 Farad = Um. Supercaps may save you as long as the required peak current can be supported. SUPERCAP SOLUTION A supercap (SC) solution looks almost viable. These 3F, 2.5V supercaps are available ex stock from Digikey for $1.86/10 and under 85 cents in manufacturing volume. Prices For the 3F, 2.7V unit the acceptable 1 second discharge rate to 1/2 Vrated is 3.3A. Internal resistance is under 80 milliohms, allowing about 0.25V drop due to ESR at 3A. Two in series gives 1.5F and 5.4V Vmax. Three in series gives 1 Farad, 8.1V Vmax, the same 3A discharge and 0.75V drop due to ESR at 3A. This would work well for surges in the tenths of a second range. For the specified worst case 3A, 5 second requirement, perhaps 15 Farads are needed. The same family 10F, 2.7V part, $3/10, 26 milliohm, looks good. 10A allowed discharge. Two in series drooping from 5.4 to 5 volts at 3A gives T = V x C / I = 0.4 x 5 / 3 = 0.666 seconds. Getting there.
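The formulas in section (1) can be checked with a short sketch, reproducing both the 0.5 F worked example and the two-supercap holdup time:

```python
def cap_for_holdup(i_amps, t_seconds, v_droop):
    """C = I * T / V: capacitance needed to supply I for T with droop V."""
    return i_amps * t_seconds / v_droop

def holdup_time(c_farads, v_droop, i_amps):
    """T = V * C / I: how long a cap can supply I before drooping by V."""
    return v_droop * c_farads / i_amps

# 1 A extra for 1 s with 2 V of acceptable droop:
print(cap_for_holdup(1, 1, 2))        # 0.5 F
# two 10 F / 2.7 V supercaps in series (5 F) drooping 0.4 V at 3 A:
print(holdup_time(5, 0.4, 3))         # ~0.67 s
```

Note these ignore ESR drop, which the supercap figures above show can add another few tenths of a volt at 3 A.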
(2) IF the droop causes system reset etc. and one wishes to avoid this (as one usually does :-) ), an often useful solution is to provide a sub-supply for the electronics with a cap that holds them up over the dropout period. E.g. electronics need say 50 mA. Holdup time desired = say 3 seconds (!). Acceptable droop = 2V say. From above, C = I x T / V = 0.05 x 3 / 2 = 0.075 Farad = 75,000 uF = 75 mF (milliFarads). This is large by most standards but doable. A 100,000 uF supercap is reasonably small. Here the 3 second holdup is "the killer". For a more typical, say, 0.2s dropout the required cap is 75,000 uF x 0.2/3 = 5000 uF = very doable. (3) A small holdup battery for the electronics can be useful for obvious reasons. (4) Boost converter: In a commercial design where 4 x C non-rechargeable batteries were used to provide 5V, 3V3 and the motor drive supply (exercise equipment controller), end of life Vbattery got well below the needed 5V during end of battery life, and much, much below when motors operated. (The primary design was not mine.) I added a boost converter based on a 74C14 hex Schmitt CMOS inverter package to provide 5V to the electronics at all times, plus 3V3 regulated to the microcontroller. Quiescent current of boost converter, 2 x LDO regs and electronics was under 100 uA. E&OE - I may have got something on the wrong side somewhere there, easily done. If so, somebody will tell me about it :-). ADDED: Query: It has been (quite understandably) suggested that: I am not sure you are answering the user's main question. To stop from overloading a power supply it does not seem feasible. It is not a case of power supply cutout, it is a case of wanting to allow higher current for short periods (on the order of 5 or more seconds). This seems like a case of needing another power supply. Response: I believe that I am addressing the question completely, as asked, BUT I am also addressing what I believe is liable to be the larger question as well.
Consequently, there seem to be tangents and irrelevant material here. I have addressed points unasked as well as points asked, based both on my own experiences in closely analogous applications and also on general expectations. The issues are "What if demand exceeds supply" and "What if supply falls below demand". These are one and the same in practice, but may have different causes. Note that my answer (1) specifically says "For overcurrent of I amperes", and his question was "... but there are cases (when all servos are suddenly loaded) when I think the power draw will exceed 2A for a short period of time." I.e. dealing with overcurrent is exactly what he is asking. BUT overcurrent is caused by overload and, when the "cost" of trying to deal with overcurrent is seen (0.5 Farad caps or whatever), then the perspective may well turn to "what can we do to ride out this overload differently". The next most obvious "solution" is to accept the hit on motor performance, let the supply rail fall, BUT maintain a local supply to keep the electronics sane. Another solution which I didn't bother addressing is to unload the system by e.g. slowing servo rates when all are on at once. Whether this is acceptable depends on the application. The reason that we can TRY to address the short-term overcurrent situation is that the supply has spare capacity most of the time, and this is used to charge the caps prior to the surge event. The caps do not magically manufacture extra current, they just save up spare current for a rainy day. To supply current the capacitor MUST lose voltage, so I specify the acceptable limit for that too. I think you'll find that if you couch his requirement in numbers and then plug them into my formulae, his question as asked will be answered. Re geometrikal's post: But it is not a case of 6V*3A*5s. You need enough capacitance to stop the output from sagging low enough to cause the power supply to need to source more current.
It is really just not going to happen in a good way. What happens depends very much on the original supply characteristics. Imagine an LM350 was being used. Datasheet here . This is essentially an LM317 on steroids. Good for about 3A in most conditions and 4.5A in many, depending on the application. 3A guaranteed. Fig 2 shows that it is good for 4.5A for a Vin-Vout differential of 5 to 15V, depending on other issues. It can be run up near its current limit with good regulation. If being run at 3A, and if the drop across it is not too high and it is well heatsunk, it will not be hot and intermittent peaks of 4.5A will be provided. Do this too often and the temperature will rise, and figs 1, 4, 5 and a few things unshown will affect how it behaves. First off, Vout will start to droop on peaks, and a capacitor on the output will help it serve the load. With increasing droop and longer peaks, the capacitor will be called on to do more. If the IC decided to completely cut out for a moment (which it is unlikely to ever do), then as long as T x I / C does not exceed the voltage droop which is acceptable, the capacitor will do the whole job. Restore Iout to 3A and the capacitor will recharge until next time.
{ "source": [ "https://electronics.stackexchange.com/questions/22157", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17697/" ] }
22,291
Why can't you use a single resistor for a number of LEDs in parallel instead of one each?
The main reason is that you can't safely connect diodes in parallel. When we use one resistor, we have a current limit for the whole diode section; after that it's up to each diode to control the current that goes through it. The problem is that real-world diodes don't have the same characteristics, and therefore there's a danger that one diode will start conducting while the others won't. So you basically want this ( open in Paul Falstad's circuit simulator ): And you in reality get this ( open in Paul Falstad's circuit simulator ): As you can see, in the first example all diodes are conducting equal amounts of current, and in the second example one diode is conducting most of the current while the other diodes are barely conducting anything at all. The example itself is a bit exaggerated so that the differences will be a bit more obvious, but it nicely demonstrates what happens in the real world. The above is written with the assumption that you will choose the resistor in such a way that it sets the current to n times the current you want in each diode, where n is the number of diodes, and that this current is actually larger than the current which a single diode can safely conduct. What then happens is that the diode with the lowest forward voltage will conduct most of the current and it will wear out the fastest. After it dies (if it dies as an open circuit) the diode with the next lowest forward voltage will conduct most of the current and will die even faster than the first diode, and so on until you run out of diodes. One case that I can think of where you can use one resistor powering several diodes would be if the maximum current going through the resistor is small enough that a single diode can work with the full current. This way the diodes won't die, but I myself haven't experimented with that so I can't comment on how good an idea it is.
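The current-hogging effect described above can be sketched numerically with the Shockley diode equation. The saturation currents, emission coefficient and total current below are illustrative assumptions, not measured LED values:

```python
import math

# Hedged sketch: two mismatched LEDs sharing one resistor-limited current.
# Shockley model: I = Is*(exp(V/(n*Vt)) - 1). The Is values are made up
# for illustration; real LEDs vary part to part.

VT, N = 0.025, 2.0          # thermal voltage, emission coefficient (assumed)
I_TOTAL = 0.040             # 40 mA total, intended as 20 mA per LED

def diode_current(v, i_sat):
    return i_sat * (math.exp(v / (N * VT)) - 1)

def shared_voltage(i_sats, i_total):
    """Bisect for the common voltage at which the branch currents sum to i_total."""
    lo, hi = 0.0, 5.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if sum(diode_current(mid, s) for s in i_sats) < i_total:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A 2x spread in Is (roughly a 35 mV difference in forward voltage)
# skews the split well away from 50/50.
sats = [1e-18, 2e-18]
v = shared_voltage(sats, I_TOTAL)
currents = [diode_current(v, s) for s in sats]
print(currents)
```

Even this modest mismatch gives one LED twice the other's current, which is the wear-out mechanism the answer describes.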
{ "source": [ "https://electronics.stackexchange.com/questions/22291", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6587/" ] }
22,796
What is the required creepage (e.g. trace-to-trace) distance for PCBs handling 240VAC rms? What about 120VAC? This is for UL and CE certification. The standards for PCB Creepage (e.g. the distance across the surface of a PCB between high-voltage connections) are locked up in proprietary, pay-only IEC standards (specifically, IEC Report 664/664A ). This is troubling, as following these standards is a good way to ensure safety, even if you never intend to actually get your project UL or CE certified. Can we get a nice summary of what trace-trace spacing should be maintained for common voltages (e.g. 120V, 240V), with common materials (e.g. FR4, etc...)?
Brings back memories, not all of them good. Herewith a potpourri / hodgepodge - some of it of value. Useful online calculator covering a subset of the question. They say: Insulation Calculator. This program is based on Table 2G and Figure 2F of IEC 60950. Select the circuits that bridge the insulation to be determined by using the drop-down lists. For example, a Primary Circuit to a Primary Circuit requires Functional Insulation. The Insulation Calculator will automatically determine the insulation. Notes are also provided as called out in Table 2G. Acknowledgement The author thanks the International Electrotechnical Commission (IEC) for permission to reproduce Section 2.9 "Insulation", Section 2.10 "Clearances, creepage distance and distances through insulation", and Section 5.2 "Electric Strength" from its International Standard IEC 60950. All such extracts are copyright of IEC, Geneva, Switzerland. All rights reserved. Consult with IEC 60950 for all final design decisions. Useful summary page Includes From DIN EN 60664-1 (VDE 0110-1), creepage. From DIN EN 60664-1 (VDE 0110-1), clearance ___________________________ Insulation Material groups: In the table, "material groups" are mentioned. Materials are grouped according to their CTI (Comparative Tracking Index). CTI is a measure of the material's resistance to the formation of conductive tracks which lead to material breakdown when exposed to a standard CTI test. From here Insulation Material Groups (in accordance with EN 60664-1:2007 and VDE 0110-1) For the purposes of the above mentioned standards, materials are classified into four groups according to their CTI values. These values are determined in accordance with IEC 60112 using solution A. The groups are as follows: Insulation materials group I 600 ≤ CTI Insulation materials group II 400 ≤ CTI < 600 Insulation materials group IIIa 175 ≤ CTI < 400 Insulation materials group IIIb 100 ≤ CTI < 175. 
The proof tracking index (PTI) is used to verify the tracking characteristics of materials. A material may be included in one of these four groups on the basis that the PTI is not less than the lower value specified for the group. The means of assessing CTI is described here, and this video [1m 38s] is both impressive and informative. Arcs, sparks, smoke and flames happen :-). CTI testing - stand clear: And again from here - not much else useful on this exact topic, but MANY OTHER SIMILAR pages with links to portions of relevant standards. Small but useful extract from (Extract DIN VDE 0110-04.97*) They say: This standard is a technical adaptation of IEC Report 664/664A and specifies, in general, the minimum insulation distances for equipment. It can be used by committees to protect persons and property in the best possible way from the effects of electrical voltages or currents (e.g. fire hazard) or from functional failure of the equipment by providing adequate dimensioning of clearances and creepage distances in equipment.) Useful subset Interesting comment from here : IEC 60601-1 Third Edition: Creepage Distance and Clearance Requirements July 04, 2011 It's simple: Engineers must be aware of the design for each medical device. The awareness of what is most critical is important. But why? The isolation required between parts with different operating voltages, to prevent against unacceptable risk, is the primary reason for the importance of creepage and clearance distances. Specifically, creepage is the shortest distance along the path between two conductive parts of a medical device, measured along the surface of the insulation. The clearance is similar, but very different. It [clearance] is the shortest distance between two conductive parts, measured through air. In IEC 60601-1 Third Edition, there are requirements for creepage distance and clearance, which follows the IEC "Modern Standard" approach. 
This approach, though, requires the use of six different tables for spacings and the introduction of five additional requirements to be included as part of the evaluation. But what if your company has already begun to address the requirements established by Second Edition? "If your product meets Second Edition's creepage and clearance, then the medical product will be in compliance with the requirements for Third Edition," said Todd Konieczny, North American Medical Technical Leader. "The Third Edition requirements for creepage and clearance require less stringent parameters for operator protection – which, ultimately, allows companies to build a smaller product."
{ "source": [ "https://electronics.stackexchange.com/questions/22796", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1374/" ] }
22,898
I designed my first PCB for a DC-DC boost converter only to find that it produced very noisy output. The design is based around the MIC2253 . Here's a schematic: My circuit allows for different combinations of input voltages (Vin) and output voltages (Vout); the case I am debugging is with Vin=3.6V and Vout=7.2V. The load was a 120 ohm resistor. I calculated the duty cycle D=0.5 (i.e. 50%). This seems to be within the 10% minimum and 90% maximum duty cycle limits specified in the datasheet. The other components, i.e. caps, inductors, resistors, are the same as or similar to what the datasheet suggests in its application example. The design appears to give the correct RMS step-up voltage on the output, but after viewing the signal on an oscilloscope I see damped sinusoidal voltage oscillations appearing periodically, which seem to be initiated by the switching of the inductor. I see the same oscillations on almost every ground point on the board. The oscillations on the output are large, that is, 3 V peak to peak. After doing a bit of research it seems that my problems are not particular to my choice of converter, but to problems with my PCB layout (see links below). I'm not sure how to fix my layout to ensure acceptable results. These documents seem useful for debugging the problem: http://www.physics.ox.ac.uk/lcfi/Electronics/EDN_Ground_bounce.pdf http://www.analog.com/library/analogDialogue/cd/vol41n2.pdf http://www.enpirion.com/Collateral/Documents/English-US/High-frequency-implications-for-switch-mode-DC-R_0.pdf http://www.maxim-ic.com/app-notes/index.mvp/id/3645 http://www.maxim-ic.com/app-notes/index.mvp/id/735 I've attached three images. "original pcb.png" contains an image of the board I am having issues with. It is a 2-layer board. Red is the top copper. Blue is the bottom copper. 
"current loops.jpg" shows the prototype board with orange and yellow overlays of the two different current paths used to charge (orange) and discharge (yellow) the inductor. One of the articles ( http://www.physics.ox.ac.uk/lcfi/Electronics/EDN_Ground_bounce.pdf ) suggested that the two current loops should not change in area, so I tried to minimize the change in their area in a new layout I started in "pcb_fix.png". I hacked the original PCB so that it was closer to this new layout, but the performance of the board didn't change. It is still noisy! The quality of the hack isn't as good as shown in "pcb_fix.png"; however, it is a fair approximation. I would have expected somewhat of an improvement, but I didn't see any. I'm still not sure how to fix this. Maybe the ground pour is causing too much parasitic capacitance? Perhaps the caps have too much impedance (ESR or ESL)? I don't think so, because they are all ceramic multilayer and have the values and dielectric material requested by the datasheet, i.e. X5R. Perhaps my traces have too much inductance. I chose a shielded inductor, but is it possible that its magnetic field is interfering with my signals? Any help would be much appreciated. At the request of a poster, I've included some oscilloscope output under different conditions. Output, AC Coupled, 1M Ohm, 10X, BW limit OFF: Output, AC Coupled, 1M Ohm, 10X, BW limit OFF: Output, AC Coupled, 1M Ohm, 10X, BW limit 20MHz: Output, AC Coupled, 1M Ohm, 1X, BW limit 20MHz, 1uF, 10uF, 100nF caps and 120 ohm resistor shunting output, i.e. they are all in parallel: Switching Node, DC Coupled, 1M Ohm, 10X, BW limit OFF Switching Node, AC Coupled, 1M Ohm, 10X, BW limit 20MHz ADDED: Original oscillations attenuated greatly; however, under heavier load new undesirable oscillations occur. Upon implementing several of the changes suggested by Olin Lathrop, a large decrease in oscillation amplitude was observed. 
Hacking the original circuit board to approximate the new layout helped somewhat by bringing down the oscillations to 2V peak to peak: It will take at least 2 weeks and more money to get new prototype boards, so I am avoiding this order until I sort out the problems. Adding more input 22uF ceramic capacitors made only a negligible difference. However, the overwhelming improvement came from simply soldering a 22uF ceramic cap between the output pins and measuring the signal across the cap. This brought the maximum noise amplitude to 150mV peak to peak without any bandwidth limiting of the scope!! Madmanguruman suggested a similar approach, with the exception that he suggested altering the probe tip instead of the circuit. He suggested putting two caps between ground and the tip: one 10uF electrolytic and one 100nF ceramic (in parallel, I assumed). In addition, he suggested limiting the bandwidth of the measurement to 20MHz and putting the probes on 1x. This seemed to have a noise-attenuating effect as well, of about the same magnitude. I guess I can conclude that there was originally insufficient capacitance at the output. I'm not sure if this is an acceptably low noise floor or even a typical noise amplitude for a switching converter, but it is a massive improvement. This was encouraging, so I went on to test the robustness of the circuit under more significant loading. Unfortunately, under heavier loading the circuit is producing some new weird behaviour. I tested the circuit with a 30 ohm resistive load. Although the board does still boost the input voltage as it should, the output now has a low-frequency sawtooth/triangle wave on it. I'm not sure what this indicates. It looks to me like constant-current charging and discharging of the output cap at a much lower frequency than the switching frequency of 1 MHz. I am unsure why this would happen. Probing the switching node under the same test conditions showed a messy signal and horrible oscillations. 
Solution Found The question has been answered and the circuit is performing adequately. The problem was indeed related to the stability of the control loop, as Olin Lathrop suggested. I received many great suggestions; however, Olin was the only one to suggest this course of action. I therefore credit him with the right answer to my question. However, I greatly appreciate everyone's help. Several of the suggestions made were still relevant to improving the design and will be implemented in the next revision of the board. I was compelled to follow Olin's advice also because I noticed that the frequency of the sawtooth/triangle output had the same frequency of appearance as the square wave portion of the signal at the switching node. I thought that the ramp up of the voltage on the output was due to successfully energizing the inductor and the ramp down was due to failure to adequately energize the inductor during the oscillatory portion of the signal on the switching node. Because of this, it made sense that this was a stability problem. By following Olin's suggestion to take a closer look at the compensation pin, I determined that increasing the capacitance of the RC series network on the comp pin restored the stability of the control loop. The effect that this had on the switching node was significant, as can be seen by the square wave output: The low-frequency sawtooth/triangle wave was eliminated. Some high-frequency noise (100MHz) may still exist on the output, but it has been suggested that this is just an artefact of the measurement and disappears when the 200MHz scope's bandwidth is limited to 20MHz. The output is pretty clean at this point: I suppose I still have some questions regarding the high-frequency noise; however, I think that my questions are more general and not specific to this debugging question, so the thread ends here.
Your schematic is excessively large and laid out in a confusing way, which discourages people from responding. Don't draw grounds going upwards, for example, unless the parts really are coming from a negative voltage. If you want others to look at a schematic, give them some respect. Don't make us tilt our heads to read things, and make sure text doesn't overlap other parts of the drawing. Attention to these details not only helps your credibility, but it also shows respect for those you are seeking a favor from. I did see this question earlier, but all the above made me think "too much trouble, screw this", and then I went on to something with a lower hassle factor. You gave us a bunch of details, but forgot about the obvious high level issues. What voltage is the output supposed to be? You mentioned boosting somewhere in your lengthy writeup, but there also appears to be "7.2V" written by the output connector. This doesn't match the "2.5V-10V" written by the input. From how the inductor, switch, and diode are wired, you have a boost topology. This isn't going to work if the input exceeds the desired output voltage. What are your actual input and output voltages? At what current? Now to the ringing. First, some of these things are clearly scope artifacts. You have a very small (2.2µH) inductor. I didn't look at the controller datasheet, but that sounds surprisingly low. What switching frequency is the controller supposed to operate at? Unless it's a MHz or more, I'm skeptical about the 2.2 µH choice for the inductor. Let's look at some of your scope traces: This is actually showing a reasonably expected switching pulse. From this we can also see that the switching frequency, at least in this instance, is 1 MHz. Is that what you intended? The trace starts at the left with the switch closed so that the inductor is charging up. The switch opens at 100 ns and the inductor output therefore immediately rises until its current starts dumping thru D1. 
That is at 8V, so the output voltage is apparently something like 7.5V, considering D1 is a Schottky diode but is getting a large current pulse (it would be good to know how large, or at least how large the average is). This goes on for 300 ns until the inductor is discharged at t=400ns. At that point the output side of the inductor is open and is only connected to parasitic capacitance to ground. The inductance and this parasitic capacitance form a tank circuit, which is producing the ringing. There are only two cycles of this ringing before the next pulse, but note how it is decaying slightly. The little remaining energy that was left in the inductor after the diode shut off is now sloshing back and forth between it and the capacitance, but each cycle a little is getting dissipated. This is all as expected, and is one of the characteristic signatures of this kind of switching power supply. Note that the ringing frequency is about 5 MHz, which in a real commercial design you have to be careful to handle to avoid it radiating. This ringing can actually be the main emission from a switching power supply, not the pulse frequency as many people seem to assume. We can also see that the ringing is decaying towards a bit below 4V, which tells us the input voltage you were using in this case. This confirms it really is operating as a boost converter with about 2x stepup, at least in this case. The 2x stepup is also confirmed by the roughly equal inductor charge and discharge phases, which are 300 ns each in this instance. The free-ringing tank circuit phase is brought to an abrupt end when the switch turns on again at t=800ns. The switch stays on for about 300ns charging up the inductor, and the process repeats with about a 1 µs period. This scope trace actually shows things working as expected. There is no smoking gun here. You complain about output oscillations, but unfortunately none of your scope traces show this. 
The early ones aren't meaningful since they are most likely showing scope artifacts and common mode ground bounce showing up as a differential signal. Even this one: Isn't telling us much. Note the sensitive voltage scale. There is nothing surprising here at 20 mV/division. Some of this is almost certainly the common mode transients confusing the scope so that they show up as differential signal. The slower parts are the diode conducting and then not conducting, and the current pulse being partially absorbed by the capacitor. So, this all gets down to what exactly is the problem? If you are seeing large scale voltage fluctuations on the output over a number of switching cycles, then show that. That's what I thought you were originally complaining about. If that is the case, then take a careful look at the compensation network for the switcher chip. I didn't look up the datasheet, but from the name "comp" for pin 12 and the fact that C4 and R2 are connected to it, this is almost certainly the feedback compensation network. Usually, datasheets just tell you what to use and don't give you enough information to come up with your own values anyway. Read that section of the datasheet carefully and see if you have met all the conditions for using the values you did. Those are the suggested values for this part, right? Added: I meant to mention this before but it slipped thru the cracks. You have to make sure the inductor is not saturating. That can cause all sorts of nasty problems, including large transients and control instability. From the first scope trace I copied, we can see that the inductor is being charged for 300 ns from about 3.8 V. 3.8V x 300ns / 2.2µH = 518mA. That is the peak inductor current in this case. However, that is at a rather low output current. Again from the scope trace we can infer the output current is only about 75-80 mA. 
At higher output currents the peak inductor current will go up until eventually the controller will run in continuous mode (I'm guessing, but that's likely). You have to make sure the inductor current doesn't exceed its saturation limit over the full range. What is the inductor rated at? Added2: I think there are two basic problems here: You are expecting a switching power supply to have low noise like linear power supplies you have looked at. This is not reasonable. You are getting a lot of measuring artifacts which make the output look a lot worse than it really is. Your original layout didn't help matters. The second one is better but I still want to see a few improvements: Unfortunately you have the tStop layer turned on cluttering up what we really want to see, but I think we can still decipher this picture. You now have a direct path from the diode thru the output cap back to the ground side of the input cap without cutting across the ground plane. That's a big improvement over the original. However, you've got the ground plane broken up with a big L shaped slot in the middle that extends all the way to the bottom edge. The left and right parts of the bottom of the ground plane are connected only by a long roundabout route. This could be easily fixed by reducing the excessive spacing requirement around some of your nets, and by moving a few parts just a little. For example, there is no reason the two very large vias to the right of the + input couldn't be a little farther apart to let the ground plane flow between them. The same thing is true to the left of R3, between the cathode of the diode and C5, and between the board edge and D1. I also think you have too little capacitance both before and after the switcher. Change C1 to 22µF like C5, and add another ceramic cap immediately between the two pins of JP2. Try a new experiment with the new layout. Manually solder another 22µF cap directly between the pins of JP2 on the bottom of the board. 
Then clip the scope probe ground to the "-" pin (not some other ground point on the board, directly to the "-" pin only) and hook the probe itself to the "+" pin (again, right at the pin, not some other point on the output voltage net). Make sure nothing else is connected to the board, including any other scope probes, ground clips, grounding wires, etc. The only other connection should be the battery, which should also not be connected to anything else. Keep this setup at least a foot or so away from anything else conductive, particularly anything grounded. Now look at the output waveform. I suspect you will see substantially less of the noise that appeared to be in the first scope trace you posted.
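As a rough check of the numbers in this answer, the ideal boost duty cycle and the peak inductor current ramp can be sketched as follows (losses and the diode drop are ignored; this is a back-of-envelope aid, not a design procedure):

```python
# Hedged sketch of the boost-converter arithmetic used above:
# ideal duty cycle D = 1 - Vin/Vout, and the inductor current ramp
# I_pk = V_in * t_on / L during the charge phase (losses ignored).

def boost_duty(v_in, v_out):
    return 1 - v_in / v_out

def peak_inductor_current(v_in, t_on, L):
    return v_in * t_on / L

print(boost_duty(3.6, 7.2))                        # 0.5, the 50% duty cycle
print(peak_inductor_current(3.8, 300e-9, 2.2e-6))  # ~0.518 A, the 518 mA above
```

The 518 mA figure is why the inductor's saturation rating matters: at higher load currents this peak grows until the saturation limit is reached.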
{ "source": [ "https://electronics.stackexchange.com/questions/22898", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1900/" ] }
22,992
I read an interesting article about an invasive ant species the other day and was amazed by the following paragraph: The ants can bite, but the biggest danger is that they're attracted to circuit boxes. The reason isn't known, but their sheer numbers can create an ant bridge between connections, shorting out entire electrical systems. The journalist that wrote that probably doesn't know any more about electrical engineering than I do, so I thought I'd ask some of you guys... If this is good reporting, how does this happen? Related questions: How many ants do you think it would take to create a bridge between connections? What would you do to protect your equipment from these ants? Any theory as to why they are so attracted to circuit boxes? Or is this just bad reporting?
If the conductors are 1 ants length apart, then one ant is all it takes. If they are 10 ants lengths apart, then 10 ants, if they go top-to-tail. In reality it will take more as they move around lots. Also, it is probable that it would be a gradual build-up of dead ants. As an ant gets electrocuted it will curl up and/or explode. After a while, the bits of dead ant will eventually bridge the circuit. As for why... well, who knows what goes through the mind of an ant? (besides 110V) It is well known and documented that mice chew through cables because of the 50/60Hz buzz they produce. Maybe the ants are attracted by the EMF exciting certain areas of their tiny minds?
{ "source": [ "https://electronics.stackexchange.com/questions/22992", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6821/" ] }
23,182
I have a digital multimeter and its accuracy for VDC is marked like this: ±0,03%+10Digit This multimeter has a maximum display of 80000. So in the 80 V range it can show, for example, 79.999V. 0.03% of 80V is 0.024V - that is clear to me. But what does the +10Digit mean? The device in question is a Digitek DT-80000 .
1 digit means that the least significant digit can be off by +/- 1. At this resolution, 1 digit would mean +/- 0.001V. 10 digits means that your displayed 79.999V could actually be 79.989V (not including the 0.03%!). So basically, in your range the 10 digit specification means that +/- 0.03% + 0.01V is your error. For measuring 79.999V it means an absolute maximum error of +/- (79.999*0.03% + 10*0.001V) = 0.034V.
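The arithmetic above can be expressed as a small sketch:

```python
# Hedged sketch of the "+/- 0.03% + 10 digits" error arithmetic above.
# On the 80 V range, one least-significant digit is 0.001 V.

def max_error(reading, pct, digits, resolution):
    """Worst-case absolute error for a 'pct% + N digits' spec."""
    return reading * pct / 100 + digits * resolution

e = max_error(79.999, 0.03, 10, 0.001)
print(round(e, 3))  # 0.034
```

So a reading of 79.999 V really means "somewhere between about 79.965 V and 80.033 V".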
{ "source": [ "https://electronics.stackexchange.com/questions/23182", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1691/" ] }
23,190
What is the point of R2 in the following diagram: I get that R1 controls the current to the Base, but what does R2 do?
The R2 resistor is used to bring the voltage on the base into a known state. Basically, when you turn off whatever source of current is on the other side of R1, the whole line would otherwise go into an unknown state. It may pick up some stray interference, and that may influence the operation of the transistor or the device on the other side, or it may take some time for the voltage to drop with just the transistor base. Also note that the source of the current going through R1 may leak, and that may affect the way the transistor operates. With R2, which is in a configuration called a pull-down resistor, we are certain that whatever excess voltage there may be in the branch containing R1 will be safely conducted to ground.
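As a rough sketch of why "some time" matters: without the pull-down, the base node decays only through leakage, whereas with R2 the stray capacitance discharges with a time constant of R2 x C. The 10 kΩ and 10 pF values below are illustrative assumptions, not values from the schematic:

```python
# Hedged sketch: discharge time constant of the base node's stray
# capacitance through a pull-down resistor, tau = R * C.
# Component values are assumed for illustration.

def discharge_time_constant(r_ohms, c_farads):
    return r_ohms * c_farads

tau = discharge_time_constant(10e3, 10e-12)
print(tau)  # about 1e-7 s: the node falls ~63% of the way to ground in ~100 ns
```

Without R2 the same node might take milliseconds (or indefinitely long) to drift down, which is the "unknown state" the answer warns about.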
{ "source": [ "https://electronics.stackexchange.com/questions/23190", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6844/" ] }
23,349
I have a simple NPN switch, see the diagram. I feed a 100kHz square wave (TTL) to the base of this transistor and it turns on very very fast (a few ns), but it doesn't turn off as fast; it takes almost 2µs for it to turn off. (I am looking at the collector of this circuit). The diode is a laser, the transistor is a run-of-the-mill NPN ( datasheet ). I also tried another NPN from ON Semi which is faster (at least that's what I think), same story. Why doesn't the transistor turn off as fast? How can I make it turn off in a few ns? Is it better to use a MOSFET than an NPN in this case? ** UPDATE ** I have added a 1K instead of that NA capacitor pad and used a faster BJT; things improved a bit. (Actually, I found that the BJT is similar in speed but has lower collector output capacitance, 2pF vs. 6pF). Anyway, now I see turn-off in about 120ns. I will add a speed-up cap and report results from here.
A faster BJT will probably help once you get the fundamentals sorted out. There are two (probably) new miracle-working friends that you should meet. Anti saturation Schottky clamp Speedup capacitor. (1) Connect a small Schottky diode from base to collector (anode to base, cathode to collector), so that the diode is reverse biased when the transistor is off. When the transistor is turned on, the collector cannot fall more than a Schottky "junction" drop below the base. The transistor thus cannot go into saturation, and the accumulated charge is much smaller, so it is quicker to get rid of at turn-off. Example of this from here Look at the internal block diagrams for Schottky TTL. Note how this compares. This is primarily what allows Schottky TTL to be faster than standard TTL. (2) Connect a small capacitor in parallel with the resistor. This is known as a "speedup capacitor". Sounds good :-). Better for on than off, but it has a role both ways. It helps to "sweep charge" out of the base-emitter junction capacitance on turn-off and to get charge in there on turn-on. As per the example below from here . This page is VERY worth looking at. They note (more worthwhile material on the page): Reducing storage time . The biggest overall delay is storage time. When a BJT is in saturation, the base region is flooded with charge carriers. When the input goes low, it takes a long time for these charge carriers to leave the region and allow the depletion layer to begin to form. The amount of time this takes (ts) is a function of three factors: The physical characteristics of the device. The initial value of Ic. The initial value of reverse bias voltage applied at the base. Once again, we can't do much about the first factor, but we can do something about the other two. If we can keep the transistor just below saturation, then the number of charge carriers in the base region is reduced and so is ts. We can also reduce ts by applying a high initial reverse bias to the transistor. Fall time. 
Like rise time, fall time (tf) is a function of the physical characteristics of the transistor, and there is nothing we can do to reduce its value. Putting all these statements together, we see that delay and storage time can be reduced by: Applying a high initial value of Ib (to decrease delay time) that settles down to some value lower than that required to saturate the transistor (to reduce storage time). Applying a high initial reverse bias (to reduce storage time) that settles down to the minimum value required to keep the transistor in cutoff (to reduce delay time). It is possible to meet all of these conditions simply by adding a single capacitor to a basic BJT switch. This capacitor, called a speed-up capacitor, is connected across the base resistor as shown in Figure 19-7. The waveforms in the figure are the result of adding the capacitor to the circuit. When the input initially goes high, the capacitor acts like a short circuit around the base resistor. As a result, the input signal is coupled directly to the base for a brief instant. This results in a high initial voltage spike being applied to the base, generating a high initial value of Ib. As the capacitor charges, Ib decreases to the point where the transistor is held just below the saturation point. When the input first goes negative, the charge on the speed-up capacitor briefly drives the base to –5 V. This drives the transistor quickly into cutoff. As soon as the capacitor discharges, the base voltage returns to 0 V. This ensures that the base-emitter junction is not heavily reverse biased. In this way, all of the desired criteria for reducing switching time are met. (3) See how that goes. If not good enough, we can see if we can add some regenerative drive next. LSTTL & even faster friends: Warning !!!!!!!!!!!! Looking in here, whence the diagram below came, is liable to result in you and your soldering iron and/or breadboard staying awake all night :-). Many good ideas. Can you do a Miller killer? :-). 
Note that low power Schottky uses Schottky diodes whereas the earlier Schottky TTL used Schottky transistors - an apparent step backwards.
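As a rough sanity check of the speedup-capacitor story above, here is a small Python sketch comparing the initial base-current spike with the settled drive. All component values here are my own assumptions chosen only for illustration, and V_BE is ignored for simplicity:

```python
# Rough sketch of the speedup-capacitor effect: at the input edge
# the capacitor looks like a short around the base resistor, so the
# base is briefly overdriven; once charged, the drive settles back
# to the value set by the base resistor. Values below are assumed.

V_IN = 5.0           # input step, volts (assumed)
R_BASE = 4.7e3       # base resistor, ohms (assumed)
R_SRC = 100.0        # driving-source resistance, ohms (assumed)
C_SPEEDUP = 100e-12  # speedup capacitor, farads (assumed)

# Initial spike: the cap shorts out R_BASE, so drive is limited
# only by the source resistance (V_BE ignored).
i_b_spike = V_IN / R_SRC

# Settled drive: the cap is fully charged, so the base current is
# set by R_BASE in series with the source resistance.
i_b_settled = V_IN / (R_SRC + R_BASE)

# The spike decays roughly with the time constant of the capacitor
# against R_SRC in parallel with R_BASE.
tau = C_SPEEDUP * (R_SRC * R_BASE) / (R_SRC + R_BASE)

print(f"initial spike : {i_b_spike * 1e3:.1f} mA")
print(f"settled drive : {i_b_settled * 1e3:.2f} mA")
print(f"spike decay   : {tau * 1e9:.1f} ns")
```

The spike-to-settled ratio (here roughly 50:1) is what sweeps charge into and out of the base quickly while keeping the steady drive just below saturation.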
{ "source": [ "https://electronics.stackexchange.com/questions/23349", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4861/" ] }
23,378
My road map for learning electronics included the 7400 series logic chips. I started in on electronics by following the labs in the "Art of Electronics" lab manual, which includes labs with these chips. I ended up building several custom Microchip PIC and Atmel microcontroller boards before doing these particular labs. Now I am eyeballing FPGAs and getting excited to try one of those out. Should I leave the 7400 series behind, or is an understanding of them considered fundamental to understanding the more modern programmable logic chips? Are some of the 7400 series still used in new (good) designs for simple stuff? Are there still particularly useful 7400 series chips that get used all the time? I guess it wouldn't take long just to do the 7400 series labs, but I just wanted a sense of how obsolete they are, since I had such a difficult time sourcing the parts. I couldn't find some, and I ended up spending way more money than I thought was acceptable.
Don't think for one minute that just because you have an FPGA, learning about the 74xx is obsolete. For designing with FPGAs you must be able to 'see' the logic working in your head at a discrete gate level (you will learn this skill from discrete logic chips: 74xx, CMOS 40xx).

Programming an FPGA is NOT like writing a computer program; it looks like it is, but only idiots will tell you it is. You will see many, many people on the net talk about how their FPGA design is big or slow. In reality they just don't understand how to think at a true multiprocessing, parallel, gate level, and they end up serial-processing most of what they try to do. This is because they just crack open the design tools and start programming like they are writing 'C' or 'C++'.

In the time it takes to compile a design for an FPGA on a home computer, you can breadboard a simple logic design in 74xx.

Using an FPGA for a design, you MUST work with simulators rather than with the 'hard' FPGA. That is to say, if your 74xx design is malfunctioning you can fiddle with the connections; with an FPGA you must re-write, re-run a simulation, and then spend upwards of 30 minutes re-compiling the FPGA design.

Stick with the 74xx or 40xx range; build some adders, shifters, and LED flashers with gating. Once you are used to seeing discrete chips, it becomes easier when working with the massive 'blob' that is an FPGA.
{ "source": [ "https://electronics.stackexchange.com/questions/23378", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1900/" ] }
23,645
There are many tutorials that use a pull-up or pull-down resistor in conjunction with a switch to avoid a floating input, e.g. http://www.arduino.cc/en/Tutorial/button Many of these projects use a 10K resistor, merely remarking that it is a good value. Given a particular circuit, how do I determine the appropriate value for a pull-down resistor? Can it be calculated, or is it best determined by experimentation?
Quick Answer: Experience and experimentation are how you figure out the proper pull-up/pull-down value. Long Answer: The pull-up/down resistor is the R in an RC timing circuit. The speed at which your signal will transition depends on R (your resistor) and C (the capacitance of that signal). Often C is hard to know exactly, because it depends on many factors, including how that trace is routed on the PCB. Since you don't know C, you cannot calculate exactly what R should be. That's where experience and experimentation come in. Here are some rules of thumb when guessing at a good pull-up/down resistor value: For most things, 3.3k to 10k ohms works just fine. For power-sensitive circuits, use a higher value; 50k or even 100k ohms can work for many applications (but not all). For speed-sensitive circuits, use a lower value; 1k ohms is quite common, while values as low as 200 ohms are not unheard of. Sometimes, like with I2C, the "standard" specifies a specific value to use. Other times the chip's application notes might recommend a value.
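To put rough numbers on the speed/power trade-off described above, here is a small Python sketch of the RC charging math. The supply, logic threshold, and node capacitance values are assumptions picked only for illustration:

```python
# A pulled-up line charges toward VCC through R with node capacitance C:
#   v(t) = VCC * (1 - exp(-t / (R*C)))
# Solving for the time to cross a logic-high threshold V_TH gives
#   t = -R * C * ln(1 - V_TH / VCC)
import math

VCC = 3.3        # supply voltage, volts (assumed)
V_TH = 2.0       # logic-high input threshold, volts (assumed)
C_NODE = 20e-12  # trace + pin capacitance, farads (a guess)

def rise_time(r_pullup):
    """Time for the node to charge from 0 V to V_TH through r_pullup."""
    return -r_pullup * C_NODE * math.log(1 - V_TH / VCC)

def idle_current(r_pullup):
    """Current wasted while the line is held low against the pull-up."""
    return VCC / r_pullup

for r in (1e3, 10e3, 100e3):
    print(f"{r/1e3:>5.0f} k: rise {rise_time(r)*1e9:7.1f} ns, "
          f"idle {idle_current(r)*1e6:7.1f} uA")
```

A bigger R charges the (unknown) node capacitance more slowly but wastes less current whenever the line is pulled low, which is exactly the trade-off behind the rules of thumb above.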
{ "source": [ "https://electronics.stackexchange.com/questions/23645", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17697/" ] }
23,986
I am taking a first stab at designing a PCB from scratch. I am considering using a CNC mill fabrication process, and it seems like with this process I would want to remove as little copper as possible. A copper-pour-style ground plane would seem to be a good way to address this constraint. But I have noticed that relatively few PCB designs have a ground plane, and even those that do often have them only in specific areas of the board. Why is that? Are there reasons not to have a copper-pour ground plane that covers most of a PCB? In case it's relevant, the circuit I am designing is a 6-bit D/A converter plug. A first cut at my PCB layout (which does not include a ground plane) is shown below.
Ground planes in general are almost always a good thing, but if used incorrectly can actually hurt the quality of your board. A typical board like you have here would have 1 layer dedicated to be a ground pour only with no traces running on it. However, it sounds like you are wanting to make your top layer have a ground pour so that you don't have to remove all of that extra copper. Doing a ground pour on a layer with a lot of traces is not really a ground plane at all, rather you can think of it as a ground trace with varying sizes running all around your board. It is hard to say if it will actually hurt the signal integrity of the design, but I can say for certain that it will not provide the same benefit that a ground plane will. Typically when I see milled boards like this, the copper will be left unconnected on the unused areas of board. This provides a benefit of knowing that if you accidentally short one line to the unused copper, you don't get a hard short to ground that can kill some ICs. This can also be a negative though as accidentally shorting to a large unused piece of copper can turn into a nice antenna and pick up noise that you may have a hard time hunting the source of. I realize my answer may not be a direct answer to what you are wanting to know, but it is very difficult to predict what configuration will be best for you. But, if it were my design, I would go ahead and just leave the extra copper on the board, but leave it disconnected from everything.
{ "source": [ "https://electronics.stackexchange.com/questions/23986", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1317/" ] }
24,061
I have a basic circuit that uses a photoresistor powered by a five volt source. I had made this project to show my son various sensors, and had used a circuit I found online. It looks something like this: simulate this circuit – Schematic created using CircuitLab The only way I could explain this is that the resistor would provide a safe path to ground so that current would not flow into and hurt the analog sensor (leaving just "voltage" to read from the photoresistor). But I am not sure its point is protection. I've looked at examples of pull-up/pull-down resistors; however, those seem to be for preventing a logic input from "floating". It appears this resistor would not do so in this circuit, as it is a continuously variable voltage supply. What do I call this resistor's purpose?
It's not for protection, it's to form a voltage divider with the photocell. For a typical photocell, the resistance may vary between, say, 5 kΩ (light) and 50 kΩ (dark). Note that the actual values may be quite different for your sensor (you'll need to check the datasheet for those). If we leave the resistor out, the analog input will see 5 V either way (assuming an analog input of a high enough impedance not to affect things significantly). This is because there is nothing to sink the current and drop voltage. No Resistor Let's assume the sensor is connected to an opamp with an input resistance of 1 MΩ (pretty low as opamps go; it can be hundreds of MΩ). When there is no light shining on the photocell and its resistance is at 50 kΩ, we get: $$ 5~\mathrm{V} \times \frac{1~\mathrm{M}\Omega}{1~\mathrm{M}\Omega + 50~\mathrm{k}\Omega} = 4.76~\mathrm{V} $$ When there is light shining on the photocell and its resistance is at 5 kΩ, we get: $$ 5~\mathrm{V} \times \frac{1~\mathrm{M}\Omega}{1~\mathrm{M}\Omega + 5~\mathrm{k}\Omega} = 4.98~\mathrm{V} $$ So you can see it's not much use like this: it only swings ~200 mV between light and dark. If the opamp's input resistance were higher, as it often will be, you could be talking a few µV. With Resistor Now if we add the other resistor to ground, it changes things. Say we use a 20 kΩ resistor. 
We are assuming any load resistance is high enough (and the source resistance low enough) not to make any significant difference, so we don't include it in the calculations (if we did, it would look like the bottom diagram in Russell's answer). When there is no light shining on the photocell and its resistance is at 50 kΩ, we get: $$ 5~\mathrm{V} \times \frac{20~\mathrm{k}\Omega}{20~\mathrm{k}\Omega + 50~\mathrm{k}\Omega} = 1.429~\mathrm{V} $$ When there is light shining on the photocell and its resistance is at 5 kΩ, we get: $$ 5~\mathrm{V} \times \frac{20~\mathrm{k}\Omega}{20~\mathrm{k}\Omega + 5~\mathrm{k}\Omega} = 4.0~\mathrm{V} $$ So you can hopefully see why the resistor is needed in order to translate the change of resistance into a voltage. With load resistance included Just for thoroughness, let's say you wanted to include the 1 MΩ load resistance in the calculations from the last example. To make the formula easier to see, let's simplify things. The 20 kΩ resistor will now be in parallel with the load resistance, so we can combine them both into one effective resistance: $$ \frac{20~\mathrm{k}\Omega \times 1000~\mathrm{k}\Omega}{20~\mathrm{k}\Omega + 1000~\mathrm{k}\Omega} \approx 19.6~\mathrm{k}\Omega $$ Now we simply replace the 20 kΩ in the previous example with this value. Without light: $$ 5~\mathrm{V} \times \frac{19.6~\mathrm{k}\Omega}{19.6~\mathrm{k}\Omega + 50~\mathrm{k}\Omega} = 1.408~\mathrm{V} $$ With light: $$ 5~\mathrm{V} \times \frac{19.6~\mathrm{k}\Omega}{19.6~\mathrm{k}\Omega + 5~\mathrm{k}\Omega} = 3.98~\mathrm{V} $$ As expected, not much difference, but you can see how these things may need to be accounted for in certain situations (e.g. with a low load resistance; try running the calculation with a load of 10 kΩ to see a big difference).
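The divider arithmetic above can be wrapped into a small helper function. This is just a restatement of the same formula, using the same example values (photocell 5 kΩ–50 kΩ, 20 kΩ to ground, optional 1 MΩ load in parallel with the bottom leg):

```python
def divider_out(v_supply, r_photo, r_bottom, r_load=None):
    """Voltage at the junction of the photocell (top) and bottom resistor.

    If r_load is given, it appears in parallel with the bottom resistor,
    as in the 'with load resistance included' case above.
    """
    if r_load is not None:
        r_bottom = (r_bottom * r_load) / (r_bottom + r_load)
    return v_supply * r_bottom / (r_bottom + r_photo)

print(divider_out(5, 50e3, 20e3))       # dark:  ~1.43 V
print(divider_out(5, 5e3, 20e3))        # light:  4.0 V
print(divider_out(5, 50e3, 20e3, 1e6))  # dark with 1 M load: ~1.41 V
```

Running the dark case with a 10 kΩ load instead of 1 MΩ shows the big drop the answer warns about for low load resistances.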
{ "source": [ "https://electronics.stackexchange.com/questions/24061", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/7181/" ] }
24,610
I am looking for a free circuit simulator for educational purposes. My requirements are:
1. Visual ("draw a circuit diagram, click simulate")
2. It should contain light bulbs as circuit components such that
   2.1. they become (visually) brighter if you apply more power
   2.2. you can change the manufacturer specs, for example "3.5 V, 0.2 A"
3. It should contain switches, NPN transistors, diodes and LEDs as well (the LEDs should react to interactive changes in the simulation)
Any recommendations for this? It would be nice if the simulator runs under Linux, but that's not a strict requirement.
I often use the falstad simulator: http://www.falstad.com/circuit It's a Java applet, so it will work on pretty much any operating system. The interface does take a bit of getting used to, and there are problems saving in Linux (it gives you a link to copy and paste, and copy and paste in Java doesn't work too well in Linux). Other than that, it ticks all your boxes. It also has some good sample circuits. A Windows version (circuitmod) is based on this.
{ "source": [ "https://electronics.stackexchange.com/questions/24610", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/7373/" ] }
25,181
I am curious if there are practical differences between a DC power supply based on a half-wave rectifier or a full-wave rectifier. I mean I have a few small DC power supply units which should give 12 V 0.1 A each. They all have a transformer 240V->18V, then 1 diode or 4 diodes, then a 78L12 (0.1 A regulator) and one or two capacitors (typically 220 uF or 470 uF). My question is if the power supply can give a good quality DC voltage with just a half-wave rectifier (a single diode) when a 470 uF capacitor and a 78L12 are added, or if a bridge rectifier (4 diodes) is better. I also have one old 12 V 0.2 A power supply based on a Zener diode instead of a 7812 regulator. It also has 18 V going to just a single diode, then a 33R resistor which limits the current to 0.2 A, then a Zener diode in parallel with a 1000 uF capacitor. Again: would it be better to have 4 diodes there, or is the half-wave rectification good enough here thanks to the 1000 uF capacitor? (All my power supplies work well; I am just curious "why" and "how" these things work.) Update: I found two more interesting pieces of information: The capacitor should be approximately 500 uF for each 0.1 Ampere of output (or more). This applies to a full-wave rectifier. Since I saw the same values in half-wave rectifiers, it isn't enough there and they are a bad design. 4-diode rectification cannot be used when we want to have a combined 5V/12V output (or any other two voltages) with a simple transformer, because it can't provide a common ground for the two circuits. (A more complicated real example: I have got a power supply with four output wires from the transformer: -7/0/+7/+18 V. It uses 2-diode rectification to get a full-wave 7 V output, and 1-diode rectification to get a half-wave 18 V output. The 18 V line can't be "upgraded" to 4-diode rectification here.)
Either can work correctly if designed properly. If you have a dumb rectifier supply feeding a 7805, then all the rectifier part needs to do is guarantee that the minimum input voltage to the 7805 is met. The problem is that such a power supply only charges up the input cap at the line-cycle peaks; the 7805 then drains it between the peaks. This means the cap needs to be big enough to still supply the minimum 7805 input voltage, at the worst-case current drain, for the maximum time between the peaks. The advantage of a full-wave rectifier is that both the positive and negative peaks are used. This means the cap is charged up twice as often. Since the maximum time since the last peak is less, the cap can be smaller to support the same maximum current draw. The downside of a full-wave rectifier is that it takes 4 diodes instead of 1, and one more diode drop of voltage is lost. Diodes are cheap and small, so most of the time a full-wave rectifier makes more sense. Another way to make a full-wave rectifier is with a center-tapped transformer secondary. The center is connected to ground and there is one diode from each end to the raw positive supply. This full-wave rectifies with only one diode drop in the path, but requires a heavier and more expensive transformer. An advantage of a half-wave rectifier is that one side of the AC input can be directly connected to the same ground as the DC output. That doesn't matter when the AC input is a transformer secondary, but it can be an issue if the AC is already ground-referenced.
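A back-of-envelope way to compare the two rectifiers is the classic ripple estimate ΔV = I·Δt/C, where Δt is the time between charging peaks (once per cycle for half-wave, twice per cycle for full-wave). A small Python sketch, assuming 50 Hz mains and the 0.1 A / 470 µF values from the question, and treating the load as a constant current drain between peaks:

```python
# Ripple estimate dV = I * dt / C: the cap drains at constant current
# I for the time dt between charging peaks. Half-wave recharges once
# per mains cycle, full-wave twice, so half-wave ripple is doubled.

F_MAINS = 50.0      # Hz (assumed; 60 Hz mains gives proportionally less)
I_LOAD = 0.1        # A, from the question
C_FILTER = 470e-6   # F, from the question

dt_half = 1 / F_MAINS        # 20 ms between peaks for half-wave
dt_full = 1 / (2 * F_MAINS)  # 10 ms between peaks for full-wave

ripple_half = I_LOAD * dt_half / C_FILTER
ripple_full = I_LOAD * dt_full / C_FILTER

print(f"half-wave ripple: {ripple_half:.2f} V")  # ~4.26 V
print(f"full-wave ripple: {ripple_full:.2f} V")  # ~2.13 V
```

With an 18 V transformer feeding a 78L12 (which needs a couple of volts of headroom above 12 V), even the ~4 V half-wave ripple still leaves the regulator's minimum input voltage satisfied, which is why the single-diode supplies in the question can get away with it.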
{ "source": [ "https://electronics.stackexchange.com/questions/25181", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/7575/" ] }
25,308
On a printed circuit board, I see lots of tiny letters and numbers. Is there some kind of standard that dictates what letter indicates what type of component?
The technical term for the markings is "reference designators" (aka "refdes"), and there are a few standards that can define them. Take a look at this Wikipedia page for a quick overview: http://en.wikipedia.org/wiki/Electronic_symbol http://blogs.mentor.com/tom-hausherr/blog/tag/reference-designator/ For schematic components, most EDA tools start off with one or a few letters and then a sequential number. For example, R1 for the first resistor, C1 for the first capacitor, IC1 for the first IC and so on. You can download a free EDA tool such as Eagle to play around with. Also, see the Wikipedia page for a few more examples. For PCB footprints, different vendors do make naming-convention suggestions. See Altium's suggestions here, for example. Edit: I do NOT know anyone personally that refers to this as a strict standard, or a standard at all. It's mostly what you are used to and familiar with.
{ "source": [ "https://electronics.stackexchange.com/questions/25308", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/7612/" ] }