source_id (int64, 1–4.64M) | question (string, 0–28.4k chars) | response (string, 0–28.8k chars) | metadata (dict) |
---|---|---|---|
280,446 | I am trying to create an arc generator and I have read about the Marx generator, but I am looking into more compact modules like the image below. All the ones I have found seem to be fake and actually supply less than 1/10th of what they are advertising. Is there any reliable way of generating a (non-continuous) super-high-voltage arc? | Is it really possible to “boost” 6 V DC to above 50 kV? Or even 400 kV? Of course. One common example of something similar (although not as extreme as your specs) is using 12 V in a car to make several tens of kV to fire the spark plugs. The same concept can be scaled up to make higher output voltages. It won't be easy to build something with that step-up ratio and output voltage yourself, but the physics is certainly possible. | {
"source": [
"https://electronics.stackexchange.com/questions/280446",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/57889/"
]
} |
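As a quick sanity check on the answer above, here is the step-up arithmetic in Python (only the 6 V input comes from the question; the rest is plain arithmetic):

```python
# Required step-up ratios for the voltages discussed above.
v_in = 6.0                   # supply voltage from the question (V)
for v_out in (50e3, 400e3):  # advertised output voltages (V)
    ratio = v_out / v_in
    print(f"{v_out / 1e3:.0f} kV from {v_in:.0f} V needs a ~{ratio:,.0f}:1 step-up")
# 50 kV  -> ~8,333:1
# 400 kV -> ~66,667:1, a hint at why tiny "400 kV" modules are usually overstated.
```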
280,457 | I'm looking for a concise term to describe something. Suppose I'm making a gizmo which has application-specific circuitry but also requires substantial computing power. Not wanting to reinvent the wheel, I decide to incorporate a pre-made board, call it PCB "B", which is a single-board computer like a UDOO x86 or Raspberry Pi, for said computing power. I design a larger PCB, call it PCB "A", which has "B" attach as a mezzanine board via pin headers, stand-offs, etc. PCB "A" has the application-specific circuits and maybe other stuff such as a power supply, connectors to the outside world, etc. What exactly do you call PCB "A"? Once upon a time, the physical relationship of "A" and "B" would have qualified "A" as a motherboard. Problem is, since the ubiquity of PCs, that term now carries specific connotations; in particular, it would tend to imply the CPU and computer chipset are on "A" when really they're on "B". Mainboard has the same problem, as it's generally understood as a synonym of motherboard. Backplane is not really applicable either, because "A" is not just a bus interconnection board. Various SBCs have their own terms like shield or cape to describe daughter cards, but while "A" may use some of the same pin headers, using these terms to describe "A" seems to misrepresent it as being diminutive to "B". Is there some generally understood term for PCB "A" that doesn't carry the wrong connotation? (For lack of anything better I might call it the "application board", but I prefer to use standard terminology where possible.) | What exactly do you call PCB "A"? It's a motherboard, and the PCB that mounts on it is called a daughterboard. I hear what you say about the implication of it being mistaken for a PC motherboard, but it still gets called a motherboard in my book. PC motherboards have not cornered the exclusive use of the term. | {
"source": [
"https://electronics.stackexchange.com/questions/280457",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/25711/"
]
} |
280,673 | Why does hardware division take so much longer than multiplication on a microcontroller? E.g., on a dsPIC, a division takes 19 cycles, while multiplication takes only one clock cycle. I went through some tutorials, including the Division algorithm and Multiplication algorithm articles on Wikipedia. Here is my reasoning. A division algorithm, like the restoring slow-division method on Wikipedia, is a recursive algorithm. This means that (intermediate) results from step k are used as inputs to step k+1, which means that these algorithms cannot be parallelized. Therefore, it takes at least n cycles to complete the division, where n is the number of bits in the dividend. For 16-bit dividends, this is equal to at least 16 cycles. A multiplication algorithm doesn't need to be recursive, which means that it is possible to parallelize it. However, there are many different multiplication algorithms, and I don't have a clue which one may be used by microcontrollers. How does multiplication work in hardware on a microcontroller? I've found the Dadda multiplier algorithm, which is supposed to take only one clock cycle to finish. However, what I don't get here is that Dadda's algorithm proceeds in three steps, where results from step 1 are used in step 2, etc. According to this, this would take at least three clock cycles to finish. | A divider maps much less elegantly to typical hardware. Take Lattice ICE40 FPGAs as examples. Let us compare two cases: this 8x8-bit to 16-bit multiplier: module multiply (clk, a, b, result);
input clk;
input [7:0]a;
input [7:0]b;
output reg [15:0] result;
always @(posedge clk)
result <= a * b;
endmodule // multiply and this divider that reduces two 8-bit operands to an 8-bit result: module divide(clk, a, b, result);
input clk;
input [7:0] a;
input [7:0] b;
output reg [7:0] result;
always @(posedge clk)
result <= a / b;
endmodule // divide (Yes, I know, the clock doesn't do anything.) An overview of the generated schematic when mapping the multiplier to an ICE40 FPGA can be found here, and the divider here. The synthesis statistics from Yosys are: multiply: 155 wires, 214 wire bits, 4 public wires, 33 public wire bits, 0 memories, 0 memory bits, 0 processes, 191 cells (SB_CARRY 10, SB_DFF 16, SB_LUT4 165). divide: 145 wires, 320 wire bits, 4 public wires, 25 public wire bits, 0 memories, 0 memory bits, 0 processes, 219 cells (SB_CARRY 85, SB_DFF 8, SB_LUT4 126). It's worth noting that the sizes of the generated Verilog for a full-width multiplier and a maximally-dividing divider aren't that extreme. However, if you look at the pictures below, you'll notice the multiplier has maybe a depth of 15, whereas the divider looks more like 50 or so; the critical path (i.e., the longest path that can occur during operation) is what defines the speed! You won't be able to read the labels anyway, but you can get a visual impression, and I think the differences in complexity are possible to spot. These are single-cycle multipliers/dividers! Multiply: Multiply on an ICE40 (warning: ~100 Mpixel image). Divide: Divide on an ICE40 (warning: ~100 Mpixel image). | {
"source": [
"https://electronics.stackexchange.com/questions/280673",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/117580/"
]
} |
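To make the question's point about the sequential nature of restoring division concrete, here is a minimal Python sketch of the algorithm (an illustration of the Wikipedia method the asker cites, not the Verilog above): each of the n iterations consumes the remainder produced by the previous one, which is exactly what stretches the divider's critical path.

```python
def restoring_divide(dividend: int, divisor: int, n_bits: int = 8):
    """Unsigned restoring division: one dependent iteration per quotient bit."""
    remainder, quotient = 0, 0
    for i in range(n_bits - 1, -1, -1):             # MSB first
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        remainder -= divisor                         # trial subtraction
        if remainder < 0:
            remainder += divisor                     # restore on underflow
            quotient <<= 1                           # quotient bit = 0
        else:
            quotient = (quotient << 1) | 1           # quotient bit = 1
    return quotient, remainder

assert restoring_divide(200, 7) == (200 // 7, 200 % 7)
```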
280,855 | Why did x86 designers (and those of other CPU architectures as well) decide not to include a NAND instruction? It is a logic gate that can be used to build all other logic gates, so it would be fast as a single instruction. Rather than chaining NOT and AND instructions (both can be built from NAND), why is there no NAND instruction? | http://www.ibm.com/support/knowledgecenter/ssw_aix_61/com.ibm.aix.alangref/idalangref_nand_nd_instrs.htm : POWER has NAND. But generally modern CPUs are built to match automated code generation by compilers, and bitwise NAND is very rarely called for. Bitwise AND and OR get used more often for manipulating bitfields in data structures. In fact, SSE has AND-NOT but not NAND. Every instruction has a cost in the decode logic and consumes an opcode that could be used for something else. Especially in variable-length encodings like x86, you can run out of short opcodes and have to use longer ones, which potentially slows down all code. | {
"source": [
"https://electronics.stackexchange.com/questions/280855",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/57861/"
]
} |
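To illustrate the answer's point that NAND is cheap to synthesize from the instructions compilers already emit, here is the two-operation equivalent sketched in Python (the 32-bit register width is an assumption for the example):

```python
MASK = 0xFFFFFFFF  # assumed 32-bit register width

def nand(a: int, b: int) -> int:
    t = a & b           # op 1: AND
    return ~t & MASK    # op 2: NOT, masked to the register width

assert nand(0b1100, 0b1010) == (~0b1000) & MASK  # bit 3 is the only 0 in the result
```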
281,920 | I need to compare a signal to a constant voltage; the signal ranges from 0 to 30mV, and I require a response time of 50ns at 250µV difference. The signal is a triangle wave with a slew rate in the range of a few mV/µs. When having a look at the comparators offered by TI, they start at an offset voltage of 750µV, with 10ns comparators starting at 3000µV. When, however, looking at the list of opamps, those start at 1µV offset voltage, with 100MHz amplifiers starting at 100µV. It's strongly encouraged to use comparators, not op-amps, for comparing signals, so the only option I'm seeing is to pre-amplify my signal with a precision, high-speed op-amp, then use a comparator. However, this sounds wrong. If this is possible, then why don't chip makers offer this as a monolithic solution? | High speed with a small difference is difficult to get. Note that not only do comparators tend to have higher input offset voltages than opamps, but also much higher effective noise, as to get high speed they are wideband beasts. Oliver Collins produced a paper a couple of decades ago showing that you get much better results, that is, less time jitter, if you precede a fast comparator with one or more low-noise, low-gain opamp stages, each with single-pole filtering on the output, to increase the slew rate stage by stage. For any given input slew rate and final comparator, there is an optimum number of stages, gain profile, and selection of RC time constants. This means that the initial opamps are not used as comparators, but as slope amplifiers, and consequently they do not need the output slew rate or GBW product that would be required for the final comparator. An example is shown here, for a two-stage slope amplifier. No values are given, as the optimum depends on the input slew rate. However, compared to using the output comparator alone, almost any gain profile would be an improvement. If you used, for example, a gain of 10, followed by a gain of 100, that would be a very reasonable place to start experimenting. simulate this circuit – Schematic created using CircuitLab Obviously the amplifiers will spend a lot of their time in saturation. The key to sizing the RC filters is to choose a time constant such that the time it takes the amplifier to get from saturated to midpoint, at the fastest input slew rate, is doubled by the chosen RC. The time constants obviously decrease along the amplifier chain. The RCs are shown as real filters after the opamp, not a C placed across the feedback gain resistor. This is because this filter continues the high-frequency attenuation of noise at 6dB/octave to arbitrarily high frequencies, whereas a capacitor in the feedback loop stops filtering when the frequency gets to unity gain. Note that using RC filters increases the absolute time delay between the input crossing the threshold and the output detecting it. If you want to minimise this delay, then the RCs should be omitted. However, the noise filtering afforded by the RCs allows you to get better repeatability of the delay from input to output, which manifests itself as lower jitter. It's only the input opamp that needs high performance in terms of noise and offset voltage; the specs of all the subsequent amplifiers can be relaxed by its gain. Conversely, the first amplifier does not need as high a slew rate or GBW as the subsequent amplifiers.
The reason that this structure isn't provided commercially is that the performance is so rarely required, and the optimum number of stages is so dependent on the input slew rate and the specifications required, that the market would be tiny and fragmented, and not worth going after. When you need this performance, it's better to build it from the blocks you can get commercially. Here's the front of the paper, in IEEE Transactions on Communications, Vol. 44, No. 5, May 1996, starting page 601, and a summary table showing what performance you get as you change the number of stages of slope amplification, and the gain distribution of the stages. You'll see from Table 3 that for the specific case of wanting 1e6 slope amplification, while the performance does continue to improve above 3 stages, the bulk of the improvement has already occurred with only 3 stages. | {
"source": [
"https://electronics.stackexchange.com/questions/281920",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/108171/"
]
} |
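Some back-of-envelope numbers for the two-stage slope amplifier described above (a sketch only: the 2 mV/µs input slew is taken from the question's "few mV/µs", and the 2 V saturation-to-midpoint swing is an invented value, since the answer deliberately gives none):

```python
slew = 2e-3 / 1e-6   # assumed input slew rate: 2 mV/us, in V/s
swing = 2.0          # assumed swing from saturation to midpoint (V)

for stage, gain in enumerate((10, 100), start=1):   # the suggested gain profile
    slew *= gain                  # each stage multiplies the slope
    t_slew = swing / slew         # time from saturated to midpoint (s)
    rc = t_slew                   # an RC of ~t_slew roughly doubles that time
    print(f"stage {stage}: slew {slew:.1e} V/s, RC ~ {rc * 1e6:.1f} us")
# The RC time constants shrink along the chain, as the answer notes.
```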
281,921 | Is it possible to use a PCB pad as a button? I'm thinking of using it to switch on a circuit that is only supposed to be enabled when the user holds the device in their hands. As inspiration, I used the pads that are being used on soft-touch buttons on keyboards or in calculators: I know that the human body has a quite high resistance, so what would be an appropriate circuit to detect the touch input? Bare hardware only. I don't want to use any microcontroller here. | | {
"source": [
"https://electronics.stackexchange.com/questions/281921",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/69869/"
]
} |
281,972 | I saw a battery charger that converts 220V AC to 6V DC without a transformer. Now I am wondering why many (if not all) power adapters use a transformer. Is it about efficiency, or about drift over time? Update: This circuit is inside this torch | The power supply you've found in this device is of a type known as a capacitive dropper. (More information in the Wikipedia article "Capacitive power supply".) The primary reason why you don't see this type of power supply often is simple: it is unsafe. This is because one leg of the AC power supply must, by necessity, be connected directly to the circuit. Ideally this should be the neutral leg, but it is difficult to guarantee this -- badly wired outlets, or non-polarized plugs, may result in part of the circuit being energized by the hot leg of the AC supply. | {
"source": [
"https://electronics.stackexchange.com/questions/281972",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/100268/"
]
} |
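For scale, a rough capacitive-dropper sizing in Python (illustrative assumptions: 220 V, 50 Hz mains from the question, a 20 mA load, and a load voltage small enough to neglect):

```python
import math

v_mains = 220.0   # RMS mains voltage (V)
f = 50.0          # mains frequency (Hz)
i_load = 0.020    # assumed target load current (A)

xc = v_mains / i_load               # required capacitive reactance (ohm)
c = 1 / (2 * math.pi * f * xc)      # dropper capacitance (F)
print(f"Xc = {xc:.0f} ohm -> C = {c * 1e6:.2f} uF (use an X2-rated capacitor)")
# ~0.29 uF; the capacitor limits the current without dissipating power,
# but the circuit still sits directly on the mains, hence "unsafe".
```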
282,346 | Everyone seems to have different definitions everywhere I look. According to my lecturer: \$ R_{bit} = \frac{bits}{time} \$ \$ R_{baud} = \frac{data}{time} \$ According to manufacturers: \$ R_{bit} = \frac{data}{time} \$ \$ R_{baud} = \frac{bits}{time} \$ Which is the correct one, and why? Feel free to give the origins of why it is defined as such too. Related question: link. | Baud rate is the rate of individual bit times or slots for symbols. Not all slots necessarily carry data bits, and in some protocols, a slot can carry multiple bits. Imagine, for example, four voltage levels used to indicate two bits at a time. Bit rate is the rate at which the actual data bits get transferred. This can be less than the baud rate because some bit time slots are used for protocol overhead. It can also be more than the baud rate in advanced protocols that carry more than one bit per symbol. For example, consider the common RS-232 protocol. Let's say we're using 9600 baud, 8 data bits, one stop bit, and no parity bit. One transmitted "character" looks like this: Since the baud rate is 9600 slots/second, each time slot is 1/9600 seconds = 104 µs long. The character consists of a start bit, 8 data bits, and a stop bit, for a total of 10 bit time slots. The whole character therefore takes 1.04 ms to transmit. However, only 8 actual data bits are transmitted during this time. The effective bit rate is therefore (8 bits)/(1.04 ms) = 7680 bits/second. If this were a different protocol that, for example, used four voltage levels to indicate two bits at a time with the baud rate held the same, then there would be 16 bits transferred per character. That would make the bit rate 15,360 bits/second, actually higher than the baud rate. | {
"source": [
"https://electronics.stackexchange.com/questions/282346",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/135453/"
]
} |
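The answer's RS-232 arithmetic, reproduced in Python for checking:

```python
baud = 9600                # symbol slots per second
slots_per_char = 10        # 1 start + 8 data + 1 stop
data_bits_per_char = 8

t_char = slots_per_char / baud            # 1.04 ms per character
bit_rate = data_bits_per_char / t_char    # effective data rate
print(f"t_char = {t_char * 1e3:.2f} ms, bit rate = {bit_rate:.0f} bits/s")  # 7680

# Four voltage levels = 2 bits per slot on the 8 data slots -> 16 bits/char:
print(f"multi-level bit rate = {16 / t_char:.0f} bits/s")  # 15360 > 9600 baud
```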
282,384 | I've always found circuits containing LEDs hard to understand, so please bear with me. I know most people find it easy, but I'm confused by them, so some of my assumptions might not be correct; please correct me if that's the case. So onto the question: Since LEDs are, after all, diodes, they essentially act as conductors with a forward voltage, right? Which is why we need a series resistor to regulate the current that flows through the circuit. For example, let's say we have an LED with a Vf of 2 V and an operating current of 20 mA. (I think those numbers are reasonable, right? Again, if not, please let me know.) And our power supply is a constant 4V. This means we need the resistor to drop 2 V at 20 mA, so it would be a 100 Ω resistor, with 40 mW dissipated in it. That's a tiny power usage, but half of the power supplied is wasted as heat. So in this case, isn't the best-case efficiency 50%? Which isn't really efficient in terms of DC power supplies, I would have thought. So when people refer to LEDs' high efficiency, are they referring to the fact that the LEDs themselves convert the power they use into light efficiently, or is it considered efficient even after considering the 50% max wall-plug efficiency? Or is it just that I've given an example that happens to be a horrible circuit design that would never be found in production applications? | You seem to be getting confused between the efficiency of the LED and the efficiency of the circuit to drive the LED. In terms of light output per unit of energy used by the LED, they are an efficient way to generate light. In absolute terms they aren't great: they are around 10% [1] efficient in that respect; however, that is still far better than the ~1-2% of a conventional incandescent bulb. But what of that power wasted in the resistor? A series resistor is the simplest way to drive an LED, but it is far from the only way to do so. Even sticking to a resistor, what if we put 20 of your 2V LEDs in series and supply them with 45V? Now you are using 45*0.02 = 900mW, of which 800mW is going into the LEDs and only 100mW (11%) is being used by the series resistor. But we can make it even more efficient: the reason for the resistor is that the LEDs need a constant current and most electronics are designed to supply a constant voltage. The easiest way to convert from one to the other (assuming a constant load) is to throw in a series resistor. You can get constant-current power supplies. If you use one of those to drive your LED then the resistor can be eliminated and you can get well over 90% of your total system power going into the LEDs. For a home project or a simple indicator on a signal, a resistor is a lot cheaper and simpler, but if you are driving a lot of LEDs then the logical choice is to pay a bit more, have a slightly more complex circuit and use a dedicated constant-current LED driver IC. As noted in comments, 10% is a good ballpark for current household lighting and probably also about correct for cheap commodity LEDs using older processes. Newer single-colour parts can achieve significantly higher levels of efficiency. | {
"source": [
"https://electronics.stackexchange.com/questions/282384",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/35926/"
]
} |
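The answer's series-string arithmetic, reproduced in Python:

```python
v_led, i = 2.0, 0.020   # per-LED forward voltage (V) and current (A)

# One LED from the question's 4 V supply:
v_sup = 4.0
p_resistor = (v_sup - v_led) * i
print(f"1 LED:   {p_resistor / (v_sup * i):.0%} of input power lost in the resistor")

# Twenty LEDs in series from 45 V, as in the answer:
n, v_sup = 20, 45.0
p_total, p_leds = v_sup * i, n * v_led * i      # 900 mW in, 800 mW to the LEDs
print(f"20 LEDs: {(p_total - p_leds) / p_total:.0%} lost in the resistor")  # ~11%
```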
282,398 | I am looking at a portable inverter that claims to be able to output two modes through its Type B socket: 120V AC or 150V "HVDC". The documentation says the 150VDC mode can be used for resistive loads or for switched-mode power supplies, avoiding conversion loss from the inverter. How can I tell whether a black-box device normally rated for 120VAC can take 150VDC? Some SMPS have filters before rectification and I'm concerned the filter will short / not work on DC. | | {
"source": [
"https://electronics.stackexchange.com/questions/282398",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/123812/"
]
} |
282,972 | I recently bought two 3300 µF 100 V capacitors and have connected them in parallel. I charge them up to 100 V and discharge them. I then hook up a multimeter and notice the voltage going up very slowly, about 0.01 volts every 20-40 seconds. So I discharge the capacitors and the voltage goes back to zero. When I woke up this morning, I checked the capacitors and the voltage had gone up to 5 volts! And I am able to power an LED with them. What is going on here? Edit: Thanks to Robert's comment in one of the answers, I think he's right. This is probably dielectric absorption. | What you've observed is called "dielectric absorption" or the "recovery voltage phenomenon". It's caused by a kind of inertia of the dipoles (ions) in the electrolyte while charging and discharging. From Wikipedia: Dielectric absorption is the name given to the effect by which a capacitor, that has been charged for a long time, discharges only incompletely when briefly discharged. Although an ideal capacitor would remain at zero volts after being discharged, real capacitors will develop a small voltage from time-delayed dipole discharging, a phenomenon that is also called dielectric relaxation, "soakage", or "battery action". For some dielectrics, such as many polymer films, the resulting voltage may be less than 1–2% of the original voltage, but it can be as much as 15% for electrolytic capacitors. Further: When the capacitor is discharging, the strength of the electric field is decreasing and the common orientation of the molecular dipoles is returning to an undirected state in a process of relaxation. Due to the hysteresis, at the zero point of the electric field, a material-dependent number of molecular dipoles are still polarized along the field direction without a measurable voltage appearing at the terminals of the capacitor. This is like an electrical remanence. From a Mouser note (7. Recovery Voltage): Where a capacitor is once charged and discharged with both of the terminals short-circuited and the terminals are then left open for a while, a voltage across the capacitor spontaneously increases again. This is called the "recovery voltage phenomenon". The mechanism for this phenomenon can be interpreted as follows: When charged with a voltage, the dielectric produces some electrical changes within, and then the inside of the dielectric is electrified with the opposite polarities (dielectric polarization). The dielectric polarization proceeds in both fast and slow modes. When a charged capacitor is discharged until the voltage across the capacitor disappears, and the terminals are then left open, the slow polarization will discharge within the capacitor and appear as recovery voltage (Fig. 28). | {
"source": [
"https://electronics.stackexchange.com/questions/282972",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/137314/"
]
} |
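A toy numerical model of the effect, to show how a few percent of "soakage" reproduces the overnight 5 V observation (all values here are invented for illustration; a single slow time constant is a gross simplification of a real electrolytic):

```python
import math

v_charge = 100.0   # original charging voltage (V), from the question
k_da = 0.05        # assumed dielectric-absorption fraction (5%)
tau = 3600.0       # assumed slow-polarization time constant (s)

for t_hours in (0.1, 1.0, 8.0):
    v_rec = v_charge * k_da * (1 - math.exp(-t_hours * 3600 / tau))
    print(f"after {t_hours:4.1f} h: recovery voltage ~ {v_rec:.1f} V")
# With these numbers the terminals drift back up to ~5 V overnight.
```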
283,564 | In my digital logic lecture today, my professor introduced a symbol code called Patty Code. I copied this table off the whiteboard: | Symbol | Binary | Odd Patty | Even Patty |
|--------+--------+-----------+------------|
| a | 00 | 100 | 000 |
| b | 01 | 001 | 101 |
| c | 10 | 010 | 110 |
| d | 11 | 111 | 011 | I did try Googling for both "patty code" and "patti code" with nothing substantial. I asked my professor after lecture what Patty Code is. He said it's used occasionally. I asked him if it's another name for excess-3 or Gray code, to which he said it is different from both of those codes. Is my professor yanking my class's chain, or is this actually a real code? EDIT I want to leave a record that the professor that gave me this misconception has a relatively strong accent. He is very knowledgeable and always gives me a satisfactory answer to EE-related questions. So I finished that class, and am happy with the result. Hopefully nobody else has to Google this question! | Parity. The word is parity. Hopefully it was just misheard. | {
"source": [
"https://electronics.stackexchange.com/questions/283564",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/137631/"
]
} |
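The table is easy to reproduce once "Patty" is read as "parity"; a quick Python check (prepending the parity bit, as the whiteboard table does):

```python
def with_parity(bits: str, odd: bool) -> str:
    ones = bits.count("1")
    # Add a 1 only when the data bits alone don't already have the target parity.
    p = "1" if (ones % 2 == 0) == odd else "0"
    return p + bits

for sym, bits in zip("abcd", ("00", "01", "10", "11")):
    print(sym, bits, with_parity(bits, odd=True), with_parity(bits, odd=False))
# a 00 100 000 / b 01 001 101 / c 10 010 110 / d 11 111 011 -- matches the table
```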
283,569 | Do I need UL approval (or approval from any test lab) if I build a machine and use it in my own company? Where can I find legitimate documentation on this subject? | | {
"source": [
"https://electronics.stackexchange.com/questions/283569",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/137634/"
]
} |
283,760 | "RS" in communication standards RS232 and RS485 stands for "Recommended Standard". But what information does "232" or "422" or "485" convey in the name? What naming convention is used for numbers succeeding the letters "RS" when naming the RS standards? | It's the document serial number of the standard. Same reason why the HTTP protocol is also known as RFC2616 and the Javascript programming language is also known as ECMA262. The numbers themselves have no meaning. For example while EIA232 specifies the electrical characteristics of a digital serial communications system, EIA222 specifies standards for antenna masts and RS225 is a standard for RF connectors. Wikipedia has an incomplete list of popular RS/EIA standards: https://en.wikipedia.org/wiki/EIA_standards | {
"source": [
"https://electronics.stackexchange.com/questions/283760",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/77072/"
]
} |
283,765 | I'm adding an input bias current compensation network to the positive op-amp input of my band-pass multiple-feedback filter. I'm setting the source impedance for the positive input pin equal to that of the negative pin. Should I be matching the impedance at DC or at the frequency of interest? | | {
"source": [
"https://electronics.stackexchange.com/questions/283765",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/116719/"
]
} |
284,401 | Transformers have hundreds of turns on both the primary and secondary windings, and as a result use very thin copper wire for each. But why do they not just use fewer turns on each winding and get the same voltage ratio? More importantly, why not use fewer turns of a thicker wire for an increased VA?
(Instead of 1000:100 turns of 22 AWG wire, why not 100:10 turns of 16 AWG wire, if this would increase VA?) | When you apply voltage to the primary winding of a power transformer, some current will flow, even when the secondary is open circuit. The amount of this current is determined by the inductance of the primary coil. The primary must have a high enough inductance to keep that current reasonable. For 50 or 60 Hz power transformers, this inductance is pretty high, and you typically cannot get there with a small number of turns in the winding. | {
"source": [
"https://electronics.stackexchange.com/questions/284401",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/138078/"
]
} |
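A rough illustration of the answer in numbers (Python; the core's A_L value and the 230 V / 50 Hz supply are invented for the example, and core saturation is ignored):

```python
import math

f, v = 50.0, 230.0   # assumed mains frequency (Hz) and primary voltage (V)
al = 5e-6            # assumed core A_L: inductance per turn^2 (H)

for n_turns in (1000, 100):
    l = al * n_turns ** 2                  # L grows as N^2 for a given core
    i_mag = v / (2 * math.pi * f * l)      # no-load magnetizing current (A)
    print(f"N = {n_turns:>4}: L = {l:5.2f} H, I_mag ~ {i_mag:6.2f} A")
# 10x fewer turns -> 100x less inductance -> 100x the idle current
# (and in practice the core would saturate long before that).
```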
284,404 | I'm building a specific circuit I found here: http://makingcircuits.com/blog/2015/07/48v-inverter-circuit.html Part 1: I'm pulling my hair out trying to find a transformer like the one shown in the diagram. I'm pretty new to building circuits from diagrams, so maybe I've got this wrong, but it seems to be asking for a type of transformer which is grounded in the middle of the primary coil. Specifically it's asking for a 36v-0-36v 1000VA transformer. Is this a specific type of transformer? If so, what is it called? Please provide a link if you are able to locate one, some alternative solution, or a correction for my misunderstanding. Part 2:
Also, I've modified the inverter to provide 400Hz rather than 50Hz... and I would like to get approximately 570V from the secondary coil... but I would settle for less. Should I just wind my own transformer? Regardless of this question, I still need the information in Part 1 answered. Thank you | | {
"source": [
"https://electronics.stackexchange.com/questions/284404",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/138077/"
]
} |
284,583 | I've recently started up a new project and have been looking for an audio codec. I was able to find a simple voice-band codec (here), which seems like it might work for my project. However, something that stuck out is that this datasheet was originally produced in 2001, which leads me to believe this chip has been around for a long time. So my question: are there any general methods for determining if an IC will soon become obsolete by the manufacturer? This is something I've never really put much thought into; however, it seems like it is a major item to consider when selecting components. I assume the answer will be dependent on the manufacturer, the IC itself (e.g. 555 timers will probably be around forever), and many other factors. I would like to get a 'best practices' answer. Thanks! | Manufacturer's life-cycle statements Most manufacturers have a section on their datasheets giving the status of the part. For example TI classify their parts as: PREVIEW: Device has been announced but is not in production. Samples may or may not be available. ACTIVE: Product device recommended for new designs. NRND: Not recommended for new designs. Device is in production to support existing customers, but TI does not recommend using this part in a new design. LIFEBUY: TI has announced that the device will be discontinued, and a lifetime-buy period is in effect. OBSOLETE: TI has discontinued the production of the device. So you would look for an active device, not any of the other categories. Other manufacturers use other systems, but they are generally similarly easy to understand. Choosing among active parts There's rarely any way of telling when a device is going to move from 'active' to one of the other classes. A very profitable part will never move, whereas a part which is developed and then doesn't sell well might get discontinued very quickly. But there are some extra things you can do if you want to make extra sure the part will remain available. Talk to the manufacturer. Maybe they'll tell you that the part might be moving to NRND in a couple of months. Or maybe they won't, because they aren't ready to publicly announce it, or because the sales guy you're talking to simply isn't in on the discussions about what to discontinue. If you're a big customer, you're more likely to get a good answer. If you're a very big customer, they might promise to keep making it if you keep buying. Buy a popular product. Manufacturers don't discontinue chips which are bringing in lots of money. If it's selling well, it'll be available for a while. Either that, or a drop-in replacement will appear. Look for chips which all the distributors have lots of stock of, or that are general "go-to" chips for a certain type. Consider second sourcing. Some chips are made by more than one manufacturer, e.g. LM317 voltage regulators. Prefer that chip over something more esoteric, and if TI stop making them (unlikely!) you can buy them from ON or Linear instead. Consider replacement beforehand. Maybe you want chip X, and it's available in package A, B or C. Chip X is a bit niche, and might get discontinued, and chip Y would do the job, but costs more. Chip Y comes in package B, C or D, and the C package is pin-compatible with chip X package C. So design around package C, buy chip X, and keep chip Y as a backup plan. | {
"source": [
"https://electronics.stackexchange.com/questions/284583",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/69481/"
]
} |
285,096 | Does a component exist that functions as a switch for multiple connections at once (3 in my case)? So power, ground, and data are coming in from the top. When the switch is in the left position (as pictured) they will get connected to the power, ground, and data lines on the left side. When the switch is flipped to the right position, all 3 lines (power, ground, and data) are moved over to their respective matches on the right side. Essentially it's like having 3 switches, one on each line, and flipping them all at the same time. | Each of your "lanes" is a "pole" in switch terminology. 1P (or SP), 2P (or DP), 3P, 4P... The number of ways you can connect those to outputs is a "throw" (T) - single being ST, double being DT, and after that, numbers: 3T, 4T... Some switches add "center-off." Single-throw switches are "off" one way and "on" the other, while DT are always on in one direction or the other UNLESS they are "center off." 3PDT will do. Input goes to common. I have some lovely old modem A/B switches that switch all 25 pins (though most do fewer pins, all 25 being overkill for the average modem), so those are 25PDT. | {
"source": [
"https://electronics.stackexchange.com/questions/285096",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/7002/"
]
} |
285,404 | I recently started working in a small company that produces automotive-diagnostics-related electronics. My boss, who is in his mid-50s, said that he was using 8051 derivatives, and they were doing the job. I went on to search Google to learn if the 8051 is still popular today. And on Quora, I saw these: ...it is probably the simplest MCU architecture around. Every peripheral seems like the bare-bones version. My concepts of timers, clocking, UART etc. cleared up a lot! And, I then began appreciating other architectures - because I actually understood the differentiation. ...obviously, it won't be used by any industry to develop a product because of its simplicity... But why? So far, at least as a student, I used to do a lot of things without messing with architecture. I happily coded in C, I used LCD peripherals, connected to other ICs with different protocols (SPI, I2C etc.) Why should I bother with the architecture of my microcontroller, apart from the limited fields of real-time & time-critical applications? | Are 8051 and other low-bit microcontrollers still in use today? Yes, nearly everywhere. They're small and easy, there are a lot of cores floating around that you can put into your custom silicon at low or no cost, and there are mature compilers. This all makes the 8051 still one of the most popular core architectures amongst silicon manufacturers. ARM cores might be available in more different products, but then again, when you talk to someone who's building a lot of devices at a very strict pricing constraint, chances are he's going to prefer a cheaper/free 8051 core if it gets the job done. Just to oppose @Nitro2k01's claim of niche-only usage: Mouser has nearly 800 models of 8051 microcontrollers in stock¹. And the fact that these start, even at Mouser, at prices below 40 ct might be an indication of what they're used for: mainstream, low-performance, high-volume MCUs. Thus: ...obviously, it won't be used by any industry to develop a product because of its simplicity... is high-quality utter nonsense. Especially since you're delivering a counter-example yourself: My boss, who is in his mid-50s, said that he was using 8051 derivatives, and they were doing the job. Exactly! They're used everywhere, they're well-proven and cheap, and they are sufficient; never underestimate the advantage of having a solution to a common problem in a drawer somewhere! Of course, it's often the case that you might need a solution with, let's say, two typical automotive buses, a high-speed interface to an ADC, some reliable watchdog timers, three PWM units... and then you start piecing together something consisting of four 8051 and 8080 derivatives... uh. That's a bad situation, and it could very likely be solved much faster and more reliably using a single, more versatile, more powerful MCU (e.g. an ARM). But that "we have company knowledge on how something works with old technology" vs "we are future-proof by having the ability to run on modern hardware" is a classic investment-security tradeoff. If you encounter one of those kinds of projects, I'd try to talk to the boss in that context. For easy small jobs, yeah, 8051. Should I bother to learn about MCU architectures in general? Yes! I think @jfkowes explains that very well.
But honestly: this is a bit like asking "should I learn how the internal combustion engine works if I want to be a car mechanic"; the answer is "you might just live fine if you can just execute repair manuals well enough, but you will probably be a much better technician (let alone engineer) if you understand what your hardware does." As soon as you face a problem that can't be Googled, you'd be pretty much a turtle on your back if you didn't roughly understand how your processor works. Should I bother to learn the 8051 architecture? Probably not. In the sense that, yes, as long as cost is not your primary focus, you can most likely just use much mightier and more versatile MCUs based on ARM cores or other, more modern architectures. Then again, the 8051 core is so easy that I'd actually recommend understanding what its units are before trying to tackle a more modern, complex MCU core. It's a nice example. So if the 8051 isn't the core I'm looking for in a low-volume application, what am I looking for? So, personally: go for an ARM Cortex-M0, -M3, -M4F; these are abundant in all kinds of affordable microcontrollers, easy to program (yay, mature GCC support, CMSIS standard libs, lots of embedded OSes running on these), and commonly come with standard debug interfaces (which is a great plus). ARMs are, from the outside, usually relatively easy to understand, as you'd typically map every peripheral into memory space, and that's it. Internally, they have varying degrees of sophistication, and speed/robustness/size optimizations, making them not perfectly easy to understand in detail, but I guess that might be a bit much to ask for unless you're into CPU design. If you're into CPU design, I think (this is really a personal belief based on my observation of research activities and "promised" industry investments) we're currently observing the rise of a new important ISA – the RISC-V. There are various implementations of this architecture for FPGAs or silicon, and people like Nvidia seem to also play with the thought of replacing their stream multiprocessors with these kinds of cores. ¹: It's very likely I'm missing more than half of the actual 8051s that Mouser has (because, hey, I just selected all MCUs whose core name was *80*5*). Chances are that if you pick a random 8-bit microcontroller, its core is at least partially derived from the 8051. I mean, just look at Wikipedia's "list of [8051] derivative vendors". | {
"source": [
"https://electronics.stackexchange.com/questions/285404",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/94481/"
]
} |
285,469 | I simulated this circuit from a reference design, but I'm not entirely sure how it's working or how you would go about designing such a thing. In simulation it looks like it's designed to hold the current through D1 constant at around 5mA despite having an input voltage range of up to 25V. I see the gate voltage for M1 is held at about 1.6V, and the base voltage for the BJT rises as the input voltage rises. So as the voltage rises, the current through the BJT increases, so it's acting like an adjustable impedance there, I guess, to hold the gate voltage constant. Is that right? Is this the kind of thing you just do in SPICE, or is it some kind of current mirror circuit that's well defined somewhere and I just don't recognize it? | This circuit is designed to provide a constant current to the LED independent of the supply voltage. The MOSFET is turned on by the voltage at the collector of Q1. As soon as the current through R1 (which is the same as through the LED) results in a drop of about 0.6V, Q1 will start to turn on and divert current through R2. This will then reduce the voltage at M1's gate to control the current through M1 and the LED. The negative feedback will stabilize the current through D1, M1 and R1 at about 5mA, as that will result in 0.6V at Q1's base. The current will vary slightly as the supply voltage varies, but much less than with just a resistor. It will also vary with temperature, as the Vbe of the transistor has a ~2.2mV/°C temperature coefficient. The same circuit can be used where M1 is a BJT (such as a 2N2222) rather than a MOSFET. The value of R2 will be more critical, as the transistor will require some base current through R2. | {
"source": [
"https://electronics.stackexchange.com/questions/285469",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/24564/"
]
} |
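A quick sketch of the feedback setpoint in Python. R1's value isn't given in the question, so 120 Ω is assumed here purely to reproduce the ~5 mA the asker observed:

```python
v_be = 0.6    # approximate Q1 base-emitter turn-on voltage (V)
r1 = 120.0    # assumed sense resistor (ohm)

i_led = v_be / r1   # the loop settles where the drop across R1 equals Vbe
print(f"I_LED ~ {i_led * 1e3:.1f} mA")   # 5.0 mA

# Temperature sensitivity from the answer's ~2.2 mV/deg Vbe drift:
print(f"dI/I ~ {0.0022 / v_be * 100:.2f} %/K")   # ~0.37 %/K
```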
286,006 | The WIRED YouTube video Inside Facebook's Quest to Beam the Internet Via Solar Drone and article Inside Facebook’s First Efforts to Rain Internet from the Sky show a dish antenna (starting after 02:00) with what looks like a Cassegrain secondary reflector. The context of the video and article suggests it is for testing E-band millimeter-wave data up/down link to an aircraft (about 60 to 90 GHz according to the article, or 5 to 3 millimeter wavelength). I noticed that the secondary mirror is spinning. By watching the wobble and checking individual frames it seems to be turning at at least 4 revolutions per second. It could be much faster and aliasing makes it look this slow. I can not think of any reason why this would be turning. It's rotating about the optical axis, so it's not switching between primary and secondary horn locations. Why is the reflector on this millimeter-wave antenna spinning? above: GIF made from extracted and cropped frames from this WIRED YouTube video . above: Right-click for larger view; Ground station for millimeter-wave data linking to aircraft, from WIRED . Photo credit Damon Casarez. | From what I can tell it's a conical scanning antenna. From my limited understanding, it allows precise targeting with a wider beam. Image Source Wikimedia Commons | {
"source": [
"https://electronics.stackexchange.com/questions/286006",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/102305/"
]
} |
286,683 | Here is a technical drawing for a 3.5mm stereo audio plug, taken from a product datasheet (http://www.tensility.com/pdffiles/50-00396.pdf): The right-hand end is a standard 3.5mm audio plug. The left-hand end also seems to be some sort of male plug. Into what is it designed to be plugged? | The ends of wires in a cable get soldered to the left end of that assembly. Then the assembly is placed into a mold and hot plastic is injected around the wire to form a handle for the end of the cable. The plastic molding often includes a flexible strain relief for the cable. Typical result: Oftentimes the best way to understand something is to take it apart. Here I show the wire attachment to a miniature jack after cutting the cable molding away. You can see that the wires are simply solder-tacked onto their appropriate pads. Note that the back ends of these things will vary from manufacturer to manufacturer. | {
"source": [
"https://electronics.stackexchange.com/questions/286683",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/64088/"
]
} |
286,688 | There are 3 stages that every VHDL program undergoes: parsing, analysis and elaboration. Elaboration is circuit instantiation once the top entity is specified. Analysis is something that determines available entities. What is it actually? It is probably an applied computer science question, but I see the vhdl tag specified here. | | {
"source": [
"https://electronics.stackexchange.com/questions/286688",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/15287/"
]
} |
286,938 | My understanding of resistance and voltage is horrible. I heard that with Kirchhoff's law (in my words, please correct), the voltage used by the circuit must equal the voltage supplied. For example, if I had a 9 V battery, I must use all 9 V of it. Let's say I have an LED with a typical forward bias voltage of 3.1 V, which means it loses 3.1 V while generating light. Will the LED burn out if 9 V is used? It is most likely true, but a nice example will really make my understanding more intuitive. | This is one of those situations where your problem is not how good you are at analysis or what base knowledge you might have, but simply that you have no clue what you don't know. This always makes the first step into electronics a very high one. In the case of your example, what don't you know about a battery? The terminal voltage of an ideal battery would never change (at least until all the energy storage capacity is used). So there must be factors that affect the terminal voltage and its useful energy capacity. A quick list is chemistry, volume of materials, temperature and anode/cathode design. A practical battery has limited capacity, and many of the other factors influencing terminal voltage and potential current capability can be rolled into a model element called 'internal resistance'. In the model for most larger batteries this will be fractions of an ohm. However, the battery also has other elements such as capacitance and inductance to make the situation more complex. You could start by reading about battery models with texts such as this. A great example of a larger battery with very small internal resistance is a 12 V car battery. Here, when you start the car it takes hundreds of amps (kW of power and current in the 600 A range) to turn over the motor, and the terminal voltage might drop from 13.8 V (a fully charged lead-acid car battery) to only 10 V when cranking. So the internal resistance might be (using Ohm's Law) only 6 milliohms or so. You can scale the thinking for this example to smaller batteries such as AA, AAA and C batteries and at least begin to understand the complexity of a battery. Now what don't you know about an LED? The complexity of the electrical model for a diode (whether just a rectifier or an LED) is immense. But we could simplify it here and say that at its most simple you can represent a diode by its bandgap voltage with a series resistor. You could start here by learning one of the many SPICE packages, and this discussion on StackExchange might be a good kickoff point. All semiconductor devices have a practical limitation in the amount of power they can dissipate. This is related primarily to the physical size of the device. The bigger the device, the more power it can typically dissipate. Now you can consider your LED. You should begin by trying to understand the datasheet for the device. While you won't understand many of the characteristics, you already know one (from your question), the forward voltage (Vf), and you could probably find the current limit and maximum power dissipation in the datasheet. Armed with those you could figure out the series resistance you need to limit the current so you don't exceed the power dissipation limit of the LED. Kirchhoff's Voltage Law gives you a big hint that since the voltage across the LED is about 3.1 V (and the datasheet current curve tells you you could never apply 9 V), you must need another lumped model component in the circuit.
simulate this circuit – Schematic created using CircuitLab Note: the battery internal impedance shown above is simply specified to make calculation easy. Depending on the battery type (primary or rechargeable) the internal resistance can vary. Check your battery data sheet. Could the unknown element above simply be a piece of wire (no element)? It could... but we can calculate the results easily. With two ideal voltage elements (9 V and 3.1 V), the resistors must have 5.9 V across them (Kirchhoff's voltage loop). The current flow must therefore be 5.9/10.1 = 584 mA. The power dissipated in the LED is (3.1 * 0.584) + (0.584^2 * 10) = 5.2 W.
Since your LED is probably rated at only 300 mW or so, you can see that it will heat up dramatically and in all probability fail within seconds. Now if the unknown element is a simple resistor, and we want the current through the LED to be, let's say, 20 mA, we have enough to calculate the value. The terminal voltage of the battery would be (9 - (0.02 * 0.1)) = 8.998 V.
The terminal voltage of the LED would be (3.1 + (0.02 * 10)) = 3.3 V. So the voltage across the unknown resistor is 5.698 V and the current through it is 20 mA. So the resistor is 5.698/0.02 = 284.9 ohms. Under these conditions the loop voltages balance and the LED passes its designed value of 20 mA. Its power dissipation is therefore ((3.3 * 0.02) + (0.02^2 * 10)) = 70 mW, hopefully well within the capability of a small LED. Hope this helps. | {
"source": [
"https://electronics.stackexchange.com/questions/286938",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/135121/"
]
} |
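The answer's arithmetic, verified in Python (0.1 Ω battery resistance and a 10 Ω lumped LED series resistance, as in the answer's model):

```python
r_bat, r_led_series, v_led, v_bat = 0.1, 10.0, 3.1, 9.0

# Unknown element = plain wire: current and LED dissipation
i = (v_bat - v_led) / (r_bat + r_led_series)
p_led = v_led * i + i ** 2 * r_led_series
print(f"i = {i * 1e3:.0f} mA, P_led = {p_led:.1f} W")   # ~584 mA, ~5.2 W

# Unknown element = resistor sized for 20 mA:
i = 0.020
v_r = (v_bat - i * r_bat) - (v_led + i * r_led_series)  # 8.998 V - 3.3 V
print(f"R = {v_r / i:.1f} ohm")                          # ~284.9 ohm
```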
287,705 | I am having a hard time answering one particular question about our experiment. In our experiment, R1 and R2 were set to 1Meg each and later to 10k... I understand the need for R1 and R2 a bit. Without R1 and R2, the voltage sharing wouldn't exactly be 50-50 for both D1 and D2, because no two diodes are completely identical. D1 and D2 will both have the same leakage currents (without R1 and R2) since they are just in series. However, they probably will have non-identical IV curves, so this particular leakage current will result in V@D1 /= V@D2. The question I am having a hard time with is: why is V@R1 + V@R2 /= 10v when R1 = R2 = 1Meg? On the other hand, those two voltages add up (to 10v) when R1 = R2 = 10k... I included the 60 ohm source resistance in my diagram for completeness. However, as I can see, both D1 and D2 are reverse biased and thus they offer a very large reverse resistance, which should be much greater than the 60 ohms. Even with the parallel combination of 1Meg and D1's reverse resistance, it should still be much greater than the 60 ohms. I tried thinking of an answer in terms of RD1reverse//R1 = Req1 and RD2reverse//R2 = Req2. Req1 + Req2 (series) should still be much more than 60 ohms, and I thought that the 10v should still show up at the node of D1's cathode. Yet in our experiment, V@R1 + V@R2 < 10v. Can anyone point out if I am thinking about this in the wrong way? Some tips or a first-step hint would really be appreciated. Edit: question answered thanks to @CL. Assuming D1 and D2 are open during reverse bias for simplicity and noting that Rmultimeter = 10Meg, V@R2 (shown on multimeter) = 10v * (1Meg//10Meg)/((1Meg//10Meg)+1Meg+60) = 4.76v measured. | The input impedance of your multimeter changes the circuit: With 10k resistors, the difference would not matter, but the 1M resistors pass so little current that the additional current through the multimeter has a noticeable effect. If you knew your multimeter's input impedance, you would be able to calculate the voltage that you would get without it. | {
"source": [
"https://electronics.stackexchange.com/questions/287705",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/139731/"
]
} |
287,710 | I am trying to optimize the power usage of a custom application (C#) running on an embedded computer (Intel i7, Windows 7). To do so, I am measuring the current that the system requires while running the application. This current fluctuates; every second I get a different reading using a regular multimeter (Fluke 175). This makes me wonder about the accuracy of the reading.
1) If I were to calculate the average of these readings, would this be a good indication of the power usage? Or would I miss high-frequency peaks/dips in the current?
2) Could this be solved by taking multiple measurements per second using some sort of current sensor with a microcontroller and averaging the values?
3) If so, what would be a good rate to take measurements at; 10 Hz? 1000 Hz?
4) Could adding an RC filter between the current sensor and the ADC of the microcontroller help in some way?
This is my first question on StackExchange, please let me know if I'm making any mistakes. Edit:
The goal is to get a realistic power consumption estimation so we can improve and predict battery life. | The input impedance of your multimeter changes the circuit: With 10k resistors, the difference would not matter, but the 1M resistors pass so little current that the additional current through the multimeter has a noticeable effect. If you knew your multimeter's input impedance, you would be able to calculate the voltage that you would get without it. | {
"source": [
"https://electronics.stackexchange.com/questions/287710",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/-1/"
]
} |
288,876 | If you look at the pinout for VGA, there are several ground pins: I was curious as to why, and I found this answer. To sum it up, the extra ground pins are so that each pin has its own ground in order to prevent interference in the analog signal. But here's a DVI-I connector that supports analog signals: The analog pins are on the right side. The big cross is ground, and the four smaller pins surrounding it are for the red, green, blue, and horizontal sync. What is interesting here is that the ground is shared by all three color channels, unlike VGA where each has its own. Why are the additional ground pins necessary to prevent signal interference when using VGA but not DVI-I? They're the same pins that send the same data, just with a different physical connector, so it doesn't really make much sense as to why the number of ground connectors is different. | First: What's critical isn't so much that there's a ground pin for each signal as much as that there's a ground pin near each color signal. The cross-shaped ground pin largely satisfies that requirement. Second: DVI doesn't prioritize high-quality analog video -- it's a Digital Video Interface, after all. The small loss of quality incurred by using a single analog ground pin was probably considered acceptable by the designers. | {
"source": [
"https://electronics.stackexchange.com/questions/288876",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/89186/"
]
} |
288,930 | I am playing with a speaker and a microcontroller, and in some documentation the speaker is called a buzzer. The assistant told us that is a mistake and we are dealing with a speaker. I am wondering what exactly the difference between the two is, where I can learn more about these kinds of devices, and whether there are any other similar or not-so-similar devices that are able to output sound. EDIT:
What is the difference between a piezo buzzer and an ordinary buzzer, and why isn't buzzer a tag while piezo-buzzer is? | A buzzer usually has an oscillating transistor circuit inside to make the buzzing noise when voltage is applied, so it makes a tone. Applying voltage to a speaker will not make a tone, so you'd need an external oscillating circuit (e.g. a 555 or a transistor oscillator). A speaker can play all kinds of sounds; however, due to its built-in circuits, a buzzer may not be capable of playing tones other than its oscillator's tone. Buzzers are usually piezo buzzers, based on a tiny bit of piezoelectric crystal inside. You can usually see a flat metallic surface there, with a larger box/circuit underneath for the built-in oscillator circuit. | {
"source": [
"https://electronics.stackexchange.com/questions/288930",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/115245/"
]
} |
289,474 | Sorry if this is the wrong community for this question but in my mind, it's the best fit. Please close or move to a more appropriate community if it's off topic. The other day, several home alarm companies came to look at my home to quote an alarm system. They all offered a wireless option, since they said the wired option would be super expensive. I asked if it was possible for someone to jam the wireless sensors and break in. One representative said no but didn't elaborate why. The other said no because the communication between the sensor and the main panel is encrypted so people can't jam it. I don't believe this is true but I don't have an EE degree. I think it's not true because I've heard on the news people have built cell phone jammers, so it probably isn't hard to jam these sensors too. I think these sensors operate in the 200 MHz range (if I remember correctly), if that matters, although he said there's some encryption going on between the panel and sensor. That confuses me because the encryption is digital but the communication is analog? | A "denial-of-service" wireless attack is very easy. It will disrupt radio communication between sensor and panel. Hopefully, the panel is smart enough to detect that one (or more) of its sensors has failed to report in. A non-reporting sensor should be assumed under attack. Ask your supplier what protocol is followed if your panel reports that a sensor has failed to report in. A much more difficult attack is a "spoof" attack, where the communication between sensor and panel is overpowered by an attacker with a valid message. An "all-OK" signal is very difficult for an attacker to generate because of encryption. Because these signals are regularly sent, the system is vulnerable to a determined attacker who is willing to capture signals over a long period. | {
"source": [
"https://electronics.stackexchange.com/questions/289474",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/35913/"
]
} |
289,714 | Wikipedia links to a Sallen-Key filter as an active low-pass, so I tried it out with LTSpice. The frequency response and phase response are not linear; instead, the frequency response even rises again above 10 kHz. Why is that, and why would I use a Sallen-Key filter instead of a "normal" low pass filter? The Sallen-Key is on the blue line. | What you call "normal" is a simple two-stage RC filter with very bad selectivity (two real poles only). In contrast, the Sallen-Key topology is capable of producing a second-order lowpass response with much better selectivity (higher pole Qp) and various possible approximations (Butterworth, Chebyshev, Thomson-Bessel,...). However, there is one big disadvantage of the Sallen-Key structure - if compared with other active filter topologies (multi-feedback, GIC-filters, state-variable,...): There is a direct path (in your example: C4) from the input network to the opamp output. That means: For frequencies much larger than the cut-off frequency the output voltage from the opamp is - as desired - very low. However, there is a signal coming directly through the C4 path which creates an output signal at the finite output resistance of the opamp. And this resistance is increasing with frequency! As a consequence, the damping characteristics of this filter are not as good as they should/could be. And that's what you have observed: The magnitude shows a rising characteristic for larger frequencies.
(This unwanted damping degradation is not caused by limitations of the gain-bandwidth product). Improvement: The situation can be improved by scaling the parts values: Smaller capacitors and larger resistor values. Comment 1: This undesired property of any opamp circuit with a feedback capacitor (between output and input circuitry) can be observed also for the classical MILLER integrator. Comment 2: So - are there any advantages the Sallen-Key filters have in comparison to other active filter structures? Yes - there are. Let's compare the two most frequently used topologies: (1) Sallen-Key has very low "active sensitivity" figures (sensitivity against opamp non-idealities) and rather high "passive sensitivity" figures (sensitivity against passive tolerances). (2) Multi-feedback filters (MF): High "active sensitivity" and low "passive sensitivity" figures. Both sensitivities are rather important properties of all filters because they determine the deviations between desired and actual filter response (under IDEAL conditions all filter types would have identical performance properties).
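As a side note, the ideal second-order response these sensitivities refer to is easy to compute. The C sketch below uses the textbook unity-gain Sallen-Key formulas with made-up component values (R1, R2, C1, C2 are illustrative, not taken from the questioner's circuit); note that the ideal math rolls off at 40 dB/decade forever, with no trace of the C4 feedthrough described above:
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double pi = 3.141592653589793;
    /* illustrative parts for a unity-gain Sallen-Key low-pass */
    const double r1 = 10e3, r2 = 10e3;
    const double c1 = 22e-9;  /* feedback capacitor */
    const double c2 = 10e-9;  /* capacitor to ground */

    double w0 = 1.0 / sqrt(r1 * r2 * c1 * c2);
    double q  = sqrt(r1 * r2 * c1 * c2) / (c2 * (r1 + r2));
    printf("f0 = %.0f Hz, Qp = %.2f\n", w0 / (2.0 * pi), q);

    /* ideal magnitude one decade above f0: should be ~ -40 dB */
    double x = 10.0;  /* w/w0 */
    double mag = 1.0 / sqrt((1.0 - x * x) * (1.0 - x * x) + (x / q) * (x / q));
    printf("|H| at 10*f0: %.1f dB\n", 20.0 * log10(mag));
    return 0;
}
Any measured deviation from that ideal roll-off at high frequency is the opamp's rising output impedance at work. | {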
"source": [
"https://electronics.stackexchange.com/questions/289714",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/103826/"
]
} |
290,059 | Most cars use an AC generator and then convert its voltage to DC via a bridge diode rectifier to charge the 12 V battery. Why wouldn't a DC dynamo be used instead? Is it because an AC dynamo gives better efficiency? (Even bike and wind-energy systems use an AC dynamo/turbine.) | No, it's not for efficiency reasons. DC generators typically have commutators, i.e. contacts with brushes that reverse the polarity of the voltage at the generator clamps every half rotation. In essence, DC generators are just AC generators that have a "mechanical" rectifier. You can build generators without any electrical contacts between moving parts, but you cannot build commutators without those. Since such contacts are very likely to fail under constant use, in dirty and vibrating environments, it's very desirable not to use them in cars. I'd also go as far as to say that unless you build a very expensive one, the losses in the brush contacts might be higher than what you lose over a bridge rectifier. | {
"source": [
"https://electronics.stackexchange.com/questions/290059",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/103430/"
]
} |
290,064 | While implementing an RF amplifier circuit, I came across the fact that micro-strip RF chokes are better for narrowband applications and easier to realize than a physical inductor. My question is whether it is possible to implement DC blocks using a micro-strip line. If it is possible, what are the formulae/relations I can use to implement DC blocks using micro-strip lines? Thank you! | No, it's not for efficiency reasons. DC generators typically have commutators, i.e. contacts with brushes that reverse the polarity of the voltage at the generator clamps every half rotation. In essence, DC generators are just AC generators that have a "mechanical" rectifier. You can build generators without any electrical contacts between moving parts, but you cannot build commutators without those. Since such contacts are very likely to fail under constant use, in dirty and vibrating environments, it's very desirable not to use them in cars. I'd also go as far as to say that unless you build a very expensive one, the losses in the brush contacts might be higher than what you lose over a bridge rectifier. | {
"source": [
"https://electronics.stackexchange.com/questions/290064",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/138577/"
]
} |
290,846 | Since I've spent a good portion of my career trying to keep op amps' rails as steady as possible at their intended voltage, I haven't really spent any time thinking about what would happen if the rails are moving away from a fixed value. Since I have only briefly studied the internal workings of op amps, I'm not so sure I could come up with a definite answer. So, what does happen to the signal if the rails are moving? (Let's just say they're moving slowly, like less than 5 Hz, maybe a 1 V shift from time to time.) Is it more than just clipping at different levels? | In theory, the OpAmp should perform well no matter what the supply is doing. As we leave the theoretical model of an OpAmp (remember there aren't even supply pins on the basic symbol, just IN+, IN- and OUT), we have to consider more and more details brought in by the real circuit. Many will of course be obvious to you, but trust me - we'll eventually get to an answer. First, the output can never exceed the voltage supplied to the Amp. Then, the performance gets worse when the output is trying to push or pull the voltage close to the rails. This will, of course, depend heavily on the design of the OpAmp - and Rail-to-Rail amps promise to give you all the available voltage at the output. As long as we look at a DC-supplied OpAmp, any signal well within the specification of the maximum output swing will work, and you can supply the OpAmp with any positive and negative voltages allowed by the data sheet (with regard to each other and to ground, but note that the OpAmp has no way of knowing where ground actually is; supplying +3 V and -7 V is no problem at all - and your amp will try to remain working within this range of 10 V). Internal current sources, differential stages and output drivers are designed such that the OpAmp cancels out any variations on the supply rails as quickly as it possibly can. Only if the variations on the supply rails are fast enough will you start to notice an effect. Usually, this sets in somewhere between some 100 Hz and some 10 kHz. And the best part: It's specified in the data sheet; look for PSRR (Power Supply Rejection Ratio). The value is usually very high for DC to low frequencies (60...120 dB) and starts to degrade with what looks like a simple low-pass characteristic above a certain point. Note that we're talking about rejection, so it's actually a high-pass even though the slope goes down on the diagram: Note that the text in the image says: ±15 V - so what is actually done to the OpAmp's supply pins? As with any good data sheet specification, there's also a test circuit that tells you how it's measured: This also explains why there are two lines in the diagram (-PSR and +PSR). The OpAmp's internal current sources, for example, are sometimes feeding their loads from the positive supply, sometimes into the negative supply, and the internal design is not absolutely symmetrical. Take the good ol' 741 as an example: Only the output stage at the very right is symmetrical, everything else is not. More advanced parts will still follow this basic principle to a certain degree. In a nutshell: For DC and low frequencies, look at the DC specifications (rail-to-rail with what limitations for gain and distortion?). For higher frequencies, look at the PSRR.
If you apply a step to the supply voltage, you have a mixture, because a step is composed of some high-frequency part besides the obvious jump from one DC level to another DC level, resulting in a disturbance at the output caused by whatever higher-frequency part of the step can't be rejected by the OpAmp. What I haven't covered here might be answered in Analog Devices' tutorial MT-043. This is also where I've taken the images from (except for the 741 circuit).
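If you want to estimate the feedthrough numerically, a single-pole model of the PSRR curve is usually good enough. The sketch below is a minimal illustration; the values (90 dB at DC, a 1 kHz corner, 100 mV of ripple) are made up, so read the real numbers from your opamp's data sheet:
#include <stdio.h>
#include <math.h>

/* single-pole model: rejection is flat up to f_corner, then falls 20 dB/decade */
static double psrr_db(double f, double psrr_dc_db, double f_corner)
{
    return psrr_dc_db - 10.0 * log10(1.0 + (f / f_corner) * (f / f_corner));
}

int main(void)
{
    const double ripple = 0.1;    /* 100 mV of supply ripple */
    const double psrr_dc = 90.0;  /* dB at DC, illustrative */
    const double fc = 1e3;        /* corner frequency, illustrative */
    const double freqs[] = { 10.0, 1e3, 10e3, 100e3 };

    for (int i = 0; i < 4; i++) {
        double rej = psrr_db(freqs[i], psrr_dc, fc);
        double vout = ripple / pow(10.0, rej / 20.0);
        printf("%8.0f Hz: PSRR %5.1f dB -> %7.2f uV at the output\n",
               freqs[i], rej, vout * 1e6);
    }
    return 0;
}
The output disturbance grows by a factor of ten per decade once you are past the corner, which is exactly the behavior the diagram above shows. | {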
"source": [
"https://electronics.stackexchange.com/questions/290846",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/96810/"
]
} |
290,897 | I use a 7805 for a project where the circuit needs a higher current (~2.8 A) at 5 V. So I assume that if I use two of these ICs in parallel I can increase the maximum current capacity. Would it work? | As others have already said, paralleling multiple linear voltage regulators is a bad idea. However, here is a way to effectively increase the current capability of a single linear regulator: At low currents, there is little voltage across R1. This keeps Q1 off, and things work as before. When the current builds up to around 700 mA, there will be enough voltage across R1 to start turning on Q1. This dumps some current onto the output. The regulator now needs to pass less current itself. Most additional current demand will be taken up by the transistor, not the regulator. The regulator still provides the regulation and acts as the voltage reference for the circuit to work. The drawback of this is the extra voltage drop across R1. This might be 750 mV or so at full output current of the combined regulator circuit. If IC1 has a minimum input voltage of 7.5 V, then IN must now be at 8.3 V or so minimum. A Better Way: Use a buck regulator already! Consider the power dissipated by this circuit, even in the best case scenario. Let's say the input voltage is only 8.5 V. That means the total linear regulator drops 3.5 V. That times the 2.8 A output current is 9.8 W. Getting rid of 10 W of heat is going to be more expensive and take more space than a buck switcher that makes 5 V from the input voltage directly. Let's say the buck switcher is 90% efficient. It is putting out (2.8 A)(5 V) = 14 W. That means it requires 15.6 W as input, and will dissipate 1.6 W as heat. That can probably be handled just by good part choice and placement without explicit heat sinking or forced air cooling.
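The power comparison in that last paragraph is easy to reproduce; here is a minimal C sketch using the same assumed numbers (8.5 V in, 5 V at 2.8 A out, 90% buck efficiency):
#include <stdio.h>

int main(void)
{
    const double v_in = 8.5, v_out = 5.0, i_out = 2.8;  /* figures from the text */
    const double eff_buck = 0.90;

    double p_load   = v_out * i_out;             /* 14 W delivered to the load */
    double p_linear = (v_in - v_out) * i_out;    /* heat in the linear pass element */
    double p_buck   = p_load / eff_buck - p_load;/* heat in a 90% efficient buck */

    printf("load power:         %.1f W\n", p_load);
    printf("linear dissipation: %.1f W\n", p_linear);  /* ~9.8 W */
    printf("buck dissipation:   %.1f W\n", p_buck);    /* ~1.6 W */
    return 0;
}
Roughly 9.8 W of heat for the linear approach versus 1.6 W for the buck, and the ratio only gets worse as the input voltage rises. | {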
"source": [
"https://electronics.stackexchange.com/questions/290897",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/49772/"
]
} |
291,313 | I saw many videos on YouTube in which people delid processors and then apply better liquids for cooling the processor. Example: i5 & i7 Haswell & Ivy Bridge - FULL Delid Tutorial - (Vice Method) However, I also saw that people working in fabs wear special suits, because the silicon wafers are extremely sensitive to all kinds of particles. What actually happens when delidding a processor? | Wafers are extremely sensitive during manufacture, because if any dust or dirt particle settles on one between process steps, then the following process steps will fail on the contaminated spot. Once manufacture is finished, and the chip receives its last layer, dust will no longer bother it. I would venture a guess that desktop CPUs which have thermal spreading lids on them will receive a proper surface treatment for application of the chosen thermal paste. | {
"source": [
"https://electronics.stackexchange.com/questions/291313",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/115245/"
]
} |
291,640 | Aluminum refineries use electricity to separate aluminum from minerals it naturally occurs in. This electricity typically takes the form of low voltage DC ("low" meaning 4 to 6 volts), at very high current (on the order of tens of kiloamps). This much power poses an electrocution danger, but I don't understand how. If the entire electrical system runs at, say, 5 volts, and the human body acts like a resistor, then how can enough current actually make it through a human body to be dangerous? Similarly, how can an electrical arc through air happen, if it takes hundreds of volts to arc over a very short distance? | The voltage for the Hall–Héroult process is inconveniently low (and the current too high) for efficient parallel operation so they use a whole bunch of cells in series. From this source ("Studies on the Hall-Heroult Aluminum Electrowinning Process"): The optimum current density is around 1 A/cm² with a total cell current of 150-300 kA and a cell voltage -4.0 to -4.5 V. A typical cell house will contain about 200 cells arranged in series on two lines. So the voltage at any given cell with respect to earth can be quite high, and the voltage across a cell if it opens up will be almost 1 kV. Currents like that will easily vaporize metal so they can sustain a very long arc if it opens up relatively slowly and does not have a blow-out mechanism (DC is worse than AC). To understand the efficiency issue, consider a simple full wave rectifier made with 6 silicon rectifiers. It will have a drop of (say) 2 V at full current, so the loss will be the output current x 2 V. At 150 kA that's 300 kW lost. If you run 200 cells in parallel you would be wasting 60 MW. Even at the cheap electricity prices that smelters pay, that will add up, on the order of perhaps 25-50 million dollars a year. In series, the loss is 'only' 300 kW. The capital cost is also much less to make 150 kA at 800 V vs. 30 MA at 4.5 V, because the latter would require far more rectifiers and heat sinking.
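The series-versus-parallel arithmetic can be checked with a few lines of C (the 2 V rectifier drop and 200-cell count are the illustrative figures quoted above):
#include <stdio.h>

int main(void)
{
    const double i_cell = 150e3;  /* cell current, A */
    const double v_rect = 2.0;    /* rectifier stack drop, V (illustrative) */
    const int    n_cells = 200;

    /* series string: one rectifier stack carries the whole line current once */
    double p_series = i_cell * v_rect;
    /* parallel cells: every cell needs its own full-current rectifier */
    double p_parallel = n_cells * i_cell * v_rect;

    printf("series loss:   %.0f kW\n", p_series / 1e3);    /* 300 kW */
    printf("parallel loss: %.0f MW\n", p_parallel / 1e6);  /* 60 MW */
    return 0;
}
300 kW versus 60 MW, which is the whole argument for running the cells in series. | {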
"source": [
"https://electronics.stackexchange.com/questions/291640",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/54453/"
]
} |
292,580 | Hello Guys. I need help identifying which pair of leads can be used for 110V. I tried a continuity test with a multimeter and got continuity on the following pairs: (1&3) and (2&4). Thanks in advance.
Cheers | It's not as obvious as it might be, but the label tells you to put line voltage onto pins 1 and 4, whichever voltage you want to run from. Then, for use on 230V, you link pins 2 and 3 together, putting the windings in series without cancelling each other out. Or, for use on 115V, you link pins 1 and 2 together, and pins 3 and 4 together. That way, the 115V windings act in parallel, again without cancelling each other out. Edit: This arrangement allows the use of a DPDT switch as a line voltage selector in a way that makes it safe to change while powered up, and there are plenty on the market suitably labelled (usually slide switches). Here's how you'd connect it up; it has a rather pleasing symmetry... | {
"source": [
"https://electronics.stackexchange.com/questions/292580",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/133537/"
]
} |
293,915 | I read somewhere that bad VHDL code can lead to FPGA damage. Is it even possible to damage an FPGA with VHDL code?
What kind of conditions would cause this and what are the worst-case scenarios? | Adding to @Anonymous's answer, there are designs you can build which can damage the fabric of an FPGA. For starters, if you build a very large design consisting of huge quantities of registers (e.g. 70% of the FPGA) all clocked near the FPGA's maximum frequency, it is possible to heat the silicon considerably. Without sufficient cooling this can cause physical damage. We lost a $13k FPGA because it overheated due to the dev-kit having a terrible cooling system. Another, simpler case is combinational loops. For example, if you instantiate three NOT gates chained together in a ring, and disable or ignore the synthesizer's warnings about such a structure, you can form something which is very bad for an FPGA. In this example you'd make a multi-GHz oscillator which could produce a lot of heat in a very small area, probably damaging the ALM and surrounding logic. | {
"source": [
"https://electronics.stackexchange.com/questions/293915",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/73491/"
]
} |
293,925 | I'd like to create an input module for an Arduino, which should read 64 digital inputs using (chained) PISO shift registers. My system voltage is 24V, so the best option would be a PISO shift register (with integrated isolation) that can "read 24V inputs directly" and protect the Arduino from overvoltage on those inputs. I found only one way - a K847PH with a CD4021 - but this will make my scheme much more complicated and it will rapidly increase the part count and PCB size, so I am looking for a (near) single-chip solution. Thank you. | Adding to @Anonymous's answer, there are designs you can build which can damage the fabric of an FPGA. For starters, if you build a very large design consisting of huge quantities of registers (e.g. 70% of the FPGA) all clocked near the FPGA's maximum frequency, it is possible to heat the silicon considerably. Without sufficient cooling this can cause physical damage. We lost a $13k FPGA because it overheated due to the dev-kit having a terrible cooling system. Another, simpler case is combinational loops. For example, if you instantiate three NOT gates chained together in a ring, and disable or ignore the synthesizer's warnings about such a structure, you can form something which is very bad for an FPGA. In this example you'd make a multi-GHz oscillator which could produce a lot of heat in a very small area, probably damaging the ALM and surrounding logic. | {
"source": [
"https://electronics.stackexchange.com/questions/293925",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/142912/"
]
} |
294,550 | I have been learning about Ohm's law and testing the resistance across the plug of my household appliances and calculating the current. For example, my kettle was 22 ohms (10.45 amperes) and is protected by a 13 A fuse. This makes sense, and I'm okay with it, but then I tested the vacuum cleaner, which had a resistance of 7.7 ohms. That equates to 29.8 amperes, which surely should blow the 13 A fuse, but it doesn't. I have now tested two different vacuum cleaners which have the same small resistance reading across the live and neutral. Surely this would be a direct short, but it works fine, so does the resistance change, or what? | The 7.7 ohms you measured is the winding resistance of the motor. But that is not the only factor that determines its operating current. Your vacuum cleaner might draw close to the calculated 30A the instant power is applied, but as soon as the motor starts to rotate, it generates a voltage that is proportional to speed (called back-EMF) that opposes the applied voltage, decreasing the net voltage available to drive current through the windings. As the motor speed increases, the current (and therefore the torque produced by the motor) decreases, and the speed settles at the point where the torque produced by the motor matches the torque required to drive the load at that speed. Fuses don't blow instantly. But if you were to lock the motor so it couldn't rotate, that fuse wouldn't last long.
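Here is a crude numeric sketch of that behavior, using a simple DC back-EMF model with the question's 7.7 Ohm winding and 230 V mains (a real vacuum motor is a universal motor, so treat the stepped back-EMF values as illustrative only):
#include <stdio.h>

int main(void)
{
    const double v_supply = 230.0;  /* mains voltage assumed in the 29.8 A figure */
    const double r_wind = 7.7;      /* measured winding resistance */

    /* I = (V - back_emf) / R; the back-EMF grows with motor speed */
    for (int k = 0; k <= 4; k++) {
        double back_emf = 55.0 * k;
        double i = (v_supply - back_emf) / r_wind;
        printf("back-EMF %5.1f V -> current %4.1f A\n", back_emf, i);
    }
    return 0;
}
At standstill (zero back-EMF) the current is indeed about 29.9 A; once the motor is up to speed, the back-EMF soaks up most of the supply and the steady current drops far below the fuse rating. | {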
"source": [
"https://electronics.stackexchange.com/questions/294550",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/143221/"
]
} |
294,940 | First of all let me just state that I do not feel confident enough to tell anyone anything about how electric circuits work or anything about the physics behind them, because I simply do not know or understand it all. But I have many times read that there has to be a closed path for current to flow in a circuit, leading to the fact that if there isn't a closed conductive loop nothing can happen. And I have taken that to be a definitive truth, but I wonder about something (and I might just as well be terribly far off the path of reason here). If I were to design a circuit board which contains traces through which very high frequency signals (currents) will flow, then I have to consider things like signal reflections. I don't know what reflections consist of in purely physical terms (but I have to imagine that a reflected signal is a certain amount of the current(s) that was originally sent through the trace), but apparently if I send a high-frequency signal down a trace (or wire) then under certain conditions the signal can travel down the trace (wire) only to bounce off of something and then travel all the way back to where it first came from. Where it might bounce off of something again, and so it can bounce back and forth, travelling the length of the trace over and over again, getting smaller and smaller until it dies out. This is just stuff from the top of my head, stuff that I have never acquired a fair understanding of in the first place. But if we restrict the scenario to this very high frequency situation, if a signal or current can be reflected back towards where it came from, then why would it even be relevant whether there is a closed loop or not? Couldn't a broken loop present paths for such currents to bounce around in? I know that I am at a relatively very low level of insight into these complex matters, but I don't know why that wouldn't be possible.
I would be very happy if anyone could enlighten me. I have one single hypothesis without anything whatsoever to support it, but perhaps the very high frequency scenario alters the way that a trace's copper is utilized, so that it in some respect is a closed loop in itself? | You are completely right. The "closed loop" rule comes from a simplification that we often use in circuit analysis called the "lumped component model". This model provides a good approximation to actual circuit behavior at DC and low frequencies, where the effects of parasitic inductance, capacitance and the speed of light can be ignored. However, these factors become significant at high frequencies and can no longer be ignored. Any circuit of nonzero size has inductance and capacitance, and is capable of radiating (or receiving) an electromagnetic wave. This is why radio works at all. Once you start considering parasitic capacitances, you'll discover that everything is connected to pretty much everything else (more so to nearby objects), and there are closed loops where you wouldn't normally expect to find them. | {
"source": [
"https://electronics.stackexchange.com/questions/294940",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/130414/"
]
} |
294,974 | When you have a diode with a certain barrier voltage (e.g., 0.7 V for Si) and you apply a voltage higher than this barrier potential, why does the voltage across the diode remain at 0.7 V? I understand that the output voltage across the diode will increase as a sinusoidal input is applied until it reaches the 0.7 V mark; however, I don't seem to understand why it remains constant after that point. It makes sense to me that any potential greater than this barrier potential will allow current to pass, and correspondingly, the potential across the diode should be the applied voltage minus the 0.7 V. | The voltage across the diode does not remain at about 0.7 V. When you increase the current, the forward voltage also increases (here: 1N400x): And when you increase the current even further, the power dissipation becomes too large, and the diode eventually becomes an LED (light-emitting diode) and shortly afterwards a SED (smoke-emitting diode). So a larger forward voltage cannot happen in practice.
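You can see the same thing from the textbook Shockley equation. The C sketch below inverts it to get Vf as a function of current; the saturation current and emission coefficient are illustrative values, not fitted 1N400x parameters:
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double i_s = 1e-9;     /* saturation current (illustrative) */
    const double n = 1.8;        /* emission coefficient (illustrative) */
    const double v_t = 0.02585;  /* thermal voltage near 300 K */

    /* Shockley: I = Is*(exp(V/(n*Vt)) - 1), inverted here for V */
    for (int k = -3; k <= 1; k++) {
        double i = pow(10.0, k);
        double v_f = n * v_t * log(i / i_s + 1.0);
        printf("I = %8.3f A -> Vf = %.2f V\n", i, v_f);
    }
    return 0;
}
With these numbers the forward voltage climbs by roughly 100 mV per decade of current: there is no hard 0.7 V ceiling, just a steep exponential. | {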
"source": [
"https://electronics.stackexchange.com/questions/294974",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/125564/"
]
} |
295,256 | I am working with the STM32F303VC discovery kit and I am slightly puzzled by its performance. To get acquainted with the system, I've written a very simple program simply to test out the bit-banging speed of this MCU. The code can be broken down as follows: the HSI clock (8 MHz) is turned on; the PLL is initialized with the multiplier of 16 to achieve HSI / 2 * 16 = 64 MHz; the PLL is designated as the SYSCLK; the SYSCLK is monitored on the MCO pin (PA8), and one of the pins (PE10) is constantly toggled in the infinite loop. The source code for this program is presented below:
#include "stm32f3xx.h"
int main(void)
{
// Initialize the HSI:
RCC->CR |= RCC_CR_HSION;
while(!(RCC->CR&RCC_CR_HSIRDY));
// Initialize the LSI:
// RCC->CSR |= RCC_CSR_LSION;
// while(!(RCC->CSR & RCC_CSR_LSIRDY));
// PLL configuration:
RCC->CFGR &= ~RCC_CFGR_PLLSRC; // HSI / 2 selected as the PLL input clock.
RCC->CFGR |= RCC_CFGR_PLLMUL16; // HSI / 2 * 16 = 64 MHz
RCC->CR |= RCC_CR_PLLON; // Enable PLL
while(!(RCC->CR&RCC_CR_PLLRDY)); // Wait until PLL is ready
// Flash configuration:
FLASH->ACR |= FLASH_ACR_PRFTBE;
FLASH->ACR |= FLASH_ACR_LATENCY_1;
// Main clock output (MCO):
RCC->AHBENR |= RCC_AHBENR_GPIOAEN;
GPIOA->MODER |= GPIO_MODER_MODER8_1;
GPIOA->OTYPER &= ~GPIO_OTYPER_OT_8;
GPIOA->PUPDR &= ~GPIO_PUPDR_PUPDR8;
GPIOA->OSPEEDR |= GPIO_OSPEEDER_OSPEEDR8;
GPIOA->AFR[0] &= ~GPIO_AFRL_AFRL0;
// Output on the MCO pin:
//RCC->CFGR |= RCC_CFGR_MCO_HSI;
//RCC->CFGR |= RCC_CFGR_MCO_LSI;
//RCC->CFGR |= RCC_CFGR_MCO_PLL;
RCC->CFGR |= RCC_CFGR_MCO_SYSCLK;
// PLL as the system clock
RCC->CFGR &= ~RCC_CFGR_SW; // Clear the SW bits
RCC->CFGR |= RCC_CFGR_SW_PLL; //Select PLL as the system clock
while ((RCC->CFGR & RCC_CFGR_SWS_PLL) != RCC_CFGR_SWS_PLL); //Wait until PLL is used
// Bit-bang monitoring:
RCC->AHBENR |= RCC_AHBENR_GPIOEEN;
GPIOE->MODER |= GPIO_MODER_MODER10_0;
GPIOE->OTYPER &= ~GPIO_OTYPER_OT_10;
GPIOE->PUPDR &= ~GPIO_PUPDR_PUPDR10;
GPIOE->OSPEEDR |= GPIO_OSPEEDER_OSPEEDR10;
while(1)
{
GPIOE->BSRRL |= GPIO_BSRR_BS_10;
GPIOE->BRR |= GPIO_BRR_BR_10;
}
} The code was compiled with CoIDE V2 with the GNU ARM Embedded Toolchain using -O1 optimization. The signals on pins PA8 (MCO) and PE10, examined with an oscilloscope, look like this: The SYSCLK appears to be configured correctly, as the MCO (orange curve) exhibits an oscillation of nearly 64 MHz (considering the error margin of the internal clock). The weird part for me is the behavior on PE10 (blue curve). In the infinite while(1) loop it takes 4 + 4 + 5 = 13 clock cycles to perform an elementary 3-step operation (i.e. bit-set/bit-reset/return). It gets even worse on other optimization levels (e.g. -O2, -O3, or -Os): several additional clock cycles are added to the LOW part of the signal, i.e. between the falling and rising edges of PE10 (enabling the LSI somehow seems to remedy this situation). Is this behavior expected from this MCU? I would imagine a task as simple as setting and resetting a bit ought to be 2-4 times faster. Is there a way to speed things up? | The question here really is: what is the machine code you're generating from the C program, and how does it differ from what you'd expect? If you didn't have access to the original code, this would've been an exercise in reverse engineering (basically something starting with: radare2 -A arm image.bin; aaa; VV), but you've got the code, so this makes it all easier. First, compile it with the -g flag added to the CFLAGS (same place where you also specify -O1). Then, look at the generated assembly: arm-none-eabi-objdump -S yourprog.elf Notice that of course both the name of the objdump binary as well as the name of your intermediate ELF file might be different. Usually, you can also just skip the part where GCC invokes the assembler and just look at the assembly file. Just add -S to the GCC command line – but that will normally break your build, so you'd most probably do it outside your IDE. I did the assembly of a slightly patched version of your code: arm-none-eabi-gcc
-O1 ## your optimization level
-S ## stop after generating assembly, i.e. don't run `as`
-I/path/to/CMSIS/ST/STM32F3xx/ -I/path/to/CMSIS/include
test.c and got the following (excerpt, full code under link above):
.L5:
ldr r2, [r3, #24]
orr r2, r2, #1024
str r2, [r3, #24]
ldr r2, [r3, #40]
orr r2, r2, #1024
str r2, [r3, #40]
b .L5 Which is a loop (notice the unconditional jump to .L5 at the end and the .L5 label at the beginning). What we see here is that we first ldr (load register) the register r2 with the value at the memory location r3 + 24 bytes. Being too lazy to look that up: very likely the location of BSRR. Then we OR the r2 register with the constant 1024 == (1<<10), which corresponds to setting the 10th bit in that register, and write the result to r2 itself. Then we str (store) the result in the memory location we read from in the first step, and then repeat the same for a different memory location, out of laziness: most likely BRR's address. Finally we b (branch) back to the first step. So we have 7 instructions, not three, to start with. Only the b happens once, and thus is very likely what's taking an odd number of cycles (we have 13 in total, so somewhere an odd cycle count must come from). Since all odd numbers below 13 are 1, 3, 5, 7, 9, 11, and we can rule out any numbers larger than 13-6 (assuming the CPU can't execute an instruction in less than one cycle), we know that the b takes 1, 3, 5, or 7 CPU cycles. Being who we are, I looked at ARM's documentation of instructions and how many cycles they take on the M3: ldr takes 2 cycles (in most cases); orr takes 1 cycle; str takes 2 cycles; b takes 2 to 4 cycles. We know it must be an odd number, so it must take 3 here. That all lines up with your observation: $$\begin{align}
13 &= 2\cdot(&c_\mathtt{ldr}&+c_\mathtt{orr}&+c_\mathtt{str})&+c_\mathtt{b}\\
&= 2\cdot(&2&+1&+2)&+3\\
&= 2\cdot &5 &&&+3
\end{align}$$ As the above calculation shows, there will hardly be a way of making your loop any faster – the output pins on ARM processors are usually memory mapped, not CPU core registers, so you have to go through the usual load – modify – store routine if you want to do anything with those. What you could of course do is not read (|= implicitly has to read) the pin's value every loop iteration, but just write the value of a local variable to it, which you toggle every loop iteration. Notice that I feel like you might be familiar with 8-bit micros, and would be attempting to read only 8-bit values, store them in local 8-bit variables, and write them in 8-bit chunks. Don't. ARM is a 32-bit architecture, and extracting 8 bits of a 32-bit word might take additional instructions. If you can, just read the whole 32-bit word, modify what you need, and write it back as a whole. Whether that is possible of course depends on what you're writing to, i.e. the layout and functionality of your memory-mapped GPIO. Consult the STM32F3 datasheet/user's guide for info on what is stored in the 32-bit word containing the bit you want to toggle. Now, I tried to reproduce your issue with the "low" period getting longer, but I simply couldn't – the loop looks exactly the same with -O3 as with -O1 with my compiler version. You'll have to do that yourself! Maybe you're using some ancient version of GCC with suboptimal ARM support.
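As an illustration of that advice, here is a minimal sketch of the loop with the read-modify-write removed, reusing the register and mask names from the question's code; it assumes BSRR/BRR behave as write-only set/reset registers for which a plain store suffices (check the reference manual and your header for your exact part):
/* drop-in replacement for the while(1) loop in the question's main() */
while(1)
{
    GPIOE->BSRRL = GPIO_BSRR_BS_10; /* plain str: hardware sets bit 10, no ldr/orr needed */
    GPIOE->BRR = GPIO_BRR_BR_10; /* plain str: hardware resets bit 10 */
}
That removes the two ldr and two orr instructions, leaving roughly two str plus the branch, on the order of 2 + 2 + 3 = 7 cycles per iteration instead of 13. | {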
"source": [
"https://electronics.stackexchange.com/questions/295256",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/143588/"
]
} |
295,629 | I'm guessing it's a Faraday cage around the receiver, but don't know why they might need one. Is there some sort of common interference around 38kHz (their operating frequency)? It's the only component I think I've used that gets this special treatment. A larger cage may be around one in a VCR,
and a little baby cage sometimes appears around the standalone PC mount component: Thanks for your insight! | [ added 2_D resistor_grid methodology for exploring shielding topologies ] You want that IR receiver to respond to photons, not to external electric fields. Yet the photodiode is a fine target for trash from fluorescent lights (200 volts in 10 microseconds) as the 4' tube has that restrike-the-arc action 120 times a second. [or 80,000 Hertz for some tubes] Using the parallel-plate model of capacitance, $$C = \varepsilon_0 \varepsilon_r \cdot \mathrm{Area}/\mathrm{Distance}$$
with a diode area of 3 mm × 3 mm and a distance of 1 meter, the capacitance is
$$C = 9\times10^{-12}\,\mathrm{F/m} \times 1\,(\mathrm{air}) \times \frac{0.003 \times 0.003\,\mathrm{m^2}}{1\,\mathrm{m}} \approx 10^{-11} \times 10^{-5} = 10^{-16}\,\mathrm{F}$$ What current from a fluorescent light, at a 20 million volts/second slew rate?
$$I = C\,\frac{dV}{dt} = 10^{-16}\,\mathrm{F} \times 2\times10^{7}\,\mathrm{V/s} = 2\,\mathrm{nA}$$ That ---- 2 nanoamps ---- apparently is a big deal (the edge rate, 10 us, is close to 1/2 period of 38 kHz). The metal cage protects by attenuating the Efield in an exponentially improving manner; thus the further the cage is in front of the photodiode, the more dramatic the Efield attenuation. Richard Feynman discusses this, in his 3-volume paperback on physics [I'll find a link, or at least a page #], in his lecture on Faraday cages and why the holes are acceptable IF the vulnerable circuits are spaced back several hole-diameters. [again, exponential improvement] Are other Efield trash sources near? How about digitally noisy logic0 and logic1 for LED displays; 0.5 volts in 5 nanoseconds, or 10^8 volts/second (standard bouncing of "quiet" logic levels, as MCU program activity continues). How about a switching regulator, inside the TV, regulating off the AC rail, with 200 volts in 200 nanoseconds, or 10^9 volts/second, at a 100 kHz rate. At 1 billion volts/second, we have 100 nanoamps of aggressor current. Of course, there should be no line-of-sight between a switchreg and the IR receiver, is there? Line-of-sight does not matter. The Efields explore all possible paths, including up-and-back-down or around-corners. simulate this circuit – Schematic created using CircuitLab HINT TO BEHAVIOR: the Efields explore all possible paths. ================================================
[it's worth getting these 3, and re-reading them every 5 years; also, the curious teenager will savor the real-world discussions in Feynman's style] and published in 3 paperback volumes as "The Feynman Lectures on Physics". From Volume II, focused on "mainly electromagnetism and matter", we turn to Chapter 7 "The Electric Field in Various Circumstances: Continued", and on pages 7-10 and 7-11, he presents "The Electrostatic Field of a Grid". Feynman describes an infinite grid of infinitely long wires, with wire-wire spacing of 'a'. He starts with equations [introduced in Volume 1, Chapt 50 Harmonics] that approximate the field, with more and more terms optionally usable to achieve greater and greater accuracy. The variable 'n' tells us the order of the term. We can start with "n = 1". Here is the summary equation, where 'a' is the spacing between grid wires: $$F_n = A_n\,e^{-z/z_0}$$ where $$z_0 = \frac{a}{2\pi n}$$ At a distance z = a above the grid, thus 3 mm above a grid spaced at 3 mm, and using only the "n = 1" part of the solution, we have
$$F_1 = A_1\,e^{-(2\pi \cdot 1 \cdot 3\,\mathrm{mm})/(3\,\mathrm{mm})} = A_1\,e^{-6.28}$$ Since this F1 is a factor of e^6.28 smaller than A1, we have rapid attenuation of the external electric field. With 2.718^2.3 = 10, 2.718^4.6 = 100, 2.718^6.9 = 1000, e^-6.28 is about 1/500 (1/533, from a calculator). Our external field of A1 has been reduced by 1/500, to 0.2%, or 54 dB weaker, 3 mm inside a grid spaced at 3 mm. How does Feynman summarize his thinking? "The method we have just developed can be used to explain why electrostatic shielding by means of a screen is often just as good as with a solid metal sheet. Except within a distance from the screen a few times the spacing of the screen wires, the fields inside a closed screen are zero. We see why copper screen---lighter and cheaper than copper sheet---is often used to shield sensitive electrical equipment from external disturbing fields." (end quote) Should you seek a 24-bit embedded system, you need 24*6 = 144 dB attenuation; at 54 dB per unit spacing, you need to be 3 wire-wire spacings behind the grid. For a 32-bit system, that becomes 32*6 = 192 dB, or nearly 4 wire-wire spacings behind the grid. Caveat: this is electrostatics. Fast Efields cause transient currents in the grid wires. Your mileage will vary. Notice we only used the "n = 1" part of the solution; can we ignore the additional parts of the harmonic/series solution? Yes. With "n = 2", we get the attenuation * attenuation, and "n = 3" yields atten * atten * atten. ================================================= EDIT To model more common mechanical structures, to determine the ultimate trash levels as an Efield couples into a circuit, we need to know (1) the impedance of the circuit at the aggressor frequency, and (2) the coupling from a 3_D trash aggressor to a 3_D signal chain node. For simplicity, we'll model this in 2_D, using the available grid_of_resistors simulate this circuit
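Feynman's n = 1 term is simple enough to tabulate yourself; this little C sketch reproduces the ~54 dB-per-spacing figure for a 3 mm grid:
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double pi = 3.141592653589793;
    const double a = 3e-3;  /* wire-wire grid spacing, 3 mm */

    /* n = 1 term: field ratio exp(-2*pi*z/a) at depth z behind the grid */
    for (int k = 1; k <= 4; k++) {
        double z = k * a;
        double ratio = exp(-2.0 * pi * z / a);
        printf("z = %2.0f mm: field x %.2e (%5.1f dB down)\n",
               z * 1e3, ratio, -20.0 * log10(ratio));
    }
    return 0;
}
Each extra unit of spacing buys roughly another 55 dB, which is where the 3-spacings-for-144-dB rule of thumb above comes from. | {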
"source": [
"https://electronics.stackexchange.com/questions/295629",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/143804/"
]
} |
296,202 | I was asked to find the four colour stripes of a 1M ohm resistor (tolerance of 5%). My question now is: don't we have 2 options? I came up with:
1. Black(0) Brown(1) Blue(x1M) Gold (5%)
2. Brown(1) Black(0) Green(x100K) Gold (5%)
Is there one right and one wrong, or are both of these correct? | You are not allowed to start with black except for a zero-ohm jumper, so only your second suggestion is valid. | {
"source": [
"https://electronics.stackexchange.com/questions/296202",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/102975/"
]
} |
296,879 | I am trying to make a logic level converter using a BC547 transistor.
This is to convert the voltage level of a Raspberry Pi GPIO from 3.3 V to 5 V.
I have wired the circuit according to this diagram: I have done this to convert 3.3 V to 5 V for a PWM application.
I have connected the circuit to GPIO no. 17 and set it high. Questions: 1) Why is there no ground in the circuit? 2) I tried to measure the voltage at the other end with respect to ground, but it does not show anything. What is the problem? Thank you. | I hate adding an answer here, especially since the OP doesn't even need bidirectional operation. But the circuit is laid out terribly (for understanding it.) And the description about dogs and tails does not help, excepting perhaps alchemists trying to write down allegorical and mystifying bits of their "art." (There are shared terms, developed over time and used in electronics to help communicate. A "pull-down" might be such an example. But they have survived the test of time and they do communicate using the general idea of pulling at a node, which isn't difficult to communicate when someone asks and is trying to learn the term. And it can be adapted easily to discuss "pulling harder", for example, without a loss of meaning. The ideas of weak and strong are commonly held, as is the idea of pulling, and these are easily applied once someone has acquired the ideas of Ohm's law, voltage, current, and resistance.) One way to use a BJT for level shifting is to use it in a common-base mode. Just wire the base to a rail and "pull down" on its emitter. You can place the resistor either at the base or at the emitter. All that's left to do is to use a pull-up on the collector. Given that we hope to achieve bidirectional use, the resistor will be placed at the base. Here's an example when going from a \$3\:\textrm{V}\$ logic output towards a \$5\:\textrm{V}\$ logic input: simulate this circuit – Schematic created using CircuitLab Going in the other direction, it is very tempting to use a symmetrical approach: simulate this circuit But that doesn't work. Why? Because the base has \$5\:\textrm{V}\$ available to it and the collector's pull-up is hooked towards a lower voltage, \$3\:\textrm{V}\$. This means that the base-collector diode (no longer commonly shown on the symbol, though it once was when BJTs were themselves made more symmetrically) can be (and will be) forward biased. So when the BJT is supposed to be turned off, it actually isn't. Instead, there's a forward biased diode caught between \$5\:\textrm{V}\$ and \$3\:\textrm{V}\$ with two resistors to limit the current. So the output will be at some middling value above \$3\:\textrm{V}\$ but also not quite \$5\:\textrm{V}\$. The symmetry fails. It's easy to fix. We can just change the base voltage back to \$3\:\textrm{V}\$: simulate this circuit And that works. Suppose you want to make this bidirectional. Could you just use two of these circuits, one for each direction? simulate this circuit And the answer is, yes you can. In fact, what I did is simply reproduce that dog-eating-tail circuit that the OP presented. It's the same thing. But now you can see the progression that led to it. And it's not as confusing as some odd, cross-wired dog-tail thing anymore. It's just two individually worked out circuits put together into one larger one. But do you remember the earlier problem with the wrong circuit? The fact that there is a sneaky base-collector diode that caused the circuit to operate incorrectly? This fact should remind us that all BJTs can also be operated in a reverse-active mode. Doing so, especially with the modern asymmetrical designs for their collectors and emitters, means that the \$\beta\$ in one mode will be different than the other (among some other differences.) But it does not mean they don't work.
So what if we just returned to our first circuit and merely added that extra pull-up: simulate this circuit Would this work? The answer is yes, it will indeed work. The only remaining question might be about which way to point the emitter. And this is where a good answer "depends." There are issues of charge storage to take into account, for example. (And this is a reason why there is a difference between the rising-edge and falling-edge behavior shown in the graph by the OP.) The answer will depend on what you care about, as there will be rising-edge vs falling-edge considerations and no one particular answer is always right. For my purposes here, I'm going to avoid dragging this out any further and instead leave that question as something to ponder. It's enough that this circuit works, regardless. Note: The actual values of the resistors used in the above circuits aren't meant to imply that these are the only right values to use in some particular circumstance. Typically, digital outputs can sink more than \$1\:\textrm{mA}\$ of drive current and, typically, digital inputs will sink significantly less than \$100\:\mu\textrm{A}\$. But these assumptions may be wrong for specific cases. It's not hard to adjust the details, though. So the basic idea may still apply, though with reasoned changes in the resistor values. There are more steps one might take, now. And Trevor found a nice example of where one might head. I'm going to include it here in order to capture that result. It's worth having. Those interested can consider the whys and wherefores. Without further explanation from me, enjoy Trevor's addition below: | {
"source": [
"https://electronics.stackexchange.com/questions/296879",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/114882/"
]
} |
296,885 | My HP Pavilion 27C Monitor and my HP Notebook 15 both have similar types of AC adapter that supply 19 volts, but they both have that third wire in the middle, also. I am measuring 0 volts on it, on the monitor supply. Everything works great. I am just asking for advice on what kind of circuit I need to supply whatever signal that third wire provides. Or, even better, can I just ignore it? I'm starting with a 24 volt battery bank connected to a solar power system that varies in voltage from 23 to about 30 volts. The goal, obviously, is to convert the Pavilion 27C monitor to run on DC so I don't incur the losses of going through both the inverter and the AC adapter. And for times when the inverter is off, like at night. I need to design a way to use this power for this device. I have looked at many web sites that have intricate circuits but am hoping for something simpler that can handle the voltage variations and still put out a pure 19 volts. In searching around I found and bought this 24-to-19 volt regulated converter. Here is the link to the converter: 24-to-19 volt converter. The 19 volts seems to be very steady so far. It actually seems to tolerate the whole voltage range the solar MPPT controller puts out to the battery bank. So maybe the power supply part is going to be OK. But what about that third center wire? I don't know what it does or what to do with it. I don't know what the devices want from it. My HP laptop uses the same voltage and has the same plug with the same third wire in the center, and I would like to convert it also. But I will start with the monitor. So my question is: What do I do about that third wire? Can I simply cut into the cable and provide power to the two obvious wires that are +19 and Gnd, and just ignore the center wire? Or, if not, what can I do to make this work? Especially for the laptop, what circuit will I need to supply that third wire with whatever the laptop is looking for? Please help. | I hate adding an answer here, especially since the OP doesn't even need bidirectional operation. But the circuit is laid out terribly (for understanding it.) And the description about dogs and tails does not help, excepting perhaps alchemists trying to write down allegorical and mystifying bits of their "art." (There are shared terms, developed over time and used in electronics to help communicate. A "pull-down" might be such an example. But they have survived the test of time and they do communicate using the general idea of pulling at a node, which isn't difficult to communicate when someone asks and is trying to learn the term. And it can be adapted easily to discuss "pulling harder", for example, without a loss of meaning. The ideas of weak and strong are commonly held, as is the idea of pulling, and these are easily applied once someone has acquired the ideas of Ohm's law, voltage, current, and resistance.) One way to use a BJT for level shifting is to use it in a common-base mode. Just wire the base to a rail and "pull down" on its emitter. You can place the resistor either at the base or at the emitter. All that's left to do is to use a pull-up on the collector. Given that we hope to achieve bidirectional use, the resistor will be placed at the base. Here's an example when going from a \$3\:\textrm{V}\$ logic output towards a \$5\:\textrm{V}\$ logic input: simulate this circuit – Schematic created using CircuitLab Going in the other direction, it is very tempting to use a symmetrical approach: simulate this circuit But that doesn't work.
Why? Because the base has \$5\:\textrm{V}\$ available to it and the collector's pull-up is hooked towards a lower voltage, \$3\:\textrm{V}\$. This means that the base-collector diode (no longer commonly shown on the symbol, though it once was when BJTs were themselves made more symmetrically) can be (and will be) forward biased. So when the BJT is supposed to be turned off, it actually isn't. Instead, there's a forward biased diode caught between \$5\:\textrm{V}\$ and \$3\:\textrm{V}\$ with two resistors to limit the current. So the output will be at some middling value above \$3\:\textrm{V}\$ but also not quite \$5\:\textrm{V}\$. The symmetry fails. It's easy to fix. We can just change the base voltage back to \$3\:\textrm{V}\$: simulate this circuit And that works. Suppose you want to make this bidirectional. Could you just use two of these circuits, one for each direction? simulate this circuit And the answer is, yes you can. In fact, what I did is simply reproduce that dog-eating-tail circuit that the OP presented. It's the same thing. But now you can see the progression that led to it. And it's not as confusing as some odd, cross-wired dog-tail thing anymore. It's just two individually worked out circuits put together into one larger one. But do you remember the earlier problem with the wrong circuit? The fact that there is a sneaky base-collector diode that caused the circuit to operate incorrectly? This fact should remind us that all BJTs can also be operated in a reverse-active mode. Doing so, especially with the modern asymmetrical designs for their collectors and emitters, means that the \$\beta\$ in one mode will be different than the other (among some other differences.) But it does not mean they don't work. So what if we just returned to our first circuit and merely added that extra pull-up: simulate this circuit Would this work? The answer is yes, it will indeed work. The only remaining question might be about which way to point the emitter. And this is where a good answer "depends." There are issues of charge storage to take into account, for example. (And this is a reason why there is a difference between the rising-edge and falling-edge behavior shown in the graph by the OP.) The answer will depend on what you care about, as there will be rising-edge vs falling-edge considerations and no one particular answer is always right. For my purposes here, I'm going to avoid dragging this out any further and instead leave that question as something to ponder. It's enough that this circuit works, regardless. Note: The actual values of the resistors used in the above circuits aren't meant to imply that these are the only right values to use in some particular circumstance. Typically, digital outputs can sink more than \$1\:\textrm{mA}\$ of drive current and, typically, digital inputs will sink significantly less than \$100\:\mu\textrm{A}\$. But these assumptions may be wrong for specific cases. It's not hard to adjust the details, though. So the basic idea may still apply, though with reasoned changes in the resistor values. There are more steps one might take, now. And Trevor found a nice example of where one might head. I'm going to include it here in order to capture that result. It's worth having. Those interested can consider the whys and wherefores. Without further explanation from me, enjoy Trevor's addition below: | {
"source": [
"https://electronics.stackexchange.com/questions/296885",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/103685/"
]
} |
296,888 | I have a pressure system with eight solenoid valves on it and I'm controlling it from an Arduino. For each valve I have a solid state relay that an Arduino controls to switch on and off 12V from the supply. I've gotten this circuit to work properly with a single relay and solenoid: Now I'm beginning to scale it and it all works but my setup is pretty messy. Here is a single SSR hooked up to a single valve and then the holders for all the other SSRs on the board: So I was planning on using that terminal connector and then that small proto
board mostly because this will be changing around a lot and I may be throwing some more pieces in so I don't want it to be 100% permanent but I do want it to be more durable and a little more elegant. Any thoughts on how I should wire this up? Or anything better than that terminal connector? Here is a schematic of half of the full setup: | | {
"source": [
"https://electronics.stackexchange.com/questions/296888",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/60140/"
]
} |
296,897 | It's common (for me) to hear that an HV power line can kill a person without touching it "if they entered its field." I'm not convinced by this reason and I think it is quite naive and doesn't clarify anything. Warning: Video shows people dying. Video with information about the accident (longer than the YouTube version) Here's a video of some workers killed by HV but it's not clear if the scaffold touched the line or just "entered the field of the lines." | Yes. The higher the voltage, the larger an air gap is needed to keep it from jumping or arcing between conductors. You, a wet fleshy human, can provide an ideal path between two high voltage conductors if you get in the middle. You do not need to touch one. This is basically what allows lightning, Tesla coils, and Jacob's Ladders to exist. As pointed out in the comments, the higher the voltage carried, the taller its transmission towers will be and the further each conductor will be from the others. (A rough breakdown-gap estimate follows this record.) | {
"source": [
"https://electronics.stackexchange.com/questions/296897",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/72683/"
]
} |
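A back-of-the-envelope check on the air-gap point in the answer above, as a minimal Python sketch. The ~3 kV/mm breakdown gradient is a textbook rule of thumb for dry air in a uniform field, and the line voltages below are illustrative assumptions, not figures from the accident in the question; real flashover distances are worse (humidity, sharp edges, switching transients), which is why statutory clearances are metres, not millimetres.

```python
# Rule-of-thumb breakdown gradient of dry air in a uniform field.
BREAKDOWN_KV_PER_MM = 3.0

for line_kv in (11, 132, 400):  # assumed RMS line-to-line ratings
    # Peak line-to-ground voltage for an RMS line-to-line rating
    peak_kv = line_kv * (2 ** 0.5) / (3 ** 0.5)
    gap_mm = peak_kv / BREAKDOWN_KV_PER_MM
    print(f"{line_kv:>4} kV line: uniform-field arc-over gap ~ {gap_mm:5.0f} mm")
```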
298,683 | In this video explaining how to make an audio amplifier , the narrator warns Don't be misled by crappy schematics…with low values like 100 µF. You want to use at least 1000 µF [between the LM386's output and the speaker] to guarantee good bass with a wide variety of speakers. However, in the circuit that I made following this video's instructions, replacing a 1000 µF cap with one of lower value resulted in no difference in sound quality which was audible to me. Is the assertion that a capacitor of 1000 µF or higher is necessary true? If so, and if the reason is that given in the video of allowing for "good" bass, how does such a capacitance allow for that? If this is not the reason (or the only reason), what is another? (I did not test with a wide variety of speakers; are there certain speakers for which the difference between capacitances is more pronounced?) | The capacitor referred to in the video forms a high-pass filter with the speaker (C-R filter). Its frequency response is given by the following formula: $$f_c = \frac1{2\pi \tau} = \frac1{2\pi RC}$$ Using this calculator you can see that a 4 ohm speaker and 1000uF give a "cut-off" frequency at about 40Hz. 100uF would give a "cut-off" frequency at about 400Hz. Whether it is audible or not depends on the music you are listening to and your speaker system, etc. Notice that I mention "cut-off" in quotes because it is not a hard barrier. (The script after this record reproduces these corner frequencies.) It's usually given by a graph like this: | {
"source": [
"https://electronics.stackexchange.com/questions/298683",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/143805/"
]
} |
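The cutoff numbers quoted in the answer above are easy to reproduce. A minimal sketch of the first-order high-pass corner, treating the speaker as a pure resistance (real drivers have reactive impedance, so this is only a first approximation):

```python
from math import pi

def cutoff_hz(r_ohm, c_farad):
    """First-order high-pass corner: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2 * pi * r_ohm * c_farad)

for c_uf in (100, 470, 1000):
    for r in (4, 8):
        f = cutoff_hz(r, c_uf * 1e-6)
        print(f"{c_uf:>5} uF into {r} ohm speaker -> f_c ~ {f:6.1f} Hz")
```

With 4 ohms this gives ~398 Hz for 100 µF and ~40 Hz for 1000 µF, matching the answer.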
299,629 | I was reviewing a design earlier, and I noticed something interesting. The designer had removed unused pads on the chip. I have never seen this done before. Is this something that is good practice? Is it even okay? | This is not a standard practice, and should be avoided. First: Along with providing electrical connectivity, pins also mechanically anchor a chip to the board. Each pad that's removed increases the stress on the remaining pins, which will increase the risk of the chip detaching from the board. Second: All of the remaining pins have nothing but soldermask between them and a trace underneath them which they aren't supposed to be connected to. Soldermask is not very thick, and it's not very durable either. If the mask is breached -- from a pin vibrating against it, for instance! -- the pin may become intermittently connected to something it wasn't supposed to be. | {
"source": [
"https://electronics.stackexchange.com/questions/299629",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/8139/"
]
} |
299,643 | Here is the circuit: Here it is being powered up and wirelessly turning on a lamp. Here are my questions: How does it actually work? I do know that 1T (the red wire) is being switched ON/OFF by Q2, BD243, which induces a voltage in 350T, which is about (VCC = 18 V) × 350 = 6.3 kV. (I don't think 6.3 kV is high enough to wirelessly power on a lamp.) What is the purpose of Q1? It's an N-channel 68 V, 0.0082 Ω, 98 A MOSFET. LED1 is a red LED. If it's blocking the 350T, how does the circuit oscillate? I.e., how does 350T turn off Q2? I don't know how the audio-in works either, but it does work quite well. Last but not least, after googling about Tesla coil circuits, I either find very complex or very simple circuits. Is there a middle ground out there? | | {
"source": [
"https://electronics.stackexchange.com/questions/299643",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/-1/"
]
} |
299,889 | I want to know how much power is radiated by cell towers of GSM (1.8 GHz), 3G (2.1 GHz), 4G (2.6 GHz)? I want links to references if they exist. | There's no general answer. First of all, you have a misconception about GSM, 3G, 4G: The frequency bands you list are some of the frequency allocations for these networks. These are different between different operators and in different countries. Then: Cellular networks are not broadcast transmitters. They don't work with constant output powers. The power they transmit depends on what they need to achieve. As noted in the comments above, a cell tower that covers a huge rural area will blast out more power per user on average than a small-cell tower in a city centre. Since power consumption is one of the biggest costs in operating a mobile network, carriers are extremely interested in keeping transmit power as low as possible. Also, lower maximum transmit power allows for smaller coverage area – this sounds like an anti-feature, but it means that the next base station using the exact same frequencies can be closer, which becomes necessary as operators strive to serve very many users in densely populated areas, and thus need to divide these users among as many base stations as possible, to even be theoretically able to serve the cumulative data rate of these. Then, as mentioned, the transmissions will be exactly as strong as necessary to offer optimal (under some economic definition of "optimal") service to the subscribers. Which means: when there are only a few devices basically idling in the cell, the power output will be orders of magnitude less than when the network is crowded and under heavy load. Loads are highly dynamic. You can watch an LTE load monitor from a city centre live here . This goes as far as shutting down base stations or reducing the number of subbands served at nighttime – something we were able to see very nicely happen every night from the uni lab where I spent a lot of my days (and, obviously, far too many nights). So, there can't be a "this is how much power all towers emit" number, since it depends on usage. Now, as also mentioned, there are completely different cell types. With 3G and 4G, we saw the proliferation of micro-, nano- and femtocells. Those are just radioheads that can be placed nearly anywhere and serve a very restricted space – for example, a single room. These obviously would use much less power than a single antenna mounted on a mast somewhere high. Antenna systems can be very complex, too – a modern base station will make sure to use a combination of antennas to form something like a beam that hits your phone as precisely as possible – motivation for that, again, is less necessary transmit power (lower cost) due to not illuminating anyone who's not interested in the signal you are receiving, and of course, possibility for denser networks. Then, there's aspects like interoperability. A carrier might offer both 2G and 4G, often closely co-located in spectrum, on the same mast. Now, turning up the 2G downlink's power too much might lead to saturation in 4G receiver (phone) amplifiers – and to drastic reductions in possible 4G rate for a slight improvement in 2G quality.
This problem might get even more important as operators move to deprecate and shut down 2G, and might very soon be broadly adopting schemes where 2G service is "interweaved" into 4G operation in the same band (2G is very slow, and takes only very limited "useful" bandwidth, but still occupies very precious frequency bands, so it's only natural to use the very flexible 4G in a way that says "ok, dear handsets, this is our usage scheme, where we leave holes in time/frequency so that 2G can work 'in between'. Please ignore the content of these holes."). Then, the whole power/quality trade-off might become even more complicated. In essence, it's also important that when you're carrying around a phone, the most radio energy involved in the operation of the phone network that hits you is not the downlink power (i.e. base station -> phone), but the uplink power, simply because power goes down with the square of distance, and you're darn close to your phone compared to the base station antenna. (A rough numeric comparison follows this record.) An important corollary of that – and I've seen dozens of people not understanding this – is that the more base stations there are, the less power you get hit by. Very simple: Your phone will need more power to reach a base station far away, and the power that the base station needs to reach your phone will always be adjusted so that your phone will have good reception (if possible!), but not more. | {
"source": [
"https://electronics.stackexchange.com/questions/299889",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/127784/"
]
} |
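To put the answer's uplink-vs-downlink point in numbers, here is a rough inverse-square sketch. The 20 W sector power, 0.2 W handset power, and the two distances are assumed round figures (not taken from the answer), and antenna gain and beam shaping are ignored entirely, so this is order-of-magnitude only:

```python
from math import pi

def power_density(p_tx_w, r_m):
    # Isotropic free-space power density: S = P / (4*pi*r^2)
    return p_tx_w / (4 * pi * r_m ** 2)

tower = power_density(20.0, 300.0)  # assumed 20 W sector seen from 300 m
phone = power_density(0.2, 0.02)    # assumed 0.2 W handset held 2 cm away

print(f"tower at 300 m: {tower:.2e} W/m^2")
print(f"phone at 2 cm : {phone:.2e} W/m^2 (~{phone / tower:.1e} times more)")
```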
300,565 | Are there any problems that can be caused by using resistors of large resistances (in the order of megaohms)? I'm designing a feedback network that is just a voltage divider, and I want the feedback to drain as little current as possible from the circuit. The only thing that matters is the ratio between the resistors. So my question is: is there any reason why one would pick, for example, resistors of 1 and 10 Ohms instead of 1 and 10 MOhms? | There are many drawbacks to both low and high values alike. The ideal values will fall in between very large and very small for most applications. A larger resistor of the same type will, for example, create more noise (by itself and through small induced noise currents) than a smaller one, though that may not always be important to you. A smaller resistor will drain more current and create more losses, as you have surmised yourself. A larger resistor will create a higher error with the same leakage current. If your feedback pin in the middle of your resistors leaks 1 μA when the resistor feeding that leak is 1 MOhm, that will translate to an error of 1V, while a 10k resistor will translate to an error of 10mV. (The script after this record runs these numbers.) Of course, if the leakage is in the order of several nA or less, you might not care much about the error a 1 MOhm resistor creates. But you might, depending on what exactly you are designing. Smaller resistors in feedback systems, e.g. with inverting amplifiers using op-amps, may cause errors on the incoming signal if the incoming signal is relatively weak. It's all checks and balances, and if that's not enough information at this point, you might want to ask a more direct question about specifically what you are doing. With schematics and the like. | {
"source": [
"https://electronics.stackexchange.com/questions/300565",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/98323/"
]
} |
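The answer's leakage arithmetic, plus its thermal-noise point, in a short script. The 1 µA leakage matches the answer's example; the 300 K temperature is an assumption:

```python
from math import sqrt

K_B = 1.380649e-23  # Boltzmann constant, J/K

def leakage_error_v(i_leak_a, r_ohm):
    # DC error from a leakage current flowing through the resistor
    return i_leak_a * r_ohm

def johnson_noise_nv_rthz(r_ohm, temp_k=300.0):
    """Thermal (Johnson) noise density of a resistor, nV/sqrt(Hz)."""
    return sqrt(4 * K_B * temp_k * r_ohm) * 1e9

for r in (10e3, 1e6, 10e6):
    err_mv = leakage_error_v(1e-6, r) * 1e3   # 1 uA leakage, as in the answer
    noise = johnson_noise_nv_rthz(r)
    print(f"R = {r/1e3:>7.0f} k: 1 uA leakage error = {err_mv:8.1f} mV, "
          f"noise ~ {noise:6.1f} nV/sqrt(Hz)")
```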
300,573 | I'm having issues setting the state of any of the D flip-flops; I have jumpers and a multimeter to work with. Does the clock pulse have to be more precise than touching the jumper to +5 V? 74S374J data sheet | | {
"source": [
"https://electronics.stackexchange.com/questions/300573",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/146790/"
]
} |
300,906 | I've designed some PCBs where I have a socket for a Teensy 3.2. The exact sockets I purchased were these turned pin open frame sockets . I also purchased these turned pin headers to solder to the Teensy. This works great; however, the Teensy is really hard to remove without bending the header pins. Obviously a certain amount of force is required to remove the Teensy. Are there any tools that would help or a particular recommended method? | People will probably disagree, but I just use a screwdriver; it's all about going slowly. I use a flat head, slip it between the chip and the socket, then twist it slightly, then change ends and do the same. If you go in small increments you will be fine. Don't just try and do it all from one end. | {
"source": [
"https://electronics.stackexchange.com/questions/300906",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/72184/"
]
} |
301,221 | Most lower- and middle-end phones only support 3 or 4 LTE bands. High-end phones, like the iPhone, or flagships from Samsung and LG, also support a lot of bands. iPhone 7: 1, 2, 3, 4, 5, 7, 8, 12, 13, 17, 18, 19, 20, 25, 26, 27, 28, 29, 30 LG G4: 1, 2, 3, 4, 5, 7, 8, 20, 28 Samsung S7: 1, 2, 3, 4, 5, 7, 8, 12, 18, 19, 20, 29, 30 and the S8 has 22/24 of them. Meanwhile, every lower-end smartphone (in my region) only has 1, 3, 7, 20. Not a stupid choice because it gets LTE service in almost every European country, but you don't get complete coverage. And it's not only a choice of modem capability. Even the newly announced Xiaomi Mi6 only supports those four bands. And it has the latest and greatest of Qualcomm. Same SoC and the same modem as the Galaxy S8. These high-end smartphones don't have 35 antennas. I don't think there are 35 different signal paths. I understand that a higher-end smartphone could have more antennas with several distinct front-ends, allowing them to use multiple frequencies at the same time, but I don't see why a phone with only two antennas that support 800, 1800, 2100, and 2600 MHz would not be able to work with all those frequencies that are in between. | Simply because having more bands not only requires a very versatile chipset, but also extensive antenna design! To explain: It's impossible to make the perfect antenna for all frequencies, but you can make "compromise" broadband antennas. You can do that in a lot of ways, but in the end, you need to integrate those into a mobile device. And that's where it gets costly: Now, you don't only have to simulate and measure your antenna in isolation, mounted on a low-loss mount in an anechoic chamber, but as an integral part of a transceiver system (phone). That leads to interesting solutions such as antennas being embedded in the plastic casing, a lot of time spent on tweaking the chipset's RF control registers, having multiple broadband antennas to even have a chance for diversity gain at all bands, and of course drastically increased development time and certification costs (you do have to get these approved!!). Adding more bands that the chipsets can receive will also add the need for more noise testing – your tuner / LO synth will have different spurs at different frequencies! So, that's another design – test – improve cycle you add for every single band you add. You can make that easier by throwing money at the problem (filters, more board layers allowing for more supply nets allowing for better isolation). Those are all cost factors, so you don't do that if you just spin a new slightly modified phone for a specific market every two months. Or, if you really don't care for the non-{insert your home market here} market. These high-end smartphones don't have 35 antennas. I don't think there are 35 different signal paths. I'd agree on them not having 35 antennas. But really, the massive MIMO numbers people throw around when they're currently playing the 5G-research-funding buzzword-bingo game are not that far away – mind you, not for mobile devices (physics doesn't let you have 35 statistically independent receptions in arbitrarily small receivers), and as said, you'd have to go for one wideband antenna (you can't have 35 narrowband ones close to each other and act like they are independent. Look at a Yagi antenna. There's at most one matched dipole in there.
The rest is too short or too long, but still, the overall thing works as one antenna.), but yeah, having multiple active receiver chains is something we already do and will be doing more in the future. I have a talk I'd like you to watch: Inside The Atheros WiFi Chipset - Adrian Chadd at Defcon14's wireless village. Not about 4G, but Wifi, but somewhere in the last third, he explains why you do not want to tune your Wifi chip to frequencies that the Atheros folks didn't test, though you technically can. Just another aspect that just hit me: Might be the same chipset, but who tells me that Qualcomm doesn't sell you devices that have some bands disabled at a lower price than those with all bands enabled? After all, yield of semiconductors is limited by defects in all parts of a semiconductor, not only the digital parts. Factory calibration of the chips might be a relevant production-time and thus cost element, too, so it'd also be logical to sell chips that have only been qualified on some bands at a different price than those that are qualified for all bands. | {
"source": [
"https://electronics.stackexchange.com/questions/301221",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/7171/"
]
} |
301,226 | I'm wondering if it is possible to use an old HDD as an input device that would feed some kind of data to an Arduino. If you manually positioned the head and platter (the disk is not spinning, so it would remain in one place) – would it be possible to get the coordinates (track: XXX, sector: YYY) of the reading head into the Arduino? Is it possible to get any kind of (reproducible) output that way? If yes – what should I study, read or know about? How to get started? If no – any ideas how I could use some other computer hardware in a similar manner? The idea is to make an interactive installation, where touching "the guts of old hardware" (spinning the platter and moving the head manually) would result in different LED colors – like mapping the disk to a color wheel or something like this. | | {
"source": [
"https://electronics.stackexchange.com/questions/301226",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/147084/"
]
} |
301,719 | What is the standard way to measure a current of about 10,000 A? DC clamp meters seem to have only scales up to 2,000 A. Edit Some background of this question: I am a high school physics teacher and I am trying to improve some classical experiments using high currents from ultra-capacitor discharge. In particular I am looking for a good way to measure the discharge current of a ultra capacitor for very short times like in this "jumping ring" experiment: A safe and effective modification of Thomson’s jumping ring experiment The second motivation for this question was just because I just want to know it out of curiosity for my background knowledge of what is the usual way to measure such high currents today. | No, DC clamp probes have scales well above ±10,000A. Does no-one even check Amazon for their ±12000A DC to 40kHz current probe needs any more? I jest. But you can totally buy that on Amazon. And they have 10 in stock. None of them qualify for Amazon Prime though :(. Whatever you do, ignore all these people telling you to use a shunt. No, do not use a shunt. There is absolutely no advantage to using a shunt in this application besides a very slight edge in measurement accuracy, and ridiculously huge downsides. Why a shunt is a terrible idea: Any solution that works by measuring the resistive voltage of a conductor (shunt) that can have any reasonable resolution will also require a prohibitively large voltage drop. As another poster mentioned, a typical 50mV shunt would dissipate 500W. This is an irresponsibly large waste of energy when you can measure the current for less than a watt of power consumption. It will need its own active cooling at all times. So there is that much more energy wasted, but more importantly, you've introduced a single point of failure into your power distribution system. What was once able to passively carry on the order of 10kA will fail very quickly if at any point the cooling for the shunt fails or has a lapse in performance, causing the shunt to melt and act like the world's most overpriced and slowest-blow 10kA fuse ever made. Let's not kid ourselves, one doesn't just casually put a 10kA shunt in series with a 10kA capacity cable using alligator clips and banana jacks. Installing such a device in series with that cabling is going to be a non-trivial task, and it will not be something you can easily remove on a whim. I would expect it to become a permanent liability in your system. I don't care if the cable is carrying 10kA at 1V (for whatever reason) - I (and you yourself should) demand galvanic isolation in such a measurement apparatus. 10kA is a lot of current, and it can't help but store terrifying amounts of energy in the magnetic field alone. I don't even know what the dimensions of a wire or bus bar capable of carrying that would be, but let's go with a relatively low-inductance geometry: a solid copper pole 2 inches in diameter. If in a simple, straight line, this will have ~728nH of inductance per meter. At 10kA, this conductor will have roughly 35J of energy stored in its magnetic field alone! Of course, in practice, it will be much much lower as the return conductor will be close by and it would probably be large, flat bus bars, further lowering the inductance. But still - you should plan for a 10kA cable to induce some spectacular failures in anything connected to it should anything go wrong. Including (or especially?) stuff like a $1800 NI DAQ board. 
There is a law that one can derive from Murphy's law that states that the more expensive the data acquisition gear, the more thoroughly it will be destroyed in the event of a fault. I jest, but you get my point - isolation is not something to be dismissed in this situation. Now, there is one reason to use a shunt:
Accuracy. Though I would expect that some of this advantage is degraded by error introduced from thermocouple effects at the junctions where the shunt is connected to the actual current carrying conductors, as well as the sense lines.
Additional error sources will enter the picture if this current is not DC as well. But, regardless, a shunt is not going to be that much more accurate than the reasonable solution which I am about to suggest. The difference is on the order of 0.25% (best case) vs 1% (worst case). If you're measuring 10,000 amps though, what's ±100A among friends? So, in conclusion, do not use a shunt. I honestly can think of no worse option than a shunt. Use one of the dozens of suitable Hall Effect clamp-on probes. The reason most hand-held clamp meters only go up to maybe 2,000A is that much beyond that, the conductor would be too large or in an unusual shape (wide and flat bus bar, for example) that would require the clamp to be too large to go on anything portable or hand-held. But they certainly make clamp-on or loop current probes that have measurement ranges not only to 10,000A, but well above it as well. So just use one of those. They are high quality, safe, purely magnetic (operate on the Hall Effect), fully isolated and characterized, sensitivities on the order of 0.3mV/A. Something like Clamp-on Current Probe (earlier linked to its page on Amazon). And they have nice huge windows as large as 77mm to 150mm to fit your cabling. Unless you've gone with something more exotic... and chill. Either way, I assume your cabling looks similar to one of the solutions in this picture: Anyway, have fun. Be safe. Hopefully you're not a super villain. (The dissipation and stored-energy figures above are checked numerically after this record.) | {
"source": [
"https://electronics.stackexchange.com/questions/301719",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/7602/"
]
} |
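The two headline numbers in the answer, 500 W of shunt dissipation and roughly 35 J of stored magnetic energy, check out. A minimal sketch using the answer's own figures (50 mV shunt, ~728 nH/m of straight conductor):

```python
I = 10_000.0  # amps to be measured

r_shunt = 0.050 / I          # 50 mV full-scale shunt -> 5 micro-ohm
p_shunt = I ** 2 * r_shunt   # same as V * I
print(f"shunt: {r_shunt * 1e6:.0f} uOhm, dissipating {p_shunt:.0f} W")

L_PER_M = 728e-9             # inductance of the answer's copper pole, H/m
energy = 0.5 * L_PER_M * I ** 2   # E = 1/2 * L * I^2
print(f"magnetic energy: {energy:.1f} J per metre of conductor")
```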
301,823 | I have a cheap Chinese Bluetooth audio amplifier like this: It has four screw terminals for the speaker connections, namely L+, L-, R+ and R-. I accidentally connected a single speaker to L+ and R-, and was surprised to find that I could hear both the left and right channel over the single speaker. I decided to experiment and found out that if you connect the speaker the proper way (so either to L- and L+, or to R- and R+) it would only play the single channel you'd expect it to play, but if you connected either L+ and R- or R+ and L-, it would "downmix" both stereo channels to a single channel and play them both over the single speaker. I've been trying to wrap my head around how that works, but I can't figure it out. Is the board more sophisticated than I expected and does it output mono when it discovers no load on the different terminals, or is there something else going on? | This can be easier to understand if you look at the waveforms. In a push-pull, or bridge, amplifier both lines are driven as shown below. Notice \$L-\$ is literally the inverse of \$L+\$. Similarly, the other signal, \$R-\$, is the inverse of \$R+\$. The difference between the +/- voltages is what excites the speakers. Now, if you connect opposites to one speaker, the difference in signal becomes the mixture of both signals. Hey presto... you have a mono system. Note, however, the amplitude of each "side" is now effectively reduced by half. If you can't understand that, consider the case where there is no signal on the \$R\$ side. The difference between the blue lines is now only half what it was between \$L+\$ and \$L-\$. If the original sound was recorded central, that is, an equal waveform on both left and right channels, the \$L+\$ waveform would be identical to the \$R+\$ waveform, same for the negatives. As such, joining \$L+\$ with \$R+\$ would result in no voltage difference for that sound. That is why you need to cross-connect them. (A short NumPy check of this follows the record.) | {
"source": [
"https://electronics.stackexchange.com/questions/301823",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/65626/"
]
} |
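The cross-connection arithmetic in the answer can be verified with a few lines of NumPy. The test tones are arbitrary stand-ins for programme material:

```python
import numpy as np

t = np.linspace(0, 1e-3, 1000)
left  = np.sin(2 * np.pi * 1e3 * t)   # stand-in left-channel signal
right = np.sin(2 * np.pi * 3e3 * t)   # stand-in right-channel signal

# Bridge outputs: each terminal swings half the signal, in anti-phase
L_pos, L_neg = left / 2, -left / 2
R_pos, R_neg = right / 2, -right / 2

proper  = L_pos - L_neg   # speaker across L+/L-  -> pure left, full swing
crossed = L_pos - R_neg   # speaker across L+/R-  -> (left + right) / 2

print("L+ minus L- equals left:        ", np.allclose(proper, left))
print("L+ minus R- equals (L+R)/2 mono:", np.allclose(crossed, (left + right) / 2))
```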
303,305 | There is no dedicated GND in the classical Ethernet 8P8C ("RJ45") pinout. [1] Why does the Ethernet spec not include a ground, unlike many other cable types used for interconnecting devices that may each have their own independent power source, e.g. RS-232 or USB? | If you just ignore the POE 48 Volts in the image below, you can see Ethernet uses transformers on both sides. This way there is no need for a common ground as long as the common-mode voltage stays below 1500 V – generally the isolation specification of the transformers. And as a bonus you now also know how POE works. (802.3at) However, CAT6A often has a shielded connector. The shield is then grounded to the chassis using the little flaps inside the socket. Source image | {
"source": [
"https://electronics.stackexchange.com/questions/303305",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/98349/"
]
} |
303,537 | I found a PCB that has had some rework done. When I saw it, I thought that someone had actually repaired it after purchase:- It's got additional wiring (white & brown) and under the hot glue are two ceramic capacitors. The capacitors have been sleeved with insulation (difficult to see). They smell like decoupling capacitors. I can understand altering my own boards, but this is a commercial speaker from a successful company (Labtec):- The work seems to be based around the main amplifier chip (TDA2005). I've seen other post design changes to boards, but that's only been a wire here or there snaking across a PCB. Why does a commercial board get sold with this state of rework? PS. Also note the hand thickened tracks in the top left. PPS. The two circles also show hand soldered surface mount capacitors added to this side of the board which is clearly a through hole design. | This looks like it's due to a combination of bad engineering, bad management, and cheap labor. Circuits don't always perform as expected when first designed. Even experienced designers make mistakes occasionally. Usually these are caught in the first prototypes, then the board respun. Not all engineers have the experience and skills to get a circuit mostly right the first time, and the discipline to test it properly before committing to production. Couple that with management that doesn't understand the engineering process, and a junior engineer hired into a position over his head who won't or can't stand up to management. It's exactly those types of managers that hire a junior engineer for such a role in the first place. "After all, it's just engineering, and all engineers are plug-replaceable, so I might as well hire the cheap one right out of school that won't give me all this crap about can't this, and test that." You've got 10,000 populated boards and someone finally discovers that this thing oscillates when the volume knob is turned to 60% and you use one of the wall warts from the last shipment of 5,000 you just received. Your junior engineer doesn't understand what's happening, but determines that the wall warts are within spec. Now you've got a big problem, so you hire a consultant to look over the design and fix it. The consultant shakes his head, tells you the whole design is a mess. You don't want a new design. After all you already paid for one. You tell the consultant you absolutely need a fix for this design. He comes up with the kludge you see above. You have the same factory in the far east that made the boards rework them. The labor cost is cheap, so it's better than scrapping 10,000 finished boards. Good engineering is expensive. Bad engineering is even more expensive. | {
"source": [
"https://electronics.stackexchange.com/questions/303537",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/56469/"
]
} |
303,609 | In the following image: Why can't current flow across the following wire?
It's a simple question, but I've kind of always wondered. Thanks! | What you have to understand is that electrons don't move on their own but as a chain... like a bunch of kindergarten kids tied together hand in hand. Consider the following drawing of a series of balls in a track system. It is fairly obvious that you can use your finger to push the chain of balls around either loop and they will move freely. However, you can NOT push any balls across the joining trough at the bottom because there is nowhere for the ball to go. That's what also happens in wires. If you DID manage to force an electron into the right loop, perhaps using an inductive coil or something, there would be a charge difference generated between the two loops which would quickly force the electron back once you took the force away. | {
"source": [
"https://electronics.stackexchange.com/questions/303609",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/135896/"
]
} |
304,521 | Simple enough question. Why not use a 741 op-amp in a target circuit or anyone's target circuit? What are the reasons not to use it? What might be the reasons to still choose this part? | There are many good reasons not to use the 1968-vintage LM741: - Minimum recommended power supply rails are +/- 10 volts Modern op-amps have power supplies that can be as low as 1.8 volts. Input voltage range is typically from -Vs + 2 volt to +Vs - 2 volt Modern op-amps can be chosen that are rail-to-rail Input offset voltage is typically 1 mV (5 mV maximum) Modern op-amps can easily be as low as a few microvolts and have low drift. Input offset current is typically 20 nA (200 nA maximum) Modern op-amps are commonly available that are less than 100 pA Input bias current is typically 80 nA (500 nA maximum) Modern op-amps are commonly less than 1 nA Input resistance is typically 2 MΩ (300 kΩ minimum) Modern input resistance starts at hundreds of MΩ Typical output voltage swing is -Vs + 1 volt to +Vs - 1 volt Many cheap rail-to-rail op-amps get to their supplies within a few mV Guaranteed output voltage swing is -Vs + 3 volt to +Vs - 3 volt Supply current is typically 1.7 mA (2.8 mA maximum) Modern op-amps with this current consumption are ten times faster and better in many other ways too. Noise is 60 nV/sqrt(Hz) for LM348 (quad version of 741) GBWP is 1 MHz with a slew rate of 0.5 V/us The LM741A is slightly better but still a dinosaur in most areas. Things of importance that the 741 data sheet does not appear to list (and that may depend on the age and manufacturer): - Input offset voltage drift versus temperature Input offset current drift versus temperature Common mode rejection ratio versus frequency Output resistance (closed or open loop) Phase margin Likelihood of latchup (and gain reversal) I can't think of any valid reasons to use the 741 other than "that's all I will ever have or own". Common reasons why they are still used in actual devices appear to be: - Someone had a design that they didn't want to change from the 70s Someone had millions of them lying around and wanted to put them to use Someone actually determined that all the parameters are fine for their design, and at that moment the 741 was the cheapest to acquire and in millions of units it saved a few thousand dollars in total. I've been an electronics designer since 1980 and I have never used or specified a 741 in any design I've been associated with. Maybe I'm missing out on something? (A numeric comparison of the DC-error impact follows this record.) | {
"source": [
"https://electronics.stackexchange.com/questions/304521",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/20218/"
]
} |
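To make the offset and bias-current rows above concrete, here is a sketch of the input-referred DC error each part produces against a source impedance. The 100 kΩ source and the "modern part" figures (25 µV, 1 nA) are assumed representative values, not from any specific datasheet; the 741 figures are the typical values listed in the answer:

```python
def dc_error_uv(v_os_v, i_bias_a, r_source_ohm):
    """Input-referred DC error: offset voltage plus bias current
    flowing through the source resistance, in microvolts."""
    return (v_os_v + i_bias_a * r_source_ohm) * 1e6

r_src = 100e3  # assumed 100 k source impedance

lm741  = dc_error_uv(1e-3, 80e-9, r_src)   # typical 741 figures from above
modern = dc_error_uv(25e-6, 1e-9, r_src)   # assumed garden-variety precision part

print(f"741-class part : ~{lm741:8.0f} uV of input-referred DC error")
print(f"modern part    : ~{modern:8.0f} uV")
```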
304,788 | Most smartphones are tilt-sensitive, but what device makes this possible? Additionally, how does it (and the sensors associated with it) work? Also, since the working of these sensors seems, almost certainly, based on the presence of an external gravitational field (for instance, the earth's), this raises a second question: Do smartphones retain their tilt-sensitivity under zero-gravity (hypothetical) conditions? (Recently played an aircraft simulator game on my phone...the fact that the plane responded so well to tilting took me aback; hence the urge to ask this question) Extras: I put some thought into this myself, so I'll be putting that up here too. For all intents and purposes, my question ended after the second paragraph, but what I've added after this might help tailor an answer that fits my current understanding of physics. I'm currently in high school, and if I recall correctly, there are six degrees of freedom for a rigid body in a 3D Cartesian system. From my experience with the aircraft simulator app, smartphones seem to detect motion in only three degrees of freedom: pitch, roll and yaw. Speaking of tilt-sensitive sensors: The way I assume these sensors/transducers work is by detecting the minute changes in gravitational potential energy (which may manifest themselves as small-scale motion of some tiny components of the sensor) that are associated with the phone's change in spatial orientation. The way I see it, such a sensor would require moving parts, and cannot simply be another chip on a circuit board. Under these circumstances, if I were tasked with building a tilt-sensitive device that perceives minute changes in gravitational potential energy, I would probably require at least 3 pairs of sensors (a pair in each of the three coordinate axes). Also, seeing how very sensitive my smartphone appears to be to tilting, I'd have to build a ridiculously large device, with each sensor in a pair placed several meters apart to achieve tilt-sensitivity comparable to that of my phone. However, smartphones have dimensions smaller than that of a typical sandwich, so having "sensors in a pair placed several meters apart", apart from being impractical, is clearly not the case. ^ I went ranting about this, so that you can get a feel of my genuine perplexity in the sub-question that follows: How come these sensors are so sensitive, despite their small size? | You are right, in a sense. These sensors do need moving components. However, they are a chip on your board. Tilt sensors (actually, accelerometers), and gyroscopes (and pressure sensors, ...) are part of a family called MEMS: Micro-electromechanical systems. Using techniques similar to those already common in integrated circuit fabrication, we can make amazing little devices. We use the same processes of etching away things, depositing new layers, growing structures, etc. These are incredibly tiny devices. This is an example of a gyroscope: link to the original website. Most of these work by sensing changes in capacitance. A gyro would sense the changes due to rotation (the big thing in the picture would twist around the center axis). This will bring the tiny teeth that are interleaved closer together and increase capacitance. Accelerometers work under a similar principle. These teeth can be spotted in the bottom-right corner of the second image. What about zero-gravity? It would not change much in terms of the functioning of the devices. You see, accelerometers work by sensing acceleration.
The key, however, is that gravity is the same to them - it just feels like you are being accelerated up at 1G, all the time. They use this "constant" to get an idea where "down" is.
This also means that while the chips will function just fine in micro gravity, your phone would not - it will be confused as there seems to be no "down". Quick addition to address a (very good) point that user GreenAsJade brings up: When you look at the common definitions of gyroscopes on sources like Wikipedia, they are often described as something along the lines of a spinning disk. The pictures above don't seem to have any spinning parts. What's up with that? The way they solve this is by replacing the rotation with vibration. The disk-shaped object in the pictures here is only connected with very thin and flexible structures to the center axis. This disk is then made to vibrate around its axis at high frequency. When you move the entire structure through an angle, this will cause the disk to try and continuously resist this - similar to a classic gyroscope. This effect is called the Coriolis effect. By sensing the amount of tilt of the disk compared to the surrounding solid material, it can measure how fast it is spinning. (A rough capacitance calculation follows this record.) | {
"source": [
"https://electronics.stackexchange.com/questions/304788",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/136017/"
]
} |
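For a feel of the capacitances involved in the comb structures described above, here is a parallel-plate sketch with assumed (plausible, not datasheet) geometry. The point is that nanometre-scale motion still produces a change a charge amplifier can resolve:

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def parallel_plate_c(area_m2, gap_m):
    # Ideal parallel-plate capacitance: C = eps0 * A / d
    return EPS0 * area_m2 / gap_m

area = (100e-6) ** 2    # assumed 100 um x 100 um effective electrode area
gap = 2e-6              # assumed 2 um nominal gap

c0 = parallel_plate_c(area, gap)
c1 = parallel_plate_c(area, gap - 10e-9)   # plate moves 10 nm closer

print(f"nominal C: {c0 * 1e15:.2f} fF")
print(f"after 10 nm of travel: {(c1 - c0) * 1e18:.1f} aF change")
```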
304,825 | Most countries use AC power that is supplied to each house. This AC power is in the form of a sine wave. Suppose I have 2 electrical power sockets in two rooms that are at the opposite ends of my house. Will both the electrical sockets provide the same sine wave that is in phase or will the voltage have a constant phase shift between the two? Which of the two graphs does it resemble?:- | Here in North America, each house is fed from a single phase of the distribution system through a step-down transformer. The secondary of that transformer is 240 V, center-tapped. All three lines go into your house. The center tap is earth grounded near where it enters the house. Ordinary 120 V circuits are between one of the ends and the center, which is ground. High-power 240 V circuits, like for a range or dryer, are between both ends. Therefore the hot side of one 120 V circuit will either be the same phase or 180° out of phase with others. It will also be the same phase as one side of the high power circuits, and 180° out of phase with the other side of these high power circuits. (A short numeric check follows this record.) | {
"source": [
"https://electronics.stackexchange.com/questions/304825",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/114476/"
]
} |
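The answer's split-phase arrangement in numbers: a short NumPy check that two 120 V legs, 180° apart, give 240 V between them:

```python
import numpy as np

t = np.linspace(0, 1 / 60, 1000)           # one 60 Hz cycle
leg_a = 120 * np.sqrt(2) * np.sin(2 * np.pi * 60 * t)
leg_b = -leg_a                              # other end of the centre-tapped winding

print(f"leg A RMS : {np.sqrt(np.mean(leg_a ** 2)):6.1f} V")
print(f"leg B RMS : {np.sqrt(np.mean(leg_b ** 2)):6.1f} V")
print(f"A-to-B RMS: {np.sqrt(np.mean((leg_a - leg_b) ** 2)):6.1f} V")
```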
304,991 | Take a look at this evaluation board for a variable gain RF amp ( datasheet ): J5-J10 are intended to connect to DC power (with the exception of J6, which is a DC analog control voltage). All of these lines have three capacitors in parallel. Take the trace connected to J10, for example. On your way from J10 to the pin on the chip, you go through these three capacitors: A 2.2 µF capacitor in a big package (called "CASE A" in the datasheet) A 1000 pF capacitor in an 0603 package A 100 pF capacitor in an 0402 package Why are three parallel caps used instead of one 3.3 µF cap? Why do they all have a different package size? Is the order important (i.e. is it important that the smallest-value capacitors be closer to the chip)? | Given a dielectric type, the smaller the capacitor, the less parasitic inductance it will typically have (better response at higher frequencies), but also less capacitance. You can mix sizes, values and types of capacitors to achieve a required response that is broader than what a single one can provide. It's not just about the capacitance value. These images sum it up pretty well: From " EEVblog #859 - Bypass Capacitor Tutorial ". And From " Intersil - Choosing and Using Bypass Capacitors - AN1325 " From " TI - High-Speed Layout Guidelines " (An impedance-vs-frequency sketch follows this record.) | {
"source": [
"https://electronics.stackexchange.com/questions/304991",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/36229/"
]
} |
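The "different packages, different ESL" point above can be sketched numerically. The ESL and ESR values below are assumed typical figures for the three package sizes, not taken from the board's BOM, and combining magnitudes with the parallel formula ignores phase; the sketch is only meant to show each part winning in a different frequency band:

```python
import numpy as np

def cap_impedance(f_hz, c_f, esl_h, esr_ohm=0.01):
    """|Z| of a real capacitor modelled as a series R-L-C."""
    w = 2 * np.pi * f_hz
    return np.abs(esr_ohm + 1j * w * esl_h + 1 / (1j * w * c_f))

f = np.logspace(5, 9, 5)   # 100 kHz ... 1 GHz
# Assumed parasitic inductances: bigger package -> more ESL
big   = cap_impedance(f, 2.2e-6,   2.0e-9)
med   = cap_impedance(f, 1000e-12, 0.8e-9)
small = cap_impedance(f, 100e-12,  0.4e-9)
combo = 1 / (1 / big + 1 / med + 1 / small)   # magnitudes only -- rough picture

for fi, b, m, s, c in zip(f, big, med, small, combo):
    print(f"{fi:10.0f} Hz  big={b:9.3f}  med={m:9.3f}  small={s:9.3f}  ||={c:9.3f} ohm")
```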
305,605 | This question covered it for enclosures. However, from the point of view of the fan attached to a heatsink, does it matter whether air is blown through the fins or sucked through the fins? In other words, is the pattern of airflow different enough to matter? | This is such a wide subject it really isn't one you can answer with a simple "one is better than the other" answer. Standing alone, the blow side of a fan does produce a more concentrated, faster moving, and more turbulent "river" of air compared to the intake side where air is drawn almost equally from all directions. You can test this easily enough with pretty much any fan. Put your hand in front of the blow side and you will feel the airflow and cooling effect. Put your hand behind and the effect is much harder to detect. The turbulence also greatly improves the efficiency of the heat transfer. Turbulence is in fact your friend. So from those points of view alone, the blow side does appear the better cooling side. However, it is not just about the fan. The geometry of the heat-sink chosen also greatly affects the performance of the fan. A rotary fan slapped on top of your typical linear finned heat-sink will actually be quite inefficient. In fact the region directly under the centre of the fan will get virtually no air movement at all. This of course is unfortunate, since that is normally where the thing you are trying to cool is located. Further, unless the fins are quite deep the airflow is badly distributed in general. Too shallow, and the resultant back-pressure can actually "stall" the fan. In those circumstances, installing the fan in the "suck" direction can actually improve the situation since the air will enter the sides of the heat-sink more linearly to fill the void in air pressure created by the fan. Arguably, the heat-sink shown above might be more efficient with longer fins and the fan mounted at one end. Better designs use radial heat-sinks like the one below. As you can see, the style here is radially symmetric to the airflow on the entire circumference of the fan and consequently delivers a more even heat transfer around the central core. However, even with this style, the core itself is still badly ventilated. As such it is usually manufactured as a solid high thermal conductance core which acts as a heat-pipe. Even then, looking at the image below, the area around the core in the square section that touches the chip actually is an air void that is quite inefficient. A better design would have that area filled with metal in a rounded conical structure. However, that would of course be impossible to extrude. In fact, materials and surface preparations also make a huge difference in heat-sink design. Highly thermally conductive materials are obviously best, but the surface should also be smooth enough not to allow pockets of air to form or to grab at dust particles, but also not so smooth that air passes too easily over it. One could of course spend years getting that little formula perfect, but in general you don't want a high-polish chrome heat-sink. Sandblasted aluminum, or gold-coated sandblasted copper, if you can afford it, would work a lot better. Another serious issue is contamination. Dust and dirt are going to get into your fan and your heat-sink. Over time this builds up and severely degrades the performance of the unit. It is therefore prudent to design your fan and heat-sink arrangement to be as self-flushing as you can. This is where a blower fan usually wins out.
With controlled airflow and if the air coming in can be kept clean, it tends to blow dust out of the heat-sink. Which brings me to the next point. Air Sourcing and Removal You can spend thousands of dollars developing the perfect arrangement of fan and heat sink and it will all be for naught if you do not deal with the rest of the air around your cooling system, especially in a tight enclosure. The heat not only has to be removed from your device to air, but that hot air then needs to be removed from the vicinity. Failing to do so will just recirculate the hot air and thermal failure will still occur on the device you are trying to protect. As such your cabinet needs to be vented and you should also include cabinet fans to draw in cool air from outside the enclosure. These fans should always include removable mesh and/or foam filters to control the amount of ambient dust sucked into the unit. Open grill-type exhaust panels are acceptable; however, for best operation a positive pressure should be maintained within the cabinet so airflow is maintained in the out direction to again limit contamination entry. Special Cases Wherever the unit is to be installed in an extreme environment, special measures need to be taken. High-dust environments like flour mills etc., or high ambient temperature environments, will require either ducted air direct to the chassis, or a sealed unit and a two-stage, possibly liquid, cooling system. Critical Cases If your system is controlling something critical then it is prudent to include thermal sensing and possibly active fan control as part of your heat-sink system. Such systems should include the feature of going into a safe state and warning the user to clean the filters or otherwise reduce the ambient heat around the system when necessary to prevent critical failures. One More Point You can spend half a year's development money getting the best heat-sink design in the world with expensive fans and a perfect air distribution system all locked down, then burn out devices for the lack of 2 cents' worth of thermal compound. Getting the heat from the device you are trying to protect into the heat-sink can often be the weakest point in the system. Components not properly mounted to the heat-sink with an appropriate thermal bonding material kill more units than the rest of the issues combined. Your manufacturing process and procedures should be developed to give those aspects first priority. For example, if, say, you are using three or four TO220-style transistors mounted to a single heat-sink, it is prudent to mechanically mount them to that heat-sink, and if appropriate, the heat-sink to the board, BEFORE going through the soldering process. This ensures the thermal connection takes priority. Either thermally conductive pastes, creams, gels, and/or electrically isolated thermal pads should always be included between device and heat-sink to fill any air gaps caused by non-flatness, or bumps on either the device or the heat-sink surface. And keep it clean. A contaminant the size of a grain of salt, or even a stray hair, can cause thermal failure. | {
"source": [
"https://electronics.stackexchange.com/questions/305605",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/56642/"
]
} |
305,866 | I am using a 7805 IC along with a diode bridge Br805D (bridge rectifier) IC on a general-purpose PCB, with a 220 V / 50 Hz AC input. Both components tend to heat up really quickly, and I plan to set up a heat sink. My question is: Is it correct to use a single heat sink / fan combination for both ICs, or should I use separate heat sinks for each? What are the potential concerns I should know about? | Using a single heatsink is very common - sometimes it can even increase performance (thermal matching of discrete transistors for helping against thermal runaway or better matching). However, there are a few things to watch out for: How big does your heatsink need to be? Using multiple parts on a single heatsink could make thermal calculations a bit more tricky. Are all your parts going to be putting out the maximum heat they can during normal operating conditions at the same time? If not, you don't need as big a heatsink. However, the math on figuring out what size you need can be a bit trickier (a rough worked sketch follows at the end of this answer). What about isolation? When using multiple devices on a single heatsink, you need to take care that you are not shorting out bits of your circuit. Many packages have metal tabs to connect to heatsinks (such as the TO220 package). However, these metal tabs are often also connected to a certain pin internally. Check your part's datasheet to see how it is connected. Take a look at the following schematic: simulate this circuit – Schematic created using CircuitLab If we were to connect these two parts to the same heatsink, without isolation pads, we would be shorting the input transformer! This is clearly not a good thing. Therefore, we often use either separate heatsinks or isolation pads. These are pads commonly made out of thin pieces of ceramic (mica is common) or some other electrically insulating material. Make sure that when you use these pads, you also use the proper screws or protection sleeves for the screws! If you isolate the metal tab but don't isolate the metal screw, you will just short it out through the screw.
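As a rough illustration of the sizing math once two devices share one sink, here is a minimal sketch (Python). Every number in it is an illustrative assumption, not a value from the 7805 or bridge datasheets: the shared sink heats up with the total power, while each junction adds its own device-to-sink temperature drop on top.

```python
# Worst-case sizing of one shared heatsink for two devices.
# All figures below are illustrative assumptions, not datasheet values.
T_amb = 40.0  # ambient temperature, deg C
devices = [
    # (name, dissipation W, theta_jc K/W, theta_cs K/W, Tj_max deg C)
    ("7805",   3.0, 5.0, 0.5, 125.0),
    ("bridge", 1.0, 4.0, 0.5, 150.0),
]
P_total = sum(p for _, p, *_ in devices)

# Each junction sees: Tj_i = T_amb + theta_sa * P_total + (theta_jc_i + theta_cs_i) * P_i
# Solve Tj_i <= Tj_max_i for theta_sa per device, then keep the tightest limit.
limits = [(tj - T_amb - (jc + cs) * p) / P_total
          for _, p, jc, cs, tj in devices]
print(f"required sink rating: <= {min(limits):.1f} K/W")  # ~17.1 K/W here
```
| {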
"source": [
"https://electronics.stackexchange.com/questions/305866",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/77205/"
]
} |
306,569 | Every resistor has a tolerance; this provides the user with an idea of the accuracy of the product. This tolerance is represented by a percentage. This means a big-value resistor will be less accurate than a small resistor with the same tolerance: $$1\:\text{k}\Omega \pm 10\% \in [900\:\Omega, 1100\:\Omega] \rightarrow 100\:\Omega$$ $$100\:\Omega \pm 10\% \in [90\:\Omega, 110\:\Omega] \rightarrow 10\:\Omega$$ The 100 Ω 10% resistor will be closer to 100 Ω than a 1 kΩ 10% resistor will be to 1 kΩ. Why is that? Because high-value resistors are harder to produce than small ones? If not, why is tolerance a percentage and not a fixed amount of Ohms? Why are tolerances relative and not absolute? These questions are also valid for capacitors, but I'm pretty sure the answer will be the same. | I'll try to simplify this for you... Hopefully successfully. Imagine making a resistor just by cutting pieces of a material, let's say a special metallic film. You want your resistor to fit in a usable box, else it's pointless, so you cannot make super long strips or incredibly short ones. So you use films of different thicknesses of the same metal. Now, say that you have a bunch of thicknesses, where each thickness is ten times less resistive than the one that's one step thinner. And they all have to be 10 mm long to fit your box, so you can only cut away from a standard strip width, let's say 5 mm. If you want to make 10 Mohm, you take the thinnest one, and you have to remove half of its width. So you have to remove 2.5 mm. If the material works linearly, which we'll assume for the ease of it, that means you "cut away" 10 Mohm in 2.5 mm. To remove 10 Ohm more or less, that would mean cutting with an accuracy of (brackets for clarity of order, not because they are needed): (10 / 10000000) * 2.5mm = 2.5nm. 2.5 nm is smaller than what we can do in silicon chip technology.
Written in meters that is 0.0000000025 m, where, for the uninitiated, one meter is close to one yard, or about the size of a long stride of an adult human. If you wanted to get the same 10 Ohm error on a 100 Ohm resistor, you'd take the foil that's five steps up, which, if it's still linear, would get you about 50 Ohm (two 100 Ohm strips in parallel), so you'd have to cut off 2.5 mm again. But this time, you only need to cut with an accuracy of: (10 / 100) * 2.5mm = 0.25mm. That's something a practised person could do with a pair of scissors. See the difference in difficulty there? Scissors versus can't-even-do-it-in-microchips? And that's when your resistor's box is allowed to be 10 mm x 5 mm, which is around 10 times the size of the most commonly used types these days. Now, obviously resistors aren't made in an elf workshop full of reels of metal film... anymore... We've gotten much better at making many different thicknesses of different materials, so it's gotten better. But it does illustrate the point: even if you used laser trimming on everything, trimming to one part per million (which is 10 Ohm on 10 Mohm) is going to be a very difficult process to keep consistent, and even then it will still create a lot of parts that are over- or under-trimmed. By accepting that any process in engineering is governed by statistics and percentages, as well as rules of average, we can very easily cope with resistors that are 10%, 1% or 0.1% accurate, so there is no need to do better for most cases. Only when you need a very accurate reference, which is uncommon if your name isn't Fluke, Keysight, Keithley or any of those others, will you want someone to give you a resistor that's better than 0.001%, and those are usually large ceramic plates with very accurately applied layers of resistive material, which then get cut to a very accurate recipe and will cost ridiculous amounts of money, even now. Though the 0.01% ones are finally getting close to affordable.
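The same arithmetic, generalized into a small sketch (Python). This keeps the idealized linear-strip assumption used above, in which a full 2.5 mm cut changes the resistance by roughly the finished value:

```python
# Cut accuracy needed to hit a resistor value within delta_r, in the
# idealized linear-strip model above (a full 2.5 mm cut trims ~r_final ohms).
def cut_accuracy_mm(delta_r, r_final, full_cut_mm=2.5):
    return (delta_r / r_final) * full_cut_mm

print(cut_accuracy_mm(10, 10e6))  # 2.5e-06 mm = 2.5 nm (hopeless)
print(cut_accuracy_mm(10, 100))   # 0.25 mm (scissors territory)
```
| {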
"source": [
"https://electronics.stackexchange.com/questions/306569",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/116604/"
]
} |
306,772 | I was looking at this schematic for an old (pre-USB) mouse when I noticed it had a crystal on it (Y1). I was curious: why would a mouse need a timer? Wouldn't it be able to use the clock from the computer? Also, if it has its own crystal, couldn't it get out of sync with the CPU clock? | That MOS 5717 thing is most likely a microcontroller or some part that executes code. It needs a clock to run. However, just a clock for a micro doesn't need crystal accuracy. That is probably for communication. USB requires a fairly high-accuracy clock. A mouse doesn't need to track real time, and there is no reason for it to be in sync with the CPU clock. Its USB clock must only be close enough to the host's USB clock for communication to work. Timing of how fast mouse events are occurring, or the time between mouse events, is handled in the host. The mouse just sends info about what it senses happening. Added The above was written in response to the original question, which made no mention of this mouse not being USB. Since pretty much all new mice have been USB for a decade or more, it was reasonable to answer in that context. When you ask about something unusual, it's your responsibility to make that clear. Despite not having USB, this mouse still had a processor that needed to be clocked. It also apparently used timing to measure the positions of pots connected to a joystick, something else the OP failed to mention. It seems now that a comment by supercat is most relevant, so I am copying it into the answer: The Commodore 64 has potentiometer inputs that measure the time required to charge fixed capacitors through variable resistances. Software expects that a mouse will read as a resistance value in the range 0-255, and that it will wrap cleanly (254, 255, 0, 1, etc.), which means the mouse has to accurately time its output pulses to within less than 0.4%
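For reference on that last figure: one count out of 256 is 1/256 ≈ 0.39%, which is presumably where the "less than 0.4%" requirement comes from.
| {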
"source": [
"https://electronics.stackexchange.com/questions/306772",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/146255/"
]
} |
306,862 | I am about to design my first PCB as part of my graduation project.
Of course, as the first step, I try to learn as much as possible. As part of the research I found this 3-part article, which suggests that it is not necessary, and in some cases is even harmful, to split the ground plane into analog and digital parts; this contradicts what I had learned from the prof. I also read all threads on this site that are concerned with ground planes/pours. Although the majority concur with the article, there are still some opinions that advocate a split ground plane, e.g. https://electronics.stackexchange.com/a/18255/123162 https://electronics.stackexchange.com/a/103694/123162 As a PCB design novice, I find it confusing and hard to decide who is right and which approach to take. So, should I divide the ground plane into analog and digital parts? I mean a physical division, either with a PCB cut or with separate polygons for DGND and AGND (either not connected, or connected at one point). Perhaps to enable you to make a recommendation that is tailored to my prospective PCB, I'll tell you about it. The PCB will be designed in the free version of Eagle => 2 layers. The PCB is for testing and precise measurement (current & voltage) of lithium batteries. The board is to be controlled from a Raspberry Pi over a digital interface (GPIO/SPI (40 kHz)). There will be 3 data converters on board (AD5684R, MAX5318, AD7175-2), and connectors for a prebuilt RTC module on the digital side. Analog power comes from an external regulated power supply via an onboard LT3042 voltage regulator (5.49 V). Additionally there is an LT6655B 5 V voltage reference. The digital 3.3 V (mainly for powering the digital interfaces) will be sourced from the Raspberry Pi. Thus, there will be 2 ground connections: the external power supply and the digital interface of the Raspberry Pi. In this connection, another question: referring to Figure 3, how do I make sure that return currents from the digital interfaces flow to the right ground connection (remember I have 2 of them)? Additional concern: could the power distribution circuit disturb sensitive measurements? I was going to separate them by routing power on the bottom layer, but that is no longer a good idea in the case of a monolithic ground plane. And while I am still at asking: assuming a more or less monolithic ground plane on the bottom and a signal/component layer on top, what is the best way to connect the negative side of bypass capacitors to the ground plane? | You have got to think in terms of shared impedance (not resistance, really impedance). Consider the parts of the circuit that use GND as a 0V reference for sensitive analog purposes. Obviously you want each of these "0V references" to be at the same "0V" potential. However, current running through the GND plane will introduce an extra error voltage on top of each chip's "0V". Now draw a schematic of your GND, with the currents running through it. If you do not split the plane, but you have high currents running through it, because you put the power input connector on the left side, the power output connector on the right side, and the super sensitive analog bits in the middle, then you might have a problem due to high current flowing in GND and creating a voltage gradient. Depending on frequency, consider impedance (i.e., inductance, not just resistance). Now, there are several solutions to this.
You could put your power connectors in more reasonable places (i.e., power input next to power output) so the high currents do not travel in your GND plane. This applies to all current loops which carry large, noisy, or high di/dt currents, like the internal loops of a DC-DC, or the loops between it and its load (say, a CPU), or even the ground path between a decoupling cap and the chip it decouples. Make sure you know where these loops are! Order them by troublesomeness (roughly "area * di/dt" for AC or "area * I" for DC; a toy ranking sketch appears at the end of this answer). Placement is essential. A good placement with tight current loops makes layout much less of a headache. You could use differential amplifiers and ADCs which ignore common-mode noise. This is mandatory if the voltage to sense sits on a high-side current shunt. Now let's say you use a current sense amp, for example. Don't forget that whatever voltage is on its "output reference" pin (often mislabeled "GND") is directly added to the output... so don't stick the sense amp between two MOSFETs with its "GND" pin in the middle of the "motor current return" path... You could also split the plane, but then you need to decide where you are going to split it. And (this is where things get nasty) where you link your two grounds together at DC (or at high frequencies if you use isolators...). Let's name your two grounds AGND and PGND (analog and power). Some say to split, and join AGND/PGND or AGND/DGND under the ADC. This means any current that runs between AGND and PGND has to flow in the ground link under the ADC now, which is the worst possible place. A solution that makes lots of sense is the "hidden split". Placement is essential. For example, you put the power/noisy stuff on the right, and the sensitive stuff on the left. You place your decoupling caps so the supply current loops running through GND are short and well placed. Then, since your board has two well-defined zones, you can narrow down the width of the ground plane connecting them, to ensure high currents do not run in the sensitive bits' ground. It's very visual and difficult to explain, and placing your connectors properly is essential. These tutorials are good: https://learnemc.com/emc-tutorials
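As a toy illustration of the "order them by troublesomeness" triage above, here is a minimal sketch (Python). The loop names, areas and di/dt figures are invented examples, not measurements:

```python
# Rank current loops by the rough troublesomeness score: area * di/dt.
# All figures are made up for illustration (areas in cm^2, di/dt in A/us).
loops = [
    ("DC-DC hot loop",      0.5, 100.0),
    ("CPU supply loop",     2.0,  10.0),
    ("decoupling cap loop", 0.2,  50.0),
]
for name, area, didt in sorted(loops, key=lambda l: l[1] * l[2], reverse=True):
    print(f"{name:<22} score = {area * didt:g}")
```
| {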
"source": [
"https://electronics.stackexchange.com/questions/306862",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/123162/"
]
} |
307,007 | I noticed that on all the evaluation boards I have had up to this point in time, the LEDs were all connected active-low to the microcontroller port.
I understand that from a safety view it's better to have active-low RESET lines and such. But why LEDs? | It's still the case that MCU I/O pins often have weaker drive sourcing current than sinking current. In a typical CMOS MCU output, when they drive LOW, they turn on an N-channel MOSFET; and when they drive HIGH they turn on a P-channel MOSFET. (They never turn both of them on at the same time!) Because of the differences in mobility that apply for N-channel vs P-channel (about a factor of 2 to 3 difference), it takes extra effort to make the P-channel device exhibit similar "quality" as a switch. Some go to that extra effort. Some do not. If not, the ability to sink (N-channel) or source (P-channel) current will be different. Some of them are almost symmetrical, in that they can source almost as much as they can sink. (Which just means they are about as good a switch to ground as they are a switch to the power supply rail.) But even when extra trouble is attempted, there are other issues that make it unlikely the two devices will be fully similar, and it is usually the case that the sourcing side is still at least somewhat weaker. But in the final analysis, it's always a good idea to go look at the datasheet itself to see. Here's an example from the PIC12F519 (one of the cheapest parts from Microchip that still includes some internal, writable non-volatile storage for data.) This chart shows the LOW output voltage (vertical axis) vs the LOW sinking current (horizontal axis), when the CPU is using \$V_{CC}=3\:\textrm{V}\$ : This chart shows the HIGH output voltage (vertical axis) vs the HIGH sourcing current (horizontal axis), also when the CPU is using \$V_{CC}=3\:\textrm{V}\$ : You can easily see that they don't even bother trying to show the same sinking vs sourcing current capabilities. To read them, pick a current that is of similar magnitude on both charts (very difficult, isn't it?) Let's select \$5\:\textrm{mA}\$ on the first chart and \$4\:\textrm{mA}\$ on the second one. (About as close as we can get.) You can see that the PIC12F519 will typically drop about \$230\:\textrm{mV}\$ on the first one, suggesting an internal resistance of about \$R_{LOW}=\frac{230\:\textrm{mV}}{5\:\textrm{mA}}\approx 46\:\Omega\$ . Similarly, you can see that the PIC12F519 will typically drop about \$600\:\textrm{mV}\$ on the second chart, suggesting an internal resistance of about \$R_{HIGH}=\frac{600\:\textrm{mV}}{4\:\textrm{mA}}\approx 150\:\Omega\$ . Not very similar. (NOTE: I've extracted data from the curves for \$25^\circ\textrm{C}\$ .) So if you were designing this particular MCU into a circuit where you wanted to directly drive a \$2\:\textrm{V}\$ LED at about \$10\:\textrm{mA}\$ , which way would you wire it? It's clear that you'd have to consider LOW as ON, since that is the only way that the datasheet says you might be successful, at all, without the need for an external transistor to boost the current compliance of the output. [You may also take note that the above calculations at nearby sinking vs sourcing currents appear to show two resistance values that are approximately a factor of three from each other (about \$50\:\Omega\$ vs \$150\:\Omega\$ .) This is probably not coincidental to the differences in mobility that I mentioned at the outset, between P-channel and N-channel MOSFETs.]
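To make that trade-off concrete with the figures above (treating the extracted \$\approx 46\:\Omega\$ and \$\approx 150\:\Omega\$ as rough effective output resistances): with a \$3\:\textrm{V}\$ supply and a \$2\:\textrm{V}\$ LED there is \$1\:\textrm{V}\$ of headroom, which at \$10\:\textrm{mA}\$ allows about \$100\:\Omega\$ of total resistance in the path. Sinking, the pin's \$\approx 46\:\Omega\$ leaves room for an external resistor of roughly \$54\:\Omega\$. Sourcing, the pin's \$\approx 150\:\Omega\$ alone already exceeds that budget, limiting the current to roughly \$6.7\:\textrm{mA}\$ even with no external resistor at all.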
Historical Note The TTL series of ICs that became more widely available in the early 1970's were built upon NPN BJTs and were only able to sink significant currents. They could only source relatively small currents. Here's an example table from TTL: Note that \$I_\text{OL}=16\:\text{mA}\$ and is sufficient to drive an LED, but that \$I_\text{OH}=-400\:\mu\text{A}\$ and isn't sufficient (in most cases.) In those days, there wasn't much of an alternative. As a result, you'd often find TTL-family ICs using inverted outputs preferred for earlier release than equivalent packages with non-inverting outputs. A classic example I remember very well, because I was wire-wrapping my own 7400 computer back in 1974, is the 7489. This is a 64-bit 16x4 RAM with complementary, open-collector outputs, released a year beforehand in about 1973. It complemented the stored data on output. I used this fact to display the data directly, using LEDs, while still being capable of driving other logic depending on the RAM data output values. I'm pretty sure that I'm not the only one who enjoyed that fact, and I suspect it informed the design choice made for this early RAM part. At the time, we'd all been making extensive use of open-collector outputs with pull-up resistors. These provide a kind of "poor-man's" version of tri-stating, and it was about the only convenient way to support a bus with multiple outputs riding on it. The better and less power-hungry tri-stating outputs eventually came onto the scene to support multiple "talkers" on a bus. They were very attractive when they arrived, and I'd started playing with them. But I never actually did create a large project using them. By the time they appeared, I was onto other things. So it was much later that we saw the 74189. This is also a 64-bit 16x4 RAM with complementary output, but instead of open-collector it provided tri-state-capable output. Here, they chose to match the ID number with the prior 7489 and so the output was also the complemented version of the stored value. But no longer open-collector. I suppose this was billed as a "replacement" in cases where there was a transition from open-collector projects towards tri-state and where the remaining logic still expected to see complementary outputs and the project owner(s) wanted to focus only on the bus method and didn't want to add inverters to the design. The 74219 capped this off with yet another version of a 64-bit 16x4 RAM, also with tri-state-capable outputs, but now with the uncomplemented output. This was there for new designs, which didn't have to keep with the prior complementary output style from prior years. | {
"source": [
"https://electronics.stackexchange.com/questions/307007",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/150005/"
]
} |
307,487 | I have a P1004B746400 B7464 REV.A board, which is a 72-pin SIMM RAM with two TI TMS418169DZ chips on it. According to the 72-pin SIMM configuration, pins #1, #39 and #72 are ground (Vss) and pin #10 is supply (Vcc). However, those ground and supply pins are connected nowhere; I mean, there is no trace on the board connecting to those pins. Now, I wonder what I'm missing and how the ground and supply pins work. Pins #1 and #72 are shown below, with no trace connecting to them. | The board is a multi-layer stackup, probably 4-layer. This means that there are more layers inside the PCB on which other connections are routed. You can tell this from the seemingly disappearing routing, but also from the colour of the board. Notice how it is light around the edges (where light can shine through), but then suddenly gets dark. The dark region is where there is more copper inside the board. The internal layers in this case are most likely just power planes - one for Vcc and one for Vss. All connections for power and ground will connect to one or other of the planes, providing nice low-impedance power routing. | {
"source": [
"https://electronics.stackexchange.com/questions/307487",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/107890/"
]
} |
307,728 | I was ordering some resistors online, and I saw that 0 Ω resistors have a power rating. Why is that? Power through a resistor is calculated with the equation \$P = UI\$ or \$P = RI^2\$. Since \$R = 0\ Ω\$, \$P=0\ W\$. According to this post ( How to calculate Power Rating for Zero Ohm Resistors? ), a 0 Ω resistor has no power rating... But Farnell tells me the opposite: | While it may be true that distributors don't want to check every single part individually, in this case it is not down to laziness that the 0Ω resistor has a specified rated power of 125mW. As pointed out by @BumsikKim's answer, the datasheet for the series does in fact specify this rating - the distributor product page is correctly representing the manufacturer's specifications. From Page 5, we have the following table entry: Notice how for the entire RC0805 size series, there is a specified rating of 0.125W (1/8W). This includes the 0Ω resistors in that series. There is also, however, crucially another specification - Jumper Criteria . This column specifies the rated current for an 0805 jumper (i.e. 0Ω resistor). We can see from the table that your jumper is rated for 2A, with an absolute maximum of 5A (presumably a short pulse). So why might a "zero ohm" resistor have such ratings? Simple: it's not a 0Ω resistor. Unless the manufacturer of the resistor you are using has secretly made a room-temperature superconductor, the jumper is actually still a resistor, just a very small one. According to the datasheet it is specified to be ~50mΩ or less. Because the resistance is non-zero, some power will be dissipated. If we plug in the provided numbers, we actually find that the power rating is real and sensible: $$P = I^2R = 2^2\times0.05=0.2W$$ So at the worst-case resistance of 50mΩ, and at the rated current of 2A, it will be dissipating more than the 125mW rating. Still think the rating is silly? In a power supply design I had the pleasure of surge testing, the designer had added an 0805 0Ω resistor in series with a 24V DC input, just prior to a TVS diode. During the test, we charged a 10mF capacitor up to 200V and then connected the capacitor to the input of the power supply. Naturally the TVS started conducting, and the 0Ω resistor turned quite literally into a firework...
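As a quick cross-check of how the two ratings relate (again assuming the worst-case 50 mΩ): the 125 mW power rating corresponds to a current of $$I = \sqrt{P/R} = \sqrt{0.125/0.05} \approx 1.6\,\text{A}$$ which sits just below the 2 A jumper rating. For a "zero ohm" part, the rated current, not the nominal power, is the figure to design to.
| {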
"source": [
"https://electronics.stackexchange.com/questions/307728",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/116604/"
]
} |
308,073 | I was wondering what set the rotational speed in a cassette recorder? I assume that the speed must be constant, but that means that the tape data would have different effective densities depending on where you were on the tape. Was it a stepper motor and a timer to create the rotations, or was something more interesting used? | The hubs do not rotate at constant speed in normal tape recorders - there is what is called a capstan that is kept in contact with the tape by a rubber pressure wheel. The capstan rotates at constant speed and so drives the tape at constant speed. The take-up reel is usually driven through a friction wheel so that it keeps reasonably constant tension on the tape, but the hub speed can vary as needed. Some tape recorders, such as ones for voice recording, often do drive the tape by rotating the take-up reel at a constant speed - in that case the recording density will change depending upon how much tape is on the reel. The variations in quality for voice are usually acceptable. The standard for cassettes is that the magnitude of the magnetic flux on the tape represents the instantaneous value of the signal being recorded. There are a couple of techniques used to improve the quality of the signal played back: 1) A high-frequency AC bias is added to the signal being recorded to avoid the non-linearity inherent in the flux recorded vs the current in the recording head; without this there would be a non-linear response around zero. A cheaper alternative where low quality can be tolerated is to use DC bias, which will give a lower signal-to-noise ratio on playback with more hiss. 2) Another technique is to use equalization, where high frequencies are boosted in recording and attenuated on playback to improve the overall signal-to-noise ratio (i.e. reduce hiss). There are industry standards (the IEC time constants, in the case of cassettes) so that tapes are interchangeable between different machines. Some also use noise reduction techniques such as Dolby Noise Reduction. The motors used for driving the tape were usually brushed DC motors, with either a centrifugal governor (a small weight on a spring that opens contacts at a defined speed to slow the motor) or a circuit using the back-EMF of the motor to sense and control the speed. (See another question I answered: DC Motor speed control .) Stepper motors were never used in conventional cassette recorders. They tend to be inefficient (i.e. consume more battery power), have unsteady speed (e.g. cogging), and are not so easy to drive as DC brushed motors. | {
"source": [
"https://electronics.stackexchange.com/questions/308073",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/96401/"
]
} |
308,193 | I need to find R(AB). It is the resistance across terminals A and B.
I really do not know how to solve this kind of circuit. Maybe I have to use a triangle-to-star (delta-wye) transformation? But it is quite inconvenient.
I noticed that on the right side the 1 kOhm and 2 kOhm resistors are in series. We add them and get 3 kOhm. But what's the next step? | I find that for some students, angled components are visually confusing. Try redrawing the same schematic with only horizontal and vertical components to see if this helps you to analyze it. For example, if you start on the right and straighten out the 6k resistor, you will notice it is in parallel with the series combination of 1k and 2k. So you have 3k in parallel with 6k for a total of 2k. Now move to the left and you see that this 2k is in series with the 10k. Keep up this procedure to reduce the entire lattice to a single value.
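A minimal sketch of that same reduction as arithmetic (Python; only the steps named above are shown, and the rest of the lattice reduces the same way):

```python
# Series/parallel reduction of the right-hand side, as described above.
def parallel(*rs):
    return 1 / sum(1 / r for r in rs)

right = parallel(6e3, 1e3 + 2e3)  # 6k || (1k + 2k) -> 2k
step  = 10e3 + right              # that 2k in series with the 10k -> 12k
print(right, step)                # 2000.0 12000.0
```
| {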
"source": [
"https://electronics.stackexchange.com/questions/308193",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/148873/"
]
} |
308,215 | I was thrown off by a certain website while learning about an op-amp Schmitt trigger design. The layout was the same inverting config as the above schematic, but it gave a confusing answer for calculating the high threshold.
Vout = 5 V high
Vout = 0 V low
Vref = 5 V For the Vout low hysteresis the calculation was: (R2||R3)/(R1+R2||R3)*Vref For the Vout high hysteresis it was (R3)/(R3+R1||R2)*Vref Vout low made sense, but if Vout high was 5 V I would have thought it would be (R2)/(R2+R3||R1)*Vref. When I checked an online calculator, apparently I was correct. Can anyone set the record straight? Thanks. | I find that for some students, angled components are visually confusing. Try redrawing the same schematic with only horizontal and vertical components to see if this helps you to analyze it. For example, if you start on the right and straighten out the 6k resistor, you will notice it is in parallel with the series combination of 1k and 2k. So you have 3k in parallel with 6k for a total of 2k. Now move to the left and you see that this 2k is in series with the 10k. Keep up this procedure to reduce the entire lattice to a single value. | {
"https://electronics.stackexchange.com/questions/308215",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/123349/"
]
} |
308,218 | I have two questions. The first paragraph is background; the second contains the questions. Background: I have a photodiode amplifier (low-capacitance APD + fast transimpedance amplifier) that has a nice response with small-amplitude light pulses. It has good phase margin and there is no overshoot. When I increase the amplitude of the incoming light pulse, the amplifier output will get close to or hit the rail, and I get overshoot and ringing. Questions: How do I analyze amplifier phase margin with a large-signal response that may saturate the amplifier? How do overdrive recovery circuits work? I can't find any reference to overdrive recovery in my Gray & Meyer book. Where can I learn more about overdrive recovery? Thanks | I find that for some students, angled components are visually confusing. Try redrawing the same schematic with only horizontal and vertical components to see if this helps you to analyze it. For example, if you start on the right and straighten out the 6k resistor, you will notice it is in parallel with the series combination of 1k and 2k. So you have 3k in parallel with 6k for a total of 2k. Now move to the left and you see that this 2k is in series with the 10k. Keep up this procedure to reduce the entire lattice to a single value. | {
"source": [
"https://electronics.stackexchange.com/questions/308218",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/107412/"
]
} |
308,458 | My understanding is that ESD safety things (mats, wrist straps, specially marked soldering irons) are designed to bring everything that can touch a component to the same electrical potential – ground. But it seems unreasonable to expect that there's no voltage between my desk and the factory where my components were produced. After all, the factory is likely halfway across the world, and the resistance between here and there is significant. So, say a component is carefully packaged and shipped to me in one of those little ESD-safe bags. Before opening the bag, I carefully ground myself and my workstation. Despite this, the component is destroyed as soon as I touch it, because the ground that I tied myself to is much different from the ground that the component was tied to when it was produced. What precautions are taken against this? Is it just something that can happen in theory but that isn't an issue in practice? | Components are damaged by two or more of their pins being at a large enough potential difference. If the component has a conductive case, or pad, then that counts as a 'pin' too. It's possible to break them by trying to charge them up to a new potential through one sensitive pin, while the voltage of the other pins is held more or less constant through capacitance to ground. That can be the situation where you, perhaps charged to 15kV with respect to ground, pick up a component that's at ground potential by (say) the gate lead. Conductive packaging shorts all the pins together. What you do is bring the conductive bag to your potential first. Any charging current that has to flow into the component does so through all pins, so it does not damage the component. Let's say an insulated carton of components in conductive bags charged to 100kV arrives at your workstation. You and the workstation are grounded. You open the carton, and as soon as you touch a component bag, a current flows between you and the bag to discharge it down to ground potential. Meanwhile, the bag has maintained all the component pins at the same potential, so no damaging voltage is applied across the component. Now that you and the component are at the same potential, you can open the bag and touch it. Why did the component arrive at 100kV? Surely the other factory ground is not that different to yours? No, but the last bit of the trip might have been carried by a guy with nylon shoes. When stuff is properly packed, it doesn't matter if intermediate stages of the journey take it to a potential way different from ground. | {
"source": [
"https://electronics.stackexchange.com/questions/308458",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/68402/"
]
} |
308,735 | This is a Sony EVO-9500A (old 8mm tape player/recorder). What is the cartridge-fuse-style component? It is 1/4" by 1 1/8", about the same size as a 3AG fuse. It has a scale on the front going from 0 to 10 and looks like a thermometer. The only markings on it are on the back: "FC". The PCB has its slot labeled "FC901" (as can be seen in the last picture). This thing reads about 3.2k ohms resistance (readings fluctuate quite a bit, from 10k on the high end down to 2k ohms). I'm not sure if those readings can even help, because I don't know if this component even works. (The tape player is not sending video or audio out, which is what started all this.) So what is that thing, and what does it do? | It is an electrochemical hour meter: historically, a thin tube filled with mercury with a drop of an electrolyte; current flowing causes ions of mercury to be transported across the electrolyte, moving the bubble of electrolyte along the scale. It almost certainly still works just fine, but will gradually move back the other way if fitted in reverse.... You don't see them much anymore, counters in non-volatile RAM being cheaper and more convenient. | {
"source": [
"https://electronics.stackexchange.com/questions/308735",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/94416/"
]
} |
308,736 | Assume our signal source has a white background noise. The output of this source is connected to our system. We know that increasing the input bandwidth of the system leads to more noise flowing into the system and hence higher noise power. Is there any way to somehow decouple the input bandwidth of the system from the noise power fed into the system, so that the power of the noise would be independent of the input bandwidth of the system seen by the source? | It is an electrochemical hour meter: historically, a thin tube filled with mercury with a drop of an electrolyte; current flowing causes ions of mercury to be transported across the electrolyte, moving the bubble of electrolyte along the scale. It almost certainly still works just fine, but will gradually move back the other way if fitted in reverse.... You don't see them much anymore, counters in non-volatile RAM being cheaper and more convenient. | {
"source": [
"https://electronics.stackexchange.com/questions/308736",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/150932/"
]
} |
308,878 | I was looking at the schematic of an old HP power supply I bought. It can be found here, around page 60-61. This schematic was drawn long before CAD was a tool used by engineers. Things were still drawn by hand. I was wondering how drawing large schematics took place. Today, we are used to our fancy EDA tools that will have a lot of nice features to make good schematics. Alternatively, schematics for documentation are sometimes drawn in vector graphics programs like Inkscape or Illustrator, because they can give neater results. In our CAD packages we have nice auto-annotation, it builds us a nice BoM if we set it up right, and often they even allow us to extract SPICE netlists for simulation, and a whole heap of other information about design and electrical rules. If we discover that moving this component here makes our schematic more clear, we just drag-and-drop - no need to redraw the entire thing! The old schematics I see always have nice consistent symbols - not what you would expect from hand-drawn schematics. Did they use stencils to always have exactly the same transistor, resistor, capacitor, etc. symbol? Or were those symbols defined in their dimensions and just drawn and measured out every time again? Did they maybe have little pieces of paper with each symbol drawn out on it, and then just move those around to make the schematic without having to start from scratch every time (I'm thinking about those old IKEA desk-papers where you got little cutouts of their desks and you could lay them out to try out layouts in your office)? I'm aware this is somewhat of an open-ended question, but I'm curious how things were before we had OrCAD, Virtuoso, KiCAD and Altium. | Historical Context I was trained at Tektronix to be an electronics draftsman. Tektronix provided classes for anyone interested. It's quite similar to drafting for construction. You had the usual pencils, sharpeners, specialized erasers and paper, a tilted table, T-square, triangle, etc. The same basic tools of the trade for any draftsman. There were some additional tools added, such as some nice stencils for electronics components and descriptive picture items (like an oscilloscope tube -- see here for some idea of those.) But that's about all we had to work with, then. I'd been an electronics hobbyist of some kind since about age 10, or so. Like most, I struggled to understand circuits I saw in Popular Electronics and Radio Electronics magazines. They were actually pretty hard to understand, at least as presented, because they were made for people who wanted to wire them up. Not so much for people who wanted to learn more and to understand them better. These wiring schematics would bus around all the power wiring details, most of which (I found over time) don't really help in understanding how a circuit works. So, as a hobbyist, I gradually tumbled to the idea of redrawing schematics so that I could better understand them. I'd literally tear down a circuit layout to its bare parts (almost) and then rebuild it back up, after I'd arranged the parts better (in my mind.) I joined Tektronix as a software developer in 1979. I'd been working on operating systems -- such as the Unix v6 kernel in 1978 -- and software generally for large computing systems since 1972, and MCUs since 1975. But I also had a personal interest in understanding and using the products that Tektronix made. And when I joined Tektronix, I already had good experience in redrawing schematics for my own understanding.
I used the word joining, above. I meant that. Joining is exactly how it felt to be a Tektronix employee back then. Your boss encouraged your personal interests, if there was any way it could be of mutual reward. They would pay you to continue your education at universities in the area, for example. And they offered high-quality classes, themselves, too. You were provided with profit share. And if your position was no longer required, they'd encourage you to go around to various departments and see if there was another job elsewhere. They'd pay you your salary while you met people and sought some other position. (I was told there was almost no limit to this, though I'm sure someone would intervene if you took too long to find work elsewhere.) Employees paid that back, after a fashion. If I decided to go to the office and work on a Sunday, for example, I'd often find many other employees also in the building and working diligently on some project needing extra effort to meet a schedule. Rarely did I walk into a building on Sunday and have it feel empty. There was almost always something going on and plenty of employees willing to provide their weekend or night time to Tektronix when needed. Since I'd been a hobbyist for some time before joining Tektronix, I was of course also actively encouraged by my boss to take these classes when they became available. Learning to Draw Schematics In my first class, the instructor pointed out two simple organizing concepts. So simple, in fact, that I was immediately able to recognize their value despite the fact that I'd never been exposed to them beforehand. Just these two: The idea of electron flow from bottom to top on the page. Or, more correctly, the idea of conventional current flow from top to bottom. The idea of signal flow going from left (inputs) to right (outputs). With these, one could take any random schematic they saw, tear it completely down to the ground, and re-draw it from scratch so that it obeyed these rules. The result was something almost magical. A schematic which communicated concepts quickly to other electronics engineers (and us hobbyists, too!) The instructor also pointed out something I'd already learned on my own: Don't bus power around. That's important for understanding. No signal flows on those wires. So drawing wires all around a schematic, wires without any signal on them, just gets in the way and distracts you from actually understanding what you are looking at. It's lots better to get rid of those wires and just annotate the voltage, instead. The part of all this that takes a little patience (and it really is a continuing thing for one's entire life, to be honest) is learning to recognize sections that are common to many schematics. Such things as: current mirrors, voltage references, analog amplifier stages, etc. This is something you cannot just be told about. Instead, we must see them, learn about them, grow to understand more of them, and then finally acquire them. And this just takes time. There's no magic bullet or pill to take here. How did people calculate sine and cosine or logarithms or even multiply big numbers before there were calculators? They used books with tables inside, along with the training to use those tables properly. Or they used slide rules. Life gets done. The tools change. But life still gets done. Rules for Re-Drawing Schematics One of the better ways to try and understand a circuit, one that at first
appears to be confusing, is to just redraw it. This simple practice is
more important than it may at first seem to be. But I recommend early and
continual practice at redrawing circuits. It's an essential skill and it
takes regular practice to yield some of its greater powers. Below are some rules you can
follow that will help get a leg-up on learning that process. But there
are also some added personal skills that gradually develop over time,
too. As mentioned at the outset above,
I first learned these rules in 1980, taking a Tektronix class that was
offered only to its employees. This class was meant to teach
electronics drafting to people who were not electronics engineers, but
instead would be trained sufficiently to help draft schematics for
their manuals. The nice thing about the following rules is that you don't have to be an expert
to follow them. And if you follow them, even almost blindly,
the resulting schematics really are easier to figure out. The rules are: Arrange the schematic so that conventional current appears to flow from the top towards the bottom of the schematic sheet. I like to
imagine this as a kind of curtain (if you prefer a more static
concept) or waterfall (if you prefer a more dynamic concept) of
charges moving from the top edge down to the bottom edge. This is a
kind of flow of energy that doesn't do any useful work by itself, but
provides the environment for useful work to get done. Arrange the schematic so that signals of interest flow from the left side of the schematic to the right side. Inputs will then
generally be on the left, outputs generally will be on the right. Do not "bus" power around. In short, if a lead of a component goes to ground or some other voltage rail, do not use a wire to connect it
to other component leads that also go to the same rail/ground.
Instead, simply show a node name like "Vcc" and stop. Busing power
around on a schematic is almost guaranteed to make the schematic less
understandable, not more. (There are times when professionals need to
communicate something unique about a voltage rail bus to other
professionals. So there are exceptions at times to this rule. But when
trying to understand a confusing schematic, the situation isn't that
one and such an argument "by professionals, to professionals" still
fails here. So just don't do it.) This one takes a moment to grasp
fully. There is a strong tendency to want to show all of the wires
that are involved in soldering up a circuit. Resist that tendency. The
idea here is that wires needed to make a circuit can be distracting.
And while they may be needed to make the circuit work, they do NOT
help you understand the circuit. In fact, they do the exact opposite.
So remove such wires and just show connections to the rails and stop. Try to organize the schematic around cohesion. It is almost always possible to "tease apart" a schematic so that there are knots of components that are tightly connected, each to another, separated then by only a few wires going to other knots. If you
can find these, emphasize them by isolating the knots and focusing
on drawing each one in some meaningful way, first. Don't even think
about the whole schematic. Just focus on getting each cohesive section
"looking right" by itself. Then add in the spare wiring or few
components separating these "natural divisions" in the schematic. This
will often tend to almost magically find distinct functions that are
easier to understand, which then "communicate" with each other via
relatively easier to understand connections between them. Not Entirely Improbable Example Here's an example of a less readable CE amplifier stage. It's a little more of a wiring diagram than a schematic. See if you can manage to recognize that this is a relatively standard, bootstrapped single BJT stage, CE amplifier: simulate this circuit – Schematic created using CircuitLab Here's a more readable example of the same circuit. Here, despite being a bootstrapped design (which is seen a little less often), you can recognize the basic CE topology and begin to pick out the similarities and differences better: simulate this circuit Note that I've rid it of the power supply and ground bus wires. Instead, I've simply noted that certain end-points are attached to one or the other of the power supply (+) rail or ground. For someone wiring this up, it isn't as helpful because they might miss a connection they need. But for someone trying to understand the circuit, those connection-details just get in the way. Also note that I've carefully arranged the new circuit so that conventional current flows from the top of the schematic downwards towards the bottom of it. The general idea is to imagine this as a kind of "curtain" of electron flow (bottom to top) or positive charges from top to bottom (conventional.) Either way, it's like a force of gravity that causes the curtain to hang from top to bottom. Flowing through this curtain of top to bottom currents, the signal passes from left to right. This is also very helpful for others trying to understand a circuit. Combined, these details help orient a reader. Also, if you imagine that \$C_1\$ and \$C_2\$ are absent from the schematic (left open) and that \$R_6\$ is bypassed (shorted), then this is a very familiar single BJT CE stage found almost everywhere. So this provides some additional guidance or orientation for understanding the circuit. It allows you now to realize that \$C_1\$ acts as an AC-bypass across \$R_4\$ so that the AC gain can be independently set, separately from the DC operating point of the amplifier stage. The only remaining details are to work out what \$C_2\$ and \$R_6\$ are achieving (bootstrapping.) The original layout above (the confusing one) would greatly hinder the ability to zero in on the bootstrapping aspect (which may or may not already be familiar.) But at least this means there is very much less to focus on and try and understand, if unfamiliar. (The first schematic would make all of this almost entirely hopeless from the start.) This may not be the best example, but at least it shows some of why it helps to avoid wires that simply bus power around and why it's important to arrange the schematic with a specific flow of conventional current from top to bottom and for signal to flow from left to right. More Likely Example Case A better example would include a more complex circuit (such as the one for the LM380.) This would help illustrate the knots of circuit groups that can be organized into separate sections (more tightly interwoven within themselves, but communicating to other sections via a sparser set of wires communicating signals.) So I'll end this by including a nicely divided LM380 schematic to illustrate that point: simulate this circuit Note that there are individual sections, now isolated as identifiable groups such as current mirrors, long-tailed differential amplifier (here, really, more of a \$\pi\$ type arrangement), and an output stage. The annotations also help.
In fact, if possible, it is a good idea to include design-note annotations on your schematic. This helps to draw attention towards the key ideas relating schematic subsections to each other. Try and imagine what this would have been like to read through had the power supply and ground rails been all connected up with additional wiring and/or with no particular arrangement of current flow on the page. | {
"source": [
"https://electronics.stackexchange.com/questions/308878",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/34873/"
]
} |
309,624 | I wanted to measure the current of a generic lithium-ion battery charger. I set the meter to current, plugged my red lead into the "A" 10 A max fused jack and the black into COM. The charger is rated at 600 mA and the meter's other current jack is fused at 250 mA, so obviously that wouldn't work. So I put my leads on either side of the battery and there was a small spark. Did I break something? | You placed an ammeter in parallel with a voltage supply, which created a short circuit through the meter and most likely blew the 10A fuse in the ammeter socket. What you want to do is put the meter between the charger and the battery, and then connect the other end of the battery to the charger. A diagram will help: simulate this circuit – Schematic created using CircuitLab Edit: If everything still works fine, it's likely you tripped some kind of over-current protection in the charger before the fuse in the meter blew. | {
"source": [
"https://electronics.stackexchange.com/questions/309624",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/150978/"
]
} |
309,785 | I'm repairing a heater that someone threw in the trash (this model): It has an internal thermostat next to the heating wires, plus a thermal fuse. What is the reason for a fuse in addition to the thermostat? It seems to me that the thermostat alone is sufficient protection against overheating, since the fan does not produce heat. | The fan doesn't produce heat, but if the fan never blows, the heating element might overheat and start a fire. Thermostats fail. Safety regulations generally work on a "single fault" principle, meaning that no single fault in a product should lead to a safety hazard. In this case, the thermal fuse provides a backup to prevent a fire in case the thermostat fails (or, as @winny points out, in case the fan is mechanically blocked). | {
"source": [
"https://electronics.stackexchange.com/questions/309785",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/-1/"
]
} |
310,212 | My device uses a silicone keypad to detect a key press rather than a physical button. After setting up, it works smoothly, even without applying much pressure to the keypad. However, after a while (say, 2 months), you need to apply much more pressure on the keypad before a key is detected. It continues like this for a while, and then no keys can be detected at all. So we open it up and clean the PCB keypad traces with methylated spirit, and it works like new again. Sometimes we see black residue on the keypad PCB traces, which appears to come off the conductor of the silicone keypad. We wipe this off and everything is back to normal. My question is how to avoid this problem. | Electricity and water are the issue. The tin in the solder plating will grow a crystalline structure and form an oxide that doesn't conduct very well. I spent many months in the 1980s solving this problem, and the bottom line is: use gold plate. Don't be cheap on this. The company I worked for at the time sued the supplier for a lot of money over their incompetence, and they were big in the industry at that time. If you can't seal it (and clearly you can't, because you can clean the contacts), then water will get in. It's inevitable. | {
"source": [
"https://electronics.stackexchange.com/questions/310212",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/3632/"
]
} |
311,693 | I want to install a relay in my fusebox, which controls the whole basement and its machinery. The relays I have are rated 2 A each, and I have 32 of them. Can I just use all of them on the same line and assume the combined limit is 64 A? Is there any risk in doing this? | Unless you can guarantee that all the contacts will close and open at exactly the same instant of time, the only safe current you can assume is 2 A - that is, the capacity of the first contacts to close, or the last contacts to separate. | {
"source": [
"https://electronics.stackexchange.com/questions/311693",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/152425/"
]
} |
311,832 | Can a CPU (such as an Intel i3/i5/i7/Xeon) with on-chip cache RAM use that as its only functional RAM, without any external memory banks attached? Or must there be external RAM, with the cache unable to be accessed or used alone? Modern desktop/server CPUs often have more internal cache RAM than many 1990's computers had in their entire system memory, so there should be plenty there to run simple code. CPUs from before caches existed, such as the 6502, would be unable to do anything, as the internal CPU RAM only amounted to a few bytes for the address counter and accumulators. This is not a question of running any sort of modern operating system, but of running simple code programmed into a custom ROM, or hand-entered with a hex input keypad. | See this extremely detailed account of the PC boot sequence: http://www.drdobbs.com/parallel/booting-an-intel-architecture-system-par/232300699?pgno=2 Since no DRAM is available at this point, code initially operates in a stackless environment. Most modern processors have an internal cache that can be configured as RAM to provide a software stack. Developers must write extremely tight code when using this cache-as-RAM feature because an eviction would be unacceptable to the system at this point in the boot sequence; there is no memory to maintain coherency. That's why processors operate in "No Evict Mode" (NEM) at this point in the boot process, when they are operating on a cache-as-RAM basis. In NEM, a cache-line miss in the processor will not cause an eviction. Developing code with an available software stack is much easier, and initialization code often performs the minimal setup to use a stack even prior to DRAM initialization. You can observe this by running a PC without RAM: it will play a series of beeps. The program that plays those is run from the BIOS Flash ROM. I've also seen this behaviour on some ARM processors. There will be configuration registers inside the SoC that allow you to use the cache as RAM early on in the boot sequence, in order to run a program that finds, enumerates and configures the DRAM. | {
"source": [
"https://electronics.stackexchange.com/questions/311832",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/110180/"
]
} |
313,159 | Looking at an old cassette tape from the POV of the head, let's say that it reads at speed \$v\$ (the magnetic medium scrolls past at speed \$v\$). But looking at the right wheel, which is the one pulling the magnetic medium, its radius is growing(!) over time. Now, \$v=r\omega\$, where \$\omega\$ is the angular velocity, i.e. a constant. Question: I don't think that's true. What is really going on here? The radius is growing over time, for sure. I also assume that \$\omega\$ is constant. So did \$v\$ increase? | The details of how a cassette drive works are well covered by this Wikipedia article. The tape is pulled by a capstan next to the playback head, and this capstan pulls the tape at a steady rate. (The referenced picture from the Wikipedia article, with the capstan indicated by a red arrow, is not reproduced here.) The take-up spool doesn't rotate at a fixed speed. It uses a slipping drive, as badjohn says in his answer, so it takes up the tape at the speed the capstan moves it. | {
"source": [
"https://electronics.stackexchange.com/questions/313159",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/11284/"
]
} |
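A short Python sketch putting numbers on the capstan explanation above. The tape speed of 4.76 cm/s is the standard compact-cassette figure; the spool radii are illustrative assumptions. Since the capstan fixes the linear speed v, the take-up spool must slow down as its wound radius r grows, following omega = v / r.

# The capstan fixes linear tape speed v; the take-up spool slips, so its
# angular velocity omega = v / r drops as the wound radius r grows.
import math

v = 4.76                         # cm/s, standard compact-cassette tape speed
for r in (1.1, 1.5, 2.0, 2.5):   # cm, illustrative wound radii (hub to full spool)
    omega = v / r                # rad/s
    rpm = omega * 60 / (2 * math.pi)
    print(f"r = {r:.1f} cm -> omega = {omega:.2f} rad/s ({rpm:.0f} rpm)")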
313,277 | The isolation transformer I'm looking at: http://www.mouser.com/ds/2/336/HX1188NL-515471.pdf My question is why is it there? Is it not enough to have just the transformer on the left? | That "transformer" is a common mode choke . It's used to suppress EMI (either being induced onto the line and affecting the circuit or being transmitted from the circuit out over the line). It's called "common mode" because it's very effective in suppressing HF currents that are common to both lines. | {
"source": [
"https://electronics.stackexchange.com/questions/313277",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/129751/"
]
} |
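A rough Python illustration of why the common-mode choke helps. The inductance values are assumptions chosen only for scale (1 mH common-mode, 5 uH leakage): common-mode currents magnetize the core and see the full winding inductance, while the differential signal sees only the small leakage inductance because its fluxes cancel in the core.

# Common-mode currents see the full inductance; differential (signal)
# currents produce cancelling fluxes and see only the leakage inductance.
# Impedance magnitude: |Z| = 2*pi*f*L for each case.
import math

L_cm   = 1e-3   # H, assumed common-mode inductance of the choke
L_leak = 5e-6   # H, assumed leakage inductance seen by the signal

for f in (1e6, 30e6, 100e6):     # Hz, typical EMI frequencies
    z_cm   = 2 * math.pi * f * L_cm
    z_diff = 2 * math.pi * f * L_leak
    print(f"{f/1e6:>5.0f} MHz: common-mode |Z| ~ {z_cm/1e3:7.1f} kohm, "
          f"differential |Z| ~ {z_diff:7.1f} ohm")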
313,930 | I've run into quite a few engineers from unrelated backgrounds who put unpopulated components on the BOM. Some will do a section clearly labelled DNP at the bottom; others will leave them dispersed throughout the BOM, but highlight the rows. Having a DNP section seems like the way to go if you must do this, the only downside I can think of being that there will have to be more manual editing of the CAD package output. (I have personally witnessed this: the DNPs were changed at the last minute, the DNP section didn't get edited properly, and parts that shouldn't have been on the board were placed.) Leaving them throughout and highlighting the rows seems suboptimal because there could easily be duplicate rows for populated and not populated, and again, more manual editing. I don't see why this practice is necessary. A BOM by definition is a list of things required to build something. If a component is not on the BOM and assembly drawing, it should not be on the board. Adding components that aren't actually there just seems like a source of confusion further down the line for whoever enters the BOM into the ERP and for purchasing. What does putting unpopulated parts on the BOM achieve that leaving them off the BOM and assembly drawing doesn't? | If you don't explicitly document that these components are not to be placed, you will inevitably have your manufacturing team notice that there is a location on the board with no corresponding line in the BOM, and delay the build to send an engineering query asking what is supposed to be placed there. Explicitly documenting not-placed components avoids these queries, much like "this page intentionally left blank" in the manual avoids people asking what was supposed to be printed on the pages that were blank in their copy. | {
"source": [
"https://electronics.stackexchange.com/questions/313930",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/11074/"
]
} |
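One hedged, illustrative way to avoid the manual-editing failure described above is to script the DNP handling, so not-placed rows stay visible in the BOM but never reach the placement export. The CSV layout and the "Fitted" column name in this Python sketch are hypothetical; adapt them to whatever your CAD package actually emits.

# Hypothetical BOM post-processor: keep DNP lines in the human-readable BOM,
# but strip them from the file sent to pick-and-place. Column names are
# illustrative; adjust "Fitted" to whatever your CAD export actually uses.
import csv

with open("bom_export.csv", newline="") as src, \
     open("bom_placed_only.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row["Fitted"].strip().upper() != "DNP":
            writer.writerow(row)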
313,955 | I know a fair bit about USB, but for some reason I am not able to answer this question myself. Does the USB host port (let's say USB 2.0) always supply 5V on the VBUS pin? If not, how does it detect device connection? For a device (bus-powered case) it is easy to detect the host by sensing the VBUS pin, but for this to work the USB host should always provide 5V on VBUS, which is what leads me to ask this question. One thing I can think of is that it provides 5V at 100mA by default, and more only after successful enumeration. But as far as I have observed, the USB ports on laptops have 5V VBUS enabled by default and not limited to 100mA, so I just want to understand it from the USB standard's point of view. Any reply will be much appreciated. | A standard classic USB host must always provide VBUS (+5V ±10%) to a downstream port, so a device can initiate the connect sequence (pull D+ or D- high). The port must provide at least 500 mA (2.0 version) or 900 mA (3.0 version), regardless of whether there is any communication or attachment. These are "at least" requirements for classic USB 2.0 and USB 3.0 "high-power ports" and powered hubs, so they can supply more if they wish. The requirements are listed in Section 7.2 of the USB 2.0 specifications. Small battery-powered hosts might have an exception. NOTE1: VBUS must be supplied by the host even in "sleep" (suspend) mode. NOTE2: if a host doesn't drive VBUS high, no attachment would/should occur, even if the device has its own power. Connect requests (D+ or D- pull-ups) must occur only if VBUS is present, per USB 2.0 specifications Sec. 7.2.1. So it is a spec violation just to "have" the pull-ups; the pull-up must be conditional on VBUS. NOTE3: so, no VBUS => no communication. Because of this rule, no "partial" or "host-signaling" mode is possible without VBUS. The 500 mA is a requirement for the USB HOST. This supply number is frequently confused with a requirement for the USB DEVICE as consumer. A USB device SHOULD NOT draw more than 100 mA upon the initial connect stage, and can draw full power only when it gets enumerated and receives the "set_configuration()" command. USB devices report their power requirements in the device descriptor during this initial "100 mA" session. If the host has exhausted its power budget, it can stop the enumeration, effectively rejecting the device. NOTE4: as one can see, there is no "negotiation"; it is either "my [host] way", or "freeway". NOTE5: USB hosts have no specified means to police actual power consumption from their ports. USB host controllers don't have any registers that can measure/report port consumption. Therefore a host cannot enforce the 100 mA limit to police out bad devices that might violate the 100 mA specification, unless a drastic event of port overcurrent occurs. Compliance with the 100 mA limit was left to the USB-IF certification process. In short, the 500 mA and 100 mA are requirements for different USB entities, one for hosts and hubs, and another for devices, again as described in Section 7.2.1. This is how it works from the USB standard's point of view. Now things are a bit different with the introduction of the Type-C connector. Type-C devices (both hosts and peripherals) are prohibited from outputting VBUS initially. So, instead of boldly having VBUS power on a Type-C port, the host must turn on the VBUS source only if the port logic detects the presence of a cable/device. It does this by sensing the voltage level on the "CC" (Communication Channel) pin. A Type-C host has a pull-up on both CC pins. A device (or legacy cable assembly) must have a 5.1k pull-down. When a device/cable is plugged in, the host detects that the 5.1k drags its pull-up down, and at this point the host has the right to engage VBUS power, and USB communication begins. | {
"source": [
"https://electronics.stackexchange.com/questions/313955",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/103871/"
]
} |
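A small Python sketch of the Type-C attach detection described at the end of the answer above. The 5.1 k pull-down (Rd) is from the answer itself; the 56 k pull-up to 5 V used for Rp is an assumed, commonly cited default-USB value, so treat it as illustrative rather than normative.

# Type-C attach detection, numerically: the host pulls CC up through Rp;
# plugging in a device adds Rd = 5.1 kohm to ground, dragging CC down.
V_PULLUP = 5.0      # V
R_P      = 56_000   # ohm, host pull-up (assumed/typical default-USB value)
R_D      = 5_100    # ohm, device pull-down per the answer above

v_cc_open     = V_PULLUP                          # nothing attached
v_cc_attached = V_PULLUP * R_D / (R_P + R_D)      # simple resistor divider
print(f"CC open: {v_cc_open:.2f} V, CC with device: {v_cc_attached:.2f} V")
# The port logic sees CC fall from ~5 V to ~0.42 V and only then enables VBUS.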
314,226 | As is known, a contactor is used to switch higher-capacity loads than a relay. But some relays can switch high currents too: some power relays can switch currents beyond 100A, while there are contactors rated to switch only 160A. So, if a relay has the same switching current as a contactor, which one should you choose? And can relays be used in parallel to achieve a high switching current and replace a contactor? | Wikipedia's Contactor article explains it pretty well. Unlike general-purpose relays, contactors are designed to be directly connected to high-current load devices. Relays tend to be of lower capacity and are usually designed for both normally closed and normally open applications. Devices switching more than 15 amperes or in circuits rated more than a few kilowatts are usually called contactors. Apart from optional auxiliary low-current contacts, contactors are almost exclusively fitted with normally open ("form A") contacts. Unlike relays, contactors are designed with features to control and suppress the arc produced when interrupting heavy motor currents. [Emphasis mine.] Further down the same article ... Differences between a relay and a contactor: Contactors generally are spring loaded to prevent contact welding. Arc-suppression relays usually have NC contacts; contactors usually do not (when de-energized, there is no connection). Magnetic suppression and arc dividers are typically utilized when switching multi-horsepower motors. Magnetic suppression is accomplished by forcing the arc to follow the longer field lines of a fixed magnet placed in close proximity to the contacts. The longer path is specifically designed to force an arc length that can't be sustained by the available inductive energies. Figure 3 shows a schematic representation of magnetic arc suppression. Source: Automation Direct, Electrical Arcs - Part 1 of a 2-part series. The article linked above is well worth a read. Your questions: So, if a relay has the same switching current as a contactor, which one should you choose? Look carefully at the application and contact rating, particularly for motor or inductive loads. If you are satisfied that either will suffice, you can choose based on some other criterion such as cost. And can relays be used in parallel to achieve a high switching current and replace a contactor? Generally not. While doing this does reduce the long-term heating of the individual contacts due to steady current running through them, it is a problem during switching due to timing differences. Even wiring contacts of the same relay in parallel is risky, as they are never perfectly aligned, and the first one to make and the last one to break carry the full switching action. | {
"source": [
"https://electronics.stackexchange.com/questions/314226",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/25264/"
]
} |
314,604 | Historically, choosing a design that avoided 0402 and finer-pitch components was advantageous for mass-production cost savings. Yields were improved and accuracy requirements for pick-and-place machines were reduced. This allowed vendors to choose from a larger number of manufacturing facilities and identify cost-saving opportunities. Does that kind of thinking still hold any water given that 0402s and BGAs have been popular for a long time? Are there still factories that specialize in low-cost, larger-pitch manufacturing? Just to be clear, I'm only talking about production volumes in the millions. | Many assembly houses these days do 0402 with the same machines they do anything else, possibly using a different needle, though I'd suspect they'd be using the 0402-capable needle for 0603 and 0805 as well. Is it still true that you'll have more choices when you don't go below 0603? Sure. Most likely. There are a lot of cheap assembly houses that are cheap because they use the old equipment of the others, probably everywhere around the world. Some assembly houses may do down to 0201, but not be too happy to do the smaller stuff, because it requires extra operator attention. However, when you go into volume, the extra cost of the more advanced assembly house will likely be outweighed by the savings from smaller circuit boards, more efficient systems and/or lower component cost. And sometimes using 0402 or even 0201 offers better per-component performance as well, such as lower parasitic effects. Obviously if a 5000-unit reel of 0603 capacitors costs $15 and the 10000-unit reel of 0402 of the same value costs $20, that'll add up when you're making tens of thousands of boards with 10 each, but it won't really do much at all below a reel per month of usage. Because boards are now almost always made to a 5mil/5mil standard, the board won't likely be much more expensive if you make it more compact with tiny components, but at high volumes the board-space savings will start to weigh in as well. If a panel costs $100 and with 0603 the panel can fit 20 PCBs, but with 0402 it can fit 25 PCBs, that usually saves much more in volume than any extra cost you have at assembly in high quantities. In all, if you want to be fully sure you'd need to do a cost estimation, including an RFQ to a few assembly houses that tickle your fancy. All the assembly/full-service houses I use are always ready to pick up the phone or answer an e-mail with questions about comparative costs. And more often than not I find that the cost increase of something "unwise" 10 years ago now comes to less than a few percent. And the same will happen later to the stuff we think expensive now, so, really, you need to keep asking them regularly whether things have changed if you want to be the best designer you can be. Summarising: the only reason I don't do at least 0402 in my designs is if it's a hobby thing for me or others, where I want to be as quick as possible at replacing components, or I want others to be able to use my design as well; averaged over past orders, I don't even notice a significant cost increase up to 160mm×160mm boards at 10 units. | {
"source": [
"https://electronics.stackexchange.com/questions/314604",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/17744/"
]
} |
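A quick Python check using the answer's own illustrative figures (reel prices, panel cost, boards per panel; none of these are real quotes) shows why the board-space saving dominates at volume.

# Worked version of the answer's example numbers (illustrative, not quotes).
panel_cost = 100.0                     # $ per assembled panel
boards_0603, boards_0402 = 20, 25      # boards per panel at each package size
parts_per_board = 10

cost_0603_part = 15.0 / 5000           # $ per 0603 capacitor
cost_0402_part = 20.0 / 10000          # $ per 0402 capacitor

board_0603 = panel_cost / boards_0603 + parts_per_board * cost_0603_part
board_0402 = panel_cost / boards_0402 + parts_per_board * cost_0402_part
print(f"0603: ${board_0603:.3f}/board   0402: ${board_0402:.3f}/board")
# -> roughly $5.03 vs $4.02: the extra boards per panel dwarf the part prices.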
316,295 | I'm currently attempting to replace some bloated caps on an old PC motherboard but am currently having problems with some specific points. Here's a pic of the motherboard: . The area outlined in fuchsia are the series of caps I am trying to desolder. Notice they are in pairs of square and round solder points. The round solder points are no problem to desolder. It's the square ones that won't desolder. No matter how much I heat them up, the solder on the points just won't melt and so I can't pull the caps off! Is there a special technique or tool I need to perform this task? Thanks in advance for any help you can provide. | You need a bigger soldering iron, as in "more power." The square connections are in the ground plane of the board. That is the large yellow area they are embedded in. That is a large piece of copper, and there are probably also large copper surfaces on the internal layers of the board. Copper conducts heat very well, and it also radiates it away. The large copper areas are basically sucking up all the heat your iron can provide and radiating it away fast enough that it can't get hot enough to melt solder. The solution is an iron that can put in heat faster than the board can dissipate it. So, you need an iron with more power. Many irons are only around 30 watts. You'll need much more than that. When I've had to do that kind of thing, I borrowed a huge 150 watt iron from my father in law. It isn't intended for electronics, but it has the raw power needed for large copper surfaces. As for technique, high wattage irons often have wide tips. I apply some extra solder to the heavy joint with the iron heating just the ground connection. When that finally melts, I rotate the tip of the iron to heat both pads for that part. The solder melts pretty quickly, then I can pull the part out. Afterwards (if you need to to replace the part) you can clean the holes with a solder sucker or solder wick. While you are removing the part, you actually want as much solder as possible on the connection. Removing solder makes it harder to get the part out, not easier. | {
"source": [
"https://electronics.stackexchange.com/questions/316295",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/126294/"
]
} |
316,615 | I am a hobbyist looking to buy a soldering station. The market for soldering equipment ranges from $10 to $1000. This begs the question:
* What is the reason behind these huge price differences?
* What attributes should I look for in a good soldering station? I researched a bit and found out that a good soldering iron should be temperature controlled and should strive to pump enough wattage to maintain the desired temperature while the soldering iron is transferring heat to the component it's heating.
Additionally, it should be a brand that has easy-to-find replacement tips. Surely there must be other things that I am unaware of. Please shed some light on these unknown unknowns. | You don't need a $1000 iron unless you're going to be using it all day, every day, for a living, for the next 40 years. Same as any other tool. $100 will get you a good iron that will last you a long time (if you take care of it) and will do any hobby-level job. Adjustable temperature is nice for versatility, but not necessary. You can overcome a fixed-temperature iron's limitations with technique and practice. Personally I spent the first few years going through inexpensive (sub-$40) irons before finally setting aside $100 for a Hakko station. Things I noticed:
* The control unit and soldering base were much sturdier/heavier, and didn't move around on me
* The iron's handle didn't get hot after 30 minutes of use
* Being able to adjust the temperature is nice when dealing with different heating needs
* The iron heated up much faster
* The iron sat with greater stability in its holder than previous ones
* The included list of usable tips had like 80 varieties. I haven't used any of them yet, but I know they're available if I need them
You can, of course, get by with inexpensive tools depending on your level of use and the types of projects you're doing. Many projects are absolutely doable with a $20 35-watt iron. To me, $100 seems just about right for a tool I use reasonably often at a hobby level. Edit per recommendations from comments: Temperature control (not to be confused with adjustable temperature) makes it much easier to get consistent solder joints. Temperature control means there is temperature feedback, so the iron tip will maintain its temperature when dumping heat into the joint (up to the power limit of the iron). This is especially useful when soldering to something that sheds heat quickly (like a ground plate). Having enough wattage is key to good joints. If the iron doesn't have enough power to adequately heat the entire joint, you'll get "cold joints" which don't conduct well. At best they're annoying and give unreliable behavior. At worst they can be a fire hazard (they act like a resistor and can get very hot). I submit that 35-40W is ample power for small projects that only involve small-gauge wire and component leads. I've used a 35W iron for things like swapping guitar pickups and little circuits you assemble on perfboard with 20AWG wire. More wattage is generally not a bad thing, and you'll generally end up with an adjustable-temp station above 60W or so (my Hakko is 70W). For soldering to a big hunk of metal (like a large grounding plate or block), you may eventually need a 100-150W gun. I certainly haven't taken a survey of every available option, nor used them all, so as always YMMV. | {
"source": [
"https://electronics.stackexchange.com/questions/316615",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/155491/"
]
} |
317,028 | I have understood that solar panels don't last forever. Warranties are typically a couple dozen years, and you can expect your panel to last for perhaps twice the warranty period. But what exactly happens when the panel is failing? Does it fail suddenly like computer hard disks, or does its output degrade like the capacity of batteries? What are the physical principles behind a failing solar panel? Is the failure in some way related to heat? Do solar cells made by a reputable brand last longer than cheap Chinese cells? Is it possible to make a solar panel that would last essentially forever, given a high enough price? Such a panel might prove useful if it turns out that the low-interest-rate environment will continue. Of course, there are many types of solar cells, so the answer may be limited to the most common types, i.e. polycrystalline and monocrystalline silicon cells. | Reading here and a couple of other places makes it sound like solar panel degradation varies widely. Manufacturing origin doesn't appear to be correlated with longevity, or if it is, it may be the opposite of what we expect (China appears to do well). The general gist is that you'll lose a fraction of a percent every year on average. It's likely due to high-energy photons slightly changing the structure over time. Weathering is also a concern. Wind-blown sand scratching the surface and dust blocking light are two other ways cells degrade. Most solar installations appear to be able to handle 20-40 years of use without issue, but some don't appear to handle thermal cycling well. In that case, you can have catastrophic failure of one or many cells, causing poor solder bonds to break, delamination to occur, or entire cells to crack. Corrosion of the cell and connectors could be another late-game failure mode. I think more of a concern than the cells degrading is the supporting electronics (inverter) dying. The cost of a solar installation these days is largely determined by peripherals rather than the panels themselves. The power electronics that support the system, and their failure modes, are really what I would be researching if I were in your shoes, as I believe catastrophic failure there is much more likely, and on a much shorter timeframe. This looks to give a great rundown of many manufacturers and their lifetimes, including a diagram comparing their degradation rates (not reproduced here). | {
"source": [
"https://electronics.stackexchange.com/questions/317028",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/70843/"
]
} |
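To put a number on "a fraction of a percent every year": a Python sketch assuming a 0.5 %/year degradation rate (a commonly quoted median figure; the rundown linked above gives per-manufacturer values). Output compounds as (1 - rate)^years.

# Compounded panel output for an assumed 0.5 %/year degradation rate.
rate = 0.005
for years in (10, 20, 25, 40):
    remaining = (1 - rate) ** years
    print(f"after {years:2d} years: {remaining:.1%} of original output")
# 25 years comes out around 88%, consistent with typical warranty floors.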