132,678
I've got a USB device that requires 12 watts to charge. My laptop appears to output less than that. My question is: Is it possible to step-up the output of a laptop USB port to 12 watts?
No. This is basic physics. There is no free lunch (or energy). If the laptop only puts out 500 mA at 5 V, for example, then you get 2.5 W. You could convert this to a different combination of voltage and current, but the result can't on average exceed the 2.5 W you put in. (It is possible to get higher power out for short durations, but that's clearly not what you are asking about; the average out still can't exceed the average in.) Since no conversion will be 100% efficient, you will actually get a little less power out, with the remainder getting dissipated as heat in the converter. For example, let's say you can make a switching power supply that is 90% efficient. That means with 2.5 W in, you get 2.25 W out at some other voltage and current combination. The remaining 250 mW will heat the switching power supply. You could get, for example, 10 V at 225 mA, 24 V at 94 mA, 2 V at 1.13 A, etc.
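The arithmetic above can be sketched in a few lines; the 90% efficiency is the example figure from the answer, not a property of any particular converter:

```python
def converted_output(v_in, i_in, v_out, efficiency=0.90):
    """Return (watts out, amps available at v_out) after a DC-DC conversion.

    Power out can never exceed power in; a converter only trades
    voltage for current, and losses become heat.
    """
    p_in = v_in * i_in            # e.g. 5 V * 0.5 A = 2.5 W available
    p_out = p_in * efficiency     # the remainder heats the converter
    return p_out, p_out / v_out

p, i = converted_output(5.0, 0.5, 10.0)   # 2.25 W -> 225 mA at 10 V
```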
{ "source": [ "https://electronics.stackexchange.com/questions/132678", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5580/" ] }
132,683
I have some trouble understanding why M1 is in saturation/active mode. According to Wikipedia a MOSFET is in saturation mode if \$V_{GS} > V_{th}\$ and \$V_{DS} \ge (V_{GS} - V_{th})\$. However, as drain and gate are tied together \$\implies V_{DG} = 0 \implies V_{DS} = V_{GS}\$. Therefore \$V_{DS} \ge (V_{GS} - V_{th})\$ can't be true (\$V_{th} > 0\$)? What am I missing?
{ "source": [ "https://electronics.stackexchange.com/questions/132683", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/54932/" ] }
132,697
I am trying to connect an Arduino Mega with another device using a LIN transceiver. I have decided to use a LIN MCP2004 transceiver for this. I am now trying to understand the reference design of the MCP2004. I have come up with the following design but I have a couple of questions: Should RxD be pulled up to 5 volts or 12 volts? Are the values for R1 and R2 ok? I don't understand what to do with the V_REN pin. Thanks. PS, please don't suggest other LIN transceivers since I only have access to the MCP2004. (Schematic created using CircuitLab.)
{ "source": [ "https://electronics.stackexchange.com/questions/132697", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/54933/" ] }
133,900
In my electrical circuits class, we've spent the last six weeks learning circuit analysis with resistors rated in ohms. Now, out of the blue, every question on the next practice exam has the resistors rated in the watts they consume. Here is a question from the practice exam where I'm supposed to find Ix. Am I really supposed to find all of the node voltages by substituting the resistance values with V^2/P (or some manipulation of it) and then doing a loop current analysis? That gives a bunch of quadratic equations instead of making things easier. I feel like I'm overlooking something mind-blowingly simple here. I've spent the last few hours searching for help on this and I can't find anything for this type of problem. Any help to point me in the right direction would save what little hair I have left on my scalp.
The symbol for Ohms is the capital Greek letter Omega, \$\Omega\$. In some word processors, if you make the Omega symbol using a Greek font and then convert it to another font like Times New Roman or Arial, that symbol will show up as a "W". In other words, your professor probably used the wrong font and those are meant to be Omegas.
{ "source": [ "https://electronics.stackexchange.com/questions/133900", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/56031/" ] }
133,937
I haven't seen any for a while now, but a couple of years ago I saw some several times: cylindrical magnets around electrical wires. I'm not talking about specialized equipment but items of everyday life which come with such a magnet. In fact I did not know it was a magnet until I accidentally broke one. Unfortunately I did not manage to find any picture on the web. Do you know what they are used for? Also, are we still using them?
They're not actually magnets, but rather ferrite, which is a ferrimagnetic material. A ferrite bead with a conductor through it is an inductor and so is used as a low pass filter. Typical use is on power cables to reduce EMI (electromagnetic interference).
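As a rough sketch of the low-pass behaviour: treating the bead as a series inductance feeding a resistive load gives a first-order corner at f_c = R/(2πL). The 1 µH and 50 Ω values below are illustrative assumptions, not properties of any particular bead:

```python
import math

def lr_cutoff_hz(l_henries, r_ohms):
    """-3 dB corner frequency of a series-L, resistive-load low-pass filter."""
    return r_ohms / (2 * math.pi * l_henries)

# Hypothetical bead modelled as 1 uH feeding a 50-ohm load:
fc = lr_cutoff_hz(1e-6, 50.0)   # ~8 MHz: RF noise attenuated, DC and audio pass
```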
{ "source": [ "https://electronics.stackexchange.com/questions/133937", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/30863/" ] }
134,496
Maybe this is more of a perception problem, but it seems like microcontrollers have advanced by leaps and bounds in the last 20 years, in almost all regards: higher clock speed, more peripherals, easier debugging, 32-bit cores, etc... It is still common to see RAM in the tens of KB (16/32 KB). It doesn't seem like it could be an issue of cost or size directly. Is it an issue of complexity with the RAM controller above some threshold? Or is it just that it isn't generally required? Looking over a parts matrix at a popular Internet supplier, I see one Cortex M4 with 256 KB for less than US$8, and then for a few dollars more you can find a few more that are ROMless, but it seems pretty sparse... I don't exactly have a need for a microcontroller with a MB of volatile storage, but it seems like somebody might...
There are several reasons for this. First of all, memory takes up a lot of silicon area. This means that increasing the amount of RAM directly increases the silicon area of the chip and hence the cost. Larger silicon area has a 'double whammy' effect on price: larger chips mean fewer chips per wafer, especially around the edge, and larger chips mean each chip is more likely to get a defect. Second is the issue of process. RAM arrays need to be optimized differently from logic, and it is not possible to send different parts of the same chip through different processes - the whole chip must be manufactured with the same process. There are semiconductor foundries that are more or less dedicated to producing DRAM. Not CPUs or other logic, just straight up DRAM. DRAM requires area-efficient capacitors and very low leakage transistors. Making the capacitors requires special processing. Making low leakage transistors results in slower transistors, which is a fine trade-off for DRAM readout electronics, but would not be so good for building high performance logic. Producing DRAM on a microcontroller die would mean you would need to trade off the process optimization somehow. Large RAM arrays are also more likely to develop faults simply due to their large area, decreasing yield and increasing costs. Testing large RAM arrays is also time consuming and so including large arrays will increase testing costs. Additionally, economies of scale drive down the cost of separate RAM chips more so than more specialized microcontrollers. Power consumption is another reason. Many embedded applications are power constrained, and as a result many microcontrollers are built so that they can be put into a very low power sleep state. To enable very low power sleep, SRAM is used due to its ability to maintain its contents with extremely low power consumption. Battery backed SRAM can hold its state for years off of a single 3V button battery.
DRAM, on the other hand, cannot hold its state for more than a fraction of a second. The capacitors are so small that the handful of electrons tunnel out and into the substrate, or leak through the cell transistors. To combat this, DRAM must be continuously read out and written back. As a result, DRAM consumes significantly more power than SRAM at idle. On the flip side, SRAM bit cells are much larger than DRAM bit cells, so if a lot of memory is required, DRAM is generally a better option. This is why it's quite common to use a small amount of SRAM (kB to MB) as on-chip cache memory coupled with a larger amount of off-chip DRAM (MB to GB). There have been some very cool design techniques used to increase the amount of RAM available in an embedded system for low cost. Some of these are multi chip packages which contain separate dies for the processor and RAM. Other solutions involve producing pads on the top of the CPU package so a RAM chip can be stacked on top. This solution is very clever as different RAM chips can be soldered on top of the CPU depending on the required amount of memory, with no additional board-level routing required (memory busses are very wide and take up a lot of board area). Note that these systems are usually not considered to be microcontrollers. Many very small embedded systems do not require very much RAM anyway. If you need a lot of RAM, then you're probably going to want to use a higher-end processor that has external DRAM instead of onboard SRAM.
{ "source": [ "https://electronics.stackexchange.com/questions/134496", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17955/" ] }
134,561
When a fluorescent light fixture goes out, it often flickers and then after a while stops shining altogether. When this happens, and when it finally dies, is the circuit broken, and does it still use electricity? I don't really understand what exactly is spent or breaks in a fluorescent light bulb when it is at the end of its life, and I would appreciate any explanation. Here's a video about what I'm talking about: https://www.youtube.com/watch?v=NDnKEOeFJn0
I'm going to go out on a limb and say this question is valuable from the point of view of electronic design, as it pertains to some fundamental understanding on how fluorescent lights work. Fluorescent lights work by accelerating electrons from the cathode to the anode in an almost-vacuum environment. In this vacuum is mercury vapour, and when the electron hits a mercury atom, that Hg atom goes into an excited state and outputs one or more photons of UV light upon decay. These UV photons then hit the phosphor-based coating on the inside of the glass tube, which converts these UV photons to visible white light. So, in order to function, it is vitally important for these lights to have a lot of 'free' electrons available to shoot at the mercury. One way to make electrons more mobile and likely to shoot off the cathode is to heat it up, and this is what a so-called 'starter' circuit does: it is essentially nothing more than a high voltage generator and a heating coil. The heating coil heats up the electrode to mobilize the electrons and the high voltage generator (usually just a resonant LC pump) creates enough voltage for the initial 'spark' to ignite the bulb. Once electrons start flowing and the lamp is 'on', the gas inside the lamp looks more like a plasma and is very conductive, so neither the high voltage nor the addition of heat is necessary to keep it working. Hence it's just a starter: once the bulb is on, it is shut down. Old-style starters would keep trying to fire the bulb even when the electrodes were entirely spent. This means that the heating coil would keep running until its filament burned out. In a lot of cases this would mean the bulb has a higher power consumption after it's died. Modern electronic starters 'give up' after a few tries when they detect that the bulb won't start. After that they use up no or almost no energy until power is cycled to the starter.
{ "source": [ "https://electronics.stackexchange.com/questions/134561", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/56307/" ] }
134,634
Today, while drinking some water from a \$500mL\$ bottle, I started reading the info about the water and found out that the conductivity (\$\sigma\$) at \$25°\$C is \$147.9\mu S/cm\$. So it came to my attention that maybe I could calculate the resistance of the water bottle, from top to bottom. After some measuring, I found out that the bottle can be approximated as a cylinder with \$18cm\$ height and \$3cm\$ base radius. So we can do the following: \$R_{eq} = \frac{\rho L}{A}\$, where \$\rho = \frac{1}{\sigma}\$ is the resistivity, \$L\$ is the bottle's height and \$A\$ is the base area. By doing this, I got \$R_{eq} \simeq 4.3k\Omega\$. Then, I bought a new full bottle, made a hole in its bottom (of course avoiding leaks) and measured the resistance (with a digital multimeter) from this hole to the "mouth", at first making it so that only the tip of the probes touches water. The measured resistance was really high, ranging from \$180k\Omega\$ to even \$1M\Omega\$ depending on how deep in water I positioned the probes. Why is the measured resistance so different from what I calculated? Am I missing something? Is it possible at all to use a bottle of water as a resistor? Edit #1: Jippie pointed out that I should use electrodes with the same shape as the bottle. I used some aluminum foil and it actually worked! Except this time I measured ~\$10k\Omega\$ and not the \$4.3k\Omega\$ I calculated. One thing I was able to notice while lighting a LED with water as a resistor was that the resistance was slowly growing over time. Might this phenomenon be explained by the electrolysis that happens while DC current travels through water (the electrodes slowly get worse because of ion accumulation at their surfaces)? This would not happen for AC current, right?
The formula you use is valid for a certain area, but the size of your probes is nowhere near the area you used in your calculation. If you want a closer approximation, you'll have to use electrodes similar in size as the area you calculated the water column for, one flat on top, one flat at the bottom.
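The questioner's 4.3 kΩ estimate can be reproduced directly from the formula in the question:

```python
import math

def cylinder_resistance_ohms(sigma_s_per_cm, length_m, radius_m):
    """R = rho * L / A for a uniform cylinder of conductivity sigma, rho = 1/sigma."""
    sigma_si = sigma_s_per_cm * 100.0   # convert S/cm to S/m
    rho = 1.0 / sigma_si                # resistivity in ohm-metres
    area = math.pi * radius_m ** 2      # cross-sectional (base) area
    return rho * length_m / area

# 147.9 uS/cm, 18 cm tall, 3 cm base radius:
r = cylinder_resistance_ohms(147.9e-6, 0.18, 0.03)   # ~4.3 kOhm
```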
{ "source": [ "https://electronics.stackexchange.com/questions/134634", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/23638/" ] }
134,969
I've been reading about decoupling capacitors, and I can't seem to understand why ST recommends 100 nF decoupling capacitors on a 72 MHz ARM microcontroller. Usually 100 nF decoupling capacitors are only effective up to about 20-40 MHz due to resonance. I thought 10 nF decoupling caps were more suitable since their resonance is closer to 100 MHz. (Obviously, it depends on the package and its inductance, but those are just ballpark values from what I've seen.) According to the STM32F103 datasheet, ST recommends 100 nF capacitors on VDD and 10 nF on VDDA. Why is that? I would think I should use 10 nF on VDD too.
Three things you should note: 1) Most bypass recommendations in datasheets and application notes are fairly random in my opinion. You may easily be a better engineer than the person who wrote the application note :-). A better datasheet would talk about how low an impedance you as a board designer should provide and to what frequency. I wrote about this here. 2) Most of the parasitic inductance comes from your mounting inductance (footprint and via length) and not the capacitor itself. This is why you would like a smaller package rather than a smaller value. This is also why you would want to get the vias close together and use closely coupled power/ground planes. 3) It's possible that the chip has some bypass as part of the package and die, but this should ideally be detailed in the datasheet before you can take advantage of it (back to my first point). If not (and this is likely), you can try to measure this yourself, like I show here. You may want to use something like pdntool.com to select the best combination of bypass capacitors based on your impedance and frequency requirements. This method has worked reliably for many projects over the last 15+ years. I apologize for plugging my own blog posts here, but it's just much faster for me to find the references I need that way. Feel free to ask more questions.
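Point 2 can be made concrete with the series self-resonance formula f = 1/(2π√(LC)); the 2 nH figure below is an assumed combined package-plus-mounting inductance, not a measured value:

```python
import math

def self_resonance_hz(l_parasitic, c):
    """Series self-resonant frequency of a capacitor with its parasitic inductance."""
    return 1.0 / (2 * math.pi * math.sqrt(l_parasitic * c))

# Assuming ~2 nH total from the package, footprint, and vias:
f_100n = self_resonance_hz(2e-9, 100e-9)   # ~11 MHz for the 100 nF part
f_10n = self_resonance_hz(2e-9, 10e-9)     # ~36 MHz for the 10 nF part
```

With the same mounting inductance, the smaller value resonates higher, but shrinking the inductance itself moves both frequencies up, which is why package and layout matter more than the capacitance value.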
{ "source": [ "https://electronics.stackexchange.com/questions/134969", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/43172/" ] }
135,335
Most manufacturers produce both crimped- and straight-lead versions of their capacitors, which have exactly the same capacitance and voltage rating. Why do they bother crimping the leads? What advantage does it give? In which cases should a crimped-lead capacitor be preferred?
Ceramic capacitors are rather brittle and so they do not like their leads getting tugged on. Adding these crimps forces the capacitor to sit off the board with a few mm of relatively flexible lead in between. This will isolate the capacitor from forces that it would otherwise experience during vibration, board flexing/bending, thermal expansion/contraction, etc. By providing the crimped leads at the factory, the board house does not require a machine to add those in-house.
{ "source": [ "https://electronics.stackexchange.com/questions/135335", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5542/" ] }
136,036
If I place three cheap 200 V rated diodes across a 500 V supply instead of one expensive diode, is the system guaranteed to work correctly? My worry is the situation in which two of the diodes share 150 V and the remaining 350 V appears on the other diode, bringing out the holy smoke. Would something like that happen? (Schematic created using CircuitLab.)
No, the voltage does not distribute equally. The reverse leakage current for diodes is not a carefully controlled parameter, and can vary substantially from unit to unit, even from the same manufacturing batch. When placed in series, the diodes with the lowest leakage current will have the highest voltage across them, which will cause them to fail, which in turn will apply excessive voltage to the remaining diodes, causing them to fail as well. The usual solution is to put a high-value resistor in parallel with each diode. Select the value of the resistor so that the current through the resistor (when the diodes are reverse-biased) is about 10× the worst-case leakage current of any diode. This means that the reverse voltage that appears across the diodes will not vary by more than about 10%. Note that this still means that you need some margin in the ratings of diodes. For example, for 600V of peak reverse voltage, you should use four 200-V diodes, not three. There is another phenomenon that comes into play as well. The diodes will not all "switch off" at the same speed when going from forward bias to reverse bias. Again, the "best" (fastest) diodes will fail first. The solution for this is to also place a capacitor, about 10 to 100 nF, in parallel with each diode. This limits the risetime (dV/dt) of the reverse voltage, allowing all of the diodes to switch before it rises too high.
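The resistor-selection rule above (resistor current about 10× the worst-case leakage) is easy to sketch; the 10 µA leakage figure below is a hypothetical worst case, not from any datasheet:

```python
def balancing_resistor_ohms(v_total, n_diodes, i_leak_worst, ratio=10.0):
    """Parallel resistor sized so its current swamps the diode leakage spread.

    With the resistor current `ratio` times the worst-case leakage, the
    reverse voltage across each diode varies by no more than about 1/ratio.
    """
    v_per_diode = v_total / n_diodes
    return v_per_diode / (ratio * i_leak_worst)

# 600 V peak reverse voltage shared by four 200 V diodes,
# assuming 10 uA worst-case leakage:
r = balancing_resistor_ohms(600.0, 4, 10e-6)   # 1.5 MOhm across each diode
```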
{ "source": [ "https://electronics.stackexchange.com/questions/136036", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5542/" ] }
136,042
I'm building a simple circuit which has an LM211 comparator at its heart. The positive input is connected to a 10k-10k resistor divider across the supplies, stabilized by a 1uF capacitor. Hysteresis is provided by 1.8MOhm from the output to the positive input. The signal on the SENS line is a slowly changing resistance hooked up to +5V on the other side. Everything works perfectly on a breadboard, but I wanted to check a few things before finalizing the board layout, and came across a TI application note which says in paragraph 6: It is a standard procedure to use hysteresis (positive feedback) around a comparator, to prevent oscillation, and to avoid excessive noise on the output because the comparator is a good amplifier for its own noise. In the circuit of Figure 2, the feedback from the output to the positive input will cause about 3 mV of hysteresis. However, if the value of Rs is larger than 100Ω, such as 50 kΩ, it would not be reasonable to simply increase the value of the positive feedback resistor above 510 kΩ. However, it doesn't say why a high value feedback resistor is "not reasonable". I built up my circuit on a breadboard, and it seems to work fine, I can definitely see good hysteresis behaviour with the 1M8 resistor as compared to without it. So, what is the drawback?
{ "source": [ "https://electronics.stackexchange.com/questions/136042", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/39570/" ] }
136,123
Sometimes you need to ensure that your oscilloscope probe ground lead is not creating measurement error such as ringing and overshoot. In various forms, I have seen (and successfully used) a ground spring for measuring a circuit. I have shamelessly borrowed AndrejaKo's image to make sure we're all on the same page: I have determined that this lead length is essential in some of my test configurations, but it requires constant attention to make sure I'm not shorting something out, losing a connection, or probing the wrong thing. This limits my ability to do other tasks in the test setup, and makes it impractical for other setups that do not allow that sort of access safely (blind or hazardous conditions). How do I attach a scope probe to the circuit under test with a 1/2" (1cm) ground lead, or otherwise get a hands-free high bandwidth setup?
The kind of tip you show is not intended for permanent installation. If you need to have a scope hooked up to the device under test, with good high frequency performance, the only good solution is to design testing connections into your device. I like MMCX connectors, because they're very compact, and you can get MMCX->SMA pigtails (and convert that to BNC) for cheap. You do have to design testing into your project, but it's a good habit to get into anyways. I tend to try to scatter MMCX footprints around my board layouts, so I can get easy probe access to any nets I'm interested in. Plus, they make decent pads for probing with a spring ground clip if you don't want to solder connectors down. You can also make a homemade alternative, if you have the board-space and patience: As W5VO points out in the comments, using a test setup like this for high-speed connections can be somewhat challenging. You would need to either construct a 10:1 probe adapter with a compensation capacitor, and mount it right on the mating MMCX connector, or properly ensure that your connecting cable is 50Ω, and the oscilloscope you're using is set to 50Ω input impedance to prevent reflections and signal distortions. If you are interested in high-speed logic probing, a simpler solution than dealing with having to terminate the signal run to your scope would be to use a homemade inline termination as close to the MMCX connector as possible. Basically, you can homebrew a 10:1 or 20:1 probe by simply inserting a series termination as close to the connector (the PCB-end connector) as possible. With a 50Ω scope input impedance, a series resistance of 450Ω results in 10:1 attenuation, while maintaining proper impedance matching to the oscilloscope, and also loading the circuit under test much less. A 950Ω resistor would result in 20:1 attenuation. There are several homemade probes using this technique here and here.
For this sort of setup, I would take a male and female PC-mount connector, and solder the resistor in between the two, with some bare wire connecting the ground pins. It should be quite compact and structurally robust. You can even add a compensation capacitor if you're interested in very high speed signals. There is a good resource about that here. You then simply insert the series termination in between the scope lead and your board under test, and set your scope to the proper attenuation.
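The attenuation figures above follow from a plain resistive divider into the scope's 50 Ω input; a quick check:

```python
def probe_attenuation(r_series, z_scope=50.0):
    """Divider ratio of a series-terminated probe into the scope's input impedance."""
    return (r_series + z_scope) / z_scope

ratio_10x = probe_attenuation(450.0)   # (450 + 50) / 50 -> a 10:1 probe
ratio_20x = probe_attenuation(950.0)   # (950 + 50) / 50 -> a 20:1 probe
```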
{ "source": [ "https://electronics.stackexchange.com/questions/136123", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/638/" ] }
136,714
I have a PCB which has the perfect size for a project of mine, so I would like to use it if possible. However, the copper plating on the back side of the PCB only surrounds the individual holes (that is, no holes are interconnected). See the picture right here: I find this strange. How can this be useful? I would definitely need some copper tracks with interconnected holes in there because some components need to be connected to each other. Am I supposed to make my own tracks somehow? I saw some stuff online about people who would insert multiple wires into the same hole to make interconnections, but this seems undesirable. I'd rather avoid that if there is some way to make tracks.
What you've got is called a prototype board . It is available at electronics suppliers everywhere and is obviously not meant for production. Join things together any way that is convenient for you. Many methods have been pictured. Another common way is inserting a component lead beside its next connection and just bending it over to fit. The results are typically quite messy, but it can take a lot more handling than a breadboard prototype. Thus it is a common step before getting printed and etched boards made. You can also find prototype boards in the same circuit pattern as the push-in breadboards, so you can simply transfer your circuit from one to the other, solder, and install.
{ "source": [ "https://electronics.stackexchange.com/questions/136714", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/57327/" ] }
137,104
When I run 24VAC into a full wave bridge rectifier followed by a 220uF electrolytic capacitor to turn it into ~32VDC, the source has two wires. Does it matter in what order I connect the AC wires to the input of the bridge rectifier? If so, how do I determine which wire goes where? I suspect that it's totally symmetric on the input side, but I am full of doubt when it comes to AC. Sorry if this is just a really dumb question.
When you're looking at an AC source in isolation such as in your question, indeed there's no polarity and you can connect the wires either way round. When combining two or more AC sources in series or parallel, the relative phasing is very important.
{ "source": [ "https://electronics.stackexchange.com/questions/137104", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/771/" ] }
137,394
Here is a picture of 3 mounting holes on three different PCBs. The red one is what I prepared in Eagle CAD, using the "hole" button on the left-side menu, something like this: Do I need to use "via" in order to have the same style mounting hole as on the other PCBs? As far as I can see, they are vias on the blue PCBs. Isn't it better to isolate the mounting holes on a PCB? What I am thinking is that there is a risk of getting an unwanted signal or voltage onto your ground plane, as it is not isolated. Or does it improve grounding, or tie together the grounds of unconnected PCBs? Is that a gold finish on the mounting holes? Why would I want a gold finish on a screw hole, which would increase the cost? The mounting holes on the blue PCBs have several small holes in them; what is the purpose of having holes in the screw holes? Fabrication-wise, it seems it would slow down production a little bit, as each individual mounting (or screw) hole requires drill holes in it. The upper blue PCB is from a projector, the bottom one is from a hard disk of a PC.
The mounting holes that have the large copper pads around the hole are fabricated that way because it is desired to have that connected circuit be conductive to the mounting screw. In most cases it is the GND of the circuit that connects to the mounting hole pad and there is a desire to have such GND connect into the metal chassis to which the board mounts. The small via holes in the mounting hole pads are designed to electrically connect the mounting hole pad to the pad on the opposite side of the board. In the case of a multi layer board with an inner plane layer the vias may also be used to connect the mounting hole pad to that inner layer. In the past it was much less common to see mounting hole pads with these via holes. Instead the mounting hole was built as a large plated through hole to connect to the opposite side and/or inner layers of the board. However, as in the examples you show, note that the more modern type mounting holes do not have plating in the large diameter screw hole and thus the need for the vias to provide the electrical connection. Plating removal in screw mounting holes is primarily done for one reason. The sharp threads of a screw in the hole can cause small particles of metal to come off the plated hole. This is particularly true when boards might be removed and re-installed during testing and/or repair. These small particles can come out and appear on the circuit board or float around in the electronics enclosure. With the advent of surface mount components with very narrow lead spacings these small metal particles from the screw holes can lead to shorts on the circuit leading to either intermittent circuit operation or outright failure.
{ "source": [ "https://electronics.stackexchange.com/questions/137394", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/26011/" ] }
137,447
In the following power supply circuit, there is a full-wave diode bridge (full-wave rectifier) after the DC input. I can see why we need a full-wave rectifier after an AC input, but why after a DC input? Is it to smooth out the power signal? Thanks. Circuitlib Schematic
Looks to me like it's purely for the convenience (and maybe safety) of the user. It allows you to connect the input using any polarity you choose rather forcing a specific polarity on you.
{ "source": [ "https://electronics.stackexchange.com/questions/137447", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/13391/" ] }
138,296
Why is the time constant (RC) measured in seconds even though the units are farads x ohms? This is to fulfill my own curiosity as I haven't had much luck at finding the answer. I would be most grateful if someone could give me a solid answer or send me in the right direction.
It's the way the units work out. Broken down to its form in SI units, a volt is $$\mathrm{V = \frac{kg \cdot m^2}{A \cdot s^3}}$$ where A is amperes. So, when you divide by current to get ohms, you see that $$ \Omega = \mathrm{\frac{kg \cdot m^2}{A^2\cdot s^3}}$$ A farad is: $$ \mathrm{F=\frac{s^4 \cdot A^2}{m^2 \cdot kg}} $$ So when you multiply Ohms by Farads, you're left with seconds: $$ \Omega \cdot \mathrm{F = \frac{kg \cdot m^2}{A^2\cdot s^3} \cdot \frac{s^4 \cdot A^2}{m^2 \cdot kg} = s} $$
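The cancellation above can be checked mechanically. Here is a small sketch (the dict-of-exponents representation and the function name are my own, not anything from the answer) that multiplies the base-unit exponents of an ohm by those of a farad:

```python
# Represent SI units as {base_unit: exponent}; multiplying units adds exponents.
def multiply(u1, u2):
    out = dict(u1)
    for base, exp in u2.items():
        out[base] = out.get(base, 0) + exp
    return {b: e for b, e in out.items() if e != 0}  # drop cancelled units

ohm   = {"kg": 1, "m": 2, "A": -2, "s": -3}   # Ω = kg·m² / (A²·s³)
farad = {"kg": -1, "m": -2, "A": 2, "s": 4}   # F = s⁴·A² / (m²·kg)

tau_unit = multiply(ohm, farad)
print(tau_unit)  # {'s': 1} — ohms times farads really is seconds

# A concrete RC example: 10 kΩ × 100 µF gives a time constant of one second.
tau_seconds = 10_000 * 100e-6
print(round(tau_seconds, 6))  # 1.0
```

Everything except kg cancellation in the first print is just bookkeeping; the numeric example at the end is the usual rule of thumb that kiloohms times microfarads gives milliseconds.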
{ "source": [ "https://electronics.stackexchange.com/questions/138296", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/58168/" ] }
138,319
My question is inspired by this: Why do farads multiplied by ohms produce a result that has a unit of seconds? I want to ask what kilogram is doing in this equation: $$\mathrm{V = \frac{kg \cdot m^2}{A \cdot s^3}}$$
Voltage measures the potential difference between one point and a reference point; when the reference is ground, we speak of the voltage at a node or point. Potential is defined as the work done in bringing a unit positive charge from infinity to the given location. Work is force multiplied by displacement, and the SI unit of force — the newton — is kg·m/s². That is where the kilogram enters: it comes from the energy (work) in the definition, not from any mass of the charge itself.
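One equivalent way to see it: a volt is a joule per coulomb, and the joule (kg·m²/s²) is what carries the kilogram. A small sketch (the exponent-dict representation and function name are my own):

```python
# Units as {base_unit: exponent}; dividing units subtracts exponents.
def divide(num, den):
    out = dict(num)
    for base, exp in den.items():
        out[base] = out.get(base, 0) - exp
    return {b: e for b, e in out.items() if e != 0}

joule   = {"kg": 1, "m": 2, "s": -2}  # energy: force (kg·m/s²) × distance (m)
coulomb = {"A": 1, "s": 1}            # charge: current × time

volt = divide(joule, coulomb)
print(volt)  # {'kg': 1, 'm': 2, 's': -3, 'A': -1} — matches kg·m²/(A·s³)
```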
{ "source": [ "https://electronics.stackexchange.com/questions/138319", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/39905/" ] }
138,378
I've seen some PCBs where some components have been sketched on the silkscreen but don't seem to be mounted on the board. See this image (of a PlayStation 2) where a transformer has been drawn: Why is this done on some circuits?
There are a couple of possibilities. One is that they're using a single PCB for multiple designs. Another (that's still technically sort of the same idea, but mostly different in practice) is that they started with one design. Then (for example) a new part became available that (for example) integrated more functionality onto an IC, so some of the passive components that used to be required aren't any more. For example, a part that required external pull-up resistors might be replaced with one that's otherwise identical, but has internal pull-up resistors. Your board has spots for pull-up resistors, but with the new part you get exactly the same functionality by simply omitting them. It's mostly a question of balancing the cost of designing a PCB against the increased cost of producing the design using the existing PCB design. For example, let's assume it costs $500,000 to design a PCB, and by designing a new one you could save $1 on each finished item. Obviously you need to sell at least 500,000 of the modified design for the re-design to break even. If you're selling fewer than that, you're better off sticking with the existing design. In addition, most suppliers give (often substantial) price breaks for producing a larger number of identical items. For example, let's assume you have two designs and expect to sell about 5,000 of each of those designs. All else being equal, one of the PCBs should cost $1 less to produce than the other--but if you buy 10,000 of a single design you get (say) a 10% discount. In this case, if each PCB costs at least $10 to produce, you end up saving money by using the same board for both designs. Having fewer PCB designs also simplifies production. You only have to track one part in inventory instead of two. That part (the PCB) may easily be the only one that's truly unique to each design, so if your guess/prediction about how the two designs will sell is wrong, keeping your inventory balanced can be quite a bit simpler. 
For example, ordering more of a custom PCB will typically have considerably longer lead time than ordering more of standard capacitors, resistors, off the shelf ICs, etc.
{ "source": [ "https://electronics.stackexchange.com/questions/138378", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/24361/" ] }
139,688
Though the resistor is always introduced as one of the simplest components, it is the one that makes the least sense to me. Ohm's law defines the resistance as $$R = \frac{V}{I}$$ which means the voltage is given by $$V = I \cdot R$$ and the current by $$I = \frac{V}{R}.$$ So, following the law, a resistor must affect both voltage and current; however, in practice it seems to change only one quantity: To lower the voltage To lower the current This does not make much sense to me, because in my understanding both voltage and current should be lowered, yet in the common LED resistor example only one quantity is affected: $$ U = 9V\\ I = 30mA\\ R = 300Ω $$ You also find use cases where only the voltage is affected. How do I interpret this? What factor determines whether the resistor affects the voltage or the current?
There is no factor that determines if the voltage or the current is reduced. That whole concept is erroneous. The simple statement you are looking for is: A Resistor Defines the Relationship Between the Voltage and the Current That is, if the current is fixed, then the resistor defines the voltage. If the voltage is fixed, then the resistor defines the current. In all three of the Ohm's Law formulae you will have two of the three values as fixed values - values you know, through measurement, or whatever, and the third variable is the one you want to find. From there it's simple maths. The LED example, though, throws an extra spanner in the works, since the LED isn't a linear device . So its influence on the circuit is calculated separately before Ohm's Law is applied. You have three known values, and you want to calculate a fourth. The known values you have are: the supply voltage (9V), the LED forward voltage (say, 2.2V as an example), and the current you want to flow through the LED (30mA). From that you want to calculate the value of the resistor. So you subtract the LED's forward voltage from the supply voltage, since those are both fixed voltages, and the result will be the amount of voltage that must be dropped across the resistor for the whole to total 9V. So 9V - 2.2V is 6.8V. That is a fixed voltage. The current you want is fixed too - you have decided on 30mA. So the resistor value is then: $$ R=\frac{V}{I} $$ $$ \frac{6.8}{0.03} = 226.\overline{6} \Omega ≈ 227 \Omega $$ You will always have two of the three values as fixed values - either because they are set by external factors, like the power supply or battery voltage, or they are a specific value that you require or desire, when it is you who has set that value. The third value is what must be calculated to make both those fixed values hold true.
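The worked example in the answer can be written out as a short calculation. The numbers (9 V supply, 2.2 V forward drop, 30 mA) come straight from the answer; the variable names are my own:

```python
# LED series resistor: two quantities are fixed (the voltage across the
# resistor and the desired current), so Ohm's law defines the third (R).
supply_v  = 9.0    # battery voltage (fixed by the supply)
led_vf    = 2.2    # LED forward drop (fixed, from the LED datasheet)
current_a = 0.030  # chosen LED current: 30 mA (fixed by the designer)

resistor_v = supply_v - led_vf    # voltage the resistor must drop: 6.8 V
r_ohms = resistor_v / current_a   # R = V / I
print(round(r_ohms, 1))           # 226.7 — rounds to the 227 Ω in the answer
```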
{ "source": [ "https://electronics.stackexchange.com/questions/139688", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/23782/" ] }
140,111
I've come across a design where each pad was connected to the GND copper layer using 4 'bridges'. What is behind these 'bridges'? Why not make a full copper layer with only the solder mask defining the pads?
No, they are not bridges, they are pads with thermal relief . A typical pad on a printed circuit board is only connected to a few narrow tracks. A pad directly connected to the copper pour is difficult to solder since the heat quickly leaks away from the pad into the copper pour due to high thermal conductivity of copper. A thermal connection restricts the heat flow, making the pad easier to solder.
{ "source": [ "https://electronics.stackexchange.com/questions/140111", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5778/" ] }
140,225
Like most computer hobbyists and programmers, I've amassed boxes of USB cables to connect USB, Micro-USB, and Mini-USB to chargers, computers, and gadgets. These cables are a mix of phone charger cables, and cables that came with external hard drives, bike lights, GPS units, and other miscellaneous gadgets. The problem is, they all look the same, just plain black cables. How can I tell if one of these cables is a charge-only USB cable instead of a USB data cable? Ideally, I would love to rely on some visual clue, but I have a multimeter I could use to test the cables with if I knew a good approach to this. My goal is to label these cables so I can resolve this ambiguity so when I reach in to my box of cables, I know which cable to use for charging my phone and which one to use to synchronise my GPS with my computer.
The kind of cable you mean is missing the D+ and D- data lines. It simply doesn't have those wires inside the cable. You can test for continuity or resistance using a multimeter. Probe between the corresponding data pins: D+ on one side to D+ on the other, or D- to D-. The D+/D- lines are the middle two pins of a USB connector. Just select one on one side of the cable, and test continuity to both of the middle pins on the other side. You will see no continuity or a high/"infinite" resistance on your meter if the cable is missing data wires and is a "charge only cable". Technically USB requires the data lines to request more power from a host device, so a cable missing these connections would, in theory, only let devices charge very slowly. In practice most USB hosts will not enforce such a limit. It is also possible that some phones will refuse to charge without data lines in the cable.
{ "source": [ "https://electronics.stackexchange.com/questions/140225", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/59071/" ] }
140,227
I want to design a 4-bit PISO shift register with 4 DFFs and 3 AND gates. I have gotten as far as these two designs, but I can't minimize further so as to use only 3 AND gates in the implementation. If anyone has any suggestions, I would appreciate it. I thought of another design but didn't include it here because I think it limits the use of the shift register. Also, I think the second design is not right. I believe there is another way to implement it, but I am not sure.
{ "source": [ "https://electronics.stackexchange.com/questions/140227", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/59074/" ] }
140,554
I am a physics teacher who did engineering and hated all things electrical! Hence when my students sometimes ask me how a voltmeter can measure the potential difference between two points if there is no current passing through the voltmeter. I can only assume that it is because having an infinite resistance is impossible, but I have never had the confidence to answer this without worrying about feeding them incorrect information. My idea as it stands is that the resistance of a voltmeter is only theoretically infinite, in which case there will be a current flowing, however small, that can be used somehow by the voltmeter of a predetermined resistance to calculate the actual potential difference. Can somebody explain whether I am along the right lines with this and help me explain this in definite terms or at least disabuse me of my assumptions and tell me the correct idea?
The underlying difficulty seems to be the belief that some current must flow to measure voltage. This is false. Since you are a physics teacher, I'll explain by making analogies to other physical systems. Say we have two sealed vessels, each filled with some fluid. We want to measure the pressure difference between them. Like voltage, relative pressure is a difference in potentials. We could connect them with a tube which is blocked in its middle by a rubber diaphragm. Some fluid will move initially, but only until the diaphragm stretches to balance the forces of the fluids acting on it. We can then infer the pressure difference from the deflection of the diaphragm. This meets the definition of infinite resistance in the electrical analogy, since once this system has reached equilibrium, no current flows (neglecting diffusion through the diaphragm, which can be made arbitrarily small and isn't necessary for the operation of the device). However, it does not qualify as infinite impedance , because it has non-zero capacitance . In fact, this device is exactly Bill Beaty's favorite mental model of a capacitor : There are, in fact, devices that measure voltage that work analogously. Most electroscopes fall into this category. For example, the pith ball electroscope: Many of these devices are very old and require very high voltages to work. However, modern MOSFETs are essentially the same thing at a microscopic scale in that their input looks like a capacitor. Instead of deflecting a ball, the voltage modulates the conductivity of a semiconductor: The MOSFET works by altering the conductivity of a channel between the source (S) and drain (D) as a function of the voltage between the gate (G) and the bulk (B). 
The gate is separated from the rest of the transistor usually by a thin layer of silicon dioxide (white in picture above), a very good insulator, and like the diaphragm device before, whatever very small leakage there is isn't relevant to the operation of the device. We can then measure the conductivity of the channel, and the current flowing in this channel can be supplied by a separate battery and not the device under test. Thus, we can measure a voltage with an extremely high (theoretically infinite) input resistance. simulate this circuit – Schematic created using CircuitLab
{ "source": [ "https://electronics.stackexchange.com/questions/140554", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/59201/" ] }
140,618
I don't understand how an FPGA can be used to accelerate an algorithm. Currently I'm running a time-consuming real-time algorithm on a quad-core laptop so that four computations can be done in parallel. I have recently been alerted that an FPGA may yield even better performance. I don't understand how that works. Can someone explain how an FPGA accelerates an algorithm, and whether I should switch to a Xilinx or Altera FPGA solution or keep performing the computations on my quad-core laptop? Additional details: the algorithm runs 20 artificial neural networks using inputs fed in through the wavelet packet transform. Thank you all for the great answers.
A colleague of mine benchmarked this and came to the conclusion that FPGAs would outperform a PC once you had more than about 100 independent , integer tasks that would fit in the FPGA. For floating point tasks GPGPU beat FPGA throughout. For narrow multithreading or SIMD operation then CPUs are extremely optimised and run at a higher clock speed than FPGAs typically achieve. The other caveats: tasks must be independent. If there are data dependencies between tasks then that limits the critical path of computation. FPGAs are good for boolean evaluation and integer maths, as well as hardware low-latency interfaces, but not for memory-dependent workloads or floating point. If you have to keep the workload in DRAM then that will be the bottleneck rather than the processor.
{ "source": [ "https://electronics.stackexchange.com/questions/140618", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/50265/" ] }
141,121
Would it be theoretically possible to speed up modern processors if one would use analog signal arithmetic (at the cost of accuracy and precision) instead of digital FPUs (CPU -> DAC -> analog FPU -> ADC -> CPU)? Is analog signal division possible (as FPU multiplication often takes one CPU cycle anyway)?
Fundamentally, all circuits are analog. The problem with performing calculations with analog voltages or currents is a combination of noise and distortion. Analog circuits are subject to noise and it is very hard to make analog circuits linear over huge orders of magnitude. Each stage of an analog circuit will add noise and/or distortion to the signal. This can be controlled, but it cannot be eliminated. Digital circuits (namely CMOS) basically side-step this whole issue by using only two levels to represent information, with each stage regenerating the signal. Who cares if the output is off by 10%, it only has to be above or below a threshold. Who cares if the output is distorted by 10%, again it only has to be above or below a threshold. At each threshold compare, the signal is basically regenerated and noise/nonlinearity issues/etc. stripped out. This is done by amplifying and clipping the input signal - a CMOS inverter is just a very simple amplifier made with two transistors, operated open-loop as a comparator. If the level is pushed over the threshold, then you get a bit error. Processors are generally designed to have bit error rates on the order of 10^-20, IIRC. Because of this, digital circuits are incredibly robust - they are able to operate over a very wide range of conditions because the linearity and noise are basically non-issues. It's almost trivial to work with 64 bit numbers digitally. 64 bits represents 385 dB of dynamic range. That's 19 orders of magnitude. There is no way in hell you are going to get anywhere near that with analog circuits. If your resolution is 1 picovolt (10^-12) (and this will basically be swamped instantly by thermal noise) then you have to support a maximum value of 10^7. Which is 10 megavolts. There is absolutely no way to operate over that kind of dynamic range in analog - it's simply impossible. Another important trade-off in analog circuitry is bandwidth/speed/response time and noise/dynamic range. 
Narrow bandwidth circuits will average out noise and perform well over a wide dynamic range. The tradeoff is that they are slow. Wide bandwidth circuits are fast, but noise is a larger problem so the dynamic range is limited. With digital, you can throw bits at the problem to increase dynamic range or get an increase in speed by doing things in parallel, or both. However, for some operations, analog has advantages - faster, simpler, lower power consumption, etc. Digital has to be quantized in level and in time. Analog is continuous in both. One example where analog wins is in the radio receiver in your wifi card. The input signal comes in at 2.4 GHz. A fully digital receiver would need an ADC running at at least 5 gigasamples per second. This would consume a huge amount of power. And that's not even considering the processing after the ADC. Right now, ADCs of that speed are really only used for very high performance baseband communication systems (e.g. high symbol rate coherent optical modulation) and in test equipment. However, a handful of transistors and passives can be used to downconvert the 2.4 GHz signal to something in the MHz range that can be handled by an ADC in the 100 MSa/sec range - much more reasonable to work with. The bottom line is that there are advantages and disadvantages to analog and digital computation. If you can tolerate noise, distortion, low dynamic range, and/or low precision, use analog. If you cannot tolerate noise or distortion and/or you need high dynamic range and high precision, then use digital. You can always throw more bits at the problem to get more precision. There is no analog equivalent of this, however.
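The dynamic-range figures quoted in the answer (385 dB for 64 bits, a picovolt floor implying a megavolt-scale ceiling) are easy to reproduce. A quick sketch, using the answer's numbers:

```python
import math

# Dynamic range of a 64-bit integer in decibels (20·log10, amplitude convention).
bits = 64
dr_db = 20 * math.log10(2 ** bits)
print(round(dr_db, 1))  # 385.3 dB — the "385 dB" in the answer

# The analog counter-example: 1 pV resolution over the same ratio.
resolution_v = 1e-12              # one picovolt floor
max_v = resolution_v * 2 ** bits  # ceiling at that resolution
print(f"{max_v:.1e} V")           # 1.8e+07 V — the ~10 MV order of magnitude
```

(The answer talks in round powers of ten — "19 orders of magnitude", "10 megavolts" — which is why the exact 2⁶⁴ figure here comes out slightly higher.)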
{ "source": [ "https://electronics.stackexchange.com/questions/141121", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/33690/" ] }
141,360
You read the title and that's what I am trying to do. Synopsis: I have a young son who is determined to catch Santa (to what ends I don't know). He even dreamed up using some kind of pressure plate and connecting it to a light that will turn on when Santa steps on it to get cookies. I would love to help him build this contraption. We have been down at RadioShack repeatedly. More practically, I am not an EE and have minimal experience with (and time for) bread boards. So, I was hoping some people here could give some guidance. I have been evaluating Little Bits but not sure which way to proceed. I am thinking a pressure sensor under a carpet near the cookies that somehow trips a relay and turns on a light or something else clever. Naturally, I don't want this to succeed just yet in capturing Santa; consider it a lesson in drafting better requirements and a more complete design. Next year we can drive a new design for catching Santa. So, while the ultimate goal is a bit childish, is it a serious enough question and maybe I can jump start my son's interest in engineering.
Adorable. Frankly, the Little Bits kit is way overkill and incredibly expensive. If your goal is to make a simple sensor that detects pressure and turns on a light bulb, that can be done using stuff you probably have around the house. Here's a basic idea that might be "good enough" for this year. Next year may require some more sophistication as your son's imagination develops. Instead of a force transducer pad or something equally expensive, use a metal plate that is propped up by a small spring. When the plate is stepped on, it compresses the spring and makes contact with another plate on the floor. In doing so, a circuit is completed and activates a light source. The metal plates can be cardboard or plastic wrapped in aluminum foil. Or thin sheet metal. The spring can be an actual spring or any squishable material that deforms enough under the weight of a person (or one jolly fat guy). The whole assembly can be hidden under a rug with only a slight bump giving away its presence.
{ "source": [ "https://electronics.stackexchange.com/questions/141360", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/43203/" ] }
143,921
Apart from condensation why do electronic components usually have a low temperature limit? For example my laptop says something along the lines of -10 °C to 75 °C temperature while in use. I can understand the high temperature limit, as things will probably melt! But why is cold such a bad thing? Apart from batteries, which components will extreme cold damage, and how? Will using it increase the damage? Will using the equipment offset this damage (as they warm up from use)? Also, I am talking about extreme temperatures below -50 °C, so is condensation still a problem? Note: I am not storing it so it is not a duplicate of another question. Note 2: I am not talking about semiconductors, but generally speaking.
I once designed an amplifier that would oscillate at -10°C. I fixed it by changing the design to add more phase margin. In this case, the oscillation did not cause any damage, but the circuit did not work well in this condition, and it caused errors. These errors went away at higher temperatures. Some plastics crack when they freeze. Dry ice is -78.5°C, and I have broken a lot of plastic with dry ice. For example, I destroyed a perfectly good ice chest that cracked into little pieces in the spot where I had a chunk of dry ice in it. In surface-mount designs, the differential temperature coefficient of expansion between the parts-soldered-to-the-circuit-board and the circuit board can cause large stresses. The stress-strain-temperature relationship often barely works over the specified temperature range. When the equipment is powered up, the hot components can change shape and break the brittle plastic, much like my old ice chest. If the equipment is below 0°C and then you take it into a nice warm, humid office, water will condense on the circuit boards and can cause problems. Presumably a similar thing can happen with frost, depending on the weather. When the frost melts, there can be problems. When I receive equipment in the morning that has been carried as air cargo, I assume that it has recently been very cold, and I let it sit around for some hours to warm up slowly and to stay dry before opening the box in the office. Turning on very cold gear could be interesting. Some current-limiting components, such as a PTC or PPTC , will pass a lot more current. The lubricants in motors such as fans and disk drives could also be a problem.
{ "source": [ "https://electronics.stackexchange.com/questions/143921", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/39152/" ] }
144,125
I find myself constantly trying to recycle old components in my new projects. Yesterday, I needed a demultiplexer so I just took it out from some old phone board... Demultiplexers cost basically nothing and the one I took out is quite old. So my question is this: Does it make sense to reuse old components the way I did or is it better to use new ones?
I guess the answer is maybe . Yes, if... you're in the process of fixing something and the hard-to-get spare part is readily available on a junk board in the corner of your lab. Hard to get can even be a standard 10k resistor, if it's Sunday afternoon and you don't want to order parts online or wait for the next electronics shop to open. Sometimes, I even re-use old parts when hacking something at my company and don't want to wait two days for ordered parts before putting the design in production. you're trying to learn from an existing design and rebuild a sub-circuit found on a larger board. Besides solid knowledge about the required theory, existing designs are your best teacher. you're after exotic components (high voltage from CRT monitors, vintage parts from old equipment, cool-looking stuff like nixie tubes or a magic eye from an old radio) No, if... you're building a large new project and the components we're talking about are standard, cheap diodes, resistors, ... you're going to sell your circuit and must know where your components come from. Who tells you that a part on a junk board has never been subjected to stresses beyond those listed in the data sheet's absolute maximum ratings, causing permanent damage or deviations from the values you must count on?
{ "source": [ "https://electronics.stackexchange.com/questions/144125", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/61537/" ] }
144,326
In precalculus class, we are learning about sin/cos/tan/cot/sec/csc and their amplitudes, periods, and phase shifts. I've studied electronics on and off for about a year. I would like to know if we actually know what waves look like. Do they actually look like the sines and cosines in mathematics textbooks, or are those wave functions just representations of something we can't see but can only analyze through its effects, and therefore something whose appearance we don't actually know?
Forget the quantum stuff for a moment. If you want to learn about quantum electrodynamics, read QED by Richard Feynman. (You should read it anyway; it may be the only really good pop physics book.) Classically, an electromagnetic field is a force field that acts on electric charge. It doesn't "look like" something any more than a mechanical push or pull does. One of the things that the EM forces can act on is molecules. They can change the shape of the molecules, or (at high frequencies) even break chemical bonds. That's how you see -- light stimulates a chemical reaction in the cells of your retina, which kicks off a chain of chemical reactions that culminate in brain activity. When we say that a radio wave can be described as a sine wave, we're talking about how the amplitude of the wave (i.e. the strength of the force) varies over space and time. Sine waves tend to pop up a lot for the reasons Dave mentioned -- they're simple solutions to second-order differential equations, and you can use Fourier analysis to describe other signals in terms of sinusoids. Sine waves are also used to talk about sound, for the same reason. Most radio waves will not be pure sinusoids, but many are based on sinusoids. For example, the amplitudes of AM radio waves are sinusoids whose amplitude varies slowly. The amplitudes of FM radio waves are sinusoids whose frequencies vary slowly. Here's an illustration, courtesy of Berserkerus on Wikimedia Commons : Notice that the example signal in this image is also a sine wave. That's not an accident. Sine waves work well as simple test signals. The radiation from power lines would also be pretty close to a pure sine wave. If you want to visualize a radio wave, imagine being underwater near a beach. The currents aren't visible, but you can still feel moving waves of water as they push you back and forth. That's what radio waves do to the electrons in an antenna.
{ "source": [ "https://electronics.stackexchange.com/questions/144326", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/56152/" ] }
144,530
I've tried to find a circuit for a coarse and fine adjustment (two potentiometers) voltage divider, but I don't understand it and/or the ones I've found don't have a linear response. Problem: I want an adjustable voltage from 0 - 5V using two potentiometers, one for coarse adjustment and the other for fine (10mV if possible) adjustment. From the datasheets I've looked at (e.g. this ) they don't seem to specify the resolution of increments possible with the pot. Here are three circuits I currently have: The third circuit's fine adjustment decreases as the coarse adjustment is set higher, so I don't think this is a good idea (unless a logarithmic pot is used... no idea how those work yet). Since the first and second are very similar, I'll consider the first one. I assumed a 5 degree resolution out of 300 degrees, since I could not find any information regarding this. This gives me: 0.83kOhm / adjustment with the 50K pot, and a 166mV resolution 0.167kOhm / adjustment with the 10K pot The equation I obtain is: $$ V_{out} = \frac{R_{coarse} + R_{fine}}{50 + R_{fine}} V_{in} $$ Plotting this in MATLAB for 0V coarse adjustment, I get the following curve: At the lower end of the pot, there is a resolution of 33mV and at the higher end of the pot there is a resolution of 24.7mV. For my application, this is adequate. However, I'm unsure if there is a better (and linear) approach to a fine and coarse adjustment.
This is better.. simulate this circuit – Schematic created using CircuitLab Advantages are: Low sensitivity to pot tolerance and tempco (you can use precision resistors for R2/R3) Quite linear and almost constant fine adjustment range in mV Quite constant (+/-0.5%) and predictable output impedance (minimum 9.09K maximum 9.195) Low sensitivity to CRV (contact resistance variation) of pots (1% CRV in R1 results in 0.05% variation). This circuit draws 20mA or so from the 5V rail. If that's an issue you can increase R4 10:1, increase both R4 and R1 by another 10:1 at the expense of a bit of performance or scale all the value at the expense of output impedance. Your circuit #1 has an output impedance of 0 ohms to 27.5K, depending on the pot settings. Fine and coarse only takes you so far, you could also consider a switched voltage divider for the "coarse" adjustment. Expecting the "coarse" adjust to stay stable within 0.2% may be too much to ask unless it's a very nice potentiometer. Note that your conductive plastic pot does not specify a temperature coefficient at all- that's because conductive plastic pots are generally horrible- maybe +/-1000ppm/°C typically, so using them as a rheostat rather than a voltage divider is not such a great idea. You've got that reduced by 5:1 by the ratios of the pots, but it's still pretty bad. The circuit I presented would typically be about 5x better with decent resistors for R2/R3 because the pots are used purely as voltage dividers. Edit: as a good approximation for R4 << R3 and R1 << R2 (you can do the exact math in Matlab taking the pot resistances into account if you like), the output voltage is: \$ V_{OUT} = 5.0 (\frac {\alpha \cdot 9.09K}{10K} + \frac {\beta \cdot 9.09K}{100K}) \$ Where 0\$\le \alpha\le 1\$ is the position of R1 and 0\$\le \beta \le 1\$ is the position of R4 So the range of R1 is 4.545V and the range of R4 is 0.4545V. If you center both pots you get 2.500V. 
If you can set R4 to 1% of full scale (reasonable), that's 4.5mV resolution.
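As a quick numerical check of the approximation given in the answer (α and β are the pot positions, and the resistor values are those in the formula):

```python
def v_out(alpha, beta, vin=5.0):
    """Approximate output of the coarse/fine divider, valid for
    R4 << R3 and R1 << R2 (the formula from the answer)."""
    return vin * (alpha * 9.09e3 / 10e3 + beta * 9.09e3 / 100e3)

coarse_range = v_out(1, 0) - v_out(0, 0)   # span of the coarse pot R1
fine_range = v_out(0, 1) - v_out(0, 0)     # span of the fine pot R4
centered = v_out(0.5, 0.5)                 # both pots centered
```

This reproduces the answer's figures: a 4.545 V coarse range, a 0.4545 V fine range, and 2.500 V with both pots centered.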
{ "source": [ "https://electronics.stackexchange.com/questions/144530", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/29737/" ] }
144,607
Like a diode or BJT, which drops around 0.6V, is there any voltage drop across the MOSFET's drain and source when the MOSFET is turned on? The datasheet mentions a diode forward voltage drop, but I assume that is for the body diode only.
The MOSFET behaves like a resistor when switched ON (i.e. when Vgs is large enough; check the data sheet). Look in the data sheet for the value of this resistor. It's called Rds(on). It may be a very small resistance, much less than an Ohm. Once you know the resistance, you can calculate the voltage drop, based on the current flowing.
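For example, the drop follows plain Ohm's law (the Rds(on) and current figures below are made-up illustration values; check the actual datasheet for your part):

```python
def mosfet_drop(rds_on, i_drain):
    """Voltage drop across a fully-enhanced MOSFET, which behaves
    like the resistor Rds(on): V = I * R (plain Ohm's law)."""
    return i_drain * rds_on

# A hypothetical 50 milliohm part carrying 2 A:
v_drop = mosfet_drop(0.050, 2.0)   # 0.1 V across the channel
p_dissipated = v_drop * 2.0        # 0.2 W heating the package
```

Note that, unlike the fixed ~0.6 V of a diode, this drop scales with the current.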
{ "source": [ "https://electronics.stackexchange.com/questions/144607", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/31102/" ] }
144,706
We had a big argument last night with vague conclusions. Is the current with a frequency less than 1 Hz considered DC? It would still resemble a wave...
AC and DC are relative terms. If you're looking at a 10kHz waveform for 100ns, you will think it is DC. It works the other way around too: if you forget about what's providing you with "DC", who knows if this waveform is not going to change in the next seconds, minutes, days, years? Think of a capacitor's voltage during slow discharge, for example. If you monitor the voltage on an oscilloscope, you'll see a flatline. DC you say? Wait longer, and the flatline will decrease in voltage towards zero, which means there is some AC in there as well. Besides, no signal is actually pure DC, you always have AC components as well due to noise and all sorts of causes. It is only "DC-enough" or "AC-enough" for the application you're intending to use it with/for. Fourier transforms are a good way to picture what DC and AC components are in a waveform. The transform is constant for periodic signals and depends on time for any non-periodic signals like the capacitor example. For the square wave: ( source: wikipedia )
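The slow-discharge example can be sketched in numbers (the supply voltage and RC time constant below are arbitrary illustration values):

```python
import math

def cap_voltage(t, v0=5.0, rc=100.0):
    """Exponential discharge of a capacitor through a resistor:
    V(t) = V0 * exp(-t / RC)."""
    return v0 * math.exp(-t / rc)

# Over a 1 ms oscilloscope window the trace is effectively flat ("DC")...
short_window_drop = cap_voltage(0.0) - cap_voltage(1e-3)
# ...but over 500 s (five time constants) it has clearly decayed ("AC").
long_window = cap_voltage(500.0)
```

The same waveform looks like DC or AC depending purely on how long you watch it.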
{ "source": [ "https://electronics.stackexchange.com/questions/144706", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/61807/" ] }
144,711
I am designing a circuit to control an electromagnet (coil with iron core) through a N-channel MOSFET driven by a PWM signal. I can't figure out how to derive the relationship between the PWM duty cycle and the current I will get in the coil, and I need to know this as the magnetic field generated is a function of i. These are the system specs: DC voltage source: 7.5V Coil L = 15 mH Thanks for the help! Update Thanks for the feedback so far: of course the resistance is missing, I forgot about that! At this point, I have the coil with L = 15mH and R = 2.4 Ohm. And no, it's not coursework, just a personal project. So, the current in the circuit should be i = V/R (1 - e^(-Rt/L)). The steady-state value is therefore i = V/R. With this in mind, I thought of adjusting this to PWM as follows: V = Vcc * %pwm (%pwm: duty cycle), therefore I finally have a relationship that links the duty cycle to the current through the coil. This, however, turns out to be off compared to experimental data I just took: for example, for duty cycle of 20%, I would expect V = 1.5 V and i = 0.625 A. In reality, however, I measure a voltage around 1.1 V. What is this due to? I thought it might be linked to the PWM frequency, but it's 3.9kHz, which sounds like more than enough! Finally, I also made a model in Simulink to try and understand the issue, and these are the plots I'm getting: Funny thing is, I am getting average current and voltage values much higher than they should be! Besides, why does the voltage plot vary as a "sawtooth" rather than the square PWM signal? Thanks again! Update 2 Right, so I think I managed to get my model right now, thank you again for the help everybody! At this point, I think I have a fairly good model of the relationship between PWM duty cycle and current through coil. This is my updated Simulink output for a 50% duty cycle: Thank you again simulate this circuit – Schematic created using CircuitLab
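Following the question's own figures (Vcc = 7.5 V, R = 2.4 Ω, L = 15 mH, 3.9 kHz PWM), the steady-state relation from the update can be sketched as:

```python
def avg_coil_current(duty, vcc=7.5, r=2.4):
    """Steady-state average coil current under PWM, assuming the
    switching period (~0.26 ms at 3.9 kHz) is much shorter than the
    coil's L/R time constant (~6.25 ms here), so the inductor
    averages the applied voltage: I = Vcc * D / R."""
    return vcc * duty / r

i_20pct = avg_coil_current(0.20)   # the 0.625 A the question expects
i_50pct = avg_coil_current(0.50)
```

Any deviation of the measured voltage from Vcc·D (as in the 1.1 V vs 1.5 V observation) would come from effects this idealized relation ignores, such as the MOSFET's on-resistance and wiring drops.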
{ "source": [ "https://electronics.stackexchange.com/questions/144711", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/61809/" ] }
144,878
The I²C protocol allows, in theory and with 7-bit addressing, up to 127 devices to be connected to the master. This is a large number, so why would any low-cost microcontroller (e.g. this PIC24 ), have more than one I²C port? Why is it needed?
Sensor hub arrangement In this scenario, there are two I²C buses. Let's call them local bus and main bus . The purpose of the local bus is to connect a bunch of sensors to a microcontroller (μC). The purpose of the μC is to poll the sensors, aggregate information from them, and detect certain events. A μC in such role is called sensor hub . The sensor hub is not responsible for higher order functions; there is a powerful main processor for that. The main bus connects the sensor hub to the main processor. So, the sensor hub μC is a master on the local I²C bus and a slave on the I²C main bus. SPI and I²C The PIC linked in the original post doesn't share the pins between SPI and I²C. However, there are other PICs that use the same pins for hardware SPI and I²C, because both are implemented with the same MSSP peripheral. If a PIC has two separate MSSP peripherals, then one can be used for hardware SPI, while the other one is used for hardware I²C.
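The dual-role arrangement can be sketched abstractly (every class and method name below is invented for illustration; real firmware would drive the MSSP peripherals instead):

```python
class SensorHub:
    """Models the dual-role microcontroller: I2C master on the local
    bus (polling sensors) and I2C slave on the main bus (answering
    the main processor)."""

    def __init__(self, sensors):
        self.sensors = sensors   # {address: read_function} on the local bus
        self.latest = {}

    def poll_local_bus(self):
        # As local-bus master, the hub initiates every transaction.
        for addr, read in self.sensors.items():
            self.latest[addr] = read()

    def on_main_bus_request(self):
        # As main-bus slave, it only responds when the main processor
        # addresses it: hand over the aggregated readings.
        return dict(self.latest)

# Hypothetical sensors: a temperature reading and a pressure reading.
hub = SensorHub({0x48: lambda: 25.0, 0x76: lambda: 1013.2})
hub.poll_local_bus()
report = hub.on_main_bus_request()
```

The point of the model is that the same device plays master on one bus and slave on the other, which is why two independent I²C peripherals are useful.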
{ "source": [ "https://electronics.stackexchange.com/questions/144878", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/22868/" ] }
145,829
This article Understanding Digital Multimeter DMM Specifications / Specs mentions: A digital multimeter will only be able to meet its specifications when it is within a certain environment. Conditions such as temperature, humidity and the like will have impact on the performance. Also conditions such as line voltage can affect the performance. In order to ensure that the digital multimeter is able to operate within its uncertainty specification, it is necessary to ensure that the external conditions are met. Outside this range the errors will increase and the readings can no longer be guaranteed. A further element to be considered is the calibration period of the digital multimeter. As all circuits will drift with time, the DMM will need to be periodically re-calibrated to ensure that it is operating within its specification. The calibration period will form part of the specification for the DMM. The most usual calibration period is a year, but some digital multimeter specifications may state a 90 day calibration period. The 90 day period will enable a tighter specification to be applied to the digital multimeter, allowing it to be used in more demanding applications. When looking at the calibration period of the digital multimeter, it should be remembered that calibration will form a significant element of the cost of ownership and after some years will be significantly above that of any depreciation. A long calibration period for the digital multimeter is normally to be advised, except when particularly demanding testing is required. Is it necessary to calibrate a digital multimeter every year? (From my understanding, only analog multimeters require calibration)
For hobby/student DMMs, the answer is no. You don't have to calibrate it every year. Please take note of the quote: "A long calibration period for the digital multimeter is normally to be advised, except when particularly demanding testing is required.". For a 3 1/2 digit battery-powered DMM, most never get calibrated after being bought. If you're using a 6 1/2 digit unit, and measuring microvolts to trouble-shoot medical equipment, that's another story. It all comes down to how important absolute accuracy is to you. However, your belief that "only analog multimeter require calibration" is dead wrong. The distinction between analog and digital in this case only applies to the display. An analog multimeter uses its conditioning / amplifier circuits to directly drive an analog meter. A digital unit uses its conditioning / amplifier to drive an A/D converter. In both cases, if the analog circuits get out of whack, the meter will give bad results. A problem is more likely to be noticed in a digital meter simply because DMMs make small errors easier to see.
{ "source": [ "https://electronics.stackexchange.com/questions/145829", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/50795/" ] }
145,916
I am a beginner at electronics. I don't have many tools and stuff, so I am forced to solder by hand. I don't have a pcb to solder on, so I usually just solder the wires together, but it's really difficult without a holder. Is there an easier way to do this? If not, is there a substitute for a wire holder? Any help would be appreciated.
Something like this may help. Just google "solder helping hand"
{ "source": [ "https://electronics.stackexchange.com/questions/145916", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/62106/" ] }
146,085
The light bulb shown below is from a Christmas tree light string. The string is made up of series-connected light bulbs. However, if one burns out, the other bulbs keep working. Taking the bulb out of its socket makes all the others in series with it stop working. The picture doesn't show it very clearly, but the filament is broken in two. However, when I used my multimeter to check the resistance across this bulb, it showed about 1.5 ohms, which explains the behaviour (all the other series bulbs still lighting). My question is: what is the working principle of this bulb? My guess is that they use a parallel-connected resistor (see the wound wire inside the bulb, to the left of the filament in the picture). But if this resistor is only 1.5 ohms, how does the bulb light up? Assuming equal resistances, each light bulb gets about 7 V AC. Imagine how much current that 1.5 ohm "resistor" alone would draw...
These bulbs contain a shunt wire which is normally insulated from the bulb but closes when a high voltage is applied. From How Stuff Works : You can also see the wire in your image, below the burned out filament. The shunt wire is coated in an insulation with a low breakdown voltage. When a bulb burns out, the other bulbs look like wires (especially after they cool and their resistance decreases) and the burnt bulb looks like an open circuit, thus nearly the full supply voltage (110 or 220 VAC) appears across the broken bulb. This arcs through the shunt wire's insulation, destroying the insulation, allowing the shunt wire to short out the bulb, allowing the string to continue working. simulate this circuit – Schematic created using CircuitLab This is an example of an antifuse . An ordinary fuse starts with a low resistance, and converts to a high resistance above a certain current threshold. An antifuse does the opposite in every respect: it starts with a high resistance, and converts to a low resistance above a certain voltage threshold. The two could be considered electrical duals . Those Christmas light fixers that look like guns generate a high voltage spike which can help a failed antifuse work. They very briefly generate a voltage higher than mains voltage (not unlike a camera flash) which can help the antifuse do its thing if mains voltage isn't enough to break down the insulation.
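The claim that nearly the full supply voltage appears across the broken bulb is easy to check with a hypothetical string (the 110 V supply and 50-bulb count below are illustration values):

```python
def voltage_across_failed_bulb(supply, n_bulbs):
    """With one filament open, no current flows, so every working
    bulb drops ~0 V (V = I*R with I = 0) and the whole supply appears
    across the break -- enough to punch through the shunt wire's
    insulation and fire the antifuse."""
    working_bulb_drop = 0.0
    return supply - (n_bulbs - 1) * working_bulb_drop

normal_per_bulb = 110.0 / 50   # about 2.2 V each while all bulbs work
across_open = voltage_across_failed_bulb(110.0, 50)   # the full 110 V
```

So the failed bulb sees roughly fifty times its normal voltage, which is what makes the low-breakdown insulation scheme work.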
{ "source": [ "https://electronics.stackexchange.com/questions/146085", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/40609/" ] }
146,098
The jumper cables in my possession are all so stiff and seem cheaply made. The connectors fall apart and its frustrating. The jumper cable is such a simple but critical component - how/where can I make the time or money investment to get nice cables? I do have one example (in my minimal experience) of a set of nice cables, and these come off of my Saleae Logic analyzer. They seem rugged, not stiff, well made and built to last. Any way to get jumper cables for prototyping that resemble these? There is no doubt that the engineers who created the Saleae Logic Analyzer were aware of the components out there for making good jumper cables, and chose to make or provide ones of high quality.
{ "source": [ "https://electronics.stackexchange.com/questions/146098", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/31389/" ] }
146,482
So I understand that LEDs have a maximum current (like 20mA, for instance), but scientifically, why is this? Using the water analogy, it seems like a high voltage would be the thing that would mess something up (I like to think of it like a huge amount of "pressure" blowing out a pipe or something). Why would a rate of electron flow damage something?
It's difficult to come up with an analogy because the usual analogies for electrical systems are fluid systems. A great thing about fluid systems is that the working fluid is also good at cooling things, and most people's practical experience with fluid systems involves rates of flow where heating is not very significant. So let's try a different analogy: a string being pulled through the resistance of your fingers. Your fingers are the LED, and the voltage drop of the LED is analogous to the difference in tension of the string on either side of your fingers. Current is analogous to the rate at which the string is being pulled. Will your fingers be damaged if the string is pulled too fast? Yes: we call it "rope burn". This will happen even if you adjust the resistance of your fingers to maintain a constant difference in tension on the rope regardless of its speed (analogous to the approximately constant voltage drop of the LED). The reason is that the rate of work done, and thus, the heat generated, is the product of the force your fingers apply to the rope and the rate at which the rope is moving through your fingers. You can get a rope burn by squeezing too hard, or moving the string too fast. "Rate of work" or "rate of energy" is called power . One way to define it, for mechanical systems, is the product of force (\$F\$) and velocity (\$v\$): $$ P = Fv $$ Since power is a rate of energy it should be in units of energy per time. In SI units, thats joules per second, also known as the watt . So, however fast the rope is moving, and however much force your fingers are applying to it, you are doing work at the rate of some number of joules per second. This energy can't vanish: it becomes heat in the rope and your fingers. Once you exceed your body's ability to transfer heat away from your fingertips your skin gets too hot and you are burned. 
The analogy for electrical systems is that power is the product of voltage and current: $$ P = VI $$ \$V\$ is approximately constant for an LED, but if you increase \$I\$ enough, you generate heat faster than it can radiate to the ambient environment. The LED gets too hot and is damaged.
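Plugging illustration numbers into P = VI (the ~2 V forward drop is a typical red-LED figure assumed here, not something from the question):

```python
def led_power(v_forward, i_forward):
    """Heat generated in the LED junction: P = V * I."""
    return v_forward * i_forward

p_rated = led_power(2.0, 0.020)   # 40 mW at the 20 mA rating
p_abuse = led_power(2.0, 0.100)   # 200 mW at 5x the current --
                                  # same voltage, five times the heat
```

Because the forward voltage stays roughly constant, the dissipated power (and thus the junction temperature) scales directly with the current, which is why the current is what gets a maximum rating.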
{ "source": [ "https://electronics.stackexchange.com/questions/146482", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }
146,651
I found lying around a couple of damaged "gold-coated" probes; after checking the inner wire, it happens to be made of copper. My questions here are simple: 1) Why coat the tips if the rest of the probe is just copper? 2) In order to decrease the total resistance of the probes, shouldn't the entire wire be made of gold? (Common sense says that's a bit expensive.) 3) How can a simple gold coating help with measurements?
They are gold-plated because gold tarnishes far, far less readily than copper. Its purpose is not to provide a low-resistance conductor but to prevent tarnishing of the probe surface from affecting the measurement.
{ "source": [ "https://electronics.stackexchange.com/questions/146651", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/52466/" ] }
146,918
I am currently "investigating" FPGAs, what they can do, how they do it etc. In more than one place ( for example here ) I have seen projects that implement a simple microcontroller with FPGA. So my question: I would like to know, what is the purpose of doing such implementations? Why use a microcontroller implemented in FPGA instead of having a micro on board? What are benefits? And perhaps also what are downsides?
Benefits: a blazingly fast interface between the microcontroller and any custom interface or I/O logic on-chip; customizable processor and debug interfaces; also, the control logic is often easier to implement as code running on the soft core than by writing it in, say, VHDL. Downsides: a possibly more expensive FPGA is needed to fit both the microcontroller and the custom logic, compared to just having the custom logic on the FPGA; possibly more difficult to implement, especially with memories and if the core is complex, than a ready-made microcontroller on a separate chip.
{ "source": [ "https://electronics.stackexchange.com/questions/146918", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/33554/" ] }
148,356
I know that it's easy to make a unity gain buffer with an op-amp (as a voltage follower): simulate this circuit – Schematic created using CircuitLab I also know that it's easy to make an inverting buffer with an op-amp (an inverting amplifier with \$R_1 = R_2\$): simulate this circuit However, the accuracy of this inverting amplifier depends on the precision of \$R_1\$ and \$R_2\$ - if they're not closely matched, the output will be a bit different from \$-V_{in}\$. Is there a way of making an inverting buffer with an op-amp that doesn't depend on the precision of these resistors, like the voltage follower? Is it a better idea to get higher precision resistors?
No, there is no way to make an inverting buffer with just an op-amp that does not depend on the resistor values. You can get resistors with very fine accuracy and stability (at an equally impressive price) or you can get networks with matched (in value and in temperature coefficient) where the absolute accuracy may not be so impressive but the ratio is tightly controlled. There is a way to invert a signal without accurate resistors- the so-called flying capacitor method, but it's fairly complex and resistors are a better solution for most situations down to ppm level accuracy.
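The sensitivity to the resistor ratio can be quantified with a small worst-case sketch (the 1% tolerance is an illustrative figure):

```python
def worst_case_gain_error(tol):
    """Worst-case magnitude error of the inverting gain -R2/R1 when
    each resistor may deviate by +/-tol: the ratio swings between
    (1 - tol)/(1 + tol) and (1 + tol)/(1 - tol)."""
    return (1 + tol) / (1 - tol) - 1   # ~2*tol for small tol

err_1pct = worst_case_gain_error(0.01)    # ~2% gain error with 1% parts
err_ppm = worst_case_gain_error(10e-6)    # matched network: ~20 ppm
```

So the gain error is roughly twice the per-resistor tolerance, which is why a matched network (tight ratio, modest absolute accuracy) is the practical route to ppm-level inversion.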
{ "source": [ "https://electronics.stackexchange.com/questions/148356", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/49251/" ] }
148,575
I'm trying to wrap my mind around how capacitors work. I understand they store a charge and generally understand how but I don't understand how using them "smoothes" the flow of the charge. Doesn't, say a motor, drawing power from a charged capacitor do the same thing when drawing power from a power source? What does is mean that the charge is smoothed and how??
Capacitors don't store charge. That's such a worthless statement because it's based on this word "charge" that has multiple meanings. Please forget you ever heard it. They also do not smooth energy. What they smooth is voltage. I will answer you question, but first you must really understand how capacitors work. What capacitors store is energy. The stuff that flows around in electric circuits is electric charge . We measure rate of flow of charge in amperes. Quantity of charge is measured in coulombs. Because charge is never created nor destroyed , whenever we are measuring charge we are usually counting charge that flows past a metaphorical gate. Except for some very odd circuits, the total charge in an electronic device is also constant. It is very much like a closed hydraulic system: there's some fluid in it and you can move it around, but none ever enters or leaks out. You can count how much fluid flows past some point, but it must come from somewhere, and it must go somewhere else. Imagine if you had a spherical vessel, filled with a fluid. Down the center of the vessel is a rubber plate that you can stretch by pushing fluid in one side and pumping it out the other. That's what a capacitor is like: This is from Bill Beaty's excellent capacitor misconceptions . When you push water in one side, an equal amount of water must come out the other side. Further, once this rubber membrane is stretched, it wants to return to being straight. Thus, the water pressure on one side will be higher than the other. If you were to remove the stoppers and replace them with a hose, water would flow until the rubber were not stretched. Now replace "water" with "electric charge", and "pressure" with "voltage", and you have a capacitor. Now imagine two vessels, one the size of a golf ball, and one the size of a swimming pool. Each has a membrane of identical stretchiness in the middle. 
If you pump a tablespoon of water through the golf ball sized vessel, the membrane will be stretched a lot, and consequently the pressure difference between the sides will be great. If you do the same to the swimming pool sized vessel, the membrane will barely move at all, and the pressure difference will just be slightly more than nothing. This is what capacitance is. It tells you, for a given quantity of water moved, what the pressure difference is. It tells you, for a given amount of electric charge moved through the capacitor, what the voltage will be. It is defined as: $$ C = {q \over V} $$ Where: \$C\$ is capacitance, measured in farads, \$q\$ is charged moved through the capacitor, measured in coulombs, and \$V\$ is voltage, measure in (you guessed it) volts. Don't get hung up on "coulomb". A coulomb is how much charge moves past a point if 1 ampere is flowing for 1 second. Or, 2 amperes for half a second. Or, 1/2 ampere for 2 seconds. If you took calculus, then you will recognize that charge is the integral of current. In other words, charge is to current as distance is to velocity. You can replace "ampere" with "coulomb per second" -- the units are exactly the same. Using that knowledge and a bit of basic calculus, capacitance can also be defined in terms of voltage and current: $$ {\mathrm d V(t) \over \mathrm d t} = {I(t) \over C} $$ What this says is: the rate of change of voltage over time (volts per second) is equal to the current (amperes or coulombs per second) divided by the capacitance (farads). If you have a 1 farad capacitor, and you are moving 1 ampere (1 coulomb per second) through it, then voltage across the capacitor will change at the rate of 1 volt per second. If you double that capacitance, then the rate of change of voltage will be half. And here, I think, is the answer to your question. Frequently capacitors are put across the power supply to hold the voltage steady. 
This works because the more capacitance you have, the harder it is to change the voltage, because it requires more current to do so. In this application, capacitors don't smooth energy , they smooth voltage . They do so by providing a storage of energy from which the load can draw during times of transient high current. This makes the power supply's job easier because it doesn't have to deal with high changes in current. In effect, the capacitor helps to average the current demand of the load as seen by the power supply.
{ "source": [ "https://electronics.stackexchange.com/questions/148575", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/62755/" ] }
149,390
How can a phone wire have multiple frequencies? In my Networks Textbook about DSL vs Dial Up it says the following: The residential telephone line carries both data and traditional telephone signals simultaneously, which are encoded at different frequencies: • A high-speed downstream channel, in the 50 kHz to 1 MHz band • A medium-speed upstream channel, in the 4 kHz to 50 kHz band • An ordinary two-way telephone channel, in the 0 to 4 kHz band From my basic knowledge of physics, the frequency of a wire is the rate at which it reverses the polarity. So if you have one wire, how can the electrons be simultaneously changing polarity 4,000 times/second (for talking on the phone) and also 50,000 times/second (for using DSL)?
The underlying assumption in your question - that the frequency being measured is the rate at which electrons reverse polarity - is incorrect. The frequency of a signal at the transmitter, receiver, or anywhere in between physically corresponds to the cyclic arrival of a voltage. For example, in a digital application using amplitude modulation (let's assume on-off keying for simplicity), you could measure frequency by the number of 'on' pulses you detect per unit time. In RF communications, this might correspond to a logic-high voltage, or in optical communications it might correspond to the arrival of a large number of photons. In the ideal case, a logic-low or off state would correspond to a voltage of zero or the arrival of no photons, but dark currents and the imperfections of modulators rarely make that the case. In terms of implementation, a straightforward and simple implementation to the transmission of two separate RF frequencies on a single medium (a copper wire) is by the use of two complete transmitter chains to encode the data at the two distinct carrier frequencies, and then the use of an RF combiner to get the two outputs from the transmitters onto a single copper wire. The receiver can be implemented a number of ways, but a simplistic method would be to use an RF power divider to create two copies of the signal, and then use a high pass filter on one and a low pass filter on the other. You can then continue with the normal receiver chain. As others have said, multiple frequencies can be present on a wire at the same time. The instantaneous presence of multiple frequencies does not indicate multiple voltages though; there will necessarily be a single voltage at any given point on the wire (so long as the voltage is defined between that point and a common reference, typically ground). Over a span of time though, you can construct a signal by sampling at regular intervals. 
That signal will not look like a normal sinusoidal wave if multiple frequencies are present though, due to the principle of superposition. If you pick two carrier frequencies, let's say 5 kHz and 5 MHz, modulate data onto both and then sum the resultant modulated signals, you might be presented with a very peculiar signal in the time domain. If you apply a Fourier Transform though and look at the signal in the frequency domain, you would see a strong signal at 5 kHz, a strong signal at 5 MHz, and then a smattering of other frequencies around the carrier frequencies to account for the modulated data.
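The superposition point can be sketched numerically: sum the answer's 5 kHz and 5 MHz tones into a single voltage sequence, then recover each component by correlating against a reference sinusoid (effectively one DFT bin; the sample rate and duration below are chosen for convenience):

```python
import math

FS = 20e6   # sample rate, Hz
N = 20000   # 1 ms of samples

def tone(f, n):
    return math.sin(2 * math.pi * f * n / FS)

# One voltage per instant: the sum of the 5 kHz and 5 MHz components.
signal = [tone(5e3, n) + tone(5e6, n) for n in range(N)]

def component_amplitude(f):
    """Correlate the signal against sin(2*pi*f*t): orthogonality of
    sinusoids picks out just the component at frequency f."""
    return (2.0 / N) * sum(s * tone(f, n) for n, s in enumerate(signal))
```

Both carriers come back with their full unit amplitude, while a frequency present in neither correlates to roughly zero: the single time-domain voltage really does carry both frequencies at once.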
{ "source": [ "https://electronics.stackexchange.com/questions/149390", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/64497/" ] }
149,393
I want to use a pair of N and P channel MOSFETs in a totem pole arrangement to switch a VFD display on and off using a 74LS247 decoder/driver. I would put the P channel with the source pin on top, connected to the 25 volts the VFD requires. The P channel drain pin would be connected to the N channel drain pin. The junction of both drains would be my output (25 Volts). The source pin of the N channel would be connected to ground. The gates of both the P and N channel MOSFETs would be tied together and driven from the "Open Collector" output of the 74LS247 decoder/driver to turn on and off the appropriate MOSFET. I plan on using a Fairchild FDS8958A complementary MOSFET pair. Questions: Do I need a resistor in series between the LS247 output and the two gates? What value do I need? Do I need pullup / pulldown resistors between the source and gate pins?
{ "source": [ "https://electronics.stackexchange.com/questions/149393", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/64446/" ] }
150,058
I have read this post and it does not answer my question in its entirety: I think of a microcontroller as anything that has some memory, registers, and can process a set of instructions such as LOAD, STORE and ADD. It contains logic gates and such to perform its role, but its main task is to be a universal processor of bits. I think of a microcontroller as a system of interconnected ASIC designs to create the ability to store and process instructions. I think of an ASIC device as a circuit that has been specifically constructed using logical and electrical components to perform one single task, with no other task in mind nor extra hardware included. I think of an FPGA device as an ASIC device (a low-level device) + a bunch of unused stuff left over, used to implement a particular truth table. Despite its name, an FPGA feels very "application-specific", since it must be rewired to perform a new and different task. This leads to confusion with ASIC. Albeit, in the case of rewiring an FPGA, all necessary hardware should be present. Also, FPGAs are meant to be programmable, but isn't that what a microcontroller is meant for? The post above I referenced also mentions HDL, with which I am familiar. Can't HDL be used for both ASIC and FPGA, and by proxy to design an entire microcontroller?
ASIC vs FPGA A Field Programmable Gate Array can be seen as the prototyping stage of Application Specific Integrated Circuits: ASICs are very expensive to manufacture, and once it's made there is no going back (as the most expensive fixed cost is the masks [sort of manufacturing "stencil"] and their development). FPGAs are reprogrammable many times, however because a generic array of gates is connected to accomplish your goal, it is not optimised like ASICs. Also, FPGAs are natively dynamic devices in that if you power them off, you lose not only the current state but also your configuration. Boards now exist though that add a FLASH chip and/or a microcontroller to load the configuration at startup, so this tends to be a less important argument. Both ASICs and FPGAs can be configured with Hardware Description Languages, and sometimes FPGAs are used for the end product. But generally ASICs kick in when the design is fixed. FPGA vs microcontroller As for the difference between a microcontroller and an FPGA, you can consider a microcontroller to be an ASIC which basically processes code in FLASH/ROM sequentially. You can make microcontrollers with FPGAs even if it's not optimised, but not the opposite. FPGAs are wired just like electronic circuits, so you can have truly parallel circuits, not like in a microcontroller where the processor jumps from a piece of code to another to simulate good-enough parallelism. However, because FPGAs have been designed for parallel tasks, it's not as easy to write sequential code as in a microcontroller. For example, typically if you write in pseudocode "let C be A XOR B", on an FPGA that will be translated into "build a XOR gate with the lego bricks contained (lookup tables and latches), and connect A/B as inputs and C as output" which will be updated every clock cycle regardless of whether C is used or not.
Whereas on a microcontroller that will be translated into "read instruction - it's a XOR of variables at address A and address B of RAM, result to store at address C. Load arithmetic logic units registers, then ask the ALU to do a XOR, then copy the output register at address C of RAM". On the user side though, both instructions were 1 line of code. If we were to do this, THEN something else, in HDL we would have to define what is called a Process to artificially do sequences - separate from the parallel code. Whereas in a microcontroller there is nothing to do. On the other hand, to get "parallelism" (tuning in and out really) out of a microcontroller, you would need to juggle with threads which is not trivial. Different ways of working, different purposes. In summary: ASIC vs FPGA: fixed, more expensive for small number of products (cheaper for high volumes), but more optimised. ASIC vs microcontroller: certainly like comparing a tool with a hammer. FPGA vs microcontroller: not optimised for sequential code processing, but can do truly parallel tasks very easily as well. Generally FPGAs are programmed in HDL, microcontrollers in C/Assembly Whenever speed of parallel tasks is an issue, take an FPGA, evolve your design and finally make it an ASIC if it's cheaper to you in the long run (mass production). If sequential tasks are okay, take a microcontroller. I guess you could do an even more application specific IC from this if it's cheaper to you in the long run as well. The best solution will probably be a bit of both. What a quick search after writing this gave me: FPGA vs Microcontrollers, on this very forum
{ "source": [ "https://electronics.stackexchange.com/questions/150058", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/58446/" ] }
150,079
I am trying to design a small CNC machine for milling PCBs. I am using unipolar stepper motors driven by a microcontroller, but I am unsure of what technique to use to drive them. I once read that driving a motor with constant current provides a constant torque and driving them with constant voltage provides a constant speed. But using a constant current source seems more intuitive to me, since it's the current through a conductor that generates a magnetic field around it. Meaning if I drive them with a constant voltage there would be a rise in the strength of the magnetic field generated by the coil until it reaches its maximum (if I am not switching with higher frequency). And if I were to drive them with constant current, the coil would immediately generate the maximum strength of the magnetic field and I would be able to drive them with higher frequencies. Is my logic here flawed; where am I wrong? What technique is usually used in milling machines? Should I look out for the maximum voltage/current rating of a motor when using constant current?
{ "source": [ "https://electronics.stackexchange.com/questions/150079", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/37289/" ] }
150,299
This has been bugging me for a while now... Does a single capacitor, on its own, behave as a high-pass filter or a band-pass filter? In a crystal radio set, you use a single capacitor as the tuning element, to select which radio frequency the radio will receive. This strongly implies that a capacitor is a band-pass filter. But reading Wikipedia, it is suggested that a capacitor is actually a 1-pole high-pass filter. Well, obviously it can't be both. So which is it? (Bonus points for anybody who can point to an actual frequency response curve.)
A capacitor by itself is not a filter at all, neither high pass, low pass, nor anything else. A capacitor can be used as part of a high pass, low pass, or band pass filter, depending on how it's connected to other parts. For example, a capacitor with a resistor can be a high pass filter: or a low pass filter: Together with an inductor and some additional impedance (represented by the resistor), it can be a band pass filter: Or a band rejection filter: A crystal radio works like the left band pass filter. C1 and L1 form a resonant tank that has high impedance at the resonant frequency and low impedance at other frequencies. Even that by itself is not a filter, since just a changing impedance isn't a filter. It is the changing impedance working against some other impedance that forms a voltage divider that then makes a filter. In the example above, R1 is that other impedance. In a crystal radio, it is the impedance of the signal coupled to L1 magnetically by the antenna coil. In that case the antenna coil is the primary of a transformer, and L1 is the secondary, which resonates at a particular frequency depending on the value C1 is tuned to. Added about crystal radio: I see from the comments that there is some confusion about how the capacitor in a crystal radio works and how such a radio is tuned. There are different ways a crystal radio can be made, but I'll stick to the very common configuration you can find all over the web, and that is implemented by most crystal radio kits: The inductor is a single coil, usually magnet wire wound round something like a cardboard toilet paper roll. The coil is essentially a transformer. The transformer primary is the left section between the antenna and the tap. Since the tap is grounded, there is no direct flow of current between the two sections of the coil. Voltage is induced in the right part of the coil by transformer action.
The only way for the signal to get from the left part of the coil (the transformer primary) to the right part (the transformer secondary), is by the magnetic coupling between the two parts of the coil. The transformer creates a higher voltage at its right end, although at a higher impedance. Typical antennas have impedance in the 50-300 Ω range, whereas the crystal radio is intended to drive old style headphones that have a few kΩ impedance. The higher voltage at a higher impedance is a better match to the headphones, and allows the very limited power from the antenna to be used more efficiently. The inductance of the coil together with the capacitance form a high Q tank circuit. The radio picks up a station when the capacitor is adjusted so that the tank resonates at the station's carrier frequency. Due to the finite impedance of the antenna driving the tank as seen thru the transformer, and the impedance of the headphones loading the output, the capacitor and the coil together form a narrow band pass filter.
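The tuning behavior described above follows the standard tank resonance formula, f = 1/(2π√(LC)). Here is a small Python sketch; the 230 µH coil and the 20-365 pF variable capacitor are my own illustrative assumptions (typical of AM-band crystal sets), not values from the answer:

```python
import math

# Resonant frequency of the L-C tank: f = 1 / (2*pi*sqrt(L*C)).
L = 230e-6                               # coil inductance, henries (assumed)

def resonance(C):
    """Tank resonant frequency in Hz for capacitance C in farads."""
    return 1 / (2 * math.pi * math.sqrt(L * C))

# Sweep the assumed variable capacitor from fully meshed to fully open.
for C in (365e-12, 100e-12, 20e-12):
    print(f"C = {C * 1e12:5.0f} pF -> f = {resonance(C) / 1e3:7.1f} kHz")
```

Turning the capacitor knob sweeps the resonant frequency across roughly the AM broadcast band, which is exactly how the radio selects one station's carrier.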
{ "source": [ "https://electronics.stackexchange.com/questions/150299", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/64892/" ] }
150,321
I asked a relatively simple question . Unfortunately, the answers provoke far more questions! :-( It seems that I don't actually understand RC circuits at all. In particular, why there's an R in there. It seems completely unnecessary. Surely the capacitor is doing all the work? What the heck do you need a resistor for? Clearly my mental model of how this stuff works is incorrect somehow. So let me try to explain my mental model: If you try to pass a direct current through a capacitor, you are just charging the two plates. Current will continue to flow until the capacitor is fully charged, at which point no further current can flow. At this point, the two ends of the wire might as well not even be connected. Until, that is, you reverse the direction of the current. Now current can flow while the capacitor discharges, and continues to flow while the capacitor recharges in the opposite polarity. But after that, once again the capacitor becomes fully charged, and no further current can flow. It seems to me that if you pass an alternating current through a capacitor, one of two things will happen. If the wave period is longer than the time to fully charge the capacitor, the capacitor will spend most of the time fully charged, and hence most of the current will be blocked. But if the wave period is shorter, the capacitor will never reach a fully-charged state, and most of the current will get through. By this logic, a single capacitor on its own is a perfectly good high-pass filter. So... why does everybody insist that you have to have a resistor as well to make a functioning filter? What am I missing? Consider, for example, this circuit from Wikipedia: What the hell is that resistor doing there? Surely all that does is short-circuit all the power, such that no current reaches the other side at all. Next consider this: This is a little strange. A capacitor in parallel? Well... 
I suppose if you believe that a capacitor blocks DC and passes AC, that would mean that at high frequencies, the capacitor shorts-out the circuit, preventing any power getting through, while at low frequencies the capacitor behaves as if it's not there. So this would be a low-pass filter. Still doesn't explain the random resistor though, uselessly blocking nearly all the power on that rail... Obviously the people who actually design this stuff know something that I don't! Can anyone enlighten me? I tried the Wikipedia article on RC circuits, but it just talks about a bunch of Laplace transform stuff. It's neat that you can do that, but I'm trying to understand the underlying physics. And failing! (Similar arguments to the above suggest that an inductor by itself ought to make a good low-pass filter — but again, all the literature seems to disagree with me. I don't know whether that's worthy of a separate question or not.)
Let's try this Wittgenstein's ladder style. First let's consider this: simulate this circuit – Schematic created using CircuitLab We can calculate the current through R1 with Ohm's law: $$ {1\:\mathrm V \over 100\:\Omega} = 10\:\mathrm{mA} $$ We also know that the voltage across R1 is 1V. If we use ground as our reference, then how does 1V at the top of the resistor become 0V at the bottom of the resistor? If we could stick a probe somewhere in the middle of R1, we should measure a voltage somewhere between 1V and 0V, right? A resistor with a probe we can move around on it...sounds like a potentiometer, right? simulate this circuit By adjusting the knob on the potentiometer, we can measure any voltage between 0V and 1V. Now what if instead of a pot, we use two discrete resistors? simulate this circuit This is essentially the same thing, except we can't move the wiper on the potentiometer: it's stuck at a position 3/4th from the top. If we get 1V at the top, and 0V at the bottom, then 3/4ths of the way up we should expect to see 3/4ths of the voltage, or 0.75V. What we have made is a resistive voltage divider. Its behavior is formally described by the equation: $$ V_\text{out} = {R_2 \over R_1 + R_2} \cdot V_\text{in} $$ Now, what if we had a resistor with a resistance that changed with frequency? We could do some neat stuff. That's what capacitors are. At a low frequency (the lowest frequency being DC), a capacitor looks like a large resistor (infinite at DC). At higher frequencies, the capacitor looks like a smaller resistor. At infinite frequency, a capacitor has no resistance at all: it looks like a wire. So: simulate this circuit For high frequencies (top right), the capacitor looks like a small resistor. R3 is very much smaller than R2, so we will measure a very small voltage here. We could say that the input has been attenuated a lot. For low frequencies (lower right), the capacitor looks like a large resistor.
R5 is very much bigger than R4, so here we will measure a very large voltage, almost all of the input voltage, that is, the input voltage has been attenuated very little. So high frequencies are attenuated, and low frequencies are not. Sounds like a low-pass filter. And if we exchange the places of the capacitor and the resistor, the effect is reversed, and we have a high-pass filter. However, capacitors aren't really resistors. What they are though, are impedances . The impedance of a capacitor is: $$ Z_\text{capacitor} = -j{1 \over 2 \pi f C} $$ Where: \$C\$ is the capacitance, in farads \$f\$ is the frequency, in hertz \$j\$ is the imaginary unit , \$\sqrt{-1}\$ Notice that, because \$f\$ is in the denominator, the impedance decreases as frequency increases. Impedances are complex numbers , because they contain \$j\$. If you know how arithmetic operations work on complex numbers, then you can still use the voltage divider equation, except we will use \$Z\$ instead of \$R\$ to suggest we are using impedances instead of simple resistances: $$ V_\text{out} = V_{in}{Z_2 \over Z_1 + Z_2}$$ And from this, you can calculate the behavior of any RC circuit, and a good deal more.
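For the RC low-pass case, the final voltage-divider equation with \$Z_2 = -j/(2\pi f C)\$ reduces to a gain magnitude of \$1/\sqrt{1 + (2\pi f R C)^2}\$. A quick Python check, with arbitrary component values chosen for illustration:

```python
import math

# |Vout/Vin| for the RC low-pass divider: |Z_C / (R + Z_C)|,
# which works out to 1 / sqrt(1 + (2*pi*f*R*C)^2).
R = 1e3      # 1 kOhm (arbitrary)
C = 100e-9   # 100 nF  ->  corner frequency 1/(2*pi*R*C) ~ 1.59 kHz

def gain(f):
    """Voltage gain magnitude of the divider at frequency f (Hz)."""
    return 1 / math.sqrt(1 + (2 * math.pi * f * R * C) ** 2)

for f in (10, 100, 1591.55, 10e3, 100e3):
    print(f"{f:>9.0f} Hz : gain = {gain(f):.3f}")
```

Low frequencies pass nearly unattenuated, the gain is 1/√2 right at the corner frequency, and high frequencies are strongly attenuated — the low-pass behavior described above.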
{ "source": [ "https://electronics.stackexchange.com/questions/150321", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/64892/" ] }
150,329
I'm having trouble figuring out the name of this circuit. It needs to delay turning "on" a 12V circuit and a 120VAC circuit, after a 5V circuit has been turned on. Example: Turn on 1x power switch, which turns on a microcontroller (5V), a DC motor circuit (12V) and a large AC motor (120VAC). The sequence of switching "on" should be 5V, 12V, then 120VAC Without that sequence the 12V motors and 120VAC motor turn on wildly (for ~1 sec) as the microcontroller is "powering on". Ideally a passive circuit.
{ "source": [ "https://electronics.stackexchange.com/questions/150329", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/64897/" ] }
150,353
simulate this circuit – Schematic created using CircuitLab OK. So I've been told that the current flowing into the current source (that is, into the arrow's tail) is the same as the current coming out of the current source (that is, out of the arrow's head). If this is the case, then why is it a current "source"? It's not providing any extra current! I know this is a fairly elementary question, but I haven't found a good answer elsewhere. Thanks.
The current in any 2-terminal device is always the same; there's no way for a 2-terminal device to provide "extra" current. You can't have 0A in and 4A out. What it does do is force the current to be the given value regardless of the impedance across it. (So you can't put an open circuit across an ideal current source; the voltage would go to infinity.) A resistor by itself has no current through it. Hook it across an ideal current source and you get (for your example current source) 1A through it regardless of the value of the resistor.
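A trivial numerical restatement of the point in Python: the source pins the current, and Ohm's law then dictates whatever voltage the load requires (the resistor values are arbitrary examples):

```python
# An ideal current source fixes I; the load determines the voltage
# across the source's terminals by Ohm's law, V = I*R.
I = 1.0                          # amperes, as in the example source
for R in (1.0, 10.0, 1000.0, 1e6):
    V = I * R
    print(f"R = {R:>9.0f} ohm -> V = {V:.0f} V")
# As R grows without bound (toward an open circuit), so does V --
# which is why an ideal current source cannot drive an open circuit.
```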
{ "source": [ "https://electronics.stackexchange.com/questions/150353", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/59092/" ] }
151,296
I'm taking an electronic circuit analysis class, and I was asked to give the color of the 3rd band of a 1MΩ resistor. My answer was blue, thinking it could be black-brown-blue (01 * 1MΩ), but the automated quiz said the correct answer was green (brown-black-green). So I did some research, thinking there were just multiple correct answers, and I read in a few places that the first band on a resistor is never black (0). Why is this? Does a black first band have some other meaning? It would really help me remember it if I knew the history or reasoning behind it. This question has been answered. For further reading on zero-ohm resistors mentioned in the answers and comments here, I found these questions and answers helpful: What is the usage of Zero Ohm & MiliOhm Resistor? Zero ohm resistor tolerance?
The first band is never black for the same reason that you always write numbers in scientific notation with a single nonzero digit in front of the decimal place (e.g. 6.022e23) - convention. Generally the resistor specifications will all have the same number of nonzero significant digits (2 or 3, depending on the tolerance) except for a couple of values, namely even powers of ten (1, 10, 100, etc) and possibly a few others by coincidence.
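The convention is easy to express in code. Here is a small Python sketch of a three-band decoder (the digit-to-color table is the standard one; the `decode` helper is my own illustrative function, not from any library):

```python
# Decode a three-band code: two significant digits and a power-of-ten
# multiplier. Digits 0..9 map to the standard color order below.
COLORS = ["black", "brown", "red", "orange", "yellow",
          "green", "blue", "violet", "grey", "white"]

def decode(band1, band2, multiplier):
    """Resistance in ohms from three band colors."""
    d1, d2 = COLORS.index(band1), COLORS.index(band2)
    return (10 * d1 + d2) * 10 ** COLORS.index(multiplier)

# brown-black-green: 10 x 10^5 = 1 MOhm (the conventional marking)
print(decode("brown", "black", "green"))
# black-brown-blue: 01 x 10^6 -- numerically the same 1 MOhm, but with
# a leading zero digit, which the convention forbids
print(decode("black", "brown", "blue"))
```

Both decode to the same value, which is exactly why only one of the two markings is used: a leading black band would create ambiguous duplicate encodings.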
{ "source": [ "https://electronics.stackexchange.com/questions/151296", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/65366/" ] }
151,627
When powering a simple LED circuit (DC power source, LED, resistor), does the supply voltage matter, as long as the correctly calculated current limiting resistor value is used? In other words, is there / could there be something inherently wrong by powering an LED with 12V or 24V as long as I used the correct resistor, knew the forward voltage of the LED, knew the maximum current, and calculated it using something like this, when I could have powered the same LED with a 3.5V supply knowing the same variables and using the same website? I'm assuming there is a limit to the maximum amount of voltage to use for the LED here... when I look at the electrical characteristics chart for the CREE XP-G for example, it shows current as a function of voltage, with voltage starting at around 2.5V @ 0ma, maxing out at around 3.25V @ 1500ma (the maximum current the LED is rated at, as described in the Characteristics table in the same document). After 3.25V, the chart depicts current quite rapidly approaching infinity. I'm assuming this relates to my question, I'm just curious how it's all related. I'm sure it's all basic Ohm's law stuff, I'd just appreciate a clarification of the math at work.
There is no limit on the voltage, per se, that you use to power the circuit that drives the diode. The diode only cares about what the diode can see, and it can't see the voltage drop across the current limiting resistor. That said, at some point what you're going to care about is the power dissipated across the resistor, which is \$ I^2R \$. If you want to keep the current to be constant in the case of growing required voltage drop, then R will eventually get big, and it will dissipate too much power. The power that run of the mill axial lead resistors can dissipate is 1/4 watt. For a 20mA current, that means to limit power across the resistor to 1/4 watt, you can't exceed 625 ohm, which means you can maximally drop 12.5 volts across it, and you're ceilinged out at a power supply of about 14.5V for a red LED. It's worse for small package SMD resistors, which are often 1/8 Watt or less. If you need more of a voltage drop, you would have to change to a higher power rated resistor, which can get physically big, as well as more expensive. As to why the actual voltage across the LED doesn't change too dramatically given proper choice of current limiting resistor, one convenient way to look at this is with the "load line" technique. From http://i.stack.imgur.com/1cUKU.png , (Public domain image from Wikimedia): The negative sloped line represents the resistor. If \$ V_D = 0 \$ there would be \$V_{DD}/R \$ of current through the resistor, and if \$ V_D = V_{DD} \$, then there is no current through the resistor (as there's no voltage drop across the resistor). The circuit "lives" at the equilibrium point where the resistor line and the diode curve intersect, as you MUST have the same current through the diode and the resistor. Note that changing R and \$ V_{DD} \$ less than dramatically won't move this point as much as you think it might in terms of the final voltage drop across the diode, because of how steep the diode curve gets.
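The resistor-power ceiling worked out above can be checked in a couple of lines of Python, using the answer's own numbers (20 mA LED current, a 1/4 W resistor):

```python
# At a given LED current, the resistor's power rating caps its value
# (P = I^2 * R), which in turn caps the voltage it can drop.
I = 20e-3                  # LED current: 20 mA
P_max = 0.25               # 1/4 W axial resistor
R_max = P_max / I ** 2     # largest usable resistance
V_drop_max = I * R_max     # largest voltage the resistor can drop
print(f"R_max = {R_max:.0f} ohm, max drop = {V_drop_max:.1f} V")
```

This reproduces the 625 Ω and 12.5 V figures from the answer; add the LED's forward voltage on top of that drop to get the maximum practical supply voltage.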
{ "source": [ "https://electronics.stackexchange.com/questions/151627", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/61947/" ] }
151,987
For 220 V AC mains voltage, is it good practice to replace a 300 V MOV with a 300 V bidirectional TVS diode (like these: http://www.littelfuse.com/products/tvs-diodes.aspx ), or to connect both in parallel? Are there any points that need to be considered?
It comes down to what you are trying to protect against. There are four main types of transient suppression devices:

Gas tube
- Protection time: > 1 µs
- Protection voltage: 60 - 100 V
- Power dissipation: Nil
- Reliable performance: No
- Expected life: Limited
- Other: Only 50-2500 surges; can short power lines

MOV
- Protection time: 10 - 20 ns
- Protection voltage: > 300 V
- Power dissipation: Nil
- Reliable performance: No
- Expected life: Degrades
- Other: Fusing required. Degrades

Avalanche TVS
- Protection time: 50 ps
- Protection voltage: 3 - 400 V
- Power dissipation: Low
- Reliable performance: Yes
- Expected life: Long
- Other: Low power dissipation. Bidirectional requires dual

Thyristor TVS
- Protection time: < 3 ns
- Protection voltage: 30 - 400 V
- Power dissipation: Nil
- Reliable performance: Yes
- Expected life: Long
- Other: High capacitance

http://www.onsemi.com/pub_link/Collateral/HBD854-D.PDF ( http://web.archive.org/web/20051001082352/http://www.onsemi.com/pub/Collateral/HBD854-D.PDF ) http://www.vishay.com/docs/88440/failurem.pdf
{ "source": [ "https://electronics.stackexchange.com/questions/151987", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/61768/" ] }
152,291
Can someone please explain how grounding / earthing prevents a person from getting an electric shock, using a simple illustration of a faulty electric iron connected to the 240 VAC mains? I don't understand how a person standing on a floor tile in the house and holding live equipment can complete the circuit for current flow. Where is the connection from the ground back to the equipment?
The main point of grounding a line-powered appliance is to electrically "box up" the dangerous parts. If, for example, a "hot" wire comes loose inside the appliance and touches the metal case, the current will flow thru the ground connection to that case. That will blow a fuse, trip a breaker, or trigger the ground fault interrupter if that line is equipped with one. If the case weren't grounded, then the same loose wire now puts the case at the hot potential. If you come along and touch it and something else grounded, like a faucet, at the same time, the full 240 V is now applied across your body. You are right that touching just a hot wire without touching anything else conductive won't hurt you. Presumably the "tile" floor you are talking about is made of insulating material. However, the reason this is unsafe is that often you are not completely insulated from everything else. If you touch the faulty appliance and happen to brush against a water faucet, the case of your desktop computer, a radiator, or any other appliance that is grounded, you can be seriously hurt. Even a concrete floor can carry enough current to be dangerous.
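To put a rough number on the danger: Ohm's law gives the current through a person who bridges the mains to ground. The body-resistance figure below is an assumed ballpark (real skin resistance varies enormously and is not a value from the answer):

```python
# Rough order-of-magnitude: current through a person bridging 240 V
# mains to ground. 1 kOhm is an often-quoted worst-case (wet skin)
# body resistance; even a few tens of mA can be dangerous.
V_MAINS = 240.0    # volts
R_BODY = 1000.0    # ohms (assumed ballpark)

I_body = V_MAINS / R_BODY   # amps through the body
print(I_body)               # 0.24 A
```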
{ "source": [ "https://electronics.stackexchange.com/questions/152291", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6108/" ] }
152,307
I have read different forums and watched a few YouTube videos (in addition to my textbook readings) and the explanations seem to fall short. The issue seems to be how we are first taught about a direct relationship between voltage and current (that is, an increase in voltage renders an increase in current if resistance remains the same), and then we're taught about power lines that have high voltage and low current (because otherwise we would need thick wires to carry high current, which would run the risk of overheating due to the Joule effect or something or another). So please don't explain to me the infrastructural reasons why high voltage, low current is necessary for power lines. I just need to know how high voltage, low current is even possible. I've only been studying DC so far, so maybe AC has rules that would enlighten me... but I thought the E=IR formula was universal.
You're confusing "high voltage" with "high voltage loss". Ohm's Law governs the loss of voltage across a resistance for a given current passing through it. Since the current is low, the voltage loss is correspondingly low.
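A quick numeric illustration of that distinction (the line resistance, power level, and voltages below are made-up round numbers, not figures from the question): delivering the same power at a higher voltage means less current, and therefore far less voltage lost across the same wire resistance.

```python
# Same delivered power over the same wire, at two transmission voltages.
R_LINE = 10.0       # ohms: assumed total resistance of the wire run
P = 100_000.0       # watts to deliver

for V in (10_000.0, 100_000.0):
    I = P / V                 # less current at higher voltage
    V_loss = I * R_LINE       # Ohm's law: voltage lost along the wire
    P_loss = I ** 2 * R_LINE  # power wasted heating the wire
    print(V, I, V_loss, P_loss)
# 10 kV:  10 A, 100 V lost, 1000 W wasted
# 100 kV:  1 A,  10 V lost,   10 W wasted
```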
{ "source": [ "https://electronics.stackexchange.com/questions/152307", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/65863/" ] }
152,549
Just wanting to clarify my understanding of ESR here, because I'm not fully sure if it should be modeled the way I would think to model series resistance, or if it's more complex than that. Basically the question comes down to: is there any reason why one wouldn't place a ceramic capacitor in parallel with an electrolytic cap to drastically reduce the total ESR? This is going off the assumptions that: The total ESR of two capacitors placed in parallel is modeled by the parallel resistance equation $$\dfrac{1}{\frac{1}{R_1} + \frac{1}{R_2}}$$ Both capacitors are rated for the appropriate voltage. Ceramic capacitors have << ESR than electrolytic capacitors. So the ceramic capacitor would have a negligible effect on the total capacitance, but be the primary determinant of the total ESR. (Values are 2200uF electrolytic, 1uF ceramic, 24V.) The reason that I ask this is I am trying to quickly discharge some capacitors into an LED for 1us, and my understanding was the rate of discharge was directly proportional to the ESR (plus whatever is going on in the circuit that is connected).
An equivalent circuit for what you describe is: simulate this circuit – Schematic created using CircuitLab Note that the resistors aren't in parallel, so we can't use the usual parallel resistors equation: $$ R_\text{effective} = {1\over 1/R_1 + 1/R_2} $$ What we do have, however, is two impedances in parallel, so we can use the very similar parallel impedance equation: $$ Z_\text{effective} = {1\over 1/Z_1 + 1/Z_2} $$ Let's say the electrolytic has an ESR of 18mΩ, and the ceramic has an amazing ESR of 0.001mΩ. Now in order to apply that equation for parallel impedances, we must first calculate the impedance of each capacitor and its ESR. The impedance of an ideal capacitor is given by: $$ Z = -{j \over 2 \pi fC} $$ Impedances are complex numbers , and here \$j\$ is the imaginary unit. \$f\$ and \$C\$ are the frequency and capacitance. So here we encounter the first problem: what do we use for the frequency? An ideal pulse is an infinite series of odd harmonics , so really it's many frequencies. But let's just say that if we truncate that series at 1MHz, that will make a fast enough pulse for your application. Let's do the math at 1MHz. So at 1MHz, the impedance of the electrolytic is: $$ -{j \over 2 \pi \cdot 1\:\mathrm{MHz} \cdot 2200\:\mathrm{\mu F} } = -7.23j \cdot 10^{-5}\:\Omega $$ Series impedances add, and the impedance of a resistor is just its resistance. So, the impedance of the electrolytic with its ESR is: $$ Z_1 = (1.8 \cdot 10^{-2} - 7.23j \cdot 10^{-5})\:\Omega $$ Likewise we can calculate the impedance of the ceramic capacitor as $$ Z_2 = (1 \cdot 10^{-6} - 1.59j \cdot 10^{-1})\:\Omega $$ Applying those into the parallel impedances equation above, and you get the total effective impedance as: $$ (1.78 \cdot 10^{-2} - 2.08j \cdot 10^{-3})\:\Omega $$ The real part of this number, 17.8mΩ, is the ESR. It's reduced somewhat, but not a lot. 
The reason: the magnitude of the impedance of the electrolytic is much smaller, so it influences the final result much more. If we increase the frequency enough, eventually the ceramic starts helping more, since with increasing frequency the capacitive part of the impedance approaches zero and the ESR becomes a more significant part of the impedance of the real capacitor. At 100GHz, we get: $$ Z_1 = (1.8 \cdot 10^{-2} - 7.23j \cdot 10^{-10})\:\Omega \\ Z_2 = (1 \cdot 10^{-6} - 1.59j \cdot 10^{-6})\:\Omega \\ Z_\text{effective} = (1.00 \cdot 10^{-6} - 1.59j \cdot 10^{-6})\:\Omega $$ However, for your pulse this is of minimal help, since most of the energy you need to deliver is at lower frequencies. Or thinking about it another way, although the ceramic has a relatively low ESR and can discharge more quickly, it stores less energy than a larger capacitor charged to the same voltage, and thus is less significant. I should also point out that the above calculations make a really bad assumption. If you look in the datasheet for an electrolytic, right next to where the ESR is specified, so too is the frequency at which it is measured. It will be different at different frequencies. Some of the ESR comes from resistance in the leads and plates, and this is relatively constant with frequency. Another component of ESR comes from losses in the dielectric, which is very frequency dependent. You can read more about this kind of loss researching dissipation factor .
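The 1 MHz arithmetic above is easy to check with Python's built-in complex numbers (the 18 mΩ and 0.001 mΩ ESR figures are the answer's assumed values):

```python
import math

def cap_impedance(C, f, esr):
    """Impedance of a real capacitor modeled as ESR in series with C."""
    return esr - 1j / (2 * math.pi * f * C)

def parallel(z1, z2):
    """Two impedances in parallel."""
    return 1 / (1 / z1 + 1 / z2)

f = 1e6                                      # 1 MHz
z_elec = cap_impedance(2200e-6, f, 18e-3)    # electrolytic, 18 mOhm ESR
z_cer = cap_impedance(1e-6, f, 1e-6)         # ceramic, 0.001 mOhm ESR

z_eff = parallel(z_elec, z_cer)
print(z_eff)   # ~(0.0178 - 0.0021j) ohm: effective ESR barely improves
```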
{ "source": [ "https://electronics.stackexchange.com/questions/152549", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/54913/" ] }
152,600
Why did scientists choose the sine wave to represent alternating current, and not other waveforms like triangle and square? What advantage does a sine wave offer over other waveforms in representing current and voltage?
Circular motion produces a sine wave naturally: - It's just a very natural and fundamental thing to do and trying to produce waveforms that are different is either more complicated or leads to unwanted side effects. Up and down motion (in nature) produces a sine wave against time: -
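A tiny numeric check of the idea above (the code itself is just an illustration, not from the answer): if you rotate a point in small equal steps, the record of its height is exactly a sine wave.

```python
import math

# Rotate a point in small equal steps, like a spinning generator shaft,
# and record its height after each step.
STEPS = 1000
dtheta = 2 * math.pi / STEPS
x, y = 1.0, 0.0
heights = []
for _ in range(STEPS):
    # one small incremental rotation of the point (x, y)
    x, y = (x * math.cos(dtheta) - y * math.sin(dtheta),
            x * math.sin(dtheta) + y * math.cos(dtheta))
    heights.append(y)

# The recorded height matches sin(theta) to floating-point precision.
worst = max(abs(h - math.sin((i + 1) * dtheta)) for i, h in enumerate(heights))
print(worst)   # tiny (floating-point noise): circular motion projects to a sine
```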
{ "source": [ "https://electronics.stackexchange.com/questions/152600", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/31060/" ] }
152,786
I read the article Google wants the US' wireless spectrum for balloon-based Internet . It says they use the frequency spectrum above 24 GHz for communication. Is it even possible to generate such a high frequency using piezoelectric crystals? Or are they using a PLL frequency multiplier? Even if it is possible to generate such a high-frequency signal, if you want to send 1 bit on every period of the signal, there must be a processor working much faster than 24 GHz. How is that possible on a balloon?
RF comms do not transmit one bit of information per cycle of the carrier wave - that would be digital baseband communications, and it requires incredible amounts of bandwidth. Incidentally, you can buy FPGAs with built-in 28 Gbps serdes hard blocks. These can serialize and deserialize data for 100G Ethernet (4x25G + coding overhead). I suppose the 'fundamental' frequency in this case would actually be 14 GHz (data rate/2 - think about why this is!) and they require around 200 MHz to 14 GHz of bandwidth. They don't go all the way down to DC due to using the 64b66b line code. The frequency used to drive the serdes modules would be generated by some sort of a VCO that is phase-locked to a crystal reference oscillator.

In the RF world, the message signal is modulated onto a carrier which is then upconverted to the required frequency for transmission with mixers. These balloons probably have a baseband of less than 100 MHz, meaning that initially the digital data is modulated onto a relatively low-frequency carrier (intermediate frequency) of around 100 MHz. This modulation can be done digitally and the modulated IF generated by a high-speed DAC. Then this frequency is translated up to 24 GHz with a 23.9 GHz oscillator and a mixer. The resulting signal will extend from 23.95 to 24.05 GHz, 100 MHz of bandwidth.

There are many ways to build high-frequency oscillators in that band. One method is to build a DRO, which is a dielectric resonator oscillator. Think of this as an LC tank circuit - there will be some frequency where it will 'resonate' and either generate a very high or very low impedance. You could also think of this as a narrow bandpass filter. In a DRO, a piece of dielectric is used - usually some sort of ceramic, I believe - that resonates at the frequency of interest. The physical size and shape determine the frequency. All you need to do to turn it into a frequency source is add some gain.
There are also ways of using special diodes that exhibit negative resistance. A Gunn diode is one example. Biasing a Gunn diode correctly will cause it to oscillate at several GHz.

Another possibility is something called a YIG oscillator. YIG stands for Yttrium Iron Garnet. It is common to build bandpass filters by taking a small YIG sphere and coupling it to a pair of transmission lines. YIG happens to be sensitive to magnetic fields, so you can tune or sweep the center frequency of the filter by varying the ambient magnetic field. Add an amplifier, and you have a tunable oscillator. It's relatively easy to put a YIG in a PLL. The power of a YIG is that it is possible to use it to produce a very wide-band smooth sweep, and hence they are often used in RF test equipment such as spectrum and network analyzers and sweeping and CW RF sources.

Another method is to simply use a bunch of frequency multipliers. Any nonlinear element (such as a diode) will produce frequency components at multiples of the input frequency (2x, 3x, 4x, 5x, etc.). Stringing together a chain of multipliers, bandpass filters, and amplifiers can be used to produce very high frequencies.
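The mixer step described earlier (an IF translated up by a local oscillator) is just multiplication of sinusoids, which produces sum and difference frequencies. A quick numeric check of the product-to-sum identity behind it (the 100 MHz / 23.9 GHz figures follow the answer's example; time units are arbitrary):

```python
import math

# Mixing (multiplying) the IF with the LO produces sum and difference
# frequencies: cos(a)*cos(b) = 0.5*cos(b-a) + 0.5*cos(b+a).
F_IF, F_LO = 0.1, 23.9    # GHz: values from the example above

max_err = 0.0
for k in range(1000):
    t = k * 1e-3          # arbitrary time units
    a = 2 * math.pi * F_IF * t
    b = 2 * math.pi * F_LO * t
    product = math.cos(a) * math.cos(b)
    sum_form = 0.5 * math.cos(b - a) + 0.5 * math.cos(b + a)
    max_err = max(max_err, abs(product - sum_form))

print(max_err)                    # floating-point noise: the identity holds
# The sum product lands at 24.0 GHz (kept); the 23.8 GHz difference
# product is filtered off.
print(F_LO - F_IF, F_LO + F_IF)
```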
{ "source": [ "https://electronics.stackexchange.com/questions/152786", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/22682/" ] }
153,351
What causes "dead spots" in breadboards, where chips just don't work right? My guess is that the plastic or backing metal is warped, so the holes don't line up right, or the metal is somehow tarnished and nonconductive. In any case, can these be fixed? The breadboards are part of expensive digital trainers and cannot be easily swapped out.
People regularly "borrow" my breadboards, then return them not working well. What I've found is that they are plugging in wire leads or terminals that are too fat. This causes the spring contact inside to warp out of shape and it no longer grips thin wire leads properly.

One of the worst things you can do is plug standard 0.025" square header pins into a breadboard. Same with thick resistor or capacitor leads. Doing so will stretch the contact pin out of shape and render that hole useless for use with thin component leads.

TO-220 devices will WRECK breadboard sockets UNLESS you do a really simple trick: grab each lead near the package just below where the lead gets wider and rotate the lead by a quarter turn. Make sure the turn is gentle so as to not weaken the lead. If you look closely at the lead coming out of a TO-220 package, you will see that it is thinner than it is wide. Unfortunately, putting the device into a breadboard such that each of the leads is in a different column means that the thick portion of the lead is destroying the breadboard socket. But if you rotate the lead, now the thin part of the lead is what spreads the contact apart.

FWIW - I'll usually also trim the ends of the TO-220 leads at an angle so they enter the breadboard more easily.

In terms of repairing breadboards, I've repaired a bunch of mine. In days long ago, I could (and did) order replacement contact strips from the manufacturer. Now that most breadboards come from the far East, I simply use a brand-new breadboard as a source of parts for fixing several wrecked breadboards. The contact strips come out easily once you remove the self-adhesive paper label that covers the bottom of the breadboard.

Quick tip: use the lead of a 1N4148 diode to check each and every hole of any suspect breadboards. If the lead isn't gripped tightly when it goes into the hole, that hole has been stretched and needs to be repaired.
Edit: I've been asked previously what one might use instead of 0.025" square header pins when plugging PCBs or carrier boards into a solderless breadboard. I use a header strip that has 0.018" round pins instead of 0.025" square pins. I've been using a Samtec part #TS-132-T-A (available from Digikey: Samtec TS-132-T-A ), but someone on the PIClist pointed out that Amazon has what appears to be the exact same item for significantly less money. That link is here: Round Pin Headers on Amazon
{ "source": [ "https://electronics.stackexchange.com/questions/153351", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6854/" ] }
153,474
I've noticed that in all resources on diodes and rectifiers, they show the output voltage as the positive half-wave of the input signal. However, that seems wrong. I understand that there's a voltage drop across the diode, and if the total voltage is below this level, the diode is closed. Therefore, it'd only seem logical, if the diode didn't open right away, but only after the input wave reaches this voltage. Here's my illustration - first, input. Second, my idea of output. Third - output as shown in textbooks. If I am wrong, how is it possible that there's no "flat area" in the output signal, when the input is below the diode's opening level?
Yes, you are right - have a look at this LTspice simulation of a simple full-wave rectifier (click to enlarge): Textbooks like to simplify things before they go in depth (if at all). How many textbooks have you seen talk about the diode drop at that point at all? It's an application of Wittgenstein's ladder . Note that at higher frequencies things like the diode's recovery time will start to play an important role too, but even fewer textbooks talk about that. Both things are not immediately important to understand the concept that should be learned at that point.
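The width of that flat region is easy to estimate: the diode conducts only while |Vin| exceeds the diode drop. A quick sketch (the 0.7 V drop and the 10 V / 50 Hz input are assumed illustration values, not figures from the answer):

```python
import math

V_PEAK = 10.0   # volts: input amplitude (assumed)
V_F = 0.7       # volts: diode forward drop (assumed)
F = 50.0        # Hz: input frequency (assumed)

# Vin = V_PEAK*sin(2*pi*F*t) stays below V_F for a phase of asin(V_F/V_PEAK)
# at each end of a half-cycle, so the flat region per half-cycle is 2*phi.
phi = math.asin(V_F / V_PEAK)       # radians
dead_fraction = 2 * phi / math.pi   # flat fraction of each half-cycle
dead_time = dead_fraction / (2 * F) # seconds of flat output per half-cycle
print(dead_fraction, dead_time)     # ~4.5% of each half-cycle, ~0.45 ms
```

The larger the peak voltage relative to the diode drop, the narrower (and harder to see on a textbook plot) the flat region becomes.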
{ "source": [ "https://electronics.stackexchange.com/questions/153474", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/57825/" ] }
153,572
I am quite stunned as to what might be wrong (electronics beginner). I bought some copper wire today, 20 SWG / 0.9 mm, from Maplin (UK) for a project I am working on. There is no power going through the wire. I have the positive and negative of a 9 V battery going to a breadboard, then two pieces of copper wire, one in + and one in -. No voltage on the multimeter. The exact same setup with jumper wires works. The copper wires are inserted fully into the breadboard... What am I missing? P.S. The label says: 250g EN COPPER 20 SWG 0.9MM
My guess is the "EN" of the code means "Enamelled" - I.e., it's coated with enamel. That kind of wire is meant for winding transformers, inductors, and electromagnets etc. The enamel coating insulates the wire and stops a coil turning into a single lump of copper. You need to remove the enamel from the ends of the wire, either with a small craft knife, or burn it away using a hot soldering iron and solder.
{ "source": [ "https://electronics.stackexchange.com/questions/153572", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/66453/" ] }
153,911
I am trying to create an op-amp amplifier that would work from a single 5 V supply and would be able to amplify a -100 mV to +100 mV audio signal to around 1 V peak-to-peak or so. I've come across this circuit in this article , which seems like it could work, but I'm having trouble calculating the actual values: simulate this circuit – Schematic created using CircuitLab From the article I read that R1 and R2 should both be the same and around 42 kOhm for a 5 V power supply, R4 should be R3+(0.5*R1), and that's about it... So how would I go about actually calculating the capacitor and resistor values needed for a varying-frequency signal with maximum frequency at around 20 kHz and a gain of about 5? Thank you for helping me! EDIT: In the article the author wrote by the ground symbol: "*STAR GROUND". Is it really important that I combine all ground traces in the schematic to one point, or can I use a ground plane across the whole circuit?
You seemed to have actually found a reasonable circuit on the internet. I heard there was out there somewhere. The equations you cite are overly strict. Instead of just telling you the values, it's better to explain what each part does. R1 and R2 are a voltage divider to make 1/2 the supply voltage. This will be the DC bias the opamp will operate at. C2 low pass filters the output of that voltage divider. This is to squash glitches, power supply ripple, and other noise on the 5 V supply so they don't end up in your signal. R3 is needed only because C2 is there. If R3 weren't there, C2 would squash your input signal too, not just the noise on the power supply. Ultimately, the right end of R3 is intended to deliver a clean 1/2 supply signal with high impedance. The high impedance is so that it doesn't interfere with your desired signal coming thru C1. C1 is a DC blocking cap. It decouples the DC level at IN from the DC level the opamp is biased at. R4 and R5 form a voltage divider from the output back to the negative input. This is the negative feedback path, and the overall circuit gain is the inverse of the voltage divider gain. You want a gain of 10, so the R4-R5 divider should have a gain of 1/10. C3 blocks DC so that the divider only works on your AC signal, not the DC bias point. The divider will pass all DC, so the DC gain from the + input of the opamp to its output will be 1. C4 is another DC blocking cap, this time decoupling the opamp DC bias level from the output. With the two DC blocking caps (C1, C4), the overall amplifier works on AC and whatever DC biases may be at IN and OUT are irrelevant (within the voltage rating of C1 and C4). Now for some values. The MCP6022 is a CMOS input opamp, so it has very high input impedance. Even a MΩ is small compared to its input impedance. The other thing to consider is the range of frequencies you want this amplifier to work over. 
You said the signal is audio, so we'll assume anything below 20 Hz or above 20 kHz is signal you don't care about. In fact, it's a good idea to squash unwanted frequencies. R1 and R2 only need to be equal to make 1/2 the supply voltage. You mention no special requirement, like battery operation where minimizing current is of high importance. Given that, I'd make R1 and R2 10 kΩ each, although there is large leeway here. If this were battery operated, I'd probably make them 100 kΩ each and not feel bad about it. With R1 and R2 10 kΩ, the output impedance of the divider is 5 kΩ. You don't really want any relevant signal on the output of that divider, so let's start by seeing what capacitance is needed to filter down to 20 Hz. 1.6 µF. The common value of 2 µF would be fine. Higher works too, except that if you go too high, the startup time becomes significant on a human scale. For example, 10 µF would work to filter noise nicely. It has a 500 ms time constant with the 5 kΩ impedance, so would take a few seconds to stabilize after being turned on. R3 should be larger than the output impedance of R1-R2, which is 5 kΩ. I'd pick a few 100 kΩ at least. The input impedance of the opamp is high, so let's use 1 MΩ. C1 with R3 form a high pass filter that needs to pass at least 20 Hz. The impedance seen looking into the right end of R3 is a bit over 1 MΩ. 20 Hz with 1 MΩ requires 8 nF, so 10 nF it is. This is a place you don't want to use a ceramic cap, so lower values are quite useful. A mylar cap, for example, would be good here and 10 nF is within the available range. Again, the overall impedance of the R4-R5 divider doesn't matter much, so let's arbitrarily set R4 to 100 kΩ and work out the other values from there. R5 must be R4/9 for an overall amplifier gain of 10, so 11 kΩ works out. C3 and R5 form a filter that has to roll off at 20 Hz or below. C3 must be 720 nF or more, so 1 µF. Note one issue with this topology. 
Frequency-wise, C3 is acting with R5, but the DC level that C3 will eventually stabilize at is filtered by R4+R5 and C3. That is a filter at 1.4 Hz, which means this circuit will take a few seconds to stabilize after power is applied. C4 forms a high pass filter with whatever impedance will be connected to OUT. Since you may not know, you want to make it reasonably large. Let's pick 10 µF since that's readily available. That rolls off at 20 Hz with about 800 Ω. This amp will therefore function as specified as long as OUT is not loaded with less than about 800 Ω.
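The cutoff-frequency arithmetic above is all the same formula, f = 1/(2πRC). A quick check of the values chosen (component values as picked in the answer):

```python
import math

def f_cutoff(R, C):
    """-3 dB corner frequency of a simple RC filter, in hertz."""
    return 1 / (2 * math.pi * R * C)

print(f_cutoff(1e6, 10e-9))    # C1 vs ~1 MOhm through R3: ~15.9 Hz (< 20 Hz, OK)
print(f_cutoff(11e3, 1e-6))    # C3 vs R5 (11 kOhm): ~14.5 Hz (< 20 Hz, OK)
print(f_cutoff(111e3, 1e-6))   # C3 settling vs R4+R5: ~1.4 Hz (slow startup)
print(f_cutoff(800.0, 10e-6))  # C4 (10 uF) vs an ~800 Ohm load: ~19.9 Hz
```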
{ "source": [ "https://electronics.stackexchange.com/questions/153911", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/37289/" ] }
153,923
I go through this every time I have PCBs I need to populate with SMD parts, and this has become more of an issue as pin spacings have gotten tighter, and my hands have become less steady with age. So far I've modified some curved tweezers with a rubber band for grip tension, to help hold a component in place until 1 or 2 pins can be soldered. It works but it can be cumbersome. The clamping force needs to be very light, and it seems one slight tap with a sharp iron tip will still move it. I've also tried various glues, placing a pin drop of glue in the center of where a component would go. That sometimes works, but all the glues I've tried either waste my time waiting for them to dry, or dry (skin over) too quickly when grabbing some more. Worse, too often even a pin drop of glue will spread onto the pads, and then I have to waste more time cleaning things up. If I had my druthers, all SMD parts would come with peel-off self-stick backing. But anyway, suggestions would be welcome. It will be a long time and many test markets before anything I'm doing will be populated for me with pro pick-and-place machines.
I just saw the SMD beak on Hack-a-day that looks like what you are looking for (I want one!)... http://vpapanik.blogspot.de/2015/02/the-smd-beak.html

I've also had luck...
- use a little piece of scotch or painter's tape on one side of a part to hold it down
- tack down a couple of leads with solder
- remove the tape and solder down properly

For smaller parts (i.e. SOT-23)...
- tin one pad (typically a middle one)
- hold the part in exactly the right place with tweezers
- quickly touch the lead over the tinned pad with your iron to tack it down
- properly solder the other pins and work your way around back to the tacked one

A nice feature of this technique is that you can rotate the part very precisely by moving your elbow (your arm acts like a big lever). Even if you are a bit shaky, you can wait until the part happens to be perfectly aligned and then lock it down with the tack.
{ "source": [ "https://electronics.stackexchange.com/questions/153923", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/54964/" ] }
156,930
In the STM32 Standard Peripheral Library, we need to configure the GPIO, but there are 3 settings which I am not sure how to configure:

GPIO_InitStructure.GPIO_Speed
GPIO_InitStructure.GPIO_OType
GPIO_InitStructure.GPIO_PuPd

In GPIO_Speed , there are 4 settings to pick from:

GPIO_Speed_2MHz /*!< Low speed */
GPIO_Speed_25MHz /*!< Medium speed */
GPIO_Speed_50MHz /*!< Fast speed */
GPIO_Speed_100MHz

How do I know which speed to choose? Are there any advantages or disadvantages to using high speed or low speed? (e.g. power consumption?)

In GPIO_OType , there are 2 settings to pick from:

GPIO_OType_PP // Push pull
GPIO_OType_OD // Open drain

How do I know which to choose? And what are open-drain and push-pull?

In GPIO_PuPd , there are 3 settings to pick from:

GPIO_PuPd_NOPULL // No pull
GPIO_PuPd_UP // Pull up
GPIO_PuPd_DOWN // Pull down

I think this setting is related to the initial setting of push-pull.
GPIO_PuPd (Pull-up / Pull-down)

In digital circuits, it is important that signal lines are never allowed to "float". That is, they need to always be in a high state or a low state. When floating, the state is undetermined, and causes a few different types of problems. The way to correct this is to add a resistor from the signal line either to Vcc or Gnd. That way, if the line is not being actively driven high or low, the resistor will cause the potential to drift to a known level. The STM32 (and other microcontrollers) have built-in circuitry to do this. That way, you don't need to add another part to your circuit. If you choose "GPIO_PuPd_UP", for example, it is equivalent to adding a resistor between the signal line and Vcc.

GPIO_OType (Output Type)

Push-Pull: This is the output type that most people think of as "standard". When the output goes low, it is actively "pulled" to ground. Conversely, when the output is set to high, it is actively "pushed" toward Vcc. Simplified, it looks like this:

An Open-Drain output, on the other hand, is only active in one direction. It can pull the pin towards ground, but it cannot drive it high. Imagine the previous image, but without the upper MOSFET. When it is not pulling to ground, the MOSFET is simply non-conductive, which causes the output to float:

For this type of output, there needs to be a pull-up resistor added to the circuit, which will cause the line to go high when not driven low. You can do this with an external part, or by setting the GPIO_PuPd value to GPIO_PuPd_UP. The name comes from the fact that the MOSFET's drain isn't internally connected to anything. This type of output is also called "open-collector" when using a BJT instead of a MOSFET.

GPIO_Speed

Basically, this controls the slew rate (the rise time and fall time) of the output signal. The faster the slew rate, the more noise is radiated from the circuit.
It is good practice to keep the slew rate slow, and only increase it if you have a specific reason.
{ "source": [ "https://electronics.stackexchange.com/questions/156930", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/52977/" ] }
157,065
I'm trying to understand \$V_{GS}\$ of MOSFET transistor. From what I understand \$V_{GS}\$ normally stands for voltage gate to source breakdown, but other than that I lack an understanding. \$V_{GS(th)}\$ is the threshold voltage at which the mosfet will turn on, so I have some questions about the threshold voltage; What happens if I go over the max threshold as told by the data sheet? What happens if I'm under it?
Vgs is just the voltage from gate to source (with the red lead of the multimeter on the gate and the black one on the source). Everything else is from context. The Absolute Maximum Vgs is the maximum voltage you should ever subject the MOSFET to under any conditions (stay well away). Usually the actual breakdown is quite a bit different (borrowing from this datasheet): Vgs(th) is the voltage at which the MOSFET will 'turn on' to some degree (usually not very well turned on). For example, it might be 2V minimum and 4V maximum for a drain current of 0.25mA at Tj = 25°C (the die itself is at 25°C).. That means that if you want your 20A MOSFET to really turn on fully (not just conducting 250uA) you need a lot more voltage than 4V to be sure about it, but if your Vgs is well under about 2V you can be pretty sure it's well turned off (at least around room temperature). Rds(on) is always measured at a specified Vgs. For example, it might be 77m\$\Omega\$ with Vgs = 10V and Id = 17A and Tj = 25°C. That 10V is the Vgs you need to feed your MOSFET for it to be happily turned on so it looks like a very low resistance. Vgs also comes up when you want to know the gate leakage. Igss might be +/-100nA at Vgs = +/-20V and Tj = 25°C.
{ "source": [ "https://electronics.stackexchange.com/questions/157065", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/43026/" ] }
157,302
In the circuit below: I find it awkward that the direction of the current on the 2V power supply is running opposite to the direction of the current on the 5V power supply. In the 5V power supply the current flows from - to + but in the 2V power power supply the current flows from + to - . Mathematically, it all works out and the numbers add up but I am having a little bit of a hard time gaining some intuition regarding what is going on with this "backwards" current. Here are a couple of questions: If the 2V power supply was a battery, would this mean that the battery would be charging? If that was the case, would the battery eventually explode due to overcharging? If the 2V power supply was a regular power supply (connected to the wall) why is the power supply not breaking? The power supply is not a battery, right? So there should be no such thing as "charging" a power supply so why do I not see smoke coming out of the power supply given that I am going against the natural flow of electrons?
As the other answer says, from a purely theoretical point of view, there's nothing wrong with current flowing either way through a voltage source. That's exactly what an ideal voltage source is: a circuit element that maintains the same potential on its pins, no matter what the current through it is. In the real world, our voltage sources are not ideal. A linear regulator will not usually maintain regulation when current is reversed. A battery will charge (if its chemistry permits it) when current is reversed. But we also use voltage sources to model things besides power supplies. For example, a voltage source can model the output of an op-amp. And an op-amp output typically can maintain its output voltage whether sourcing or sinking current (within limits). Or a voltage source could model a shunt regulator or zener diode. These devices will only maintain regulation when they are sinking current. Or a voltage source could model a forward-biased diode. Diodes also will only work as voltage sources when current flows into the more positive terminal. So from a pure theory point of view, you should be prepared to accept current flowing either way through a voltage source. When thinking about real circuits, you need to consider what actual device the voltage source is modeling and then whether the model is still valid with whichever current direction the model predicts.
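As an illustration, the original schematic is not shown, so the component values below are assumed: a single loop with a 5V source, an opposing 2V source, and a 1 kOhm resistor. KVL makes the "backwards" current through the 2V source drop out directly:

```python
# Hypothetical single-loop circuit (values assumed, schematic not shown):
# a 5 V source, an opposing 2 V source, and a 1 kohm resistor.
V1 = 5.0    # volts, drives current out of its + terminal
V2 = 2.0    # volts, oriented so loop current enters its + terminal
R = 1000.0  # ohms

# KVL around the loop: V1 - V2 - I*R = 0
I = (V1 - V2) / R
print(I)  # 0.003 -> 3 mA flows "backwards" through the 2 V source

# A source with current entering its + terminal absorbs power
P_absorbed_by_V2 = V2 * I
print(P_absorbed_by_V2)  # 0.006 -> the 2 V source absorbs 6 mW
```

The sign of that absorbed power is exactly the "charging a battery" situation the question describes: an ideal source doesn't care, a real device might.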
{ "source": [ "https://electronics.stackexchange.com/questions/157302", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/64356/" ] }
157,620
If my schematic calls for a 1% resistor, can I use a 10% resistor that measures to the correct resistance within 1% or is there some quality to tolerance beyond what it measures Ohm-wise? For example, my schematic calls for a 1% 1000-Ohm resistor. I have a 1000-Ohm resistor with a silver band (10%). I measure the resistor using an Ohm-meter and it reads 1008 Ohms which is within 1% of 1000. Can I use the resistor and meet the designer's intent?
You don't provide much info regarding the environment that the circuit will be used in or the specific resistor types. If you expect temperature variations (or even temperature change caused by self-heating) then the temperature coefficient becomes important and the initial measured resistance value may soon be way off. By 10% you probably refer to either carbon or carbon film resistor types. Carbon resistors typically have a temperature coefficient of 5000 ppm/°C, which means 0.5% per °C. That is a 5% value change with just a 10°C variation. Carbon film resistors have temperature coefficients typically around 200 to 500 ppm/°C, which can give a value variation of 0.2% to 0.5% per 10°C temperature variation. On the other hand a metal film resistor (which is probably what you refer to by 1% tolerance) has temperature coefficients ranging between 10 and 100 ppm/°C, so a variation of 10°C will result in a value change of just 0.01% to 0.1%.
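A quick sketch of how much those tempco figures move a nominal 1 kOhm value over a 10°C change, using the typical ppm/°C values quoted above:

```python
# Resistance drift with temperature: R(T) = R0 * (1 + tempco * dT),
# with tempco in ppm/degC. Tempco values are the typical figures above.
def r_at_temp(r_nominal, tempco_ppm, delta_t):
    """Resistance after a temperature change of delta_t degC."""
    return r_nominal * (1 + tempco_ppm * 1e-6 * delta_t)

r = 1000.0  # ohms, nominal value at room temperature

# After a 10 degC rise:
carbon = r_at_temp(r, 5000, 10)       # carbon composition, ~5000 ppm/degC
carbon_film = r_at_temp(r, 500, 10)   # carbon film, worst typical case
metal_film = r_at_temp(r, 50, 10)     # metal film, mid-range tempco
print(round(carbon, 1), round(carbon_film, 1), round(metal_film, 1))
# 1050.0 1005.0 1000.5
```

So even if the 10% part measures perfectly today, a modest temperature swing can push it well outside a 1% window, which is the answer's point.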
{ "source": [ "https://electronics.stackexchange.com/questions/157620", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/39947/" ] }
157,647
I found a component attached to a PCB inside one of my old $5 devices. There are pink, rubbery things attached to either side of the LCD that can bend and were firmly attached (glued?), and there are a lot of exposed conductive traces where the pink things were attached on the PCB. I would like to know what these pink things are. Pictures are shown below. The front side of the LCD ^ The back side of the LCD ^ The front side of the PCB. This is where the LCD is connected to. ^ The back side of the PCB ^ Again, I would like to know what the pink things are.
Often called Zebra strips (or Elastomeric connectors). They have very thin vertical conductors that connect between flat PCB pads and things like LCD pads on glass. Here is a link with a similar pink component. http://en.wikipedia.org/wiki/Elastomeric_connector
{ "source": [ "https://electronics.stackexchange.com/questions/157647", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/68922/" ] }
157,658
I'm a beginner in electronics, and in my free time I love making stuff with my Arduino. As I was handling a few Bluetooth modules with my hands, I thought about static electricity discharging on the chips. I'm not very familiar with this, so I have a few questions about the topic, and I hope you could help me out to get some stuff straight in my head. Is static electricity really so dangerous? Can I really destroy something simple like an Arduino module if I touch it? Assuming that I need a discharge in order to pass the electricity, how is this happening? Anything more I should know?
Is static electricity really so dangerous? Yes. The conductive paths inside an IC are really small , so it doesn't take much energy through them to vaporize them.¹ There are millions of such paths inside the ICs of an Arduino, and it only takes damage to one of them to break the device. It is possible that you could get lucky and break some feature that you aren't using, but this is not a good gamble to take. Can I really destroy something simple like an Arduino module if I touch it? What makes you think an Arduino is "simple?" Microcontrollers are among the most complex and delicate objects humans make. Fabergé eggs are simple and durable by comparison. How does this happen? It happens the same way you get a discharge when touching a doorknob. Whenever the path through the device is a better path for the electricity than leaking away through your shoes and the air, it will take that path. Generally, the device is a good path because it is plugged into a power source, which means there's a ground path, which is a low-impedance path to a much lower voltage potential. Some of an Arduino's input pins will be protected against static discharge, but probably not all. Even those that are protected can be killed with enough hits. Protection doesn't make a pin invulnerable, it just allows it to withstand a certain amount of ESD energy. Like any armor, hit it enough times with enough energy, and you can break through. This is not to say that unplugging the device is a good solution to the problem, however. For one thing, it defeats much of the protection built into the device, because some of it works by shunting the dangerous energies to ground. When you unplug the device, you remove that path, so now the electricity is forced to take a different path through the device, one without this protection. An unplugged device still has other conductive paths that lead to a lower voltage potential. 
Consider that humans and shoes are poor conductors, yet we manage to get static buildups, and can be electrocuted. Anything more I should know? Go to a good electronics tool shop and look at all the antistatic products available. They're made for a good reason. Buy some. Use them. :) Footnotes: The sort of static discharges you feel when touching a doorknob range between 5 kV and 15 kV depending on several factors;² the peak current can be 1 A. The static electricity is equalized in about a microsecond and doesn't pass through your heart, so it is merely annoying to a human but potentially fatal to a semiconductor gate. The human body can also charge to less than 5 kV before discharging. There is a range where there is still enough energy in the discharge to damage a sensitive IC, but where the energy is too low for your nerves to sense it. Dry air allows the human body to store more charge because it leaks away slower than in moist air, so that the energy in a static discharge generally gets higher in winter. Your shoe type also affects the amount of energy you can deliver in a static discharge, as does the carpet. Then you have effects like how moist your skin is, whether your fingers are callused, etc.
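To get a feel for the energies involved, the discharge energy under the Human Body Model mentioned in the footnotes is just the stored capacitor energy, E = ½CV². A small sketch using the 100 pF HBM capacitance and the doorknob-zap voltage range quoted above:

```python
# Human Body Model discharge energy: E = 0.5 * C * V^2.
C = 100e-12  # farads, the HBM body capacitance quoted above

def esd_energy(voltage):
    """Energy (joules) released when a body at `voltage` discharges."""
    return 0.5 * C * voltage ** 2

print(esd_energy(5000))   # ~1.25e-3 J (about 1.25 mJ) at 5 kV
print(esd_energy(15000))  # ~1.13e-2 J (about 11 mJ) at 15 kV
```

A few millijoules is nothing to a fingertip but, delivered in a microsecond into a gate a few nanometres thick, it is easily enough to punch through the oxide.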
{ "source": [ "https://electronics.stackexchange.com/questions/157658", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/43242/" ] }
158,234
I have read that current is always the same within a circuit, but as far as I understood voltage is not. Every electronic part I use lowers the voltage more or less, even simple wires do this. So far, so good. Now I wonder why it does not matter if a resistor comes before or behind an LED with respect to voltage drops caused by the parts of a circuit. Suppose I have a very simple circuit: 9V Battery -> Resistor -> LED -> 9V Battery. Also suppose that the LED has a maximum voltage of 3V and 20 mA. So I need to calculate the desired resistor: 9V - 3V = 6V. So I need a resistor that takes 6V out, and since I want 20 mA and current is the same in the entire circuit, according to Ohm's law: U = R * I, so 6V = R * 0.02A, hence R = 6V / 0.02A = 300 Ohm. Again, so far, so good. Now, with a resistor taking out 6V this makes sure that only 3V are left for the LED: the battery provides 9V, the resistor uses 6V, the LED gets the remaining 3V. Everything's fine. What I do not get is why it also works the same way if I have the resistor behind the LED. Wouldn't that mean that we have a battery providing 9V, the LED getting all the 9V, using 3V, and then 6V remaining for the resistor? Why does this work? Shouldn't 9V be way too much for the LED? Why doesn't it matter if the resistor is set up before or behind the LED?
One thing you need to remember is that voltage is relative. Voltage is a potential difference and it makes no sense to discuss voltage without a 'zero' reference. In the case of your LED circuit there will be a voltage across the battery, a voltage across the LED, and a voltage across the resistor. If you add up all the voltages as you go around the loop, you get zero – up 9 at the battery, down 6 at the resistor, down 3 at the LED, the total is zero and you're back at the same point in the circuit. The LED only sees the difference in voltage between its two leads, as does the resistor. Since only the difference is important it makes no difference what order the parts are connected in. As for current, it is only the same along a continuous path. Electrons are not created or destroyed (what goes in must come out). Since there is only one possible path for the electrons to take in your circuit, the current will be the same through all of the components. In a parallel circuit you see the opposite: all of the components have the same voltage, but the currents will be different.
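The arithmetic from the question, written out; the KVL check at the end sums the same loop voltages in both component orders, which is exactly why the position of the resistor doesn't matter:

```python
# Series LED circuit from the question: pick the resistor, then check
# that the loop voltages sum to zero whichever order the drops come in.
V_supply = 9.0    # volts, the battery
V_led = 3.0       # volts, the LED forward voltage from the question
I_target = 0.02   # amps, the desired LED current

R = (V_supply - V_led) / I_target
print(round(R, 6))  # 300.0 -> ohms

V_resistor = I_target * R
# KVL: the source rise minus both drops is zero, in either order
assert abs(V_supply - V_resistor - V_led) < 1e-9  # resistor first
assert abs(V_supply - V_led - V_resistor) < 1e-9  # LED first
```

Both assertions pass because addition is order-independent; the circuit "doesn't know" which part comes first, only that the total drop around the loop equals the source.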
{ "source": [ "https://electronics.stackexchange.com/questions/158234", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/29522/" ] }
158,413
I've got a switch like this one: Now my (quite novice) question is: How do I best attach a wire to the connector without soldering, e.g. for quickly trying something out? Right now I've folded the wire so that I got a hook at its end, put that hook into the hole, and wrapped everything with electrical tape. Is there a better way? (I can't use alligator clamps as the two connectors are too close to each other.)
Quick disconnect terminals. They are good for permanent attachment too. The blades under the switch were intended for this type of terminal, so they should have a correct width and thickness. ( page where the picture came from )
{ "source": [ "https://electronics.stackexchange.com/questions/158413", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/29522/" ] }
158,778
At the voltage levels of typical overhead transmission lines in the US, a bird can land on one and be just fine (as long as it doesn't do something like spread its wings and touch a tree or something else at lower electric potential). However, what about a hypothetical powerline at much higher voltage (as in tens of megavolts). Could landing on such a powerline fatally-shock the bird even though it does not complete a circuit for sustained current? (Assume that the distance is long enough that electrical arc'ing is impossible.) NOTE: My understanding of what happens when a bird flies from an earth object to a powerline (please correct me if I'm wrong) is that - upon contacting the wire - its electric potential changes from earth-potential to the powerline's potential. In order for this to happen, there is an initial transfer of electrical energy (i.e. flow of charge i.e. current) from the powerline to the bird which "equalizes" their electric potential, which happens nearly instantaneously. If this is correct, then my question can be restated more generally as "Can an 'equalization charge' such as this result in a fatal shock, if the potential difference that it's equalizing is high enough?"
Assuming the bird still is at earth potential when entering in contact with the wire (say, it jumped right on it from the pole). There are lots of unknowns in this problem but let's try to fill some gaps with data we kind of know in humans. So until an EE stackexchanger who is an ornithologist shows up with interesting data, let's assume humans can fly and like to chill out hanging from a high voltage cable. All objects and living things have an equivalent electrical capacity. The Human Body Model is a convention which dictates humans are equivalent on that aspect to a 100pF capacitor (let's assume it doesn't reduce much from the ground to 23meters high, and call it a worst case scenario). Now, let's assume the contact resistance between the cable and wherever the geometric center of that capacitor is, is 3000Ohm - taken from the "Hand holding wire" case of the table in another thread - divided by two for a two hands contact. Then the total duration of the equilibrium current, taken as 5 times the time constant of the equivalent RC, is 0.75 microseconds. Effects of currents through living things depend on the magnitude of the current and the duration. I have never seen any study showing any data below 10ms (e.g. the same study cited above), which is not surprising as apparently the response time of the cardiac tissue is 3ms . For 10ms, the current that generates irreversible effects is 0.5A, and it seems to have settled at that point (little dependent on the duration), certainly down to 3ms. Let's assume that past that point, the cardiac tissue behaves like an ineffective first order system, attenuating 20dB/decade. The required current for similar effects would be 20*4.25=90dB higher, or 15811A. For a contact resistance of 1500Ohms as used above, it means the voltage of the cable needs to be 23GV! Burns solely depend on the energy transferred, so theoretically a high voltage could burn for such a small time. But how high? 
Well, "Electrical injuries: engineering, medical, and legal aspects", page 72 , states: The estimated lowest current that can produce noticeable first or second degree burns in a small area of the skin is 100A for 1s Edit: Note that 100A is quite high, it is unclear how the author defines "first degree burns on small area of skin", but I would guess it would be for an area bigger than an inch, burning all epidermis and some of the dermis cells such that they peel away. So for 750nanoseconds, that's 133MA required! If we use again the 1500Ohms resistance from above, that means the wire would need to be at 199GV, which is insane. Chances are there will be other nasty effects before those burns appear, but neither 23GV nor 199GV sound likely in the near future. Side note, as J... raised in the comments, a 23GV cable would spontaneously arc with anything at Earth potential within 7.6km and therefore would require an incredible amount of isolation. As if it wasn't enough, you may have noticed that the above assume the maximum current is applied for the entire duration of the equilibrium current whereas in fact it is a decaying exponential... The average current over this duration is in fact 0.2 times the maximum, so these values should really be 115GV and 995GV! Warning: This does not mean it is safe to jump on and hang from high voltage lines, this is a quick analysis with rough data estimates and modelling and shall not be considered a justification for your actions.
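As a sanity check on the answer's timing figure, the equalisation transient is just an RC discharge. A sketch using the same assumed values (100 pF body capacitance, 1500 Ohm two-hand contact resistance); the 500 kV line voltage in the last step is only an illustrative extra-high-voltage figure, not one taken from the answer:

```python
# Back-of-envelope equalisation transient from the answer's assumptions.
C = 100e-12  # farads, Human Body Model capacitance
R = 1500.0   # ohms, assumed two-hand contact resistance

tau = R * C
settle = 5 * tau   # duration of the equalisation current, ~5 time constants
print(settle)      # ~7.5e-07 s, i.e. about 0.75 microseconds

# Peak equalisation current for a given line voltage (decays with tau):
def peak_current(v_line):
    return v_line / R

print(round(peak_current(500e3)))  # 333 -> ~333 A peak for a 500 kV line
```

The huge peak current looks alarming, but as the answer argues, it lasts far too briefly for cardiac tissue to respond or for meaningful heating to occur.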
{ "source": [ "https://electronics.stackexchange.com/questions/158778", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/49756/" ] }
158,910
I'm trying to find a coating for my PCB in order to protect it from reverse engineering. I want it opaque, non-conductive and with a low thermal resistance. Basically, I do not want someone to be able to replicate my circuit easily. I've been searching a lot for coatings but didn't find anything that suits all my requirements. Anyone with an idea?
There is no such thing. If someone really wants to reverse engineer your circuit board, they will be able to do it. The only question is how much trouble, and therefore expense and time, it will take. At best, you can make it too difficult for the casual copier, although that's probably not who you are worried about. The closest thing that matches your spec is to "pot" the circuit board. There is stuff called potting compound specifically for this purpose. There are many different types, from 2-part epoxy mixes, to goop that cures over time or with heat. Each has its own set of hassles and expense at your end. If you do still end up doing this, make sure to use material specifically intended for this. Some of the potting compounds are silicone based, but there is a wide range of silicones. Some emit acetic acid as a byproduct of curing, for example. Those won't be sold as potting compound for electronic circuits. But someone seeing silicone potting compound, and then the acid-cure stuff cheaper, may have a bright idea about how to save money. Silicone is usually transparent. Butyl rubber is sometimes used for potting, especially for high voltage circuits. It's really sticky and gooey stuff until cured, yuck. Before you go potting your circuit board, think carefully about whether the advantages are really worth the significant cost on your end. Potting won't do much to slow down a determined cloner that already has the equipment in place. If that's who is going to copy your circuit, you are actually doing the cloner a favor by making your product more expensive than it needs to be and allowing him a nice margin to undercut you. Potting also has other drawbacks beyond just the expense. It makes the product heavier, allows for less power dissipation of components, can trap unwanted dirt and moisture, and makes diagnosing of field failures difficult. In summary, don't pot to prevent cloning, since it won't.
Pot if you need a high voltage barrier, want to withstand a harsh environment, or want to add mechanical ruggedness.
{ "source": [ "https://electronics.stackexchange.com/questions/158910", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/45621/" ] }
158,930
A message signal is modulated on a carrier wave of amplitude \$50V\$ using amplitude modulation. The modulation index is given as \$50\%\$. Find the amplitude of the AM wave. I was able to calculate the amplitude of the message signal as \$25V\$. However I'm stuck at the amplitude of AM wave. The maximum displacement of the AM wave will be \$50+25=75V\$ and the minimum displacement will be \$50-25=25V\$. So is the amplitude of the AM wave \$75V\$ or is it \$75-25=50V\$ i.e. is it the maximum displacement or the difference between the maximum and minimum displacements? I know amplitude is the maximum displacement from the mean position however I can't seem to figure the mean position of this AM wave.
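For what it's worth, the envelope extremes follow directly from the numbers in the question: the carrier amplitude times (1 ± m). A quick sketch:

```python
# AM envelope extremes from the question's numbers:
# carrier amplitude 50 V, modulation index m = 0.5.
Ac = 50.0
m = 0.5

Am = m * Ac              # message amplitude
env_max = Ac * (1 + m)   # peak of the AM envelope
env_min = Ac * (1 - m)   # trough of the AM envelope
print(Am, env_max, env_min)  # 25.0 75.0 25.0
```

The envelope swings between 25 V and 75 V about the 50 V carrier level, which is the mean position the question is looking for.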
{ "source": [ "https://electronics.stackexchange.com/questions/158930", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/49416/" ] }
159,552
I've heard about laptops such as the new Chromebooks that are charged via a wall charger that connects to a USB-C port. I'm quite happy that this will supposedly standardize laptop chargers but I'm a little unclear about how this works. Existing USB ports provide a 5 volt source, but the laptop chargers provide up to 20 volts. is there some kind of higher voltage line or are the USB-C powered laptops running on a lower voltage? All this information I've seen gives a fairly vague idea of providing more power, providing up to 100 watts rather than 10 watts. Even so, my laptop is not the most powerful machine and the charger still outputs fairly near 100 watts, I can imagine more chargers for power laptops providing much more than 100 watts. Could a general purpose USB-C charger really power all these machines?
USB-C will use the Power Delivery specification: a first connection is made at 5V, then the two ends "negotiate" whether a higher profile can be used to charge. There are 5 profiles available:

Profile 1: 5V@2A
Profile 2: 5V@2A or 12V@1.5A
Profile 3: 5V@2A or 12V@3A
Profile 4: 5V@2A or 12V@3A or 20V@3A
Profile 5: 5V@2A or 12V@5A or 20V@5A

There are 4 connection points for the power (2 on each side - see pinout below); as far as I know they are all equal and may be connected by a single wire (I think it will be at the cable manufacturer's discretion). These additional connections allow going for higher current without a massive voltage drop at the connection. Coupled with the higher voltage, that gives much higher charging power. All in all, I guess the laptops will charge at 5V as well (on a USB-A charger), just far slower. And based on what I saw from Apple for their new MacBook, the charger is 29W, so most likely a Profile 3 (a bit under spec); it seems then to be only 12V. It seems that additional profiles have been added by some manufacturers. For instance, Qualcomm Quick Charge 2.0 seems to be an implementation of Power Delivery but using 9V too. This technology, though, does not use the Power Delivery specification, as it uses the D+/D- lines of the USB 2.0 port to negotiate the voltage. Qualcomm Quick Charge 3.0 goes one step further and allows negotiating any voltage from 3.7V to 20V in increments of 200mV. No data found so far about the current at each voltage.
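The profile table above is easy to encode; the small helper below (a sketch, not part of any real PD stack) just reports the best-case power a given profile can deliver:

```python
# The five Power Delivery profiles from the answer, as (volts, amps)
# options; a sink negotiates the highest option both ends support.
profiles = {
    1: [(5, 2)],
    2: [(5, 2), (12, 1.5)],
    3: [(5, 2), (12, 3)],
    4: [(5, 2), (12, 3), (20, 3)],
    5: [(5, 2), (12, 5), (20, 5)],
}

def max_power(profile):
    """Best-case wattage available from a given profile."""
    return max(v * a for v, a in profiles[profile])

print(max_power(3))  # 36 -> watts, roughly the MacBook charger class
print(max_power(5))  # 100 -> watts, the headline USB-C figure
```

This also shows why higher voltages matter: at a fixed cable current limit, wattage can only grow by raising the negotiated voltage.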
{ "source": [ "https://electronics.stackexchange.com/questions/159552", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/69833/" ] }
159,953
With only 4 cables, two of which are ground, it seems like it has way too many pins. Why is this so?
The connector shown in the image is a 15-pin SATA connector. Pin description: The connector can have 5 wires. And this particular connector shown in the question is missing the 3.3 V (orange) wire. The new SATA power connector contains many more pins for several reasons: 3.3 V is supplied along with the traditional 5 V and 12 V supplies. To reduce impedance and increase current capability, each voltage is supplied by three pins in parallel, though one pin in each group is intended for precharging. Five parallel pins provide a low-impedance ground connection. Two ground pins, and one pin for each supplied voltage, support hot-plug precharging. Ground pins 4 and 12 in a hot-swap cable are the longest, so they make contact first when the connectors are mated. Drive power connector pins 3, 7, and 13 are longer than the others, so they make contact next. The drive uses them to charge its internal bypass capacitors through current-limiting resistances. Finally, the remaining power pins make contact, bypassing the resistances and providing a low-impedance source of each voltage. This two-step mating process avoids glitches to other loads and possible arcing or erosion of the SATA power connector contacts. Pin 11 can function for staggered spinup, activity indication, both, or nothing. Source: wikipedia article on Serial ATA .
{ "source": [ "https://electronics.stackexchange.com/questions/159953", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/70015/" ] }
160,218
I am using a camera module of our custom application. The camera module started consuming more current when compared to a previous board which has all the same settings, chipsets and modules. In our conversation with a support engineer, we got this answer: VCAMD power supply in previous board is driven by a 1.27V DCDC, in present board it is driven by LDO. In a dark environment, DCDC will save about 14mA and in a light environment, DCDC will save about 25mA. So test results of both are different. How can using a DCDC save power, the module would consume the power that it needs?
It's not that a DCDC (buck regulator) saves power, it's that an LDO wastes power. In effect a buck regulator converts the voltage difference into more available current. An LDO converts the voltage difference into heat, and heat is a waste product you don't really want. An LDO regulating, say, 12V down to 5V has to drop 7V and dissipate that power as heat. The more current you draw the more heat is produced. If you draw 1A through that example (5W) it in turn draws 1A from the power source (12W), so it has to lose 7W of power to the atmosphere. A perfect (they don't exist, but for illustration) buck regulator going from 12V to 5V with a 1A output (5W) would in turn draw 5W from the power source, which at 12V would be 417mA. Of course, as I say, perfect buck regulators don't exist and there are still losses, so it would actually draw slightly more from the source, more like maybe 6W, or 500mA. Still considerably less than an LDO. There are downsides to buck regulators though:

They are noisy. They work by rapidly switching the power on and off, and that makes for higher radiated and conducted emissions.
They are harder to lay out. To keep the EMI emissions low and get them to pass compliance testing, careful layout on the PCB has to be considered.
They use more components. An LDO typically is one chip and a couple of capacitors. Buck regulators also need (normally) at least a diode and an inductor as well.

All that adds up to buck regulators being more expensive than LDOs.
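The 12 V to 5 V example works out like this; the 85% buck efficiency is an assumed typical figure, not one from the answer:

```python
# LDO vs buck for the 12 V -> 5 V, 1 A example in the answer.
V_in, V_out, I_out = 12.0, 5.0, 1.0
P_out = V_out * I_out        # 5 W delivered to the load

# LDO: input current equals output current; the difference burns as heat
P_in_ldo = V_in * I_out      # 12 W drawn from the source
heat_ldo = P_in_ldo - P_out  # 7 W of heat in the regulator

# Buck at an assumed 85% efficiency (typical, not from a datasheet)
eff = 0.85
P_in_buck = P_out / eff
I_in_buck = P_in_buck / V_in
print(heat_ldo, round(P_in_buck, 2), round(I_in_buck, 3))
# 7.0 5.88 0.49  -> 7 W of LDO heat vs ~0.9 W of buck loss, ~490 mA input
```

The input-current difference (1 A vs roughly 0.5 A) is exactly the "saved" 14 to 25 mA the support engineer was describing, just at a smaller scale in the camera module.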
{ "source": [ "https://electronics.stackexchange.com/questions/160218", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10502/" ] }
160,249
I would like to understand how the computation process causes the processor to get hot. I understand that the heat is generated by the transistors. How does the transistors generate the heat exactly? Is the correlation between the number of chips and the heat generated linear? Do CPU manufacturers optimize the positions of single transistors in order to minimize the heat generated?
A transistor (FET, in modern ICs) never switches instantly from full OFF to full ON. There is a period while it's turning on or off where the FET acts like a resistor (even when fully ON it still has a resistance). As you know, passing a current through a resistor generates heat (\$P=I^2R\$ or \$P=\frac{V^2}{R}\$). The more the transistors switch the more time they spend in that resistive state, so the more heat they generate. So the amount of heat generated can be directly proportional to the number of transistors - but it is also dependent on which transistors are doing what and when, and that depends on what the chip is being instructed to do. Yes, manufacturers may position specific blocks of their design (not individual transistors, but blocks that form a complete function) in certain areas depending on the heat that block could generate - either to place it in a location with better heat bonding, or to place it away from another block that may generate heat. They also have to take into account power distribution within the chip, so placing blocks arbitrarily may not always be possible, so they have to come to a compromise.
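A common first-order estimate of the switching (dynamic) power the answer describes is P = C·V²·f, where C is the total effective switched capacitance. The values below are illustrative only, not figures for any particular CPU:

```python
# First-order dynamic power estimate: P = C * V^2 * f.
def dynamic_power(c_switched, v_supply, freq):
    """Heat from charging/discharging c_switched farads each cycle."""
    return c_switched * v_supply ** 2 * freq

# e.g. 1 nF of effective switched capacitance, 1.0 V core, 3 GHz clock
p = dynamic_power(1e-9, 1.0, 3e9)
print(round(p, 6))  # 3.0 -> watts

# Doubling the clock doubles the heat, all else equal:
assert dynamic_power(1e-9, 1.0, 6e9) == 2 * p
```

The V² term is also why lowering the core voltage is such an effective lever: halving V cuts dynamic power by four, while halving f only cuts it by two.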
{ "source": [ "https://electronics.stackexchange.com/questions/160249", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/7768/" ] }
160,258
I want to make a device, which takes an AC audio signal like a guitar pickup signal, or piano or what have you as input, and changes the pitch using this function: newFrequency = oldFrequency * 2 ^ ( ( interval - 1 ) / 12 ) oldFrequency is the frequency of current note. Interval is just the number of half-steps from that music note, including that note. For example: A note at 110Hz with interval = 13 would be A at 220Hz. A note at 110Hz with interval = 1 would be A at 110Hz. A note at 110Hz with interval = 7 would be E at 164.814Hz. That's the case for any music note. Not only A. By the way, interval CAN take negative number for a lower pitch. The problem is that we have an audio signal at input. One can play multiple notes or a chord at once. Is that going to be a problem? I'm new to digital electronics, or electronics at all. I would appreciate if you could just give me keywords and leads, so that I could search, study and do more research. I don't know where to begin at all. Thanks in advance.
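The formula in the question is easy to sanity-check numerically. Note that under its 1-based interval convention, the E-at-164.814 Hz example actually corresponds to interval = 8 (seven semitones up), not 7:

```python
# The interval formula from the question, with its 1-based convention:
# newFrequency = oldFrequency * 2 ** ((interval - 1) / 12)
def shift(old_freq, interval):
    return old_freq * 2 ** ((interval - 1) / 12)

print(shift(110, 13))             # 220.0   -> A, one octave up
print(shift(110, 1))              # 110.0   -> unchanged
print(round(shift(110, 8), 3))    # 164.814 -> E, a perfect fifth up
print(round(shift(110, -11), 3))  # 55.0    -> negative intervals go down
```

Applying this to a whole signal (rather than a single frequency) is the hard part the answers to this kind of question usually focus on, since a chord contains many frequencies at once.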
{ "source": [ "https://electronics.stackexchange.com/questions/160258", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/68496/" ] }
160,259
A friend of mine is having a large, fancy / artistic lamp at home (Europe) and uses a "500W incandescent bulbs on 230V" as recommended by the manufacturer (link) . His problem : every month or so the bulb blows up, and he needs to change it (it's annoying and these bulbs are expensive). It mostly (only?) blows up when he turns on the light from the wall electrical switch (which actions the wall socket where the lamp is plugged into). Without being certain, I suspect the issue is coming from overvoltage (*). My question : if indeed the cause is overvoltage, what would be a simple (in a minimalistic sense), yet practical and robust electrical / electronic circuit that a hobbyist could build to insert between the mains socket and the lamp plug to remove the surge? I assume that the right combination of inductors, capacitors, resistors or diodes (maybe a fuse?) used properly could be enough, but if more "fancy" components could make a significant improvement (eg gas discharge tubes ?), then why not. Thanks for your help. (*) not very scientific but in order to make sure that the issue is not due to a defective wall socket (or another part of is home's electrical wiring), I suggested that he temporarily plug the lamp into another socket, in another room. He did that but he got the same issue (burnt two bulbs in 3 months). I am not living nearby, so I could not go and test the surge myself. PS : for those of us who are "environmentally minded", I did suggest to stop using incandescent bulbs, but he would not listen - apparently the point with these lamps is to use old style bulbs, that's what makes their charm (I do not really understand but anyway my motivation is elsewhere, I want to learn how to fix this sort of technical issues efficiently). Addition : here are a couple of photos of the bulbs used, in case it helps.
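One candidate mechanism worth checking before building a surge filter: an incandescent filament has a much lower resistance cold than hot (roughly a factor of 10 for tungsten is a common rule of thumb), so the current at the instant of switch-on is large, and that is exactly when these bulbs tend to fail. A rough calculation for the bulb in question (the factor of 10 is a rule of thumb, not a measurement):

```python
V = 230.0   # mains voltage, volts
P = 500.0   # rated bulb power, watts

R_hot = V**2 / P       # filament resistance at operating temperature
I_hot = V / R_hot      # steady-state operating current
R_cold = R_hot / 10    # ~10x lower when cold (rule of thumb)
I_inrush = V / R_cold  # worst-case current at the instant of switch-on

print(round(R_hot, 1))     # 105.8 ohms
print(round(I_hot, 2))     # 2.17 A
print(round(I_inrush, 1))  # 21.7 A
```

If inrush is indeed the culprit, an NTC inrush limiter or a soft-start dimmer is the usual remedy rather than a surge filter.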
{ "source": [ "https://electronics.stackexchange.com/questions/160259", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/49841/" ] }
160,452
Wafers used for making semiconductors are round -- but this wastes quite a few chips around the periphery of the wafer in the fabrication process. Wouldn't it make sense to make the wafer as a square or rectangle instead? Is there some aspect of the lithography process that requires that the surface be round?
As the wafer material is drawn up out of the molten silicon, it is spun in order to produce a single uniform silicon crystal via the Czochralski process. It is this spinning that produces the round profile of the wafer itself.
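The edge waste the question asks about can be estimated with the common dies-per-wafer approximation DPW ≈ π(d/2)²/S − πd/√(2S), where d is the wafer diameter and S the die area. This is a first-order estimate only (it ignores scribe lanes and edge-exclusion zones), and the numbers below are illustrative:

```python
import math

def dies_per_wafer(d_mm, die_area_mm2):
    """First-order dies-per-wafer estimate for square dies on a round wafer."""
    gross = math.pi * (d_mm / 2) ** 2 / die_area_mm2       # pure area ratio
    edge_loss = math.pi * d_mm / math.sqrt(2 * die_area_mm2)  # partial dies at the rim
    return gross - edge_loss

# 300 mm wafer, 10 mm x 10 mm die: ~707 dies by area alone, ~640 usable.
print(int(dies_per_wafer(300, 100)))  # 640
```
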
{ "source": [ "https://electronics.stackexchange.com/questions/160452", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2467/" ] }
161,192
Frankly, why do all communication ICs (or at least many of them, or the most famous and popular ones) such as Bluetooth, WiFi or GSM modules support the AT command set? Why don't they have a simple D/C (Data or Command) pin for communication instead? What are the benefits of using the AT command set? The AT command set is big, takes time and memory space, and makes communication more complicated, whereas you could instead use a simple D/C pin and send an integer to set registers or send data.
brhans is correct - Legacy. In the 1980s, Hayes began making the "Smartmodem 1200". It was obsolete almost immediately and Hayes rushed out the Smartmodem 2400. In that rush, there was no time for design alterations between the modem designs. As a result, Hayes were the first to make two different speed modems that accepted the same programming commands! Any software that could get a Smartmodem 1200 to dial a telephone number could also dial a Smartmodem 2400. At the time, every new modem required months for an updated driver to be written. When the Smartmodem 2400 came on the market, there was already a working driver for the Smartmodem 1200 so no months of waiting. Suddenly other manufacturers realised the advantage of new modems having the same command set as older modems. Within six months, vendors were offering "Hayes compatible" modems as the only choice. Which got them sued by Hayes. So everyone started calling their modems "AT Command Set compatible", but continued to use the Hayes command set. By the mid 80s no consumer modems were made that could not use the AT command set. As a result every modem like comms system uses AT commands. There are other advantages too - as the command set is ASCII, anyone can manually type AT commands into a terminal window to control a modem. Because my own modem had a dicey RJ11 connection, I used to start every session in Procomm Plus with: AT OK ATH1 [dial tone] ATDT [phone number] Just to make sure I got the dial tone. If I didn't, I'd go around and wiggle the wires a bit!
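Because the command set is plain ASCII over a serial line, handling it in code is straightforward. A minimal sketch of the response framing (the parsing helper is mine, and real modules vary in line endings and unsolicited messages; `+CSQ` is the standard signal-quality query):

```python
def parse_at_response(raw: bytes):
    """Split a raw module reply into lines and report the final result code.
    AT responses conventionally end with a terminal OK or ERROR line."""
    lines = [ln.strip() for ln in raw.decode("ascii", "replace").splitlines()
             if ln.strip()]
    status = lines[-1] if lines else ""
    return status == "OK", lines[:-1]

ok, payload = parse_at_response(b"AT+CSQ\r\n+CSQ: 21,0\r\n\r\nOK\r\n")
print(ok, payload)   # True ['AT+CSQ', '+CSQ: 21,0']
```

This is also why you can drive a module by hand from any terminal program, exactly as described above with Procomm Plus.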
{ "source": [ "https://electronics.stackexchange.com/questions/161192", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/29617/" ] }
161,457
I know that the capacitors store energy by accumulating charges at their plates, similarly people say that an inductor stores energy in its magnetic field. I cannot understand this statement. I can't figure out how an inductor stores energy in its magnetic field, that is I cannot visualize it. Generally, when electrons move across an inductor, what happens to the electrons, and how do they get blocked by the magnetic field? Can someone explain this to me conceptually? And also please explain these: If electrons flow through the wire, how are they converted to energy in the magnetic field? How does back-EMF get generated?
This is a deeper question than it sounds. Even physicists disagree over the exact meaning of storing energy in a field, or even whether that's a good description of what happens. It doesn't help that magnetic fields are a relativistic effect, and thus inherently weird. I'm not a solid state physicist, but I'll try to answer your question about electrons. Let's look at this circuit: simulate this circuit – Schematic created using CircuitLab To start with, there's no voltage across or current through the inductor. When the switch closes, current begins to flow. As the current flows, it creates a magnetic field. That takes energy, which comes from the electrons. There are two ways to look at this: Circuit theory: In an inductor, a changing current creates a voltage across the inductor \$(V = L\frac{di}{dt})\$. Voltage times current is power. Thus, changing an inductor current takes energy. Physics: A changing magnetic field creates an electric field. This electric field pushes back on the electrons, absorbing energy in the process. Thus, accelerating electrons takes energy, over and above what you'd expect from the electron's inertial mass alone. Eventually, the current reaches 1 amp and stays there due to the resistor. With a constant current, there's no voltage across the inductor \$(V = L\frac{di}{dt} = 0)\$. With a constant magnetic field, there's no induced electric field. Now, what if we reduce the voltage source to 0 volts? The electrons lose energy in the resistor and begin to slow down. As they do so, the magnetic field begins to collapse. This again creates an electric field in the inductor, but this time it pushes on the electrons to keep them going, giving them energy. The current finally stops once the magnetic field is gone. What if we try opening the switch while current is flowing? The electrons all try to stop instantaneously. This causes the magnetic field to collapse all at once, which creates a massive electric field. 
This field is often big enough to push the electrons out of the metal and across the air gap in the switch, creating a spark. (The energy is finite but the power is very high.) The back-EMF is the voltage created by the induced electric field when the magnetic field changes. You might be wondering why this stuff doesn't happen in a resistor or a wire. The answer is that it does -- any current flow is going to produce a magnetic field. However, the inductance of these components is small -- a common estimate is 20 nH/inch for traces on a PCB, for example. This doesn't become a huge issue until you get into the megahertz range, at which point you start having to use special design techniques to minimize inductance.
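The current build-up described above follows i(t) = (V/R)(1 − e^(−t·R/L)), and the energy parked in the magnetic field once the current settles is E = ½Li². A numeric sketch with illustrative values (chosen so the final current is the 1 amp in the narrative; the inductance is made up):

```python
import math

V, R, L = 1.0, 1.0, 0.1   # volts, ohms, henries -- illustrative values

def i(t):
    """Inductor current t seconds after the switch closes."""
    return (V / R) * (1 - math.exp(-t * R / L))

tau = L / R                      # time constant, seconds
E_final = 0.5 * L * (V / R)**2   # energy stored once the current settles

print(round(i(5 * tau), 3))  # 0.993 A -- essentially settled after 5 time constants
print(E_final)               # 0.05 J held in the magnetic field
```
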
{ "source": [ "https://electronics.stackexchange.com/questions/161457", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/68543/" ] }
161,465
I have learnt that the MPLAB IDE has different compilers like C18, XC8 and HI-Tech. I want to know the following things: Why are there different compilers when one can do the job? Are certain compilers only used for specific microcontrollers? Are there more compilers than these 3 that I need to be aware of? If any compiler can be used for say compiling PIC18F code, then what decides which one I choose? I really want to know how to decide which one to go with.
{ "source": [ "https://electronics.stackexchange.com/questions/161465", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/20711/" ] }
161,855
I'm an electronic & electrical engineering student in first year of university. I have had to use a few core programs as part of my course for programming and simulation software. However I came across AutoCAD Electrical and struggling to understand the use of the application. It being an electrical CAD software, I assumed that I'd be able to carry out similar tasks as Micro-Cap (which I use to analyse AC, DC circuits) or Proteus (which I use to simulate code for PIC micro-controllers). However I've yet to find any tutorial that says that AutoCAD Elec does any of those. If not, then I would like to know the essential point of it? If drawings is the main use of it, can't I just continue using Visio, which I had to map a fuse-box design for a side project. It is pretty fiddly on Visio, but if all AutoCAD Elec is, is a drawing tool, in what way would it excel Visio for the kind of drawings I need (all car related: looms, fuse-boxes, motors)?
As I understand it, AutoCAD Electrical is normal AutoCAD, but with some features that support electrical designers. In this context, 'electrical designers' means people designing low-voltage motor control centres, industrial plant, control cubicles, and so on. Note the distinction between electrical design and electronics design. Electrical design is concerned with switchboards, 200 metre long cables, motor control contactors, junction boxes, and so on. That is the audience AutoCAD Electrical is intended for. We use AutoCAD Electrical to draw power single line diagrams, motor control circuit schematics, switchboard general arrangements, and so on. In industrial contexts, all wires and terminals need to be individually numbered. A feature I am aware of is that AutoCAD Electrical can automatically track wire numbers (ferrule numbers) and terminal numbers, making sure these numbers are all unique and sequential. I would have really appreciated this on the last electrical design job I did, where we numbered all the wires and terminals by hand, with frequent mistakes. If you aren't designing something like what's shown above, AutoCAD Electrical probably isn't for you.
{ "source": [ "https://electronics.stackexchange.com/questions/161855", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/61661/" ] }
162,309
A.S. The question A question on pull up resistors answers only a part of my question as already mentioned in "EDIT" made shortly after this question was asked & answers here (below) are much detailed, in-context & easy to understand. Definitely not a duplicate; marking as duplicate for 2-3 points I am reading a book on Arduino & I just do not understand the concept of push-up resistor, following is a quote from the book: Why do we need the resistor R1? R1 guarantees that the Arduino’s digital input pin 7 is connected to a constant voltage of +5V whenever the push button is not pressed. If the push button is pressed, the signal on pin 7 drops to ground (GND), at the same time the Arduino’s +5V power is connected to GND, we avoid a shorted circuit by limiting the current that can flow from +5V to GND with a resistor (1 - 10 KΩ). Also, if there was no connection from pin 7 to +5V at all, the input pin would be “floating” whenever the pushbutton is not pressed. This means that it is connected neither to GND nor to +5V, picking up electrostatic noise leading to a false triggering of the input. Another book called it Arduino's pull-up resistance because it pulls current towards 5V , which confuses me even more - how can a resistor increase voltage, shouldn't the voltage drop? Edit - thanks to @Golaž for pointing to helpful material at A question on pull up resistors , in comments (this edit was inserted on Mar 30 at ~6) . So, what is this whole concept? And which term push-up/pull-up is correct? Also, with reference to that circuit above - What is floating pin? How does R1 avoid a shorted circuit ? Why does it count as short cicuit & not closed circuit. After all GND is a sink. Is a short circuit serious problem at mere 5V I have already read: What are the mechanisms at work in a pull-up or pull-down resistor circuits with a push-buttons and a GPIO? Push-pull/open drain; pull-up/pull-down But I still don't quite grasp it.
"Pull-up" is used more often in circuit design than "push-up". But I would imagine anyone would understand you either way. The pull-up resistor isn't increasing the voltage. It's simply connecting the 5V supply that already exists to the digital input pin of the Arduino. Digital input pins are designed to have very high internal resistance, so extremely little current will flow into the pin. If very little current is flowing through a resistor, the voltage on either side of the resistor will be about the same. So R1 will have approximately 5V on both sides of it. That means, most importantly, the voltage at the input pin will be 5V when the switch isn't being pressed. Note: The arrows represent the current flow. simulate this circuit – Schematic created using CircuitLab Whenever the Arduino measures the state of the digital input pin, it can only choose one of two options: high or low. Some external device must be connected to that pin (in your case, a switch) to apply either a high voltage or a low voltage. But what if there's nothing connected to the pin? You might be tempted to say it should read as a low voltage. Unfortunately, that's not correct. We call this condition "floating". The pin is not being actively driven high or low by an external device, so it's just floating in an unknown state. This is dangerous because the Arduino still must choose high or low when it measures the pin. It can't choose "neither" as an option. Which one will it choose? Who knows. In fact, the physical metal pin itself will act as a tiny antenna and may be affected by any nearby electrostatic field. Simply moving your hand in the vicinity of the chip may cause it to change states sporadically. Moral of the story: NEVER leave a digital input floating. The sole job of the pull-up resistor is to prevent the pin from floating when the switch isn't being pressed. Once the switch is actually pressed, the 5V supply has a path to ground through the pull-up resistor and the closed switch. 
But the resistor will limit the current to a reasonable amount, which avoids a short to ground. Using Ohm's Law, the voltage at the bottom of the resistor will now be very close to 0V. Since that's where the digital input pin is connected, the Arduino will read a low voltage. Now the firmware programmed into the Arduino can safely read the status of the digital input pin and determine when the button is being pressed. simulate this circuit So why don't we just connect the 5V supply directly to the digital input pin and forgo the pull-up resistor? As you indicated in your question, that will cause the 5V supply to short to ground when the switch is pressed. Depending on how big the power supply is, it could cause a massive current flow through the switch. It could damage the power supply and possibly melt the button. At the very least, the power supply will brown-out or black-out and the supply output will drop to nearly zero. If the power supply has smarts in it, it'll simply turn off. So yes, a short to ground is a serious problem at a "mere 5V". simulate this circuit
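Ohm's Law makes the sizing concrete. With the 10 kΩ pull-up shown, pressing the switch draws a harmless half-milliamp; with no resistor at all, the only limit is stray wiring resistance (the 0.1 Ω below is an illustrative guess, not a measured value):

```python
V = 5.0

I_pullup = V / 10_000   # switch pressed, 10 kOhm pull-up in the path
I_short = V / 0.1       # no resistor: only stray resistance limits the current

print(I_pullup)  # 0.0005 A, i.e. 0.5 mA -- negligible
print(I_short)   # 50.0 A -- enough to damage the supply or melt the switch
```
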
{ "source": [ "https://electronics.stackexchange.com/questions/162309", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/71057/" ] }
162,316
Several months ago I made a 10 A power supply with 5 outputs; each output has a fuse holder for 10 A glass fuses. They work as intended, but it's a chore to replace them every time I short them. I found this "10A resettable fuse"; what are the chances of it replacing the glass fuses with no drawbacks at all? I do know that polyfuses take a while to act, since they trip on the heat generated by the short circuit, putting the load at risk. However, I'm not quite sure what kind of fuses the ones in the picture are. Could they work out better than polyfuses? What kind of fuses are inside these devices?
"Pull-up" is used more often in circuit design than "push-up". But I would imagine anyone would understand you either way. The pull-up resistor isn't increasing the voltage. It's simply connecting the 5V supply that already exists to the digital input pin of the Arduino. Digital input pins are designed to have very high internal resistance, so extremely little current will flow into the pin. If very little current is flowing through a resistor, the voltage on either side of the resistor will be about the same. So R1 will have approximately 5V on both sides of it. That means, most importantly, the voltage at the input pin will be 5V when the switch isn't being pressed. Note: The arrows represent the current flow. simulate this circuit – Schematic created using CircuitLab Whenever the Arduino measures the state of the digital input pin, it can only choose one of two options: high or low. Some external device must be connected to that pin (in your case, a switch) to apply either a high voltage or a low voltage. But what if there's nothing connected to the pin? You might be tempted to say it should read as a low voltage. Unfortunately, that's not correct. We call this condition "floating". The pin is not being actively driven high or low by an external device, so it's just floating in an unknown state. This is dangerous because the Arduino still must choose high or low when it measures the pin. It can't choose "neither" as an option. Which one will it choose? Who knows. In fact, the physical metal pin itself will act as a tiny antenna and may be affected by any nearby electrostatic field. Simply moving your hand in the vicinity of the chip may cause it to change states sporadically. Moral of the story: NEVER leave a digital input floating. The sole job of the pull-up resistor is to prevent the pin from floating when the switch isn't being pressed. Once the switch is actually pressed, the 5V supply has a path to ground through the pull-up resistor and the closed switch. 
But the resistor will limit the current to a reasonable amount, which avoids a short to ground. Using Ohm's Law, the voltage at the bottom of the resistor will now be very close to 0V. Since that's where the digital input pin is connected, the Arduino will read a low voltage. Now the firmware programmed into the Arduino can safely read the status of the digital input pin and determine when the button is being pressed. simulate this circuit So why don't we just connect the 5V supply directly to the digital input pin and forgo the pull-up resistor? As you indicated in your question, that will cause the 5V supply to short to ground when the switch is pressed. Depending on how big the power supply is, it could cause a massive current flow through the switch. It could damage the power supply and possibly melt the button. At the very least, the power supply will brown-out or black-out and the supply output will drop to nearly zero. If the power supply has smarts in it, it'll simply turn off. So yes, a short to ground is a serious problem at a "mere 5V". simulate this circuit
{ "source": [ "https://electronics.stackexchange.com/questions/162316", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/52466/" ] }
163,141
I have a cheap set of Phillips SHE3000 earphones with a broken wire on the plug. Since I was repairing another set of headphones, with the same problem, I bought an extra plug, just to try repairing them too. Here's my problem - inside the cable for each channel there's not only the ground for it, but also some weird white hairs. It does not look like those would survive soldering. What are those and what should I do with them?
They are essentially just string to help support the cable. You should be fine soldering the cable. One thing to remember about the wires in headphones is that they are usually enamel-coated copper. You usually need to heat them to around 390 °C to burn off the insulation before you can make a decent solder joint. The way I do this is to put a large blob of solder onto the end of a soldering iron, then push the end of the wire that needs to be tinned through the solder and pull it back out again. This usually neatly removes the enamel and tins the wire on all sides, making soldering easy. Bonus info: when more than one of the wires is broken midway along the cable, the easiest thing to do is to cut each of the wires at staggered locations, so that when you solder them you don't need to worry about insulating one wire from another, because the solder joints will be at different locations and the enamel on the other wires will be intact. If you are soldering on a new plug though, you don't need to worry about that.
{ "source": [ "https://electronics.stackexchange.com/questions/163141", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/71508/" ] }
163,860
Let's say you had a rather simple and small microcontroller and had no interfacing, no computer, no debugger, compiler, or assembler. Could you write your code in assembly, convert it (manually) to machine code, and then apply power to the appropriate pins using a voltage source? I understand you would need appropriate I/O and memory to really do anything, but if you were so inclined and had the time, could you do this? I guess, historically, how was this done when there was no computer/compiler/assembler to begin with? Feel free to link me to an outside resource. Thanks! :)
Could you write your code in assembly, convert it (manually) to machine code, Yes! Code can be written "out of your head" in binary, if you wish. Long (long long) ago this is how I started using (then) microprocessors. I and friends would write code in assembly language, compile it manually to machine code (something you can do "by inspection" after some practice) then enter it into the processor by various means. On one system we built we would set up the address on binary (on off) switches or use an auto increment feature of the processor, enter 8 data bits on binary switches and then press a "clock" switch to enter the data to memory. The equivalent functionality could be achieved with even fewer switches on a modern microcontroller using serial SPI programming - see below. ... and then apply power to the appropriate pins using a voltage source? Yes! But it would be incredibly slow to do! Many modern microcontrollers allow use of an "SPI" interface for programming. This typically consists of input and output data lines and a "clock" line, and usually a reset line. Many processors allow SPI clock and data to be "static" which means there is no limit on how long you can take to set up the data between bits. You could program such a processor using a data line and a clock line which were driven by manually operated switches. The clock line needs to be "bounce free" - you need to be able to set it high or low in a single transition per operation - so a minimum interface may need to include a Schmitt triggered gate. You may "get away with" just an RC delay and a push button switch, but a Schmitt triggered input is safer. The data line does not need to be bounce free as its state is only read at the clock edge. Some interfaces are interactive - data is output by the processor during programming (eg data out = MISO = Master In Serial Out on AVR processors). 
To read this you'd need to add eg an LED and a resistor (and just maybe a buffer or transistor if the drive capability was REALLY low). MC6800: From semi-fading memory (almost 40 years!) LDI A, $7F ...... 86 7F ...... 1000 0110 0111 1111 STA, $1234...... B7 12 34 ... 1011 0111 0001 0010 0011 0100 LDI X, $2734... CE 27 34 ... 1100 1110 0010 0111 0011 0100 ...
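The hand-assembly process described above is mechanical enough to sketch: a lookup table from mnemonic to opcode, plus the operand bytes. This toy assembler covers only the three instructions in the listing, with opcodes and mnemonics taken from the answer's own recollection:

```python
# Opcode table for the three example instructions (per the answer's listing).
OPCODES = {"LDI A": 0x86, "STA": 0xB7, "LDI X": 0xCE}

def assemble(program):
    """program: list of (mnemonic, operand, operand_width_in_bytes)."""
    out = bytearray()
    for mnemonic, operand, width in program:
        out.append(OPCODES[mnemonic])
        out += operand.to_bytes(width, "big")   # 6800-family is big-endian
    return bytes(out)

code = assemble([("LDI A", 0x7F, 1), ("STA", 0x1234, 2), ("LDI X", 0x2734, 2)])
print(code.hex(" "))   # 86 7f b7 12 34 ce 27 34
```

Those eight bytes are exactly what you would have toggled into the switches, one address at a time.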
{ "source": [ "https://electronics.stackexchange.com/questions/163860", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/71828/" ] }
166,259
A typical grid uses 110..500 kilovolts lines to deliver electricity to substations which lower that to 6..20 kilovolts and then lines with that lower voltage get to consumers where yet other substations are located which finally lower those 6-20 kilovolts to consumer voltage (100 or 230 volts or whatever the local standard is). Those 110..500 kilovolt lines often pass through areas where those consumers are located. Consumers could be connected to those lines via transformers accepting say 110 kilovolts and outputting consumer voltage. Instead those lines run to faraway somewhere and then another powerline runs back with some lower voltage and a consumer is hooked to the latter. That's a lot of extra wiring. What's the reason for this design? Why not hook consumers to the closest powerline?
HV (66kV - 500kV) is... difficult to deal with. I will rattle off reasons I can think of off the top of my head. All figures that follow (weights, dollars) are order-of-magnitude guesstimates. Clearances Let's use 220kV as an example. The Australian HV substation standard AS 2067 nominates the following clearances required for 220kV equipment: Phase to earth - 2100mm. That is, no 220kV conductor may be within 2 metres of any earthed conductor (say, a transformer tank, or a steel pole.) Edit: Actually, I should have quoted the Non Flashover Distance (N) here. Phase to phase clearance - 2,415mm. That is, the 220kV aerial conductors must be at least 2.4m apart at all times. Horizontal safety clearance - 4,125mm. All live parts must be at least 4,125mm above any surface a person can stand on. Vertical safety clearance - 3,565 mm. Which is to say there is no such thing as a 'compact' 220kV substation. (Well, there is; substations based on gas-insulated switchgear can be very compact, but you don't want to know how much they cost.) The minimum size for a 220kV substation, containing the required equipment and maintaining all these clearances, is at least a 20m × 20m square, i.e. the size of a suburban block of land. It would also have to have structures at least 4 metres high, which is hard to blend into the suburban landscape. In addition to the above clearances required to prevent people getting directly electrocuted, you also have to contend with - Fire safety radius in case a transformer drops 10,000 litres of insulating oil and catches fire. From memory, at least 10 metres. Radius in case of electrical explosion. Typical threshold radius for receiving 'survivable' second-degree burns can exceed 10 metres for some energetic kinds of faults. Definitely no civilian housing allowed inside this radius. Protection A fault on the 220kV network must be cleared rapidly, or it will drive the whole grid into an unstable state (i.e. blackout.)
The 'critical fault clearing time' to avoid a blackout is usually much less than 1 second. Very expensive protection schemes (line differential with optic fibre pilots, distance protection) are used to ensure this high speed of protection. These protection schemes must be installed at every terminal of the 220kV line. Once we account for the cost of - 220kV circuit breakers - about $200,000 each, minimum three required per substation - two for the incoming/outgoing circuit continuing past the substation, and one for the T-off = $600,000 two sets of three-phase protection current transformers rated 220kV, and "enough" continuous amps - about $50,000 a set (ballpark) = $100,000 two sets of protection relays - each with a redundant duplicate - about $20,000 each = $80,000. (Note: duplicate "X" and "Y" protection is standard for HV substations.) ... we are up to about $780,000, just in protection equipment, per substation. And we haven't even started buying transmission line termination hardware, surge diverters, busbar, support structures, earthworks, fencing, concrete, control PLC's, control hut... (Compare 22kV distribution transformer protection, which is usually just a set of three-phase expulsion dropout fuses, total cost maybe $2,000.) Transformers 220kV transformers are large, by dint of all the insulation required inside them to prevent flashover. There is no such thing as a "small" 220kV transformer - the smallest one I have seen is rated 60 MVA and weighs about 10 tons. Contrast typical pole-top transformers 22/0.415kV which are rated 500kVA or less. The weight is important because there is a maximum limit to what you can have on top of a wooden pole. I am no structural engineer, but you certainly wouldn't want to pole-mount anything more than a ton. Is that enough reasons?
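The protection-equipment tally above can be reproduced directly (the unit prices are the answer's own ballpark guesstimates, not quotes):

```python
# (quantity, unit price in dollars) -- figures from the answer's estimate
items = {
    "220kV circuit breakers":    (3, 200_000),
    "protection CT sets":        (2, 50_000),
    "protection relays (X + Y)": (4, 20_000),   # two sets, each duplicated
}

total = sum(qty * unit for qty, unit in items.values())
for name, (qty, unit) in items.items():
    print(f"{name}: {qty} x ${unit:,} = ${qty * unit:,}")
print(f"total: ${total:,}")   # total: $780,000
```
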
{ "source": [ "https://electronics.stackexchange.com/questions/166259", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3552/" ] }
166,346
You may have seen the article This USB Drive Can Nuke A Computer, which shows how a flash-drive-shaped device can completely fry all the components of a computer. This is very shocking (pun intended) for me as an Internet cafe owner. But what's even scarier is the thought that someone could carry out this same attack over the local Ethernet cables and take out all my computers, including my router, without looking that suspicious. My question is: is the USB flash drive attack also possible over Ethernet, and if so, will it affect all my computers? Also, is there anything I can do to protect against the fake USB flash drive/Ethernet attack?
Long before USB there was the Etherkiller . And yes, it can fry your equipment. But unless you have bargain-basement (and I mean real sh*t equipment, not just inexpensive) stuff it won't affect any further devices connected to it.
{ "source": [ "https://electronics.stackexchange.com/questions/166346", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/73023/" ] }
167,188
I am using an IR LED and a photo diode, and I have designed the PCB according to the specifications that I found on the datasheet - i.e. hole sizes for the pins being 0.6 mm. However, on these components there are 'knuckles' (see below image) and these are wider than the hole size they quote for the pins. So the issue is the component doesn't go low enough into the board because of these knuckles. Why are they there? How can I know how wide they are in order to make my holes account for this?
I have two possible explanations:

1. The 'knuckles' are there intentionally to stop the pins from going all the way through the PCB. Most of the time it is not desirable for the pins to pass all the way through.
2. They are a remnant of the production process (see picture below). Initially the pins are part of a single metal sheet and are cut free after the dies and the casings are added. The cutting leaves the knuckles.
{ "source": [ "https://electronics.stackexchange.com/questions/167188", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/50374/" ] }
167,350
This might be a ridiculous question to ask - but I have made a mistake in a new library I created in Eagle for a component. The drill diameter of the plated holes for the component leads should have been 0.04" but I missed the fact that the default diameter of pads inserted by Eagle is ~0.025". The PCB has come back and lo & behold, I cannot fit my component leads. What are my options (if I need to get a proto build done immediately)? The only option I can think of is: to file away the component leads until they fit into the hole. Is there a better way?
This is a one-off prototype, so it doesn't need to withstand end-user mechanical abuse. I would probably trim the leads a bit, then set the ends of the leads on the pads, using the holes to align them. Now use solder blobs to hold the component in place. The leads aren't going through the holes, but their ends are sitting on top of them. The solder guarantees a connection and holds the part in place.

The part will be held much more weakly than if the leads were going through the holes, but for testing your circuit it should be good enough. If you really need more mechanical strength in your prototype, glob a lot of hot glue around the pins, extending all the way up to the bottom of the part.

Before you forget, go into Eagle now and fix the hole and pad sizes for that part in your library. This is easy to forget before the next revision, when you're knee-deep in changing other parts of the circuit.
{ "source": [ "https://electronics.stackexchange.com/questions/167350", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/43880/" ] }
167,458
The ATmega16 datasheet says that it has (a) 16 Kbytes of In-System Self-programmable Flash program memory and (b) 512 bytes of EEPROM. Can a microcontroller have two separate ROMs, one implemented with EEPROM technology and one with Flash technology? Or is my inference (as given above) from the datasheet wrong? I know that our program is stored in flash memory, so why would anyone need EEPROM? What is its use if we have flash memory for the program? Also, can anyone explain the term "In-System Self-programmable"? What I know: Flash technology writes data in blocks, whereas EEPROM can write data byte by byte.
Nowadays, Flash memory is used to hold program code, and EEPROM (Electrically Erasable Programmable Read-Only Memory) is used to hold persistent data. Back some 30 years ago, before Flash came along, EEPROMs were used to hold program code. Actually ROM (Read-Only Memory) came first, then PROM (Programmable ROM, write-once), EPROM (PROM erasable with UV light), EEPROM, and finally Flash. ROMs are still used for very high-volume, low-cost applications (e.g. talking greeting cards).

The important difference with current microcontrollers is that you cannot generally execute code out of EEPROM, and it is awkward for programs to store data in flash. (Data is stored in flash when, for example, you use the "const" keyword in a data declaration, or define a string, but that is handled behind the scenes by the compiler and linker.)

The EEPROM area can be used to hold configuration or other data which you want to be available across reboots, including if the microcontroller loses power and is then powered back up. Functionally, you can think of the EEPROM as a very small hard drive or SD card.

On microcontrollers without EEPROM, it is possible to store persistent data in flash memory, but this becomes difficult since microcontrollers were not really designed for this, and you have to find a special spot that will not interfere with the program code and set it aside with the linker. Plus, as mentioned below, you can usually update the EEPROM many more times than the flash. If you do program data in flash, this doesn't mean you can access the data as variables in your C program, because there is no way to tell the compiler where these variables are in your code (i.e. you can't bind a const variable to this area of flash). So reading them has to be done through the special set of registers that are used to write them. Note this restriction applies to data in EEPROM also, so it has no advantage in this regard.

To program either flash or EEPROM, a block of memory first must be erased. Then it is programmed. For flash, writing is usually done a block at a time as well. For EEPROMs, it can be done by blocks or a byte at a time, depending on the microcontroller.

For both flash and EEPROMs, there is a maximum number of times you can update them before you wear out the memory. This number is given in the datasheet as a minimum guaranteed value, and it is usually much higher for EEPROMs than for flash memory: for flash I have seen numbers as low as 1,000; for EEPROMs, as high as 1,000,000. So one advantage of EEPROMs over flash is that you can erase them many more times.

"In-System Self-programmable" simply means the microcontroller can update its own flash while running. The feature is usually used to update code in the field. The trick is that you need to leave some code in the system while the main program is being updated, called the bootloader. This scheme is used in the Arduino system to program the chip.
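The erase-then-program behaviour described above can be illustrated with a toy model (plain Python, not any vendor's API): erasing a block sets every byte to 0xFF, and programming can only clear bits from 1 to 0, which is why a byte cannot be rewritten arbitrarily without first erasing the whole block.

```python
ERASED = 0xFF  # erased flash/EEPROM cells read back as all-ones

class FlashBlock:
    """Toy model of one flash block with erase-before-write semantics."""
    def __init__(self, size=64):
        self.mem = bytearray([ERASED] * size)
        self.erase_count = 0  # datasheets guarantee a minimum endurance for this

    def erase(self):
        """Erasing restores every byte in the block to 0xFF."""
        self.erase_count += 1
        self.mem = bytearray([ERASED] * len(self.mem))

    def program(self, offset, data):
        """Programming can only flip bits from 1 to 0, modelled with AND."""
        for i, b in enumerate(data):
            self.mem[offset + i] &= b

blk = FlashBlock()
blk.program(0, b"\x12")
print(hex(blk.mem[0]))   # 0x12
blk.program(0, b"\xff")  # try to "restore" the byte without erasing
print(hex(blk.mem[0]))   # still 0x12: bits can only be cleared, not set
blk.erase()
print(hex(blk.mem[0]))   # 0xff again, at the cost of one erase cycle
```

As the answer notes, an EEPROM differs mainly in that erase/write can often be done a byte at a time, which is why it tolerates frequent small updates better than flash.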
{ "source": [ "https://electronics.stackexchange.com/questions/167458", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/46113/" ] }
167,463
I have a 3.3V SoC that could draw up to around 250mA - 300mA. Is it a good idea to power it using 2x AA(A) batteries? Would such a setup be unstable without decoupling capacitors? Thanks
{ "source": [ "https://electronics.stackexchange.com/questions/167463", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/72586/" ] }
167,692
I am developing a home automation project in which I'm using relays to control appliances. I need to control devices rated 220V, 6A. Are relays a good long-term solution for switching these appliances? The relay I'm using is mechanical and is rated 220V 7A. If I keep it ON to control e.g. a fan for more than a few hours on a daily basis, will the relay cause any problems? If yes, what are the other possible solutions?
Relays tend to be quite reliable in benign environments, however they have a limited lifetime - typically something like 50,000-100,000 operations at full rated load. At lighter loads the life will increase, generally up to many millions of operations with a negligible load (the so-called mechanical life). All this information will be clearly given in any decent datasheet. The markings on the relay are only limits for safety agencies and have little to do with the relay life.

Not all datasheets show the life vs. switched current, even for resistive loads, so you may have to test samples to determine that characteristic if you are, say, using a 30A relay to switch 5A maximum. Inductive loads, incandescent lamps, and motor loads will also shorten the life.

Solid-state alternatives to relays have no easily defined wear-out mechanism, however they can easily die suddenly due to voltage surges, current surges (including momentary shorts) and thermal cycling. They are also less resistant to heat, and tend to create a lot of it (a ballpark number is 1W per ampere of load current).

Most remotely switched outlets and similar consumer devices (where the consumer can plug anything into them) use relays. If the load is relatively light and well defined (perhaps a lamp) then solid state may be a superior solution.
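To put those endurance numbers in perspective, here is a back-of-envelope service-life estimate (a sketch only; 100,000 operations is the ballpark figure quoted above, not a value from any particular datasheet):

```python
def relay_service_years(rated_operations, switches_per_day):
    """Years until the rated electrical endurance is exhausted."""
    return rated_operations / (switches_per_day * 365)

# A home-automation channel toggled ~10 times a day at full rated load:
print(f"{relay_service_years(100_000, 10):.1f} years")  # 27.4 years
# The same relay cycled once a minute, around the clock:
print(f"{relay_service_years(100_000, 60 * 24):.2f} years")  # 0.19 years
```

So for a fan switched a handful of times a day, contact wear-out is unlikely to be the limiting factor; it only becomes a concern for high-frequency switching duty.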
{ "source": [ "https://electronics.stackexchange.com/questions/167692", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/59831/" ] }
169,205
I mechanically damaged a capacitor on an old motherboard; it made a PFFFT sound, as if some gas escaped, and then some liquid leaked out. What is that liquid? Is it toxic? I hope it was not mercury! The capacitor is cylindrical, about 7mm in diameter, with two wires at the bottom.
Yes, it's toxic; no, it's not mercury; yes, you'll live :)

If it was a "wet" capacitor type, then most likely that was sulfuric acid or some organic or inorganic solvent. If it was a solid type, then perhaps manganese dioxide. Whatever it was, it isn't good for you, so don't breathe it, take a bath in it, or move to a planet full of it. But... one capacitor one time in your life will not make a difference to your overall health.
{ "source": [ "https://electronics.stackexchange.com/questions/169205", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }